EP0136538B1 - Inter-element processor in a high performance parallel vector processor - Google Patents

Inter-element processor in a high performance parallel vector processor

Info

Publication number
EP0136538B1
EP0136538B1 (application EP84110262A)
Authority
EP
European Patent Office
Prior art keywords
vector
elements
stored
register
registers
Prior art date
Legal status
Expired - Lifetime
Application number
EP84110262A
Other languages
German (de)
French (fr)
Other versions
EP0136538A2 (en)
EP0136538A3 (en)
Inventor
Chuck H. Ngai
Edward R. Wassel
Gerald J. Watkins
Current Assignee
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Publication of EP0136538A2
Publication of EP0136538A3
Application granted
Publication of EP0136538B1
Anticipated expiration
Legal status: Expired - Lifetime (Current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30003 Arrangements for executing specific machine instructions
    • G06F9/30007 Arrangements for executing specific machine instructions to perform operations on data operands
    • G06F9/3001 Arithmetic instructions
    • G06F9/30014 Arithmetic instructions with variable precision
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00 Digital computers in general; Data processing equipment in general
    • G06F15/76 Architectures of general purpose stored program computers
    • G06F15/80 Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
    • G06F15/8053 Vector processors
    • G06F15/8076 Details on data register access
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30003 Arrangements for executing specific machine instructions
    • G06F9/30007 Arrangements for executing specific machine instructions to perform operations on data operands
    • G06F9/30036 Instructions to perform operations on packed data, e.g. vector, tile or matrix operations


Description

    Cross Reference to related application
  • A copending European patent application No. 84109081.4 (EP A 0135 721) relates to an apparatus for rapidly processing the elements of a pair of vectors stored in a pair of vector registers.
  • BACKGROUND OF THE INVENTION Field of the Invention
  • The present invention pertains to a computer system, and more particularly, to an interelement processor associated with a parallel vector processor in said computer system for rapidly processing the elements of a single vector and for storing the results of said processing.
  • Description of the Prior Art
  • A typical vector processor, such as that shown in FIG. 1, includes a plurality of vector registers, each vector register storing a vector. The vector comprises a plurality of vector elements. A pipeline processing unit is connected to a selector associated with the vector registers for receiving, in sequence, the elements of a first vector from a selected vector register and for performing an arithmetic operation on the elements of the first vector to yield a resultant vector. The elements of the resultant vector may be re-stored in corresponding locations of the first vector register or in another vector register.
  • However, with this configuration, it is necessary to perform operations on each of the elements of a vector in sequence. The time required to complete operations on all 256 elements of a 256-element vector is a function of the cycle time of the pipeline unit per operation on each of the elements.
  • With increasing sophistication of computer systems, it has become increasingly evident that there is a need to increase the performance of the vector processor portion of the computer system by decreasing the time required to process or perform arithmetic operations on each of the elements of a vector or on each of the corresponding elements of a plurality of vectors stored in the vector registers within the computer system.
  • In the European patent application (EP 0 053 457-A1) a vector processor is disclosed comprising a plurality of memory banks. Each memory bank contains portions of several vectors. The elements of the vectors in each memory bank are accessed serially via registers. Consequently, this vector processor is slow to process vectors because of the requirement to sequentially shift through the portions of several vectors contained in each memory bank in order to access the desired portion.
  • In P.M. Kogge: "The Architecture of pipelined computers", 1981, pages 207-208, Hemisphere Publishing Corp., it is disclosed that a pipelined vector processor is not the only possible realisation and the basic structure of a parallel vector processor is described.
  • The following part of the description is not valid for the contracting state IT.
  • In the European patent application EP 0 135 721-A2 (comprised in the state of the art according to Article 54(3) EPC), a parallel vector processor is disclosed. In order to increase the performance of the parallel vector processor, the time required to process a pair of vectors stored in a pair of vector registers is decreased. The vector registers are subdivided into a plurality of smaller registers. A vector stored in a vector register comprises N elements; however, each of the smaller registers stores M elements of the vector, where M is less than N. An element processor is associated with each smaller register for processing the M elements of the vectors stored in the smaller register and generating results of the processing, the results being stored in one of the vector registers. The smaller registers of the vector registers and their corresponding element processors are structurally configured in parallel fashion. The element processors and their associated smaller registers operate simultaneously. Consequently, processing of the N-element vectors stored in the vector registers is completed in the time required to complete the processing of M elements of an N-element vector.
  • SUMMARY OF THE INVENTION
  • Accordingly, it is a primary object of the present invention to increase the performance of the vector processor portion of a computer system by decreasing the time required to process the elements of a vector stored in a vector register comprised in said vector processor portion of the computer system.
  • It is a further object of the present invention to increase the performance of the vector processor portion of the computer system by subdividing the plurality of vector registers into a plurality of smaller registers, and processing in parallel each of the elements of a vector stored in the smaller registers of a vector register.
  • It is still a further object of the present invention to further increase the performance of the vector processor portion of the computer system by connecting a plurality of units, comprising the plurality of smaller registers, in a parallel configuration and processing in parallel the elements of a vector stored in the smaller registers of a vector register in an improved manner.
  • These and other objects are accomplished, in accordance with the present invention, as claimed, by reducing the time required to complete processing operations on all elements of a single vector. The vector registers are subdivided into a plurality of smaller registers, each of which stores, for example, four elements of a 256-element vector. An element processor is associated with each smaller register, the element processor performing the same function as the pipeline processing unit. Each element processor, and corresponding smaller register, comprises a unit. A plurality of units are connected in a parallel configuration. With this configuration, when an arithmetic operation is performed on the elements of a single vector stored in a vector register, the arithmetic operation is completed in approximately the same time that would be required to complete an arithmetic operation on four elements of the vector using the typical vector processor of FIG. 1. As a result, the performance of the vector processor is improved substantially.
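As a rough illustration of the speed-up described above, the sketch below counts element-addition steps for the sequential pipeline of FIG. 1 versus the partitioned scheme of the invention (64 element processors of four elements each, followed by a pairwise combine). The step counts are a simplifying assumption for illustration only, not cycle-accurate figures from the patent.

```python
# Illustrative step-count comparison (assumed model, not cycle-accurate).
import math

N = 256           # elements per vector register
M = 4             # elements per smaller register
UNITS = N // M    # 64 element processors operating in parallel

sequential_steps = N - 1                   # FIG. 1 pipeline: 255 additions in sequence
internal_steps = M - 1                     # each processor sums its own 4 elements: 3 additions
combine_steps = int(math.log2(UNITS))      # pairwise combine of 64 partial sums: 6 additions
parallel_steps = internal_steps + combine_steps

print(f"sequential: {sequential_steps} addition steps")   # 255
print(f"parallel  : {parallel_steps} addition steps")     # 9
```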
  • Brief Description of the Drawings
  • A full understanding of the present invention will be obtained from the detailed description given hereinbelow and the accompanying drawings, which are given by way of illustration only, and thus are not limitative of the present invention, and wherein:
    • FIG. 1 illustrates a conventional vector processor;
    • FIG. 2 illustrates a parallel vector processor of the present invention wherein the vector registers of FIG. 1 are subdivided into a plurality of smaller registers, and an element processor is associated with each smaller register for performing processing operations on the elements of the vector stored in the smaller register;
    • FIG. 3 illustrates a detailed construction of an element processor shown in FIG. 2;
    • FIG. 4 illustrates a plurality of smaller registers, and their corresponding element processors, connected in a parallel configuration, according to the present invention, for performing a processing operation on the elements of a vector stored in the smaller registers of a vector register; and
    • FIG. 5 illustrates the construction of the Processor Interface Adaptor of FIG. 4.
    Detailed Description of the Preferred Embodiment
  • Referring to FIG. 1, a pipeline vector processor 10 is illustrated. A plurality of vector registers 12 (VR0 through VR15) is shown, each register storing 256 elements (element 0 through element 255). In the preferred embodiment, an element comprises a four (4) byte binary word. A selector 14 is connected to each of the vector registers 12 for selecting corresponding elements from the vector registers 12 and gating the selected elements through to a pipeline processing unit 16. The pipeline processing unit 16 is connected to the selector for receiving the corresponding elements and for performing selected operations on said elements, such as arithmetic operations. For example, the processing unit 16 may receive element 0 from vector register VR0 and element 1 from vector register VR0 and perform an addition operation on said elements. Processing unit 16 may then receive element 2 from vector register VR0 and add this element to the previous sum, forming another sum. Processing unit 16 may continue to add the remaining elements of vector register VR0, in sequence, to the stored sum, forming a final sum of the elements constituting the vector stored in vector register VR0. A result register 18 is connected to the pipeline processing unit for storing the final sum received from the pipeline processing unit. The result register 18 is connected to each of the vector registers 12 via a select gate 20 for transferring the final sum from the result register 18 to another vector register, if desired.
  • However, the vector processor configuration illustrated in FIG. 1 possesses certain differences with respect to the vector processor configuration of the present invention. Utilizing the example, a first element is selected from register VR0 and a second element is selected from register VR0. The elements are added in the above manner. A third element is selected from register VR0 and added to the previous sum in the above manner. Each of the 256 elements must be selected from register VR0 and added together, in sequence, in order to provide a final sum of the elements constituting the vector stored in vector register VR0. As a result, the time required to complete the processing of the vector stored in vector register VR0 is a function of the number of elements per vector and the cycle time required to process one element per vector. The performance of a vector processor could be improved by decreasing the time required to process a vector stored in a vector register.
  • Referring to FIG. 2, a parallel vector processor is illustrated. In FIG. 2, each of the vector registers VR0 through VR15 of FIG. 1 stores an N element vector. Each of the vector registers VR0 through VR15 is subdivided into a plurality of smaller registers 12a. Each of the smaller registers 12a stores M elements of the N element vector, where M is less than N. For example, if vector registers VR0 through VR15 each store a 256-element vector, a smaller register 12a may store four elements of the 256-element vector. A corresponding plurality of element processors 20 is connected to the plurality of smaller registers 12a for performing processing (arithmetic) operations on the elements of a vector stored in a vector register. Utilizing the example, each of the element processors 20 performs processing operations on four elements of a vector. The results of the processing operation are simultaneously produced by each element processor, in parallel, and may be stored in corresponding locations of any one of the vector registers VR0 through VR15.
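A minimal sketch, assuming the N = 256 and M = 4 example above, of how a vector register can be viewed as a collection of smaller registers, each served by its own element processor. The class and attribute names are illustrative only, not taken from the patent.

```python
# Sketch of a vector register subdivided into smaller registers (assumed N=256, M=4).
N = 256   # elements per vector register
M = 4     # elements held by each smaller register

class VectorRegister:
    """One of VR0 through VR15, split into N // M smaller registers of M elements each."""
    def __init__(self, elements):
        assert len(elements) == N
        # smaller_registers[k] holds elements 4k .. 4k+3 and is served by element processor k
        self.smaller_registers = [elements[i:i + M] for i in range(0, N, M)]

vr0 = VectorRegister(list(range(N)))
print(len(vr0.smaller_registers))   # 64 smaller registers per vector register
print(vr0.smaller_registers[0])     # [0, 1, 2, 3] -> handled by element processor 0
```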
  • Referring to FIG. 3, a block diagram construction of an element processor 20 is illustrated. In FIG. 3, a local storage 12a is synonymous with the vector registers 12a shown in FIG. 2. A system bus 11 and 11a is connected to a driver circuit 9 on one end and to a receiver circuit 7 on the other end. A first input data assembler (ASM) 13 is connected to a driver circuit 9 and to a receiver circuit 7. The ASM 13 is further connected to local storage 12a and to the element processor 20. The element processor 20 shown in FIG. 3 comprises a second input data assembler (ASM) 20a connected to the local storage 12a and to the first input data assembler 13. A bus interface register (BIR) 15 is connected to bus 11 and bus 11a on one end and to the second input data assembler 20a on the other end. A shift select register 20b and a flush select register 20c are connected to the input data assembler 20a. The flush select register 20c is connected directly to a trues/complement gate 20d whereas the shift select register 20b is connected to another trues/complement gate 20e via a pre-shifter control 20f. The trues/complement gates 20d and 20e are each connected to an arithmetic logic unit (ALU) 20g. The ALU 20g is connected to a result register 20h via a post shifter control 20i, the result register 20h being connected to the local storage 12a for storing a result therein when the element processor 20 has completed an arithmetic processing operation on the four elements of a vector stored in the smaller register 12a of a vector register 12. A multiplier circuit 20j is interconnected between the input data assembler 20a and the ALU 20g. Two operands are received by the multiplier circuit 20j. A sum output and a carry output are generated by the multiplier circuit 20j, the sum and carry outputs being received by the ALU 20g.
  • Referring to FIG. 4, according to the present invention, blocks 0 through 63 are illustrated, each block being a unit 30. Furthermore, each unit 30 represents, in combination, a smaller register 12a, and an associated element processor 20. The units 30 are connected together in a parallel configuration, as indicated in FIG. 4, for increasing the speed by which the elements of a vector, stored in a vector register, are added together to yield a result indicative of the sum of the elements of said vector.
  • For ease of description, refer to FIG. 2 or 3 and note that an element processor 20, in association with its corresponding smaller register 12a, collectively comprises one unit, identified by feature numeral 30. Referring to FIG. 4, each of the blocks labelled 0 through 7 represents a unit such as unit 30 of FIG. 2 or 3. Blocks 0 through 7, collectively, comprise an overall unit 40. A plurality of overall units 40 are connected in a parallel configuration. Four of the overall units 40 are connected together at one junction 51 and the four remaining overall units 40 are connected together at another junction 52. In addition, four of the overall units 40 correspond to a 128-element vector stored in the smaller registers 12a of blocks 0 through 31, and the four remaining overall units 40 correspond to another 128-element vector stored in the smaller registers 12a of blocks 32 through 63. Therefore, the configuration of FIG. 4 is capable of providing a sum total of the elements of a 256-element vector. Junction 51 is connected to a bi-directional driver 42 and junction 52 is connected to another bi-directional driver 44. Each of these drivers is connected together, at their outputs, to a further junction 53. A processor interface adaptor (PIA) 50 is connected to junction 53 for directing the functional operation of the plurality of units 30 which comprise the plurality of overall units 40. A storage, or main memory, 52 is connected to the PIA 50. An instruction processing unit (IPU) 54 is connected to the PIA 50 and to the storage 52.
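The grouping just described can be sketched as nested lists: 64 units, eight per overall unit, with the first four overall units meeting at junction 51 and the remaining four at junction 52. This is a structural sketch under the assumptions of the example; the variable names do not come from the patent.

```python
# Structural sketch of the FIG. 4 arrangement (names are illustrative).
units = list(range(64))                                    # blocks 0..63, each a unit 30
overall_units = [units[i:i + 8] for i in range(0, 64, 8)]  # eight overall units 40

junction_51 = overall_units[0:4]   # blocks 0..31: first 128-element half of the vector
junction_52 = overall_units[4:8]   # blocks 32..63: second 128-element half of the vector

print(overall_units[0])    # [0, 1, 2, 3, 4, 5, 6, 7]
print(junction_52[0][0])   # 32 -> first block on the junction 52 side
```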
  • Referring to FIG. 5, the construction of the PIA 50 is illustrated. The PIA 50 includes a vector instruction register (VIR) 50a connected to the IPU 54 for receiving a vector instruction from the IPU and temporarily storing the vector instruction. A vector data register (VDR) 50b is connected to storage 52 and to the IPU 54 for receiving data from storage 52 and temporarily storing the data. A vector status register (VSR) 50c is connected to the storage 52 and to IPU 54 for receiving data from storage and for temporarily storing the data. A pico control store 50d is connected to the VIR 50a for decoding the vector instruction stored in the VIR 50a and for selecting a pico control routine stored in the store 50d. A command register 50e is connected to the pico control store 50d and to the element processors of units 30 via a command bus for driving the element processors. A bus control 50f is connected to the VDR 50b and to the element processors of units 30 for receiving data from the VDR 50b and transmitting the data to the element processors 20 of units 30 via a data bus. The bus control 50f can also steer data from one element processor to another element processor. The VSR 50c is also connected to a bus control 50g via an address control 50h. The address control 50h generates addresses corresponding to the data received from the VSR 50c. The bus control 50g is connected to the element processors 20 of units 30 for transmitting the generated addresses to the element processors 20 of units 30 via an address bus.
  • The functional operation of the interelement processor in accordance with the present invention will be described in the paragraphs below with reference to FIGS. 3 through 5.
  • Assume that the IPU 54 has already directed the PIA 50 to load data from storage 52 into one or more of vector registers VR0 through VR15 via element processors 20. Assume further that each vector register VR0 through VR15 is 256 elements in length. As a result, a 256-element vector is assumed to reside in one or more of the vector registers 12. When a 256-element vector is stored in one or more vector registers 12, the IPU 54 instructs the PIA 50 to execute an INTERNAL ACCUMULATE instruction. When the INTERNAL ACCUMULATE instruction is executed, the elements stored in each smaller register 12a are added together, yielding an intermediate summation element, the intermediate summation element being stored in the result register 20h associated with the corresponding processor 20 connected to the smaller register. When executing the INTERNAL ACCUMULATE instruction, the PIA 50 instructs the element processor 20 associated with each unit 30 to retrieve the first element and the second element from its corresponding smaller register 12a. Utilizing the example, assume that four elements are stored in each smaller register 12a. The first and second elements of each smaller register 12a are passed to the corresponding processor 20. Under control of the IPU 54, the PIA 50 instructs the processors 20 to add the first and second elements and to store the sum in the result register 20h associated with each processor 20. Each of the processors 20 performs the summation operation and stores the sum in the result register 20h. The PIA 50, under the control of the IPU 54, instructs each of the element processors 20 to retrieve the third element from their corresponding smaller register 12a, add the third element to the sum stored in the result register 20h, and store a further sum in the result register 20h. When the processors 20 execute this further instruction, the PIA 50 instructs each of the processors 20 to retrieve the fourth element from their corresponding smaller register 12a, add the fourth element to the further sum stored in their corresponding result register 20h, and store the sum of the four elements in the corresponding result register 20h. As a result, an intermediate summation element is stored in the result register 20h of each processor 20 corresponding to each unit 30. The intermediate summation element represents the sum of the four elements stored in the corresponding smaller register 12a.
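A minimal sketch of the INTERNAL ACCUMULATE step just described: each element processor adds the four elements of its smaller register into its result register, one element at a time, and all processors do so in parallel. The function name and the use of plain Python integers are assumptions for illustration.

```python
# Sketch of INTERNAL ACCUMULATE: each processor sums its own four elements.
def internal_accumulate(smaller_registers):
    """Return one intermediate summation element per smaller register / element processor."""
    result_registers = []
    for reg in smaller_registers:      # conceptually, every processor runs this in parallel
        acc = reg[0] + reg[1]          # first and second elements
        acc += reg[2]                  # third element added to the running sum
        acc += reg[3]                  # fourth element added to the running sum
        result_registers.append(acc)   # sum of the four elements held in result register 20h
    return result_registers

vector = list(range(256))
smaller_registers = [vector[i:i + 4] for i in range(0, 256, 4)]
intermediate = internal_accumulate(smaller_registers)
print(intermediate[:3])    # [6, 22, 38] -> sums of elements 0-3, 4-7 and 8-11
print(len(intermediate))   # 64 intermediate summation elements, one per unit 30
```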
  • The PIA 50, under the control of the IPU 54, directs all the processors 20, associated with units 30, to add the intermediate summation elements together, thereby producing a final total. The processors 20 add the intermediate summation elements together in the following manner.
  • The intermediate summation element stored in unit 30/block 1 is added to the intermediate summation element stored in unit 30/block 0, the sum remaining in storage in unit 30/block 0. Simultaneously, the intermediate summation element stored in unit 30/block 3 is added to the intermediate summation element stored in unit 30/block 2, the sum being stored in unit 30/block 2. The intermediate summation element stored in unit 30/block 5 is added to the intermediate summation element stored in unit 30/block 4, the sum being stored in unit 30/block 4. The intermediate summation element stored in unit 30/block 7 is added to the intermediate summation element stored in unit 30/block 6, the sum being stored in unit 30/block 6. Therefore, subtotals are stored in units 30/blocks 0, 2, 4 and 6 of the first overall unit 40. The second through the eighth overall units 40 undergo a similar functioning in that subtotals are stored in the following additional units 30: 8, 10, 12, etc. through 62.
  • Next, the subtotal stored in unit 30/block 2 is added to the subtotal stored in unit 30/block 0, the new subtotal being stored in unit 30/block 0. The subtotal stored in unit 30/block 4 is added to the new subtotal stored in unit 30/block 0, another new subtotal being stored in unit 30/block 0. The subtotal stored in unit 30/block 6 is added to the another new subtotal stored in unit 30/block 0, a further subtotal being stored in unit 30/block 0. The second through the eighth overall units 40 undergo a similar functioning in that further subtotals are stored in the following additional units 30: 8, 16, 24, etc. through 56.
  • The further subtotal stored in unit 30/block 8 is added to the further subtotal stored in unit 30/block 0, a first further subtotal being stored in unit 30/block 0. The further subtotal stored in unit 30/block 16 is added to the first further subtotal stored in unit 30/block 0, a second further subtotal being stored in unit 30/block 0. The further subtotal stored in unit 30/block 24 is added to the second further subtotal stored in unit 30/block 0, a third further subtotal being stored in unit 30/block 0. Simultaneously, the further subtotals stored in blocks 32, 40, 48 and 56 are added together in a similar fashion, a fourth further subtotal being stored in unit 30/block 32. The fourth further subtotal stored in unit 30/block 32 is added to the third further subtotal stored in unit 30/block 0 via bi-directional drivers 44 and 42, a final total being stored in unit 30/block 0. The final total represents the sum of all the elements constituting the 256-element vector stored in a vector register. The final total is available for use by IPU 54, when needed.
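The combining order described in the three preceding paragraphs can be modelled on a plain list of 64 intermediate summation elements, as in the sketch below: odd blocks into even blocks, then blocks 2, 4 and 6 into block 0 of each overall unit, then blocks 8, 16 and 24 into block 0 and blocks 40, 48 and 56 into block 32, and finally block 32 into block 0 across the bi-directional drivers. The function name and data types are assumptions for illustration.

```python
# Sketch of the inter-element combine; the order follows the description above.
def combine(blocks):
    """blocks: 64 intermediate summation elements; the final total ends up in block 0."""
    # Stage 1: within every overall unit, each odd block is added into the even block below it.
    for even in range(0, 64, 2):
        blocks[even] += blocks[even + 1]
    # Stage 2: within every overall unit of 8 blocks, blocks 2, 4 and 6 are added into block 0.
    for base in range(0, 64, 8):
        for offset in (2, 4, 6):
            blocks[base] += blocks[base + offset]
    # Stage 3: blocks 8, 16, 24 into block 0; simultaneously blocks 40, 48, 56 into block 32.
    for offset in (8, 16, 24):
        blocks[0] += blocks[offset]
        blocks[32] += blocks[32 + offset]
    # Final step: block 32 is added into block 0 via the bi-directional drivers 44 and 42.
    blocks[0] += blocks[32]
    return blocks[0]

vector = list(range(256))
intermediate = [sum(vector[i:i + 4]) for i in range(0, 256, 4)]
print(combine(intermediate))   # 32640
print(sum(range(256)))         # 32640, so the combine reproduces the full vector sum
```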
  • The functional operation of the element processor 20 shown in FIG. 3 may be subdivided into four cycles of operation: a read local storage and shift select cycle, alternatively known as a first cycle; a pre-normalize shift cycle, known as a second cycle; an ALU operation cycle, known as a third cycle; and a post-normalize shift cycle, known as a fourth cycle.
  • Assume that the PIA 50 has directed each of the processors 20 to add the elements of their smaller registers together and store the results of the summation operation in result register 20h. PIA 50 instructs each of the processors 20 to retrieve their respective four elements from their corresponding smaller registers 12a (from local storage) associated with vector registers 12. Directing attention to the functional operation of element processor 0, elements 0 through 3 are received from storage 52 by receiver 7 via bus 11a and stored in local storage 12a via ASM 13. The local storage 12a is synonymous with the vector register 12a shown in FIG. 2 which stores elements 0 through 3. Assume further that the elements 0 through 3 represent floating point element operands.
  • When the PIA 50 directs element processor 0 (20) to add elements 0 through 3 together and to store the sum in the result register 20h, then, on the first cycle, the operands of the first two elements (of the four element vector) are read from the local storage 12a and are temporarily stored in the flush register 20c and the shift register 20b via the input data assembler 20a. However, at the same time, the exponents of the respective elements enter an exponent control path (not shown) where the difference in magnitude of the exponents is calculated. Therefore, the element having the smaller exponent is gated to the shift select register 20b whereas the element having the greater exponent is gated to the flush select register 20c. The flush and shift select registers 20c and 20b are latched by a latch clock at the end of the first cycle.
  • At the beginning of the second cycle, a shift operation is started. The element having the greater exponent, stored in the flush select register 20c, is gated into one input of the arithmetic logic unit (ALU) 20g. Shift control information is passed from the exponent control path (not shown) to the pre-shifter 20f wherein the element having the smaller exponent, stored in the shift select register 20b, is right-shifted by the pre-shifter 20f to align said element with the element having the greater exponent, which is currently being gated into the one input of the ALU 20g. Concurrently, the ALU 20g is selecting the appropriate inputs from the trues/complement gates 20d and 20e for receiving the elements from the flush and shift select registers 20c and 20b via the trues/complement gates 20d and 20e, respectively.
  • The third cycle, in the operation of the element processor 20 of FIG. 3, is dedicated to the functional operation of the arithmetic logic unit (ALU) 20g. The ALU is an 8-byte high-speed carry look-ahead adder, designed with 1's complement arithmetic and with end-around carry and recomplementation. The ALU performs an addition operation, wherein the bits of four respective elements, in the example, elements 0 through 3 stored in the smaller registers 12a of element processor 0, associated with vector register VR0, are added together. The results of the addition operation are ultimately stored in the local storage 12a. However, prior to this step, a post-normalization step must take place during the fourth cycle.
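Because the description characterizes the ALU as a 1's complement adder with end-around carry, the short sketch below shows how such an addition behaves on fixed-width words. The 16-bit width and the helper names are assumptions chosen for brevity; the patent's ALU is eight bytes wide.

```python
# Sketch of 1's complement addition with end-around carry (assumed 16-bit width).
WIDTH = 16
MASK = (1 << WIDTH) - 1

def ones_complement(value):
    """Negate a value in 1's complement by inverting every bit."""
    return value ^ MASK

def ones_complement_add(a, b):
    """Add two 1's complement words; a carry out of the top bit wraps around to bit 0."""
    total = a + b
    if total > MASK:                  # carry generated out of the most significant bit
        total = (total & MASK) + 1    # end-around carry
    return total

print(ones_complement_add(5, ones_complement(3)))   # 2, i.e. 5 + (-3)
```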
  • When the addition operation is completed by the ALU 20g, a post-normalization step takes place during the fourth cycle. The term "post-normalization", in data processing terms, comprises the steps of detecting leading zero hexadecimal digits in the results produced by the ALU, and left shifting the results in accordance with the number of zero digits detected. The result's exponent must be adjusted by decrementing the exponent by a value of 1 for each digit shifted. Digits of the output of the ALU 20g are examined by the post shifter 20i for their zero state, and the results of the ALU output are left shifted in accordance with the number of zero digits detected. The left shifted results of the ALU output are passed to the result register 20h for temporary storage therein. The exponent control path (not shown) increments or decrements the exponent value of the result element (output from the ALU) so that a correct final exponent value is gated to the result register 20h. Thus, a result element is stored in the result register 20h, the operand of which is left shifted a proper amount in accordance with the number of zero digits detected in the ALU output, the exponent of which is the correct final exponent value. If desired, during the next cycle, following the fourth cycle, the result element is passed to the local storage 12a for storage therein. The functional operation repeats with respect to the remaining two elements to yield a sum of the four elements stored in result register 20h. The sum of the four elements is the intermediate summation element.
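The four cycles just described (operand fetch, pre-normalize alignment, ALU addition, post-normalize) can be sketched for hexadecimal floating-point operands of the kind assumed in the example. The representation below (an unsigned fraction of hex digits with a base-16 exponent) and all names are simplifying assumptions, not the patent's actual data format; fraction overflow and signs are ignored.

```python
# Sketch of one aligned hexadecimal floating-point add, following the four cycles above.
# Assumed format: value = fraction * 16**(exponent - DIGITS), fraction <= DIGITS hex digits.
DIGITS = 6

def hex_float_add(a, b):
    (frac_a, exp_a), (frac_b, exp_b) = a, b
    # Cycle 1: the operand with the greater exponent takes the flush path, the other the
    # shift path; the exponent difference is what drives the pre-shifter.
    if exp_a >= exp_b:
        flush_frac, flush_exp, shift_frac, diff = frac_a, exp_a, frac_b, exp_a - exp_b
    else:
        flush_frac, flush_exp, shift_frac, diff = frac_b, exp_b, frac_a, exp_b - exp_a
    # Cycle 2: pre-normalize shift - right-shift the smaller operand by the digit difference.
    shift_frac >>= 4 * diff
    # Cycle 3: ALU addition of the aligned fractions.
    result_frac, result_exp = flush_frac + shift_frac, flush_exp
    # Cycle 4: post-normalize - shift out leading zero hex digits, decrementing the
    # exponent by one for each digit shifted.
    while result_frac and (result_frac >> (4 * (DIGITS - 1))) == 0:
        result_frac <<= 4
        result_exp -= 1
    return result_frac, result_exp

frac, exp = hex_float_add((0x100000, -5), (0x200000, -6))
print(hex(frac), exp)   # 0x120000 -5
```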
  • Due to the technological advance associated with integrated circuit packaging, the cost to produce integrated circuit chips is low. Therefore, the cost involved in the production of large quantities of integrated circuit chips is relatively low. As a result, it becomes economically justifiable to optimize the performance of a computer system, or the vector processor portion of the computer system, by utilizing an increased number of integrated circuit chips. The present invention utilizes this principle to optimize the performance of the vector processor portion of a computer system, and in particular, to optimize said performance relating to the summation of the elements of a single vector stored in a vector register.

Claims (4)

  1. A vector processor, having a plurality of vector registers (VR0 - VR15) wherein each vector register is subdivided into a plurality of smaller registers (12a), each of said smaller registers has a separate output, each vector register stores a vector, and each of said smaller registers stores a plurality (M) of elements of said vector, characterized by

    a plurality of element processor means (20), each of said element processor means being dedicated and connected to the outputs of a different set of said smaller registers for processing the elements within the dedicated set, each of said sets comprising one smaller register per vector register, and wherein all the smaller registers in each set store corresponding vector elements of said vectors and

    controlling means (50) for selecting which elements in which smaller registers to process, the plurality of element processor means processing, at least partially in parallel, the selected elements of different sets of said smaller registers.
  2. A vector processor as set forth in claim 1 further characterized by:

    means for processing in a first one of said element processor means (20) a first result stored in said processor means with a second result stored in a second one of said element processor means to yield a first intermediate result, and storing said first intermediate result in said first element processor means,

    means for processing in a third one of said element processor means a third result stored in said processor means with a fourth result stored in a fourth one of said element processor means to yield a second intermediate result, and storing said second intermediate result in said third element processor means and

    means for processing in said third element processor means the second intermediate result stored in said third element processor means with the first intermediate result stored in said first element processor means to yield a fifth result, and storing said fifth result in said first element processor means.
  3. A vector processor as set forth in claim 1 wherein each of said smaller registers (12a) stores elements of only one vector.
  4. A vector processor as set forth in claim 1 wherein each set of smaller registers has a separate bus (11, 12a) connecting the outputs of said smaller registers in said set to the dedicated element processor means (20).
EP84110262A 1983-10-03 1984-08-29 Inter-element processor in a high performance parallel vector processor Expired - Lifetime EP0136538B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US53831883A 1983-10-03 1983-10-03
US538318 1983-10-03

Publications (3)

Publication Number Publication Date
EP0136538A2 EP0136538A2 (en) 1985-04-10
EP0136538A3 EP0136538A3 (en) 1988-01-27
EP0136538B1 true EP0136538B1 (en) 1992-06-24

Family

ID=24146415

Family Applications (1)

Application Number Title Priority Date Filing Date
EP84110262A Expired - Lifetime EP0136538B1 (en) 1983-10-03 1984-08-29 Inter-element processor in a high performance parallel vector processor

Country Status (4)

Country Link
EP (1) EP0136538B1 (en)
JP (1) JPS6083176A (en)
CA (1) CA1208790A (en)
DE (1) DE3485786T2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8683182B2 (en) 1995-08-16 2014-03-25 Microunity Systems Engineering, Inc. System and apparatus for group floating-point inflate and deflate operations

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0167061A3 (en) * 1984-06-20 1988-07-27 Convex Computer Corporation Vector processing computer
CA1233260A (en) 1985-03-13 1988-02-23 Chuck H. Ngai High performance parallel vector processor having a modified vector register/element processor configuration

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0135721A2 (en) * 1983-09-09 1985-04-03 International Business Machines Corporation High performance parallel vector processor

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR860001434B1 (en) * 1980-11-21 1986-09-24 후지쑤 가부시끼가이샤 Bank interleaved vector processor having a fixed relationship between start timing signals
JPS58149556A (en) * 1982-02-27 1983-09-05 Fujitsu Ltd Parallel processing device
JPS58149555A (en) * 1982-02-27 1983-09-05 Fujitsu Ltd Parallel processing device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0135721A2 (en) * 1983-09-09 1985-04-03 International Business Machines Corporation High performance parallel vector processor

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8683182B2 (en) 1995-08-16 2014-03-25 Microunity Systems Engineering, Inc. System and apparatus for group floating-point inflate and deflate operations
US8769248B2 (en) 1995-08-16 2014-07-01 Microunity Systems Engineering, Inc. System and apparatus for group floating-point inflate and deflate operations

Also Published As

Publication number Publication date
DE3485786D1 (en) 1992-07-30
DE3485786T2 (en) 1993-02-04
CA1208790A (en) 1986-07-29
EP0136538A2 (en) 1985-04-10
EP0136538A3 (en) 1988-01-27
JPS6083176A (en) 1985-05-11

Similar Documents

Publication Publication Date Title
US4888682A (en) Parallel vector processor using multiple dedicated processors and vector registers divided into smaller registers
US5442762A (en) Instructing method and execution system for instructions including plural instruction codes
EP0195245B1 (en) A high performance parallel vector processor having a modified vector register/element processor configuration
EP0135721B1 (en) High performance parallel vector processor
US6061780A (en) Execution unit chaining for single cycle extract instruction having one serial shift left and one serial shift right execution units
US5287532A (en) Processor elements having multi-byte structure shift register for shifting data either byte wise or bit wise with single-bit output formed at bit positions thereof spaced by one byte
EP0053457B1 (en) Data processing apparatus
EP0468820B1 (en) Data processing system for single-precision and double-precision data
US4876644A (en) Parallel pipelined processor
EP0755015B1 (en) Combining data values
US3943494A (en) Distributed execution processor
EP0491693B1 (en) Improved cpu pipeline having register file bypass on update/access address compare
US6446193B1 (en) Method and apparatus for single cycle processing of data associated with separate accumulators in a dual multiply-accumulate architecture
EP0126247B1 (en) Computer system
EP0201833B1 (en) Instruction processor
US4334284A (en) Multiplier decoding using parallel MQ register
US3997771A (en) Apparatus and method for performing an arithmetic operation and multibit shift
US6026486A (en) General purpose processor having a variable bitwidth
US4631672A (en) Arithmetic control apparatus for a pipeline processing system
US6092183A (en) Data processor for processing a complex instruction by dividing it into executing units
EP0136538B1 (en) Inter-element processor in a high performance parallel vector processor
US5754875A (en) Computer system with double width data bus
US4837730A (en) Linking scalar results directly to scalar operation inputs on a bidirectional databus in a computer which superpositions vector and scalar operations
Senzig et al. Computer organization for array processing
US3260840A (en) Variable mode arithmetic circuits with carry select

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 19841123

AK Designated contracting states

Designated state(s): DE FR GB IT

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): DE FR GB IT

17Q First examination report despatched

Effective date: 19900509

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE FR GB IT

REF Corresponds to:

Ref document number: 3485786

Country of ref document: DE

Date of ref document: 19920730

ET Fr: translation filed
ITF It: translation for a ep patent filed

Owner name: IBM - DR. ING. FABRIZIO

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed
PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 19950726

Year of fee payment: 12

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 19950807

Year of fee payment: 12

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 19950821

Year of fee payment: 12

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Effective date: 19960829

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 19960829

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Effective date: 19970430

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Effective date: 19970501

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST