WO2000070482A1 - Interrupt and exception handling for multi-streaming digital processors - Google Patents

Interrupt and exception handling for multi-streaming digital processors

Info

Publication number
WO2000070482A1
Authority
WO
WIPO (PCT)
Prior art keywords
streams
interrupt
stream
interrupts
mapping
Prior art date
Application number
PCT/US2000/006621
Other languages
English (en)
Inventor
Mario D. Nemirovsky
Adolfo M. Nemirovsky
Narendra Sankar
Original Assignee
Clearwater Networks, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/312,302 external-priority patent/US7020879B1/en
Application filed by Clearwater Networks, Inc. filed Critical Clearwater Networks, Inc.
Priority to AU38820/00A priority Critical patent/AU3882000A/en
Publication of WO2000070482A1 publication Critical patent/WO2000070482A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4812 Task transfer initiation or dispatching by interrupt, e.g. masked
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F 9/38 Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F 9/3836 Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
    • G06F 9/3851 Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution from multiple instruction streams, e.g. multistreaming
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F 9/38 Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F 9/3861 Recovery, e.g. branch miss-prediction, exception handling

Definitions

  • the present invention is in the field of digital processors, and pertains more particularly to such devices capable of executing multiple processing streams concurrently, which are termed multi-streaming processors in the art.
  • Multi-streaming processors capable of processing multiple threads are known in the art, and have been the subject of considerable research and development.
  • the present invention takes notice of the prior work in this field, and builds upon that work, bringing new and non-obvious improvements in apparatus and methods to the art.
  • the inventors have provided with this patent application an Information Disclosure Statement listing a number of published papers in the technical field of multi-streaming processors, which together provide additional background and context for the several aspects of the present invention disclosed herein.
  • this specification regards a stream in reference to a processing system as a hardware capability of the processor for supporting and processing an instruction thread.
  • a thread is the actual software running within a stream.
  • a multi-streaming processor implemented as a CPU for operating a desktop computer may simultaneously process threads from two or more applications, such as a word processing program and an object-oriented drawing program.
  • a multi-streaming-capable processor may operate a machine without regular human direction, such as a router in a packet switched network.
  • a router for example, there may be one or more threads for processing and forwarding data packets on the network, another for quality-of-service (QoS) negotiation with other routers and servers connected to the network and another for maintaining routing tables and the like.
  • a multi-streaming processor operating a single thread runs as a single-stream processor with unused streams idle.
  • a stream is considered an active stream at all times the stream supports a thread, and otherwise inactive.
  • superscalar processors are also known in the art. This term refers to processors that have multiples of one or more types of functional units, and an ability to issue concurrent instructions to multiple functional units.
  • Most central processing units (CPUs) built today have more than a single functional unit of each type, and are thus superscalar processors by this definition. Some have many such units, including, for example, multiple floating point units, integer units, logic units, load/store units and so forth.
  • Multi-streaming superscalar processors are known in the art as well.
  • the inventors have determined that there is a neglected field in the architecture for all types of multi-streaming processors, including, but not limited to the types described above:
  • the neglected field is that of communications between concurrent streams and types of control that one active stream may assert on another stream, whether active or not, so that the activity of multiple concurrent threads may be coordinated, and so that activities such as access to functional units may be dynamically shared to meet diverse needs in processing.
  • a particular area of neglect is in mapping and handling of external and internal interrupts in the presence of multiple streams and also exception handling.
  • a multi-streaming processor system comprising a plurality of streams for executing one or more instruction threads; a set of functional resources for processing instructions from streams; and interrupt logic.
  • interrupt logic Through the interrupt logic interrupts or exceptions are detected and mapped to one or more specific streams.
  • One interrupt or exception may be mapped to two or more streams, or two or more interrupts or exceptions may be mapped to one stream.
  • In some embodiments mapping of interrupts to streams is static and determined at processor design. In other embodiments mapping of interrupts and exceptions is programmable, and in some cases uses a storage file or table, wherein the interrupt logic refers to the data store for mapping data to relate received interrupts or exceptions to streams. In other embodiments mapping is conditional and dynamic, the interrupt logic executing an algorithm sensitive to variables to determine the mapping. In still other embodiments a combination of any of the above may be used. Interrupts may be external interrupts generated by sources external to the processor or software interrupts generated by active streams. In some embodiments there is a mask for enabling/disabling recognition of mapped interrupts or exceptions.
  • the one or more streams are interrupted by the interrupt logic. Further, after every interrupted stream acknowledges the interrupt, it is vectored to a service routine by the interrupt logic. In some embodiments two or more streams are interrupted by one interrupt or exception, and the interrupt logic delays vectoring any stream to a service routine until all interrupted streams acknowledge the interrupt. In some embodiments two streams acknowledging the same interrupt are vectored to different service routines by the interrupt logic.
  • a method for processing interrupts in a multi- stream processor comprising steps of (a) detecting an interrupt or exception and passing the detected interrupt or exception to interrupt logic; and (b) mapping the interrupt or exception to one or more streams of the multi- stream processor.
  • the interrupt or exception may be mapped to two or more streams.
  • two or more interrupts or exceptions are detected, and in step (b), the two or more interrupts or exceptions are mapped to one stream.
  • This mapping of interrupts to streams may be static and determined at processor design, programmable, as referring to a map in data storage, or conditional and dynamic, the interrupt logic executing an algorithm sensitive to variables to determine the mapping.
  • The interrupts may be external interrupts generated by sources external to the processor or software interrupts generated by active streams.
  • the one or more streams are interrupted by the interrupt logic. After acknowledgement the interrupted stream or streams are vectored to a service routine. In some cases, two or more streams are interrupted by one interrupt or exception, and the interrupt logic delays vectoring any stream to a service routine until all interrupted streams acknowledge the interrupt. Two streams acknowledging the same interrupt may be vectored to different service routines by the interrupt logic.
  • computing systems having multi-stream processors having a plurality of streams for executing one or more instruction threads; and interrupt handling logic, the systems characterized in that through the interrupt logic specific interrupts or exceptions are detected and mapped to one or more specific streams. Again one interrupt or exception may be mapped to two or more streams, two or more interrupts or exceptions may be mapped to one stream.
  • the mapping may be static, programmable or conditional and dynamic, the interrupt logic executing an algorithm sensitive to variables to determine the mapping.
  • Fig. 1A is a generalized diagram of a multi- streaming processor according to an embodiment of the present invention.
  • Fig. 1B is an exemplary bitmap illustrating control authorization data according to an embodiment of the present invention.
  • Fig. 1C illustrates resource assignment for streams according to an embodiment of the present invention.
  • Fig. 1D illustrates priorities for a stream according to an embodiment of the present invention.
  • Fig. 1E illustrates control indicators for one stream according to an embodiment of the present invention.
  • Fig. 2A is a flow chart illustrating one method whereby a thread in one stream forks a thread in another stream and later joins it.
  • Fig. 2B is a flow chart illustrating another method whereby a thread in one stream forks a thread in another stream and later joins it.
  • Fig. 3 is a flow chart illustrating a method whereby a thread in one stream forks a thread in another stream in a processor containing a special register transfer unit.
  • Fig. 4 is an architecture diagram illustrating interrupt mapping and processing in an embodiment of the present invention.
  • Multi-streaming processors as described in priority document S/N 09/216,017, have physical stream resources for concurrently executing two or more instruction threads, and multiple register files as well.
  • the present invention applies to all such processors and also to processors that may accomplish multi-streaming in other ways.
  • a set of editable characteristics is kept for active streams, and these characteristics regulate the forms of control that may be exercised by other active streams over that particular stream.
  • These editable characteristics may take any one of several forms in different embodiments, by convenience or for special reasons.
  • the editable characteristics are implemented in silicon on the processor chip, as this arrangement allows very quick access in operation.
  • the invention is not thus limited, and such characteristics may be stored and editable in other ways.
  • the editable characteristics may also be mapped as stream-specific or context- specific in different situations and embodiments.
  • A bit-map is maintained wherein individual bits or binary values of bit combinations are associated with individual streams and assigned particular meaning relative to inter-stream communication and control, indicating such things as supervisory hierarchy among streams at any particular time, access of each stream to processor resources, and state control for Master Stream, Enable and Disable modes, and Sleep modes, which are described in further detail below.
  • some supervisory control bits regulate the forms of control that any other active stream may exercise over each individual active stream.
  • Active streams may, within carefully defined limits, set and reset their own control bits, and other active streams with appropriate permission may also do so.
  • A master thread may, at any point in time, run in a stream, which is then designated a Master Stream while running the Master Thread. A Master Stream has complete control over slave streams, and may at any time override the control bits of the slave streams. If there is more than one Master stream running, each may have different designated slave streams.
  • active streams may act as supervisors of other active streams, temporarily (typically) controlling their execution and communicating with them. Further, a Master Stream has, and supervisor streams may have, control over what processing resources active slave streams may use, either directly or by modifying a stream's priorities.
  • Fig. 1A is a generalized diagram of a multi-streaming processor according to an embodiment of the present invention, showing an instruction cache 101 providing instructions from multiple threads to four streams 103, labeled 0-3, from which an instruction scheduler dispatches instructions from active streams to functional resources 107.
  • A set of multiple register files 109, in this case four, though there may be more, is shown for use in processing, such as for storing thread contexts to be associated with active streams during processing.
  • Data flows to and from register files and a data cache 111, and the functional resources may include a Register Transfer Unit (RTU) as taught in priority document S/N 09/240,012 incorporated herein by reference.
  • a unique inter-stream control bit-map 115 stores individual bits, and in some cases binary values of bit combinations, associated with individual streams and assigned particular meaning relative to inter-stream communication and control, as introduced above.
  • a shared system bus 113 connects the instruction and data caches. The diagram shown is exemplary and general, and the skilled artisan will recognize there are a number of variations which may be made. The importance for the present purpose is in the multiplicity of streams adapted to support a multiplicity of threads simultaneously.
  • Inter-stream control bitmap 115 is a reference repository of control settings defining and configuring Inter-stream control.
  • In this reference, single bits in some instances, and binary values represented by two or more bits in other instances, define such things as priorities of an active stream for shared system resources, fixed resource assignment to particular streams, and control hierarchy among active streams. Specific control characteristics in one exemplary embodiment are described below.
  • an active stream is enabled to set and edit control reference data unique to that stream.
  • one stream may alter the control reference data for other streams.
  • each particular stream may control which other streams may edit which control data for the particular stream.
  • Fig. 1B is a portion of bit map 115 of Fig. 1A showing bits set to indicate state of authorization granted by one stream, in this case stream 0, for other streams to alter control bits associated uniquely with stream 0, or to exercise specific control functions relative to stream 0.
  • a similar bit map in this embodiment exists for each of streams 1, 2, and 3, but one instance is sufficient for this description.
  • In this matrix there is a row for each of streams 1, 2, and 3, and columns for control definition. Again, these bits may be set by active stream 0, and the motivation for editing the bit map will be related to the function of stream 0, which will be defined by the nature of the thread running in stream 0.
  • the bit map is a physical resource associated with a stream in a multi-streaming processor, and exists to enable several forms and degrees of inter-stream control and cooperation.
  • the first column from the left in Fig. 1B is labeled supervisory, and indicates supervisory authorization.
  • Logical 1 in the row for streams 1 and 2 indicates that stream 0 grants supervisory access to streams 1 and 2, but not to stream 3.
  • Supervisory access means that these other streams may edit the control data for the instant stream.
  • the inter-stream control data for stream 0 may be edited by streams 0 (itself) and also by streams 1 and 2, but not by stream 3. Because each active stream may edit its own control data, the authorization for streams 1 and 2 may be rescinded at any time, and re-authorized at any time, by active stream 0.
  • The next bit column is labeled enable. Stream 0 uses a bit in this column to grant permission for another stream to enable stream 0 if stream 0 is disabled. In the instant case there are no logical 1s in this column, so none of streams 1, 2 or 3 may enable stream 0. There is a distinct difference between the authorization for supervisory editing access described above relative to the first column of Fig. 1B, and authorization for specific functions such as enable or disable.
  • the next bit column is labeled priorities, and a logical 1 in this column for a stream indicates that stream 0 grants another stream permission to set priorities for stream 0. In the instant case stream 0 does not allow any other stream to set its priorities. Priorities are typically set in embodiments of the invention to indicate access to processor resources.
  • the next bit column is labeled interrupts, and means that another stream may interrupt stream 0. In the instant case stream 2 is granted the interrupt privilege. It should be clear to the skilled artisan, given the teachings of this specification, that there are a variety of revisions that might be made in the matrix shown, and the meaning of specific columns. It should also be clear that the matrix illustration is exemplary, and the bits described could as well be individual bits in a two-byte register, as long as the convention is kept as to which bits relate to which streams and to which control functions and resources.
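  • For illustration only, the following minimal C sketch (not taken from the disclosure; all identifiers and the four-stream layout are assumptions) models the per-stream authorization bits of Fig. 1B as one mask per control function and shows how a request from another stream might be checked against them.
    /* Minimal sketch (assumption): the per-stream authorization bits of Fig. 1B.
       One 4-bit mask per control function; bit n set means stream n is authorized. */
    #include <stdio.h>
    #include <stdint.h>

    enum { NUM_STREAMS = 4 };

    typedef struct {
        uint8_t supervisory; /* who may edit this stream's control data      */
        uint8_t enable;      /* who may enable this stream if it is disabled */
        uint8_t priorities;  /* who may set this stream's priorities         */
        uint8_t interrupts;  /* who may interrupt this stream                */
    } auth_bits_t;

    static int allowed(uint8_t mask, int requester)
    {
        return (mask >> requester) & 1u;
    }

    int main(void)
    {
        /* Stream 0's settings from the example: streams 1 and 2 get supervisory
           access, nobody may enable it or set its priorities, only stream 2 may
           interrupt it. */
        auth_bits_t stream0 = {
            .supervisory = (1u << 1) | (1u << 2),
            .enable      = 0,
            .priorities  = 0,
            .interrupts  = (1u << 2),
        };
        for (int s = 1; s < NUM_STREAMS; ++s)
            printf("stream %d: supervise=%d enable=%d set-prio=%d interrupt=%d\n",
                   s, allowed(stream0.supervisory, s), allowed(stream0.enable, s),
                   allowed(stream0.priorities, s), allowed(stream0.interrupts, s));
        return 0;
    }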
  • inter-stream control is described for multi-streaming, super-scalar processors, meaning processors that have multiple streams and also multiple functional resources.
  • processors may have, for example, several integer processing units, several floating point processing units, several branch units, and so on.
  • the inter-stream control configuration indicated by bitmap 115 may be set in embodiments of the invention to reserve certain resources to certain streams, and restrict those same resources from use by other streams.
  • Fig. 1C is a bit row indicating instant resource assignment for stream 0. Again, similar resource assignment configuration may exist for all other streams as well, but one should be sufficient for illustration. In this case there are 4 integer units, 4 floating point units, and 4 branch units.
  • The instant setting shows that stream 0 has reserved access to integer units 1, 2, and 3, and to branch unit 1. Conversely, this setting means stream 0 may not access integer unit 4, any floating point unit, or branch units 2, 3, or 4. Because stream 0 as an active stream may set its own configuration, including granting and denying control bit setting to other streams, stream 0 (or any active stream) may reserve, in specific instances, specific resources.
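  • A similarly hedged C sketch of the Fig. 1C resource-assignment row, with one bit per functional unit; the struct and function names are illustrative assumptions.
    /* Minimal sketch (assumption): the Fig. 1C resource-assignment row for one
       stream as three 4-bit masks, one bit per functional unit (units 1..4). */
    #include <stdio.h>
    #include <stdint.h>

    typedef struct {
        uint8_t int_units;    /* integer units 1..4  -> bits 0..3 */
        uint8_t fp_units;     /* floating point 1..4 -> bits 0..3 */
        uint8_t branch_units; /* branch units 1..4   -> bits 0..3 */
    } resource_row_t;

    static int may_use(uint8_t mask, int unit /* 1-based */)
    {
        return (mask >> (unit - 1)) & 1u;
    }

    int main(void)
    {
        /* Stream 0 in the example: integer units 1-3 and branch unit 1 reserved. */
        resource_row_t stream0 = { .int_units = 0x7, .fp_units = 0x0, .branch_units = 0x1 };
        printf("integer 4: %d, fp 2: %d, branch 1: %d\n",
               may_use(stream0.int_units, 4), may_use(stream0.fp_units, 2),
               may_use(stream0.branch_units, 1));
        return 0;
    }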
  • Fig. 1D illustrates a portion of bitmap 115 of Fig. 1A devoted to priority settings for stream 0.
  • priorities may vary from zero to seven, so three bits are needed for each priority level setting, with the binary value of the bits indicating priority level.
  • There are in one embodiment three different sorts of priorities, which may be termed execution priority, interrupt priority and resource priority. All three types of priority are illustrated in some form in Fig. 1D, although there may be more or less granularity than illustrated.
  • In Fig. 1D the top row indicates execution priority. This setting determines for a stream what threads may execute in that stream. That is, a thread may have inherently a certain priority or be assigned a priority, and the execution priority as shown in Fig. 1D may be edited by a stream or for a stream by a supervisor active stream. Only a thread with a priority higher than the stream's execution priority may execute in that stream.
  • the concept of a thread having a priority may be implemented in different ways. In some preferred embodiments a thread has a priority by virtue of a thread context which has an assigned and alterable priority.
  • When a context is loaded to a register file, that context may be assigned a priority number of predesigned or determined granularity, and the thread that is (or will be) called to a stream when the context is made active and associated with a stream may then be said to have the priority of the context stored in the register file.
  • contexts may be stored in memory other than in a register file, and be retrieved at a later time to a register file for initial or further processing.
  • the stored context may carry the priority level of the context as well.
  • The second row from the top in Fig. 1D indicates the interrupt priority for stream 0.
  • the interrupt priority shown is three, which means that only an interrupt with a priority level of three or higher may interrupt stream 0. Again, the stream itself when active with a thread or an active supervisor stream may edit the interrupt priority level.
  • For resource priorities, stream 0 has a seven (highest) priority for integer units, a priority level of four for floating point units, and a priority level of three for branch units.
  • These resource priorities are exemplary, and there may well be, in alternative embodiments, priorities maintained for other processor resources.
  • In some embodiments temporarily fixed resource assignments may be used exclusively, in others priorities may be used exclusively, and in still others a mixture of the two.
  • Resource priority means that in a case of contention for a resource, the active stream with the highest priority will claim the resource.
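  • The following illustrative C sketch (identifiers assumed, and the execution-priority value for stream 0 invented for the example) encodes the three kinds of 0-7 priorities of Fig. 1D and applies the stated rule that, under contention, the stream with the highest resource priority claims the functional unit.
    /* Minimal sketch (assumption): the three kinds of 0..7 priorities of Fig. 1D
       and the rule that the highest resource priority wins contention. */
    #include <stdio.h>
    #include <stdint.h>

    enum { NUM_STREAMS = 4 };

    typedef struct {
        uint8_t execution; /* only threads above this level may run here */
        uint8_t interrupt; /* only interrupts at/above this level get in */
        uint8_t integer_u; /* resource priorities per functional class   */
        uint8_t fp_u;
        uint8_t branch_u;
    } stream_prio_t;

    /* Resolve contention for an integer unit: highest integer-unit priority wins. */
    static int arbitrate_integer(const stream_prio_t p[], const int contenders[], int n)
    {
        int winner = contenders[0];
        for (int i = 1; i < n; ++i)
            if (p[contenders[i]].integer_u > p[winner].integer_u)
                winner = contenders[i];
        return winner;
    }

    int main(void)
    {
        stream_prio_t prio[NUM_STREAMS] = {
            /* stream 0 per the example: interrupt 3, integer 7, fp 4, branch 3;
               the execution level of 5 is an invented value for illustration */
            [0] = { .execution = 5, .interrupt = 3, .integer_u = 7, .fp_u = 4, .branch_u = 3 },
            [1] = { .execution = 2, .interrupt = 1, .integer_u = 6, .fp_u = 2, .branch_u = 2 },
        };
        int contenders[] = { 0, 1 };
        printf("integer unit granted to stream %d\n", arbitrate_integer(prio, contenders, 2));
        return 0;
    }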
  • Fig. 1E indicates such control bits for stream 0.
  • the control bits for stream 0 in Fig. 1E indicate that stream 0 is, at the instant in time, running a Master thread, and is enabled, but is not in sleep mode. These bits are indicative, and are primarily for reference for other active streams in operation. For example, if one active stream disables another, in the process the acting stream sets the enable/disable bit for the subservient stream. If an active stream puts itself in sleep mode, it sets its sleep bit before going to the sleep mode, so active streams may know, by checking the control bits, that that stream is in sleep mode.
  • a Master stream is a Master stream by virtue of running a Master thread, and an active Master stream has complete access and control over other streams, which are slave streams to the Master. It is not necessary that any stream grant the Master stream permission to edit control configuration.
  • a Master stream may have a variety of duties, one of which, in preferred embodiments, is initial setup of a multi-streaming processor.
  • a Master stream On startup and reset in a system utilizing a processor according to an embodiment of this invention, a Master stream will typically be called at some point in the boot process, and will act for example to set initial priorities for streams, to set supervisory bits, and to start specific threads in specific streams. These duties can and will vary from system to system, as, in some cases some default settings may be made by executing specialized BIOS code, and a Master thread may be called for further setup duties, and so on.
  • a Master thread need not typically remain executing in a stream of the processor.
  • the Master stream having accomplished its ends, may set another thread to start in the stream it occupies, then retire, or may simply retire, leaving an inactive stream available for use by another active stream to execute such as an interrupt service routine, a utility function of another sort, and the like.
  • a Master thread may be recalled after retiring for a number of reasons. For example, a contention for resources may require the Master for resolution, or an interrupt or exception may require the Master stream for resolution. It will also be apparent to the skilled artisan that the Master stream in some systems may be running the Operating System or a portion thereof, or a routine loaded and active with a system BIOS, and the like.
  • all inter-stream control functions may be disabled, allowing the processor to run just as a processor without the control capabilities taught herein.
  • a processor according to the invention may be hard-wired to make one stream always the Master stream, and no other.
  • By hard-wired is meant that certain functionality is preset by the hardware resources implemented in silicon devices and their connections. Specific assignments of other threads to specific streams may also be set. In such cases, specific resource priorities and/or assignments may also be set, or any other of the inter-stream functionalities taught herein.
  • Such pre-setting will be highly desirable for highly dedicated system applications, such as, for example, network routers and the like.
  • There are a variety of ways in which control data may be represented, stored, and accessed.
  • the illustrations provided herein are exemplary.
  • the control data map is implemented in silicon devices directly on the processor chip. This arrangement is preferred because, among other things, access to the control data is fast.
  • a control bitmap may be in any accessible memory device in a system, such as in an otherwise unused portion of RAM, or even on such as a flash card memory.
  • a Master thread may pursue such ends as initial setup and loading of threads into streams, and may return to resolve conflicts and exceptions.
  • the overall system function is to execute one or more applications.
  • a general-purpose computer there may be many applications, and the uses of the computer are similarly many.
  • One may browse the Internet, send and receive e-mails, make drawings, process photographs, compose word documents, and much more.
  • each application is dedicated to particular functions, and application threads, as applications are called, occupy one or more of the streams of the processor.
  • In more dedicated systems, such as, for example, a data router in a packet data network, there are relatively fewer applications, and the functions of the machine are typically ordered in some fashion other than user-initiated.
  • the functions may be called according to characteristics of data received to be processed and forwarded.
  • software is specifically enhanced to take maximum advantage of the new and unique control functions of a multi-streaming processor according to embodiments of the invention, although this is not required in all embodiments.
  • some software executing on a processor may be enhanced according to embodiments of this invention, and other software may not.
  • any one active stream may manipulate its own resource allocation and priority according to its needs, which will relate closely to the nature of the thread running in the stream, and the nature of other threads available to run or actually running in other streams.
  • an active stream may start, enable, disable, interrupt, branch and join other streams with prior knowledge of possible repercussions, because each active stream may check the control data settings for other streams.
  • The enormous advantage provided is an ability to maximize real-time efficiency rather than simply use of processor resources. That is, system goals may now be addressed. Processors have historically been designed to maximize use of processor resources, in an often incorrect assumption that doing so necessarily addresses system goals as well. It is easy to understand, however, that a multi-streaming processor may be fully engaged efficiently accomplishing low-priority tasks, while higher priority tasks receive inadequate attention, and may therefore fail to adequately address system goals.
  • One active stream running a thread (application) that may need or be enhanced by another thread running in parallel may call the subservient thread and start it in an available stream.
  • An example is a WEB browser accessing a WEB page having an MPEG video clip.
  • the browser running in one stream of a processor according to an embodiment of the present invention may call an MPEG player to run in an available stream. The state of the data control bits and values will guide the browser stream in selecting a stream for the MPEG player.
  • The browser may not, for example, co-opt an active stream running a thread having a higher priority. It may, however, co-opt a stream that has set its control data bits to indicate that it may at any time be interrupted.
  • Operation in this embodiment can be illustrated by following a sequence of operations to accomplish a typical task, such as forking a new thread.
  • Threads can fork other threads to run in different streams.
  • an operating system may wish to fork an application program, or an application may need to fork a sub-task or thread.
  • a thread encountering an exception may fork a process to handle it.
  • a preferred method in an embodiment of the invention for fork and join operations is shown in Fig. 2A. Assume that the thread in stream 1 of Fig. 1A is forking a new thread. To do so, stream 1 as the supervisor thread requests an idle stream to use in step 201 and waits until such a stream is available in step 202. In most cases there will be no wait.
  • Stream 1 receives the number of an available stream, for example stream 2 in step 203.
  • There may be a wait limit after which, with no stream becoming available, alternate action is taken.
  • active stream 1 loads the assigned stream's program counter with the address of the first instruction in the new thread and loads other components of the new thread's context into appropriate elements of processor resources in step 204 and sets the priority map for stream 2 in step 205.
  • Stream 1 may also set supervisory control bits 107 for stream 2 in step 206. (Alternatively, the new thread, running in stream 2, may set the bits after step 208.)
  • Stream 2 must have its supervisory control bits set to allow the supervisor thread to act as its supervisor and the supervisory control bits of the supervisor must be set to allow the controlled thread to interrupt it.
  • stream 2 may be put in sleep mode, waiting on an internal or external event.
  • the new thread starts running in stream 2 in step 208.
  • In steps 209 and 210 both streams run independently and concurrently until a join is required. In this example, it is assumed that the thread running in stream 1 finishes first.
  • the supervisor thread When the supervisor thread needs to join the forked thread, it checks first to see if the forked thread is still running. If so, it executes an instruction at step 211 that puts itself to sleep, setting the sleep bit in stream control bits 118, and then waits for a join software interrupt from the forked thread.
  • the forked thread sends a join interrupt in step 212 and the supervisor thread receives the interrupt and wakes in step 213. The supervisor completes the join operation in step 214. Finally the forked thread ends in step 215, freeing its stream for use by another thread.
  • Fig. 2B illustrates the companion case wherein the forked stream finishes before the supervisor stream.
  • When the forked stream finishes, it immediately sends the join interrupt (step 216).
  • the interrupt remains on hold until the supervisor stream finishes, then the interrupt is serviced in step 217 and the join is completed.
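  • As a rough model of the fork/join handshake of Figs. 2A and 2B, the sketch below uses POSIX threads to stand in for streams and a flag plus condition variable to stand in for the join software interrupt; it is an analogy under assumed names, not the hardware mechanism. Compile with -pthread.
    /* Minimal sketch (assumption): the fork/join handshake of Figs. 2A/2B modelled
       with POSIX threads; "streams" become host threads and the join software
       interrupt becomes a flag plus condition variable. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  join_intr = PTHREAD_COND_INITIALIZER;
    static int join_pending = 0;              /* the forked thread's join interrupt */

    static void *forked_thread(void *arg)
    {
        (void)arg;
        printf("stream 2: running forked thread (step 208)\n");
        pthread_mutex_lock(&lock);
        join_pending = 1;                     /* step 212/216: send join interrupt */
        pthread_cond_signal(&join_intr);
        pthread_mutex_unlock(&lock);
        printf("stream 2: thread ends, stream freed (step 215)\n");
        return NULL;
    }

    int main(void)
    {
        pthread_t stream2;
        /* steps 201-207: supervisor obtains an idle stream, loads its program
           counter and context, sets priorities and supervisory bits, starts it */
        pthread_create(&stream2, NULL, forked_thread, NULL);

        /* steps 211-213: supervisor puts itself to sleep and waits for the join
           interrupt; if the interrupt is already pending (Fig. 2B) it is
           serviced immediately */
        pthread_mutex_lock(&lock);
        while (!join_pending)
            pthread_cond_wait(&join_intr, &lock);
        pthread_mutex_unlock(&lock);

        printf("stream 1: join interrupt received, completing join (step 214)\n");
        pthread_join(stream2, NULL);
        return 0;
    }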
  • If registers can be loaded and stored in the background, as described in the co-pending priority application filed January 27, 1999, entitled "Register Transfer Unit for Electronic Processor," then the process of forking a new thread for which the context is not already loaded is modified from the steps shown in Fig. 2A, as shown in Fig. 3.
  • step 204 for setting program counter and context is eliminated.
  • In step 301 the supervisor signals the register transfer unit (RTU) to load the context for the new stream.
  • the RTU does the context switch in step 302.
  • the RTU can make the freshly loaded register file active and start the new stream in step 208, again, independently of the supervisor stream.
  • Step 207 of Fig. 2 is thus eliminated.
  • The remaining steps are identical to those of Fig. 2A, assuming the supervisor finishes first; otherwise the process is the same as in Fig. 2B.
  • the register file previously used by stream 2 will be saved.
  • The teachings of the present invention may be applied to many types of processors, including but not limited to single-chip systems, microprocessors, controllers, routers, digital signal processors (DSPs), routing switches and other network devices, and processors designed for other special uses.
  • the teachings of this invention may be practiced in conjunction with processors of any size, from simple one-chip complete systems to complex supercomputer processors.
  • the invention may be realized in simple and highly dedicated form for small systems or in complex, sophisticated form for large systems.
  • a processor can be dynamically configured to meet the requirements of particular software and software mixes, to meet strict timing requirements for example. Streams can, for example, be guaranteed a certain percentage of overall processor throughput, or a percentage utilization of particular resources or classes of resources.
  • the new architecture allows balancing the optimization of the execution of particular threads along with efficient use of processing resources.
  • As an example of the use of priorities, consider a router for use in a packet-data network embodying a processor according to Fig. 1A. Each stream in the router processor, except for a control program running as the master in stream 0, processes a separate flow of packets of three different types.
  • Each of the three slave streams is processing packets using a different protocol, and the protocols have different service guarantees related to the timely forwarding of packets. Assume that for the particular protocols and classes of service being processed, access to integer units will have a great impact on meeting service guarantees. Accordingly, the master sets the priority map of stream 1, which has the highest service guarantee, to the value 6, giving it priority access to integer units higher than the other streams (except the master). Stream 3, with the next most strict service guarantee, has a lower priority, 5, and stream 2 the lowest priority, 3. After initially setting priorities, the Master monitors throughput for each protocol and ensures that service guarantees are being met, modifying priorities further as needed.
  • the supervisor can dynamically allocate resources to streams based on the current needs of the threads, modifying priorities as needed to meet service guarantees of a wide variety of protocols and classes of service. Because service guarantees are met using supervisory software and not hardware, the router can be easily upgraded as new requirements evolve.
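  • A minimal C sketch of the monitoring loop implied by the router example; the throughput figures, thresholds, and the cap below the master's own priority of seven are illustrative assumptions.
    /* Minimal sketch (assumption): the master monitoring per-protocol throughput
       and nudging integer-unit priorities so service guarantees stay met. */
    #include <stdio.h>

    enum { NUM_SLAVES = 3 };

    typedef struct {
        const char *name;
        int int_priority;      /* integer-unit priority, 0..7          */
        double guarantee_mbps; /* required forwarding rate (invented)  */
        double measured_mbps;  /* observed forwarding rate (invented)  */
    } slave_t;

    int main(void)
    {
        slave_t s[NUM_SLAVES] = {
            { "stream 1", 6, 90.0, 95.0 },
            { "stream 3", 5, 60.0, 55.0 },   /* falling short of its guarantee */
            { "stream 2", 3, 20.0, 25.0 },
        };
        /* One monitoring pass: raise the priority of any slave missing its
           guarantee, capped at 6 so that 7 stays reserved for the master
           (an assumption for this sketch). */
        for (int i = 0; i < NUM_SLAVES; ++i)
            if (s[i].measured_mbps < s[i].guarantee_mbps && s[i].int_priority < 6)
                s[i].int_priority++;
        for (int i = 0; i < NUM_SLAVES; ++i)
            printf("%s: integer-unit priority %d\n", s[i].name, s[i].int_priority);
        return 0;
    }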
  • The use of priorities may also be illustrated by a data router system having four streams, wherein one or more threads are available to streams for processing data packets.
  • contexts have been loaded to register files and associated with streams to start a thread in each of the four streams to process arriving data packets.
  • each stream is said to have an execution priority, meaning that only a process with higher priority may run in that stream.
  • the execution priority for each stream of a processor is maintained as three editable bits in a portion of bit map 115 of Fig. 1A.
  • As a data packet arrives, the context for the packet is loaded to a register file. This may be done in preferred embodiments by a Register Transfer Unit (RTU) according to the teachings of priority document S/N 09/240,012.
  • each context is given an initial high priority. For example, on a scale of seven, each initial context will be assigned a priority of six.
  • Register files are associated with streams according to the priority of the register files and the execution priority of the streams. Associating a register file with a stream starts the context thread in the stream, constituting an active stream. The stream's execution priority is now set to the high priority (in this example, six) of the context that was loaded. As processing of the newly-loaded packet begins, it may be determined what type of packet is being processed.
  • It is desirable that the next context started in the stream be the highest-priority-level waiting context. This is done in this example by selectively lowering the execution priority until a context loads, or the execution priority is zero. The only way zero will be reached is if there is no waiting context of any priority. In this situation the stream will remain idle until any context becomes available.
  • If the execution priority is six at the end of processing a packet, the execution level is reset to five, then four, and so on, which assures that the next context loaded will be the waiting context with the highest priority level.
  • Assume now that the packet is of a type that deserves an intermediate priority.
  • the thread running in the stream then lowers the execution priority to perhaps four. If there are no waiting contexts higher than priority four, the active stream continues to process the data packet to completion, and follows the example described above, wherein, upon completion the stream will set its execution priority to three, then two, and so on until a new context loads. If, however, a new packet has arrived, since new contexts are given an initial priority of six, the arrival of the new packet will force a context switch, causing the stream to cease operations on the slower packet, and to commence processing instead the new, higher-priority data packet, resetting the execution priority of the stream to six.
  • the saved context still has a priority of four, and will await opportunity for re-assignment to a stream for further processing, typically under control of the RTU, as described above.
  • If the new packet is a faster packet, then system goals are enhanced. If not, then the active stream, now at priority level six again, may again lower its own execution priority to potentially delay execution of the newly loaded packet, and seek again a faster packet to process.
  • The new packet, for example, may be a very slow packet requiring decryption.
  • The active stream may then lower the execution priority to two, and again force a context switch if a new packet has arrived for processing, in which case a context will be saved with a priority of two for the slow packet, which will then wait for a processing opportunity by virtue of its priority.
  • If a stream is running at an execution priority of two, no new packet (priority six) arrives, but there is a waiting context with a priority of four, the four-priority context will pre-empt the stream with the two execution priority, and so on.
  • In this manner packets may be processed with priority according to type, even though the type cannot be known until the context is loaded and processing has commenced on each newly-arriving data packet, providing a new way for system goals to be met in data routing applications, while also ensuring processor efficiency.
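  • The sketch below condenses the execution-priority scheme of this example into a few lines of C: contexts arrive at priority six, a context loads only if its priority exceeds the stream's execution priority, and a thread invites preemption by lowering that priority. All identifiers are assumptions.
    /* Minimal sketch (assumption): the execution-priority scheme of this example.
       New contexts arrive at priority 6; a running thread may lower its stream's
       execution priority to invite preemption by a newer packet. */
    #include <stdio.h>

    typedef struct { int priority; const char *label; } context_t;

    typedef struct {
        int exec_priority;       /* only contexts above this level may load */
        const context_t *active; /* context currently running, or NULL      */
    } stream_t;

    /* A context loads only if its priority exceeds the stream's execution
       priority; the stream then takes on that context's priority. */
    static int try_load(stream_t *s, const context_t *c)
    {
        if (c->priority <= s->exec_priority)
            return 0;
        s->active = c;
        s->exec_priority = c->priority;
        printf("loaded %s, execution priority now %d\n", c->label, c->priority);
        return 1;
    }

    int main(void)
    {
        stream_t stream = { .exec_priority = 0, .active = NULL };
        context_t slow = { 6, "slow packet" };   /* every new packet starts at 6 */
        context_t fast = { 6, "new packet"  };

        try_load(&stream, &slow);
        /* The thread decides the packet is slow and lowers the execution priority
           to 4, so any newly arriving priority-6 context will preempt it. */
        stream.exec_priority = 4;
        if (try_load(&stream, &fast))
            printf("slow-packet context saved at priority 4 for later\n");
        return 0;
    }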
  • a single supervisory control bit for the slave stream could give the master stream a useful type of resource control, such as allowing access to a floating point unit, while allowing the slave to suspend such control during critical periods using the supervisory control bit.
  • The supervisory control bits and stream control bits are not limited to those described above. A single type of control or a large number of controls may be appropriate, depending on the purpose of the system. Additional controls could regulate the sharing of global registers or global memory, memory protection, interrupt priorities, access to interrupt masks or access to a map between interrupts or exceptions and streams, for example. In a processor with one or more low power modes, access to power control may also be regulated through additional supervisory control bits and stream control bits, or such control may be reserved exclusively for a stream that is running the master thread.
  • the type of control that one stream may have over another stream's resources can also take many forms.
  • a simple two-stream controller for example, to be used in a dedicated application, with a fixed master/supervisor and a fixed slave stream, a single stream control bit for the slave stream could give the master stream the ability to disable the slave during instruction sequences when the master needs full use of all resources.
  • Priorities and scheduling of any form described in priority document S/N 09/216,017 may be implemented in combination with the new teachings of the present invention. If such priorities are not implemented, then a stream could exert a simpler form of control by directly blocking another stream's access to one or more resources temporarily. In this case the supervisory control bits representing priorities would be replaced with bits representing resource control. Priority maps would be replaced with one or more control bits used to temporarily deny access to one or more resources or classes of resource. For example, if one stream needs exclusive use of a floating point unit, it can be made a supervisor of the other streams, and set resource control bits denying access to the floating point unit in each of the other streams while it needs exclusive access. If another, partially blocked stream encountered a floating point instruction, the instruction scheduler would suspend execution of the instruction until the floating point resource control bit for that stream were reset by a supervisor stream.
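  • A minimal sketch, under the same caveats, of the simpler resource-control alternative just described: a supervisor sets a per-stream bit denying floating point access, and the scheduler refuses to issue FP instructions for that stream until the bit is cleared. All names are assumed.
    /* Minimal sketch (assumption): a per-stream bit denying floating point access,
       checked by the instruction scheduler before issuing an FP instruction. */
    #include <stdio.h>
    #include <stdint.h>

    enum { NUM_STREAMS = 4 };
    static uint8_t fp_denied[NUM_STREAMS];   /* 1 = floating point blocked */

    /* Returns 1 if the FP instruction may issue now, 0 if it must wait. */
    static int may_issue_fp(int stream)
    {
        return !fp_denied[stream];
    }

    int main(void)
    {
        /* Stream 0, acting as supervisor, claims exclusive FP use. */
        for (int s = 1; s < NUM_STREAMS; ++s) fp_denied[s] = 1;
        printf("stream 2 FP instruction issues now? %d\n", may_issue_fp(2));
        for (int s = 1; s < NUM_STREAMS; ++s) fp_denied[s] = 0;   /* release */
        printf("stream 2 FP instruction issues now? %d\n", may_issue_fp(2));
        return 0;
    }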
  • Regarding interrupts, the inventors recognize several types: External - the interrupt is generated by a device external to the processor, such as a printer interface, modem or other I/O device.
  • Internal - special instructions are executed by the processor that intentionally generate interrupts, for purposes such as quickly calling a section of code or communicating between threads. Such interrupts are also known as software interrupts.
  • Exception - a special "exceptional" event occurs during processing, caused by execution of an instruction or a hardware error. For example, an instruction may attempt to divide a number by zero, a return stack may overflow or an attempt to fetch from memory may generate a memory parity error.
  • a thread executing in one stream can interrupt another stream.
  • one (active) stream interrupts another stream, which may or may not be active.
  • This mechanism is used in embodiments of the invention to initiate processing of special events or at specific places in a code sequence.
  • an active stream can use this inter-stream interrupt capability to gain control of the processor, pre-empting and stopping execution of threads in other streams.
  • an inter-stream interrupt may be used by one stream to request some task be done by another stream to improve performance or response.
  • an active stream may pass off exception processing to another stream.
  • the stream encountering an exception interrupts the stream mapped for exception processing, and waits until the exception-processing stream finishes before continuing with its own processing. This unique capability is especially important for real-time systems so the overhead of changing contexts may be avoided. Structured exception handling could then also be implemented in hardware.
  • Fig. 4 is an architecture diagram illustrating general interrupt mapping and processing in an embodiment of the present invention.
  • streams 401 labeled 0, 1, 2 and 3 are the same four streams as streams 103 in Fig. 1A.
  • the processor includes interrupt detection logic 403 for detecting external interrupts 405 generated by devices external to the processor.
  • the interrupt detection logic communicates with interrupt logic 407.
  • Logic 407 in some embodiments communicates with interrupt mapping data 409, which may in some embodiments be a part of control map 115 of Fig. 1A, but may also in some embodiments be a separate entity on or off the processor.
  • the interrupt logic interrupts individual streams, and individual streams respond (acknowledge) by logic paths 411.
  • the interrupt (405) is generated by a source external to the processor, and is detected in a multi-stream processor according to embodiments of the present invention by Interrupt Detection Logic 403.
  • This detection can be done in any of several ways.
  • The external devices may, for example, each exert a dedicated interrupt line having an input pin to the processor, and the detection can be on the leading or trailing edge of a voltage change, for example. Other means of interrupt issuance are known to the inventors.
  • logic 403 communicates the receipt of the interrupt to Logic 407. It is the task of Logic 407 in this embodiment to process the interrupt according to one of several possible mechanisms, new in the art for multi-streaming processors.
  • Interrupt Logic 407 receives the interrupt and decides which stream or streams to interrupt depending on the type of interrupt and on one or any combination of the following mechanisms:
  • Static mapping - Interrupts are mapped to specific streams and this mapping is fixed and unchangeable.
  • static mapping of this sort is accomplished by specific logic devices in logic 407 (hard-wired), and is fixed at design time.
  • Programmable mapping - The mapping of interrupts to streams is held in Interrupt Map 409, and Interrupt Logic 407 refers to this map for each interrupt received to determine which stream or streams to interrupt. This mapping is generally done at boot-up time or by the operating system, and is fixed prior to the interrupt occurring. Once an interrupt is detected, this mapping is consulted and appropriate streams are interrupted. Example: assume three types of interrupts and two streams. Type one interrupts may be mapped to stream two and type two to stream one, with type three mapped to both streams one and two. At a later point in time, when the streams are running different threads, types one and two are both mapped to stream one and type three is mapped to both streams one and two. The map (409) in this case will have been altered by software such as the operating system to change the mapping.
  • Dynamic or conditional mapping - In this case interrupts are mapped to specific streams by logic which is made aware of the state of the machine at the point in time the interrupt occurs, and which creates the mapping based on that state and also on any other parameter, for example the type of interrupt. The mapping is created dynamically every time an interrupt occurs. For dynamic mapping there may be an algorithm to process (a firmware or software routine), or the logic may have bits and registers settable to alter the result of an interrupt communicated to Logic 407. Example - the interrupt-mapping algorithm could map interrupts to a stream that is inactive, or if no inactive stream exists, to the stream running the lowest priority thread. Once Logic 407 determines the mapping, streams are interrupted on logical paths 411.
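  • The following C sketch (an assumption-laden simplification, not the patented logic) combines the mechanisms above: a programmable map entry names the target streams for each interrupt type, and an empty entry falls back to a dynamic rule that prefers an inactive stream and otherwise the stream running the lowest-priority thread.
    /* Minimal sketch (assumption): interrupt logic choosing target streams. A
       programmable map gives a stream mask per interrupt type; an empty entry
       falls back to a dynamic rule (inactive stream, else lowest-priority thread). */
    #include <stdio.h>
    #include <stdint.h>

    enum { NUM_STREAMS = 4, NUM_INT_TYPES = 3 };
    #define DYNAMIC 0u                       /* empty mask: decide at interrupt time */

    static uint8_t interrupt_map[NUM_INT_TYPES] = {
        (1u << 2),                           /* type 0 -> stream 2           */
        (1u << 1),                           /* type 1 -> stream 1           */
        DYNAMIC,                             /* type 2 -> mapped dynamically */
    };
    static int active[NUM_STREAMS]      = { 1, 1, 1, 1 };
    static int thread_prio[NUM_STREAMS] = { 7, 3, 5, 2 };

    static uint8_t map_interrupt(int type)
    {
        uint8_t mask = interrupt_map[type];
        if (mask != DYNAMIC)
            return mask;                     /* static or programmable entry */
        for (int s = 0; s < NUM_STREAMS; ++s)
            if (!active[s])                  /* prefer an inactive stream    */
                return (uint8_t)(1u << s);
        int lowest = 0;                      /* else lowest-priority thread  */
        for (int s = 1; s < NUM_STREAMS; ++s)
            if (thread_prio[s] < thread_prio[lowest])
                lowest = s;
        return (uint8_t)(1u << lowest);
    }

    int main(void)
    {
        for (int t = 0; t < NUM_INT_TYPES; ++t)
            printf("interrupt type %d -> stream mask 0x%x\n", t, (unsigned)map_interrupt(t));
        return 0;
    }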
  • An interrupt may be issued to a stream once the determination is made.
  • the streams have to acknowledge that they are ready to execute an interrupt service routine.
  • This acknowledgement can occur at different times for different streams. Any delay may be due to code currently executing on the stream, or the stream may temporarily mask the interrupt. The interrupt, however, will remain pending as long as the external source exerts the interrupt.
  • Interrupt detection logic 403 will control this behavior. If multiple streams are required to acknowledge the interrupt, the interrupt logic will wait till all of them have acknowledged before sending an external acknowledgement. This behavior can be modified as necessary, i.e. the external acknowledgement can happen after only one stream has recognized the interrupt or in any other combination.
  • Example - an audio device may interrupt two streams, one of which vectors to the interrupt service routine to store the audio data to a hard disk drive. The other stream may vector to an audio playback routine, and direct the audio data directly to speakers.
  • Internal interrupts are generally software interrupts that are used by programs to request specific functionality from the operating system or to execute sub-routines. These interrupts in embodiments of the present invention are managed to behave with the degrees of functionality described above for external interrupts.
  • Software interrupts issued by active streams are processed by logic 407, and the interrupt logic can handle these interrupts and map them according to the three types as described above. Another variation on the mapping is the special case wherein an interrupt is only mapped to the stream executing the soft-interrupt. The rest of the steps will be as above.
  • Synchronized interrupts are variations on the above cases, i.e. both internal and external interrupts. Synchronized interrupts behave differently in the vectoring stage. In the case of synchronized interrupts the interrupt logic will not vector the streams to execute interrupt service routines until it has received acknowledgements from all the streams to which the interrupt is mapped. This behavior is to require a synchronized start of response to an interrupt. For example, consider a debugging interrupt, i.e. a breakpoint set on a thread executing on one of the streams. The debugger may want to view the state of the system at a stable point, i.e. at the point where all the streams have stopped executing whatever thread they were executing. Hence the interrupt logic will generate the interrupt to all of the streams, but will wait till they all have generated acknowledgements before vectoring them to the debugger service routine. However, even though the logic waits for all the streams to catch up, the streams that acknowledge are stopped from further execution.
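  • A minimal C sketch of the synchronized-interrupt rule described above: acknowledgements from all mapped streams are collected before any of them is vectored to the service routine. The mask layout and names are illustrative assumptions.
    /* Minimal sketch (assumption): the synchronized-interrupt rule. Acknowledgements
       from every mapped stream are collected before any stream is vectored. */
    #include <stdio.h>
    #include <stdint.h>

    typedef struct {
        uint8_t mapped;   /* bit n set: interrupt is mapped to stream n */
        uint8_t acked;    /* bit n set: stream n has acknowledged       */
    } sync_interrupt_t;

    static void acknowledge(sync_interrupt_t *si, int stream)
    {
        si->acked |= (uint8_t)(1u << stream);
        /* an acknowledging stream stops executing its thread at this point */
        if ((si->acked & si->mapped) == si->mapped)
            printf("all mapped streams acknowledged: vector all to the handler\n");
        else
            printf("stream %d acknowledged: waiting for the rest\n", stream);
    }

    int main(void)
    {
        /* e.g. a debugger breakpoint mapped to streams 1, 2 and 3 */
        sync_interrupt_t bp = { .mapped = 0x0E, .acked = 0 };
        acknowledge(&bp, 1);
        acknowledge(&bp, 3);
        acknowledge(&bp, 2);   /* the last acknowledgement triggers vectoring */
        return 0;
    }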
  • Exceptions are generated by code running in the streams and generally indicate error conditions. Exceptions are always synchronous to executing code, i.e. the stream generating the exception will always do so at the same point. There may, however, be many different responses to exceptions.
  • Blocking send to another stream -
  • the stream generating the exception will stop execution of the current thread, but will not acknowledge the exception itself.
  • the exception instead is broadcast to all other streams. Whichever stream is programmed or mapped to acknowledge the exception will do so and start execution of the exception handler routine. If no stream acknowledges the exception, then the Master thread will be sent the exception. If the master thread is already executing, it will now vector to the exception handler. If it is not running, it will be made active and allocated to a stream for execution, and then it will handle the exception. The Master thread is always capable of acknowledging any exception, if no other thread will. This way structured exception handling can be implemented in hardware. Once the exception routine returns, the original stream is notified and can then start its execution again.
  • This method is implemented mainly for the reason that all streams may not have access to all the hardware resources needed to process a certain exception, and hence the stream that is incapable has to pass the exception to one that is capable.
  • Non-blocking send to another stream - This method is similar to the one above, but the original stream that generates the exception is not blocked. It generates the exception and then continues executing. Exception handling is guaranteed to complete by another stream or the master thread.
  • This method is generally used for non-fatal exceptions like overflow. As an example, consider the overflow exception: the stream generating the overflow sets a flag to indicate that it generated the exception and continues to execute. Another stream can acknowledge the exception and update a counter to count the number of times a particular value overflowed.
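  • The sketch below models, with assumed names and a trivial mapping, the blocking and non-blocking sends just described, including the fallback to the Master thread when no other stream acknowledges the exception.
    /* Minimal sketch (assumption): blocking and non-blocking exception sends,
       with the Master thread as the fallback handler of last resort. */
    #include <stdio.h>

    enum { NUM_STREAMS = 4 };
    static int can_handle[NUM_STREAMS] = { 0, 0, 1, 0 };   /* mapped handlers */

    /* Returns the stream that accepted the exception, or -1 for the Master. */
    static int broadcast_exception(int faulting_stream)
    {
        for (int s = 0; s < NUM_STREAMS; ++s)
            if (s != faulting_stream && can_handle[s])
                return s;
        return -1;   /* no stream acknowledged: the Master thread takes it */
    }

    static void raise_exception(int stream, int blocking)
    {
        int handler = broadcast_exception(stream);
        if (handler >= 0)
            printf("stream %d handles the exception\n", handler);
        else
            printf("Master thread handles the exception\n");
        if (blocking)
            printf("stream %d resumes only after the handler returns\n", stream);
        else
            printf("stream %d already continued executing (e.g. overflow counted)\n", stream);
    }

    int main(void)
    {
        raise_exception(1, 1);   /* blocking send, e.g. a resource the stream lacks */
        raise_exception(2, 0);   /* non-blocking send, e.g. arithmetic overflow     */
        return 0;
    }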
  • Any exception can be mapped to any of the above three categories. This mapping can again be done using the same techniques as for the interrupts. Exceptions can also be synchronized, in that an exception handler may not be executed until all the streams that the exception is mapped to have stopped executing and acknowledged the exception. Implementation of the mechanisms described above, involving software and hardware interrupts and exception handling relative to multi-stream processors, may in some instances (programmable mapping) be accomplished partly through an alterable control file. Referring to Fig. 1A, an inter-stream control bitmap 115 was described wherein the state of single bits and bit values of multiple bits are associated with specific streams and functions, such as enabling, disabling and priorities.
  • bitmap 115 may be implemented on the multi-stream processor chip as hardware in a manner that the bit values may be altered by active streams.
  • a file in local memory may be used for mapping interrupts and interrupt and exception parameters.
  • Single bits or bit values for multiple bits may be used to map streams to interrupts and exceptions, much as enabling, disabling, priorities and the like are mapped in the description above relative to Figs. 1B through 1E. It will be apparent to the skilled artisan that there are many alterations that may be made in the embodiments described above within the spirit and scope of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Multi Processors (AREA)

Abstract

A multi-streaming processor has a plurality of streams (103) for streaming one or more instruction threads, a set of functional resources (105) for processing instructions from the streams, and interrupt-handler logic (407). Through this logic, interrupts and exceptions are detected (403) and mapped (409) to the specific streams concerned (401). In some embodiments of the invention one interrupt or exception may be mapped to two or more streams, and in other embodiments two or more interrupts or exceptions may be mapped to one stream. The mapping may be static and determined at processor design, programmable with stored and editable data, or conditional and dynamic, the interrupt logic executing an algorithm to determine the mapping. Interrupts may be external interrupts (405) generated by devices external to the processor, software (internal) interrupts (411) generated by active streams, or conditional, based on variables. After the streams to which interrupts (or exceptions) are mapped have acknowledged them, the streams are vectored to the appropriate service routines. In a synchronous method no vectoring occurs until all streams to which an interrupt is mapped have acknowledged the interrupt.
PCT/US2000/006621 1999-05-14 2000-03-14 Traitement des interruptions et des exceptions pour processeurs numeriques multitrain WO2000070482A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU38820/00A AU3882000A (en) 1999-05-14 2000-03-14 Interrupt and exception handling for multi-streaming digital processors

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/312,302 US7020879B1 (en) 1998-12-16 1999-05-14 Interrupt and exception handling for multi-streaming digital processors
US09/312,302 1999-05-14

Publications (1)

Publication Number Publication Date
WO2000070482A1 true WO2000070482A1 (fr) 2000-11-23

Family

ID=23210834

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/006621 WO2000070482A1 (fr) 1999-05-14 2000-03-14 Traitement des interruptions et des exceptions pour processeurs numeriques multitrain

Country Status (2)

Country Link
AU (1) AU3882000A (fr)
WO (1) WO2000070482A1 (fr)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1221654A3 (fr) * 2000-12-22 2004-06-23 Nortel Networks Limited Processeur de paquets multithread
EP1554651A2 (fr) * 2002-10-15 2005-07-20 Sandbridge Technologies, Inc. Procede et appareil pour des interruptions de filieres croisees a vitesse elevee dans un processeur de traitememt multifiliere
WO2008021416A1 (fr) * 2006-08-14 2008-02-21 Marvell Semiconductor, Inc. Gestion d'interruption
EP2483772A1 (fr) * 2009-09-29 2012-08-08 Nvidia Corporation Architecture de routine de déroutement pour une unité de traitement parallèle
EP1914414A3 (fr) * 2006-10-10 2013-08-14 Robert Bosch Gmbh Système d'injection et procédé de fonctionnement d'un système d'injection
US10318591B2 (en) 2015-06-02 2019-06-11 International Business Machines Corporation Ingesting documents using multiple ingestion pipelines
GB2579617A (en) * 2018-12-06 2020-07-01 Advanced Risc Mach Ltd An apparatus and method for handling exception causing events

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5784613A (en) * 1995-09-12 1998-07-21 International Busines Machines Corporation Exception support mechanism for a threads-based operating system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5784613A (en) * 1995-09-12 1998-07-21 International Busines Machines Corporation Exception support mechanism for a threads-based operating system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
NEMIROVSKY et al., "DISC: Dynamic Instruction Stream Computer", ACM, 1991, pages 163-171, XP002930682. *
YAMAMOTO et al., "Performance Estimation of Multistreamed Superscalar Processor", Proceedings of the Twenty-Seventh Annual Hawaii International Conference on System Sciences, IEEE, 1994, pages 195-204, XP002930683. *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8762581B2 (en) 2000-12-22 2014-06-24 Avaya Inc. Multi-thread packet processor
EP1221654A3 (fr) * 2000-12-22 2004-06-23 Nortel Networks Limited Processeur de paquets multithread
EP2306313A1 (fr) * 2002-10-15 2011-04-06 Aspen Acquisition Corporation Procédé et appareil pour des interruptions de filières croisées à vitesse élevée dans un processeur de traitememt multifilière
EP1554651A2 (fr) * 2002-10-15 2005-07-20 Sandbridge Technologies, Inc. Procede et appareil pour des interruptions de filieres croisees a vitesse elevee dans un processeur de traitememt multifiliere
EP1554651A4 (fr) * 2002-10-15 2007-10-31 Sandbridge Technologies Inc Procede et appareil pour des interruptions de filieres croisees a vitesse elevee dans un processeur de traitememt multifiliere
US8473728B2 (en) 2006-08-14 2013-06-25 Marvell World Trade Ltd. Interrupt handling
US8190866B2 (en) 2006-08-14 2012-05-29 Marvell World Trade Ltd. Interrupt handling
US7870372B2 (en) 2006-08-14 2011-01-11 Marvell World Trade Ltd. Interrupt handling
WO2008021416A1 (fr) * 2006-08-14 2008-02-21 Marvell Semiconductor, Inc. Gestion d'interruption
EP1914414A3 (fr) * 2006-10-10 2013-08-14 Robert Bosch Gmbh Système d'injection et procédé de fonctionnement d'un système d'injection
EP2483772A1 (fr) * 2009-09-29 2012-08-08 Nvidia Corporation Architecture de routine de déroutement pour une unité de traitement parallèle
EP2483772A4 (fr) * 2009-09-29 2014-04-02 Nvidia Corp Architecture de routine de déroutement pour une unité de traitement parallèle
US10318591B2 (en) 2015-06-02 2019-06-11 International Business Machines Corporation Ingesting documents using multiple ingestion pipelines
US10572547B2 (en) 2015-06-02 2020-02-25 International Business Machines Corporation Ingesting documents using multiple ingestion pipelines
GB2579617A (en) * 2018-12-06 2020-07-01 Advanced Risc Mach Ltd An apparatus and method for handling exception causing events
GB2579617B (en) * 2018-12-06 2021-01-27 Advanced Risc Mach Ltd An apparatus and method for handling exception causing events
US11630673B2 (en) 2018-12-06 2023-04-18 Arm Limited System and method for physically separating, across different processing units, software for handling exception causing events from executing program code

Also Published As

Publication number Publication date
AU3882000A (en) 2000-12-05

Similar Documents

Publication Publication Date Title
US8468540B2 (en) Interrupt and exception handling for multi-streaming digital processors
US7765546B2 (en) Interstream control and communications for multi-streaming digital processors
US7257814B1 (en) Method and apparatus for implementing atomicity of memory operations in dynamic multi-streaming processors
US6477562B2 (en) Prioritized instruction scheduling for multi-streaming processors
US6260150B1 (en) Foreground and background context controller setting processor to power saving mode when all contexts are inactive
US7996843B2 (en) Symmetric multi-processor system
US7103631B1 (en) Symmetric multi-processor system
US20060146864A1 (en) Flexible use of compute allocation in a multi-threaded compute engines
EP1299801B1 (fr) Procede et appareil de mise en oeuvre d'une atomicite d'operations de memoire dans des processeurs multi-flux dynamiques
WO2000070482A1 (fr) Traitement des interruptions et des exceptions pour processeurs numeriques multitrain
JPH06243112A (ja) マルチプロセッサ装置
US6581089B1 (en) Parallel processing apparatus and method of the same
JPH07182168A (ja) 演算装置及びその制御方法
JPH11249917A (ja) 並列型計算機及びそのバッチ処理方法及び記録媒体
JPH01220040A (ja) タスクスケジューリング方式
JPS63208154A (ja) マルチプロセツサスケジユ−ル方式
JPH06103224A (ja) 割込み制御装置

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase