EP3803597A1 - Device and method for serializing access to a shared resource - Google Patents
- Publication number
- EP3803597A1 (application number EP18750154.9A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- lwp
- thread
- shared resource
- context information
- queue
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/52—Program synchronisation; Mutual exclusion, e.g. by means of semaphores
- G06F9/526—Mutual exclusion algorithms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/52—Program synchronisation; Mutual exclusion, e.g. by means of semaphores
Definitions
- the present invention relates to the field of task parallelization and multi-threaded runtime environments.
- the present invention in particular relates to thread synchronization and scheduling in multithreaded applications, and thus provides a device and method for synchronized and scheduled access to a shared resource.
- NUMA: non-uniform memory access
- Thread synchronization and scheduling in a non-trivial multithreaded application on a current multicore NUMA server architecture is a long-known problem that re-emerges with even greater severity.
- Efficient and scalable synchronization among threads is needed to protect shared data structures in an application and in an OS kernel to enable performance and scalability of an application.
- Thread scheduling is dealt with within an OS kernel in a way transparent to an application. A scheduler may itself suffer from scalability problems when a sufficiently large number of threads needs urgent servicing.
- Coarse-grained synchronization primitives e.g.
- the present invention aims to improve the design concepts of conventional runtime environments.
- the present invention has the objective of providing a device and method for thread synchronization and scheduling that avoids the above-mentioned problems and emphasizes single-thread locality of reference.
- the device specifically enables constructing efficient task-parallel NUMA-aware runtimes.
- the present invention handles a generic task-parallel application, each task being run on a cooperating user-level thread (also called a fiber or LWP; the terms are interchangeable) that accesses a global data structure (i.e. a shared resource), where synchronization can be semantically specified in the application's code in terms of locks.
- the task’s code the fiber executes is encapsulated in a function or a class method.
- the present invention schedules and synchronizes fibers over OS threads, which are identified with CPU cores, using a so-called fiber delegation queue, i.e. a queue to which fibers are delegated for execution, in particular in a serialized manner.
- the delegation queue may also be called QD.
- the present invention dynamically translates lock-based semantics into delegation.
- a first aspect of the present invention provides a device for serializing access to a shared resource, wherein the device is configured to: operate a thread to execute a light-weight process, LWP; probe access by the LWP to the shared resource; and, if the shared resource is locked, push the LWP to a delegation queue.
- LWP: light-weight process
- the LWP can be de-scheduled.
- the enqueued LWPs can later be processed in a serial manner by means of a single thread. That is, contention among multiple LWPs that try to access the shared resource at the same time is mitigated, since the LWPs are processed in a serialized manner based on the delegation queue.
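The probe-and-delegate step above can be illustrated by the following minimal C sketch. This is not the patent's listing; the names `probe_or_delegate`, `qd_pop`, and the fixed-capacity queue are invented for illustration. A thread that finds the shared resource locked pushes its LWP into the delegation queue instead of blocking, and the queue is later drained one entry at a time:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define QD_CAP 64

typedef struct {                   /* stand-in for an LWP / fiber */
    int id;
} lwp_t;

typedef struct {
    atomic_bool locked;            /* lock on the shared resource   */
    lwp_t *slots[QD_CAP];          /* delegation queue (QD) storage */
    atomic_size_t head, tail;
} shared_resource_t;

void resource_init(shared_resource_t *r) {
    atomic_store(&r->locked, false);
    atomic_store(&r->head, 0);
    atomic_store(&r->tail, 0);
}

/* Probe access: returns true if the caller acquired the resource,
 * false if it was already locked and the LWP was delegated instead. */
bool probe_or_delegate(shared_resource_t *r, lwp_t *lwp) {
    if (!atomic_exchange(&r->locked, true))
        return true;                        /* resource was free     */
    size_t slot = atomic_fetch_add(&r->tail, 1) % QD_CAP;
    r->slots[slot] = lwp;                   /* push LWP to the QD    */
    return false;
}

/* Serial drain: a single helper pops delegated LWPs one by one. */
lwp_t *qd_pop(shared_resource_t *r) {
    size_t h = atomic_load(&r->head);
    if (h == atomic_load(&r->tail))
        return NULL;                        /* queue empty           */
    atomic_store(&r->head, h + 1);
    return r->slots[h % QD_CAP];
}
```

Because only one thread ever drains the queue, the pop side needs no mutual exclusion against other consumers; that is precisely what the serialized-delegation design buys.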
- the device is further configured to operate a helper thread to pop the LWP from the delegation queue and execute the LWP by means of the shared resource.
- the device is further configured to operate the thread to, if the shared resource is locked, obtain LWP context information associated with the LWP, and to push the LWP context information to the delegation queue.
- the device is further configured to operate the helper thread to pop the LWP context information from the delegation queue and execute the LWP based on the LWP context information.
- the helper thread is the first thread to access the shared resource.
- the device is further configured to operate the helper thread to, after completion of executing the LWP by means of the shared resource, push the LWP to a ready queue.
- the device is further configured to operate the helper thread to, after completion of executing the LWP by means of the shared resource, update the LWP context information, thereby obtaining updated LWP context information, and push the updated LWP context information to the ready queue.
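The helper-thread side can be sketched as follows, under the simplifying assumption that each delegated entry carries its LWP context information as a function pointer plus argument (the actual embodiment switches into a full ABI register context instead; all names here are illustrative). The helper pops each entry, executes its critical section by means of the shared resource, marks the context as updated, and pushes it to a ready queue:

```c
#include <stddef.h>

typedef struct {
    void (*crit)(void *);   /* critical section needing the resource */
    void *arg;              /* simplified LWP context information    */
    int   done;             /* updated after execution               */
} lwp_ctx_t;

typedef struct {
    lwp_ctx_t *items[64];
    size_t head, tail;
} queue_t;

static int q_pop(queue_t *q, lwp_ctx_t **out) {
    if (q->head == q->tail) return 0;
    *out = q->items[q->head++ % 64];
    return 1;
}

static void q_push(queue_t *q, lwp_ctx_t *c) {
    q->items[q->tail++ % 64] = c;
}

/* The helper drains the delegation queue serially: it executes each
 * LWP's critical section, updates the context, and hands the LWP to
 * a ready queue so another thread can resume it. */
void helper_drain(queue_t *delegation, queue_t *ready) {
    lwp_ctx_t *c;
    while (q_pop(delegation, &c)) {
        c->crit(c->arg);   /* execute by means of the shared resource */
        c->done = 1;       /* updated LWP context information         */
        q_push(ready, c);
    }
}

/* tiny demonstration critical section: increments a shared counter */
void demo_crit(void *arg) { ++*(int *)arg; }
```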
- the device is further configured to operate the thread by a first core of the device.
- the device is further configured to operate the helper thread by a second core of the device.
- the LWP context information comprises a predefined application binary interface, ABI.
- the LWP context information comprises an ABI specified LWP register context.
- the device is further configured to operate the thread to pop the LWP from the ready queue and continue executing the LWP. This ensures that the thread that previously executed the LWP can continue execution, after the part of the LWP which required access to the shared resource was executed by the helper thread.
- the device is further configured to operate the thread to pop the updated LWP context information from the ready queue and continue executing the LWP based on the updated LWP context information.
- the device is further configured to operate a second thread, to pop the LWP from the ready queue and continue executing the LWP, wherein the device preferably is further configured to operate the second thread to pop the updated LWP context information from the ready queue and continue executing the LWP based on the updated LWP context information.
- a second aspect of the present invention provides a method for serializing access to a shared resource, wherein the method comprises the steps of: operating a thread to execute a light-weight process, LWP; probing access by the LWP to the shared resource; and, if the shared resource is locked, pushing the LWP to a delegation queue.
- the method further comprises operating a helper thread to pop the LWP from the delegation queue and execute the LWP by means of the shared resource.
- the method further comprises obtaining LWP context information associated with the LWP, and operating the thread to, if the shared resource is locked, push the LWP context information to the delegation queue.
- the method further comprises operating the helper thread to pop the LWP context information from the delegation queue and execute the LWP based on the LWP context information.
- the helper thread is the first thread to access the shared resource.
- the method further comprises operating the helper thread to, after completion of executing the LWP by means of the shared resource, push the LWP to a ready queue.
- the method further comprises operating the helper thread to, after completion of executing the LWP by means of the shared resource, update the LWP context information, thereby obtaining updated LWP context information, and push the updated LWP context information to the ready queue.
- the method further comprises operating the thread by a first core.
- the method further comprises operating the helper thread by a second core.
- the LWP context information comprises a predefined application binary interface, ABI.
- the LWP context information comprises an ABI-specified LWP register context.
- the method further comprises operating the thread to pop the LWP from the ready queue and continue executing the LWP.
- the method further comprises operating the thread to pop the updated LWP context information from the ready queue and continue executing the LWP based on the updated LWP context information.
- the method further comprises operating a second thread, to pop the LWP from the ready queue and continue executing the LWP, wherein the method preferably further comprises operating the second thread to pop the updated LWP context information from the ready queue and continue executing the LWP based on the updated LWP context information.
- the method of the second aspect and its implementation forms include the same advantages as the device according to the first aspect and its implementation forms.
- a third aspect of the present invention provides a computer program product comprising a program code for controlling the device according to the first aspect or any one of its implementation forms, or for performing, when running on a computer, the method according to the second aspect or any one of its implementation forms.
- the computer program product of the third aspect includes the same advantages as the device according to the first aspect and its implementation forms.
- FIG. 1 shows a schematic view of a device according to an embodiment of the present invention.
- FIG. 2 shows a schematic view of a device according to an embodiment of the present invention in more detail.
- FIG. 3 shows a schematic view of ready queues associated to cores.
- FIG. 4 shows a code listing of functionality provided by the present invention.
- FIG. 5 shows another code listing of functionality provided by the present invention.
- FIG. 6 shows another code listing of functionality provided by the present invention.
- FIG. 7 shows another code listing of functionality provided by the present invention.
- FIG. 8 shows a schematic view of an embodiment of the present invention.
- FIG. 9 shows a schematic view of a method according to an embodiment of the present invention.
- Fig. 1 shows a device 100 for serializing access to a shared resource 101 according to an embodiment of the present invention.
- the device 100 is configured to operate a thread 102.
- the thread 102 executes a light-weight process (LWP) 103.
- LWP 103 is a means for providing multitasking capabilities.
- An LWP 103 e.g. runs in user space on top of a single kernel thread and shares its address space and system resources with other LWPs within the same process.
- Multiple user-level threads, managed by a thread library, can be placed on top of one kernel-managed thread, allowing multitasking to be done at the user level, which yields performance benefits.
- the shared resource 101 can e.g. be a globally used data structure, or any kind of device, e.g. a storage, memory, I/O, or network device of a computer system.
- the device 100 is further configured to probe access by the LWP 103 to the shared resource 101. If the shared resource 101 is locked, the device 100 is configured to push the LWP 103 to a delegation queue 104. In the delegation queue 104, all LWPs 103 which need to access the same shared resource 101 can be enqueued. Once the LWPs 103 are in the delegation queue 104, they can be processed in a serialized manner, by taking each LWP 103, one by one, from the delegation queue 104 and executing it by means of the shared resource 101.
- the delegation queue 104 can also be called QD 104.
- Fig. 2 shows a device 100 according to an embodiment of the present invention in more detail.
- the device 100 of Fig. 2 includes all features and functionality of the device 100 of Fig. 1. To this end, identical features are labelled with identical reference signs. All features that are going to be described in view of Fig. 2 are optional features of the device 100.
- the device 100 is further configured to operate an optional helper thread 201.
- the helper thread 201 is configured to pop the LWP 103 from the delegation queue 104, and execute the LWP 103.
- the helper thread 201 in particular can access the shared resource 101 and can therefore execute the LWP 103 by means of the shared resource 101. That is, an LWP 103, which previously could not access the shared resource 101, because the shared resource 101 was locked, can now access the shared resource 101 by means of the helper thread 201 which executes the LWP 103 and which has access to the shared resource 101.
- the helper thread 201 in particular has access to the shared resource 101, because the helper thread 201 is the first thread to access the shared resource 101.
- the device 100 can optionally obtain LWP context information 202.
- the LWP context information 202 can also be regarded as an execution context of the LWP 103.
- the LWP context information 202 is associated with the LWP 103.
- the LWP context information 202 can in particular store a present state of execution of the LWP 103.
- the device 100 can further be configured to operate the thread 102 to, if the shared resource is locked, push the LWP context information 202 to the delegation queue 104. That is, the delegation queue 104 now holds the LWP 103 and the associated LWP context information 202.
- the device 100 can now optionally be configured to operate the helper thread 201 to pop the LWP context information 202 from the delegation queue 104.
- the helper thread 201 can obtain information about a previous state of execution of the LWP 103, before the LWP 103 was pushed to the delegation queue 104.
- the device 100 can execute the LWP 103 based on the LWP context information 202.
- the helper thread 201 can start execution of the LWP 103 at the state of execution the LWP 103 had reached before it was pushed to the delegation queue 104.
- the helper thread 201 switches into the context (i.e. the LWP context information 202) of the LWP 103.
- the device 100 can further be optionally configured to push the LWP 103 to a ready queue 203 after completion of execution of the LWP 103 by means of the shared resource 101.
- the helper thread 201 can push the LWP 103 to the ready queue 203.
- the LWP 103 can be popped by any other thread for further execution (e.g. an execution that does not require access to the shared resource 101).
- the helper thread 201 updates the LWP context information 202. That is, the LWP context information 202 now contains information regarding the state of the LWP 103 after its execution by the helper thread 201 is completed. Thereby, updated LWP context information 202’ is obtained.
- the updated LWP context information 202’ is pushed to the ready queue 203, where it can be popped by means of any other thread, so that any other thread can pop the LWP 103 and the updated LWP context information 202’.
- any other thread can continue execution of the LWP 103 that was popped from the ready queue 203, starting from the state of the LWP 103 according to the updated LWP context information 202’.
- the device 100 further can be configured to operate the thread 102 to pop the LWP 103 from the ready queue 203. The thread 102 can then continue executing the LWP 103. That is, the thread 102 that initially pushed the LWP 103 to the delegation queue 104 can now pop the LWP 103 from the ready queue 203 to continue execution of the LWP 103, in particular after access to the shared resource 101 is no longer needed.
- the thread 102 can also pop the updated LWP context information 202’ from the ready queue 203 and continue executing the LWP 103 based on the updated LWP context information 202’. That is, the thread 102 can continue executing the LWP 103 beginning at the last state of the LWP 103 that is stored in the updated context information 202’.
- the device 100 is in particular suitable for use in a multithreaded runtime environment. That is, the device 100 in particular can operate multiple threads, e.g. to run them in parallel or synchronize them. Therefore, the device 100 can further optionally be configured to operate a second thread 206, to pop the LWP 103 from the ready queue 203 and continue executing the LWP 103. The device 100 preferably can operate the second thread 206 to pop the updated LWP context information 202’ from the ready queue 203 and can continue executing the LWP 103 based on the updated LWP context information 202’.
- the device 100 can be, or can be used in, a multi-processor system. The device 100 can further optionally be configured to operate the thread 102 by a first core 204 of the device 100, and/or to operate the helper thread 201 by a second core 205 of the device 100.
- the LWP context information 202 can comprise a predefined application programming interface (API), e.g. for manipulation of the LWP context information 202.
- the API can e.g. comprise the Linux functions getcontext() or setcontext(), which allow user-level context switching between multiple threads of control within a process.
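For reference, the getcontext()/setcontext() family mentioned above (declared in `<ucontext.h>`, together with makecontext() and swapcontext()) can be exercised directly. This minimal example, not taken from the patent, swaps from the calling context into a fiber running on its own stack and back, which is the primitive a user-level scheduler of this kind builds on; note these functions are obsolescent in POSIX but still provided by glibc:

```c
#include <ucontext.h>

static ucontext_t main_ctx, fiber_ctx;
static char fiber_stack[64 * 1024];   /* the fiber's private stack */
static int fiber_ran = 0;

static void fiber_fn(void) {
    fiber_ran = 1;   /* work done inside the fiber's own context */
    /* uc_link (set below) returns control to main_ctx on return */
}

/* Run fiber_fn on its own stack, then resume the caller. */
void run_fiber_once(void) {
    getcontext(&fiber_ctx);             /* capture a template context */
    fiber_ctx.uc_stack.ss_sp   = fiber_stack;
    fiber_ctx.uc_stack.ss_size = sizeof fiber_stack;
    fiber_ctx.uc_link = &main_ctx;      /* continuation after fiber_fn */
    makecontext(&fiber_ctx, fiber_fn, 0);
    swapcontext(&main_ctx, &fiber_ctx); /* save caller, enter fiber */
}
```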
- the LWP context information 202 can comprise an application binary interface (ABI) specified LWP register context.
- ABI can e.g. be a System V Application Binary Interface.
- an example embodiment of the device 100 may consist of a set of pinned CPU cores, each being associated with a ready queue.
- the ready queues of the cores hold ready task fibers, that is, user-level contexts (i.e. the LWPs 103).
- a user-level context structure (i.e. the LWP 103) may hold a compatible user-level ABI (this ABI standard defines all methods concerned with binary execution format and convention, e.g. the call convention, the registers for passing arguments, the binary frame of the stack, etc.), the necessary register context (i.e. the LWP context information 202), and the stack, so that control can be passed to the fiber by setting these registers.
- the ready queues are interconnected through an all-to-all mesh of point-to-point message passing channels, such that each core can pass a fiber to any other core.
- the mesh configuration enables each communication endpoint to be either a single-producer or a single-consumer point, thus avoiding contention on the channels.
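One way such uncontended channels can be realized is with a single-producer/single-consumer (SPSC) ring buffer per ordered core pair: each endpoint is then touched by exactly one producer and one consumer, so no channel is ever fought over. The following sketch is illustrative only (the patent does not specify this implementation):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define CH_CAP 8   /* channel capacity */

typedef struct {
    void *buf[CH_CAP];
    atomic_size_t head;   /* advanced only by the consumer */
    atomic_size_t tail;   /* advanced only by the producer */
} spsc_t;

void spsc_init(spsc_t *c) {
    atomic_store(&c->head, 0);
    atomic_store(&c->tail, 0);
}

/* Producer side: returns false if the channel is full. */
bool spsc_send(spsc_t *c, void *msg) {
    size_t t = atomic_load_explicit(&c->tail, memory_order_relaxed);
    if (t - atomic_load_explicit(&c->head, memory_order_acquire) == CH_CAP)
        return false;                       /* channel full  */
    c->buf[t % CH_CAP] = msg;
    atomic_store_explicit(&c->tail, t + 1, memory_order_release);
    return true;
}

/* Consumer side: returns NULL if the channel is empty. */
void *spsc_recv(spsc_t *c) {
    size_t h = atomic_load_explicit(&c->head, memory_order_relaxed);
    if (h == atomic_load_explicit(&c->tail, memory_order_acquire))
        return NULL;                        /* channel empty */
    void *msg = c->buf[h % CH_CAP];
    atomic_store_explicit(&c->head, h + 1, memory_order_release);
    return msg;
}
```

With one such channel per ordered pair of cores, an all-to-all mesh of N cores needs N*(N-1) channels, and every core can pass a fiber to any other core without locking.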
- the cores service their respective ready queues by de-queueing a ready task fiber and switching into it.
- if the ready queue is empty, the core engages in job stealing from its neighboring cores' ready queues. That is, an idle core can pick an LWP from another core's ready queue and execute it.
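The job-stealing step can be sketched as follows, with hypothetical names: an idle core first services its own ready queue and, finding it empty, scans the other cores' ready queues for an LWP to execute. The neighbor order here is a simple modulo walk; a NUMA-aware runtime would order neighbors by interconnect distance, as described below:

```c
#include <stddef.h>

#define NCORES 4
#define QCAP 16

typedef struct { int ids[QCAP]; size_t head, tail; } ready_q_t;

void rq_init(ready_q_t *q) { q->head = q->tail = 0; }

void rq_push(ready_q_t *q, int id) { q->ids[q->tail++ % QCAP] = id; }

static int rq_pop(ready_q_t *q) {
    if (q->head == q->tail) return -1;   /* queue empty */
    return q->ids[q->head++ % QCAP];
}

/* Pop from the core's own queue; if it is empty, steal from the
 * other cores in order of increasing index distance. Returns the
 * fiber id, or -1 if every queue is empty. */
int next_fiber(ready_q_t qs[NCORES], int self) {
    int id = rq_pop(&qs[self]);
    for (int d = 1; d < NCORES && id < 0; ++d)
        id = rq_pop(&qs[(self + d) % NCORES]);
    return id;
}
```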
- the cores are aware of their NUMA interconnect distances, such that the job-stealing scheduling guarantees load balancing both among cores within a socket and across sockets, with the preference that local cores access local data.
- the throughput and latency of the task processing are standard measures of efficiency that also determine scalability.
- function pointers to critical sections (i.e. sections that require access to the shared resource)
- the delegating threads can then "detach", i.e. continue execution after a successful enqueue, or wait (through a future mechanism) until the helper thread is done executing the associated critical section.
- a data structure (i.e. a delegation queue)
- contexts (i.e. the LWP context information 202)
- the fiber entering the critical section and discovering that the resource is taken switches and stores its context (i.e. the LWP context information 202), puts its fiber context into the resource's delegation queue, and returns to service any other ready fiber from its ready queue.
- Fig. 4 and Fig. 5 present code that manages the delegating and the helper fibers.
- Fig. 4 shows an enqueue function.
- the code finds a unique slot in the dq array in which to place the fiber context and then context-switches to a scheduler, while placing the old context in the array.
- Fig. 5 shows the lock function that manages the QD delegating and helper thread roles in the NUMA-aware lock algorithm. If the resource is free, the thread becomes the helper thread (see Fig. 5, lines 6-12); it opens the associated QD, returns from the lock function, and then executes its own critical section code. A delegating thread, finding the lock taken, calls enqueue (Fig. 4), suspending its fiber at a unique queue slot, and context-switches to the scheduler to process the next fiber from the core's ready queue.
- Fig. 6 shows the unlock function. As shown, the helper thread repeatedly executes enqueued fiber contexts, switching to the next fiber in the queue, while sending completed fiber contexts to some ready queue on the socket.
- the helper thread executes its critical section and at its end calls unlock, see Fig. 6.
- the function queue_fiber_context_swap is a composite function that switches into the context provided as an argument while transmitting the old context to a communication queue specified as the first argument.
- the function node_other_rand_q finds a communication queue corresponding to a ready queue of a core on the same socket. Otherwise, the QD is closed and its locks are released, see Fig. 6, lines 18-23.
- the functions other_node_delegate and hdq->glock are used to support delegation to another socket in a NUMA multi-socket case. When the helper thread completes execution of the first critical section, it encounters another unlock call and continues to execute the unlock function in Fig. 6, lines 32-39. It probes whether all the remaining contexts in the QD are done and closes the queue if that is the case. Otherwise, similarly to the first enqueued context, it resumes the next context in the queue while sending the completed context to some ready queue on the socket. Note that the resumed context wakes up on the helper core under the impression it just returned from the lock function, and the completed context wakes up on some core under the impression it just returned from the unlock function.
- the QD is made hierarchical by composing a global spinlock and a local per socket fiber QD, see Fig. 5, lines 6-9 and Fig. 6, lines 20-21 and 28-29.
- first, a local QD lock is tried; if it is free, it is taken and then the global lock is tried. If the global lock is taken, the fiber de-schedules itself, placing its context in the first entry of the QD, and returns to serving its ready queue in the function enqueue_and_open, which is not shown here for brevity.
- the function enqueue_and_open opens the local delegation queue and enqueues its context, leaving it open for subsequent local fibers to enqueue.
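The two-level locking step of the hierarchical QD can be sketched like this (illustrative names; the real embodiment combines the global lock with per-socket fiber QDs as in Figs. 5 and 6). A fiber first competes on its socket-local lock; only the local winner touches the shared global lock, which keeps cross-socket cache traffic low:

```c
#include <stdatomic.h>
#include <stdbool.h>

typedef struct { atomic_bool global; } global_lock_t;   /* one per resource */
typedef struct { atomic_bool local;  } socket_lock_t;   /* one per socket   */

void hier_init(socket_lock_t *s, global_lock_t *g) {
    atomic_store(&s->local, false);
    atomic_store(&g->global, false);
}

/* Returns: 2 = caller holds both locks and becomes the helper,
 *          1 = local lock already taken: enqueue in the local QD,
 *          0 = local winner but global lock busy: open the local
 *              QD, enqueue self first, and wait to be delegated. */
int hier_lock(socket_lock_t *s, global_lock_t *g) {
    if (atomic_exchange(&s->local, true))
        return 1;
    if (atomic_exchange(&g->global, true))
        return 0;
    return 2;
}

void hier_unlock(socket_lock_t *s, global_lock_t *g) {
    atomic_store(&g->global, false);
    atomic_store(&s->local, false);
}
```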
- the lock holder chooses the next closest socket in a predefined yet starvation-avoiding way, and sends the whole of that socket's QD to that socket's ready-queue core, see Fig. 7.
- the function get_nn_order_socks returns socket nodes in the order of a Hamiltonian path of the sockets' topology interconnect.
- Each node is probed to see whether it has buffered fibers in its QD for the resource. If so, the first fiber in the QD is delivered to some ready queue on that socket, see Fig. 7, line 13. In that way, the whole QD will be executed by the corresponding core. When the core is done with the QD, it will call other_node_delegate in its turn, and the next socket on the Hamiltonian path will be probed.
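The socket-probing walk above can be sketched with hypothetical names: the fixed visit order stands in for get_nn_order_socks(), and starting the walk just past the current socket avoids starving distant sockets. On real hardware the order would follow a path through the interconnect topology:

```c
#define NSOCKETS 4

typedef struct { int pending; } socket_qd_t;   /* fibers buffered per socket */

/* Walk the sockets in a fixed order starting after `self`; return
 * the first socket with delegated fibers waiting, or -1 if none. */
int next_socket_with_work(socket_qd_t qds[NSOCKETS], int self) {
    for (int d = 1; d <= NSOCKETS; ++d) {
        int s = (self + d) % NSOCKETS;
        if (qds[s].pending > 0)
            return s;          /* delegate the lock to this socket */
    }
    return -1;                 /* no socket has buffered fibers    */
}
```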
- the present invention further extends the fiber QD to support multiple readers by accommodating a sleeping reader in a per-core sleeping-on-an-event list, and transforming the lock and unlock functions into reader/writer lock/unlock.
- the event is a memory location such that, if set, every core having that event on its list will wake up the associated fiber.
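The per-core sleeping-reader list can be sketched as follows (illustrative names; the patent does not give this listing). Each sleeping reader is parked with an event, represented as a pointer to a boolean; a core periodically scans its list, moves fibers whose event has been set back to readiness, and keeps the rest asleep:

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    int fiber_id;
    const bool *event;     /* wake when *event becomes true */
} sleeper_t;

typedef struct {
    sleeper_t list[16];
    size_t n;
} sleep_list_t;

/* Park a reader fiber on this core's sleeping list. */
void park_reader(sleep_list_t *l, int fiber, const bool *event) {
    l->list[l->n++] = (sleeper_t){ fiber, event };
}

/* Scan the list; append the ids of woken fibers to `ready`,
 * compacting the remaining sleepers in place. Returns how many
 * fibers were woken. */
size_t wake_ready(sleep_list_t *l, int *ready) {
    size_t woke = 0, keep = 0;
    for (size_t i = 0; i < l->n; ++i) {
        if (*l->list[i].event)
            ready[woke++] = l->list[i].fiber_id;   /* event set: wake */
        else
            l->list[keep++] = l->list[i];          /* still sleeping  */
    }
    l->n = keep;
    return woke;
}
```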
- a simplified Two-Phase Locking (2PL) transaction-processing database using a hierarchical multi-reader fiber QD is shown.
- the database consists of a set of rows; each row is associated with a fiber QD.
- the transactions are represented as embedded C functions.
- Each core in the database is associated with a ready queue of fibers, and a list of sleeping reader fibers, each associated with an event, represented as a pointer to a boolean.
- the ready queues are interconnected through communication channels, implemented as FF-queues, in an all-to-all topology to minimize contention.
- the cores process their ready queues and the associated lists and, if those are empty, try to steal jobs from increasingly farther neighbors.
- the transaction processing system runs the YCSB benchmark on an eight-socket, 192-core machine and achieves very good performance and excellent scalability under high contention.
- Fig. 9 shows a method 900 for operating the device 100. That is, the method 900 is for serializing access to a shared resource 101.
- the method 900 comprises a first step of operating 901 a thread 102 to execute an LWP 103.
- the method comprises a further step of probing 902 access by the LWP 103 to the shared resource 101, and, if the shared resource 101 is locked, the method comprises a step of pushing 903 the LWP 103 to a delegation queue 104.
- the present invention also provides a computer program product comprising a program code for controlling a device 100 or for performing, when running on a computer, the method 900.
- the computer program product can be embodied in any kind of computer-readable data, including e.g. any kind of storage, or information transmitted via a communication network.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/EP2018/070819 WO2020025122A1 (en) | 2018-08-01 | 2018-08-01 | Device and method for serializing access to a shared resource |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3803597A1 true EP3803597A1 (en) | 2021-04-14 |
Family
ID=63108553
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18750154.9A Pending EP3803597A1 (en) | 2018-08-01 | 2018-08-01 | Device and method for serializing access to a shared resource |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP3803597A1 (en) |
WO (1) | WO2020025122A1 (en) |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6223204B1 (en) * | 1996-12-18 | 2001-04-24 | Sun Microsystems, Inc. | User level adaptive thread blocking |
US6330612B1 (en) * | 1998-08-28 | 2001-12-11 | International Business Machines Corporation | Method and apparatus for serializing access to a shared resource in an information handling system |
DE602008001802D1 (en) * | 2008-10-13 | 2010-08-26 | Alcatel Lucent | A method of synchronizing access to a shared resource, device, storage means and software program therefor |
US9152468B2 (en) * | 2010-10-25 | 2015-10-06 | Samsung Electronics Co., Ltd. | NUMA aware system task management |
US8966491B2 (en) * | 2012-04-27 | 2015-02-24 | Oracle International Corporation | System and method for implementing NUMA-aware reader-writer locks |
- 2018-08-01 WO PCT/EP2018/070819 patent/WO2020025122A1/en unknown
- 2018-08-01 EP EP18750154.9A patent/EP3803597A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
WO2020025122A1 (en) | 2020-02-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10241831B2 (en) | Dynamic co-scheduling of hardware contexts for parallel runtime systems on shared machines | |
US7962923B2 (en) | System and method for generating a lock-free dual queue | |
Harris et al. | Callisto: Co-scheduling parallel runtime systems | |
US8640140B2 (en) | Adaptive queuing methodology for system task management | |
US7844973B1 (en) | Methods and apparatus providing non-blocking access to a resource | |
WO2012082330A1 (en) | Non-blocking wait-free data-parallel scheduler | |
JP2005284749A (en) | Parallel computer | |
US8495642B2 (en) | Mechanism for priority inheritance for read/write locks | |
US9075706B2 (en) | Electronic device with reversing stack data container and related methods | |
Nemitz et al. | Using lock servers to scale real-time locking protocols: Chasing ever-increasing core counts | |
US8578380B1 (en) | Program concurrency control using condition variables | |
JP7346649B2 (en) | Synchronous control system and method | |
Michael et al. | Relative performance of preemption-safe locking and non-blocking synchronization on multiprogrammed shared memory multiprocessors | |
Endo et al. | Parallelized software offloading of low-level communication with user-level threads | |
WO2024007207A1 (en) | Synchronization mechanism for inter process communication | |
JP7042105B2 (en) | Program execution control method and vehicle control device | |
Aggarwal et al. | Lock-free and wait-free slot scheduling algorithms | |
WO2020025122A1 (en) | Device and method for serializing access to a shared resource | |
Rheindt et al. | CaCAO: complex and compositional atomic operations for NoC-based manycore platforms | |
Deshpande et al. | Analysis of the Go runtime scheduler | |
Fukuoka et al. | An efficient inter-node communication system with lightweight-thread scheduling | |
Rahman | Process synchronization in multiprocessor and multi-core processor | |
Calciu et al. | How to implement any concurrent data structure | |
Singh | Communication Coroutines For Parallel Program Using DW26010 Many Core Processor | |
US11809219B2 (en) | System implementing multi-threaded applications |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STAA | Information on the status of an EP patent application or granted EP patent | Status: UNKNOWN |
| STAA | Information on the status of an EP patent application or granted EP patent | Status: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| PUAI | Public reference made under Article 153(3) EPC to a published international application that has entered the European phase | Original code: 0009012 |
| STAA | Information on the status of an EP patent application or granted EP patent | Status: REQUEST FOR EXAMINATION WAS MADE |
| 17P | Request for examination filed | Effective date: 20210107 |
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| AX | Request for extension of the European patent | Extension state: BA ME |
| DAV | Request for validation of the European patent (deleted) | |
| DAX | Request for extension of the European patent (deleted) | |
| STAA | Information on the status of an EP patent application or granted EP patent | Status: EXAMINATION IS IN PROGRESS |
| 17Q | First examination report despatched | Effective date: 20220727 |