WO2013097098A1 - Data processing method, graphics processing unit (GPU) and first node device - Google Patents
- Publication number
- WO2013097098A1 WO2013097098A1 PCT/CN2011/084764 CN2011084764W WO2013097098A1 WO 2013097098 A1 WO2013097098 A1 WO 2013097098A1 CN 2011084764 W CN2011084764 W CN 2011084764W WO 2013097098 A1 WO2013097098 A1 WO 2013097098A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- gpu
- communication data
- communication
- node device
- cpu
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/544—Buffers; Shared memory; Pipes
Definitions
- the present invention relates to the field of communications technologies, and in particular to a data processing method, a graphics processing unit (GPU), and a first node device.
- the data communication mechanism between node devices is the basis of distributed parallel computing.
- in distributed parallel computing, there is a certain amount of shared data or data flow between processes belonging to the same task, and these processes need to exchange that data at specific points in their execution.
- when a GPU (Graphics Processing Unit) is added to each node device, a distributed GPU system is formed.
- each process belonging to the same task is run by the GPU of a different node device, where the node device can be a commercial server; since there is certain shared data between the processes, a communication mechanism between the nodes is needed to implement the flow of the shared data.
- the CPU (Central Processing Unit) of the second node device
- the communication data is copied to the internal memory and transmitted to the GPU 1 by the CPU 1 of the first node device, so that the GPU 1 executes the processing process of the first process.
- the inventors have found that the prior art has at least the following problems:
- when the first process on the GPU 1 needs to share the intermediate running data of the second process on the GPU 2, the first process must wait until the GPU 2 has run the complete second process before it can obtain that intermediate running data; this extends the running time of the first process and thereby reduces the computing efficiency of the system.
- an embodiment of the present invention provides a data processing method, a graphics processing unit (GPU), and a first node device.
- the technical solution is as follows:
- a data processing method comprising: when a central processing unit (CPU) of a first node device starts a kernel program of a graphics processing unit (GPU) of the node device, the GPU runs the kernel program, the kernel program including at least one preset GPU communication application programming interface (API);
- when the kernel program of the GPU runs to the preset GPU communication API, the GPU acquires first communication data;
- the GPU determines whether the communication operation corresponding to the preset GPU communication API is a communication operation for sending or a communication operation for receiving; if it is a communication operation for sending, the GPU stores the first communication data in a preset buffer of the memory of the node device, so that the CPU copies the first communication data from the preset buffer into the memory of the node device; if it is a communication operation for receiving, the GPU acquires second communication data from the preset buffer, where the second communication data is copied by the CPU into the preset buffer.
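The send/receive decision described in the method above can be sketched as a host-side simulation. This is an illustrative Python model under assumed names (`PresetBuffer`, `gpu_comm_api`, `SEND`, `RECV`); the patent specifies behaviour, not an API, and a real implementation would live in GPU kernel code.

```python
import threading

class PresetBuffer:
    """Simplified stand-in for the preset buffer shared by GPU and CPU.

    All names here are invented for illustration; the patent describes
    only the send/receive behaviour sketched below.
    """
    def __init__(self):
        self.lock = threading.Lock()
        self.flag = False   # indicator signal bit
        self.data = None    # communication data slot

SEND, RECV = "send", "recv"

def gpu_comm_api(buf, op, first_data=None):
    """Behaviour at a preset GPU communication API point.

    A send stores the GPU's first communication data in the preset
    buffer for the CPU to copy out; a receive reads back second
    communication data the CPU has already copied in.
    """
    if op == SEND:
        with buf.lock:
            buf.data = first_data   # store first communication data
            buf.flag = True         # mark buffer full for the CPU
        return None
    elif op == RECV:
        with buf.lock:
            second_data = buf.data  # data copied in by the CPU
            buf.flag = False        # buffer consumed
        return second_data
```

The flag models the indicator signal bit the embodiments describe later: the GPU raises it on a send so the CPU knows data is ready.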
- a graphics processor GPU comprising:
- a running module configured to run the kernel program when a central processing unit (CPU) of the first node device starts a kernel program of the GPU of the node device, the kernel program including at least one preset GPU communication application programming interface (API);
- an obtaining module configured to acquire first communication data when the kernel program of the GPU runs to the preset GPU communication API;
- a determining processing module configured to determine whether the communication operation corresponding to the preset GPU communication API is a communication operation for sending or a communication operation for receiving; if it is a communication operation for sending, the GPU stores the first communication data in a preset buffer of the memory of the node device, so that the CPU copies the first communication data from the preset buffer into the memory of the node device; if it is a communication operation for receiving, the GPU acquires second communication data from the preset buffer, where the second communication data is copied by the CPU into the preset buffer.
- a first node device comprising a central processing unit CPU and the above graphics processor GPU;
- the technical solution provided by the embodiments of the present invention has the following beneficial effects: a preset GPU communication API is inserted at the points in the kernel program of the GPU of the first node device where intermediate running data needs to be shared; when the kernel program of the GPU runs to the preset GPU communication API, the GPU acquires the intermediate running data of the already-run part of the kernel program, that is, the first communication data; the GPU determines whether the communication operation corresponding to the GPU communication API is a communication operation for sending or a communication operation for receiving, and according to the judgment result the GPU and the CPU of the local node device perform the corresponding processing to complete the communication operation of the GPU, so that the CPU acquires the first communication data and the GPU acquires the second communication data.
- FIG. 5 is a schematic diagram of communication interaction between GPUs of different nodes according to Embodiment 3 of the present invention.
- Embodiments of the present invention provide a data processing method, a graphics processing unit (GPU), and a first node device.
- the GPU determines whether the communication operation corresponding to the GPU communication API is a communication operation for sending or a communication operation for receiving; if it is a communication operation for sending, the GPU stores the first communication data in a preset buffer of the memory of the local node device, so that the CPU copies the first communication data from the preset buffer into the memory of the local node device; if it is a communication operation for receiving, the GPU acquires second communication data from the preset buffer, where the second communication data is copied by the CPU into the preset buffer.
- a preset GPU communication API is inserted at the points in the kernel program of the GPU of the first node device where intermediate running data needs to be shared; when the kernel program of the GPU runs to the preset GPU communication API, the GPU acquires the intermediate running data of the already-run part of the kernel program, that is, the first communication data; the GPU determines whether the communication operation corresponding to the GPU communication API is a communication operation for sending or a communication operation for receiving, and according to the judgment result the GPU and the CPU of the local node device perform the corresponding processing to complete the communication operation of the GPU, so that the CPU acquires the first communication data and the GPU acquires the second communication data.
- in this embodiment, the intermediate running data (the first communication data and the second communication data) is acquired in time while the kernel program of the GPU is running, so that the second node device does not need to wait for the entire kernel program of the first node device to finish before acquiring the intermediate running data; this shortens the running time of the process on the second node device and improves the computational efficiency of the system.
- FIG. 2 is a flowchart of an embodiment of a data processing method according to Embodiment 2 of the present invention.
- the kernel program of the GPU 1 includes at least one preset GPU communication API.
- the preset GPU communication APIs divide the kernel program of the GPU 1 into a plurality of sub-kernel programs, so the kernel program includes at least two sub-kernel programs, none of which contains a communication operation; the preset GPU communication API is a communication API supported by the GPU and corresponds to different communication operations, where the communication operations include a communication operation for sending and a communication operation for receiving.
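The splitting described above can be illustrated with a small sketch: the kernel becomes an ordered list of communication-free sub-kernels with communication API points between them. The structure and names (`run_kernel`, `on_comm`) are assumptions for this example, not the patent's representation.

```python
# Hypothetical representation of a kernel split at its communication
# API points: sub-kernels contain no communication operations, and a
# comm descriptor sits between consecutive sub-kernels.
def sub_kernel_a(x):
    return x + 1          # pure compute, no communication

def sub_kernel_b(x):
    return x * 2          # pure compute, no communication

kernel_program = [
    ("compute", sub_kernel_a),
    ("comm", "send"),     # preset GPU communication API point
    ("compute", sub_kernel_b),
]

def run_kernel(program, x, on_comm):
    """Run sub-kernels in order, invoking on_comm at each API point."""
    for kind, step in program:
        if kind == "compute":
            x = step(x)
        else:
            x = on_comm(step, x)  # the comm handler may pass data through
    return x
```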
- the GPU 1 determines whether the communication operation corresponding to the preset GPU communication API is a communication operation for sending or a communication operation for receiving; if it is a communication operation for sending, S204 is executed; if it is a communication operation for receiving, S205 is executed.
- the GPU 1 stores the first communication data in a preset buffer of the memory of the local node device, so that the CPU copies the first communication data from the preset buffer into the memory of the local node device.
- when the communication operation corresponding to the preset GPU communication API is a communication operation for sending, it indicates that the GPU 1 wants to send the first communication data to the CPU 1 of the local node device; however, because the GPU acts as a slave processor, the first communication data can only be fetched from the preset buffer by the CPU 1 of the local node device.
- after the GPU 1 stores the first communication data in the preset buffer of the memory of the local node device, execution switches from the kernel program to the CPU code, and the CPU 1 runs its own program.
- when the CPU 1 runs to the CPU communication API corresponding to the communication operation for receiving, the CPU 1 copies the first communication data into the memory of the local node device.
- the preset buffer is specified by the user.
- the GPU 1 acquires second communication data from the preset buffer, where the second communication data is copied by the CPU1 into the preset buffer.
- the communication operation corresponding to the preset GPU communication API is a communication operation for reception, it indicates that the CPU 1 wants to transmit the second communication data to the GPU 1.
- the kernel program switches to the CPU code, and the CPU 1 runs its own program.
- when the CPU 1 runs to the CPU communication API corresponding to the communication operation for sending, the CPU 1 copies the second communication data from the memory of the local node device into the preset buffer.
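The CPU side of the handshake (receiving at S204, sending at S205) can be sketched as follows. `cpu_comm_api`, `preset_buffer`, and `host_memory` are invented names: plain dicts standing in for the GPU-side preset buffer and the node device's main memory.

```python
# Hedged sketch of the CPU side of the protocol described above.
def cpu_comm_api(op, preset_buffer, host_memory):
    if op == "recv":
        # CPU receive: copy first communication data out of the preset
        # buffer into the node device's memory.
        host_memory["first"] = preset_buffer.pop("data")
    elif op == "send":
        # CPU send: copy second communication data from the node
        # device's memory into the preset buffer for the GPU.
        preset_buffer["data"] = host_memory["second"]
```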
- the second communication data may be communication data of a program run by the CPU 1 itself, or it may be second communication data generated by a kernel program of the GPU 2 on the second node device; specifically, the CPU 2 of the second node device copies the second communication data from the preset buffer on the second node device into the memory of the second node device, and the CPU 2 then transmits the second communication data to the CPU 1.
- the subsequent part of the kernel program of the GPU then continues to execute, that is, the remaining sub-kernel programs of the kernel program are executed in sequence.
- when there are multiple GPU communication APIs in the kernel program of the GPU, the GPU cyclically executes the processes of S202-S205 above until the entire kernel program of the GPU ends.
- the method further includes: the CPU 1 of the first node device transmits the first communication data to the GPU 2 of the second node device via the CPU 2 of the second node device, so that the GPU 2 of the second node device shares the first communication data; similarly, the GPU 2 on the second node device can transmit its second communication data to the GPU 1 through the CPU 2 and the CPU 1 in sequence, thereby realizing two-way communication, at GPU run time, between GPUs on different node devices in the cluster. The communication mechanism between the CPUs on the different node devices may be implemented by using prior art such as a socket or a message passing interface (MPI), and is not described here.
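The prior-art CPU-to-CPU transport mentioned above can be illustrated with a socket. In this sketch, `socket.socketpair()` stands in for a real network connection between CPU 1 and CPU 2 on different node devices, and the function name is invented for illustration; MPI would play the same role.

```python
import socket

def forward_between_cpus(payload: bytes) -> bytes:
    """Forward first communication data from CPU 1 to CPU 2."""
    cpu1, cpu2 = socket.socketpair()
    try:
        cpu1.sendall(payload)            # CPU 1 sends the first communication data
        cpu1.shutdown(socket.SHUT_WR)    # signal end of transmission
        chunks = []
        while True:                      # CPU 2 receives it on behalf of GPU 2
            chunk = cpu2.recv(4096)
            if not chunk:
                break
            chunks.append(chunk)
    finally:
        cpu1.close()
        cpu2.close()
    return b"".join(chunks)
```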
- the kernel program of the GPU includes a preset GPU communication API, so that the GPU has the function of active communication.
- when the kernel program of the GPU executes to the preset GPU communication API, it indicates that the GPU wants to send or receive communication data; correspondingly, the CPU on the node device fetches the communication data from the preset buffer or copies communication data into the preset buffer, thereby indirectly implementing the communication operation of the GPU and realizing two-way communication between the CPU and the GPU on the same node device while the GPU kernel program is running.
- in this embodiment, the intermediate running data (the first communication data and the second communication data) is acquired in time while the kernel program of the GPU is running, so that the second node device does not need to wait for the entire kernel program of the first node device to finish before acquiring the intermediate running data; this shortens the running time of the process on the second node device and improves the computational efficiency of the system.
- two-way communication between the GPU and the CPU on a single node device is implemented while the kernel program of the GPU is running, and on that basis two-way communication between GPUs running on different node devices in the cluster is realized.
- the GPU 1 stores the first communication data in the first communication data buffer of the memory of the local node device and sets the state of the first indicator signal bit to the set state.
- the GPU 1 then continuously queries (i.e., polls) the state of the first indicator signal bit; as long as the state of the first indicator signal bit remains set, the GPU 1 continues polling.
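The polling handshake just described can be simulated with two threads. This is an illustrative sketch only: `threading.Event` stands in for the first indicator signal bit, and the "GPU" and "CPU" here are ordinary host threads.

```python
import threading
import time

flag_set = threading.Event()   # set state of the first indicator signal bit
slot = {}                      # first communication data buffer

def gpu_side():
    """GPU 1: store the data, raise the bit, poll until the CPU resets it."""
    slot["data"] = "first communication data"
    flag_set.set()                     # bit -> set state
    while flag_set.is_set():           # poll while the bit remains set
        time.sleep(0.001)

def cpu_side(memory):
    """CPU 1: observe the set state, copy the data out, reset the bit."""
    flag_set.wait()                    # query until the set state is seen
    memory["copy"] = slot["data"]      # copy data into node device memory
    flag_set.clear()                   # bit -> reset state

memory = {}
t = threading.Thread(target=gpu_side)
t.start()
cpu_side(memory)
t.join()
```

When the CPU clears the bit, the GPU's polling loop exits and the kernel can continue with the next sub-kernel.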
- the system adopts a task scheduling policy optimization: before distributing computing tasks, it identifies the computing tasks that need to perform a synchronization operation and marks them by setting a global identification bit when distributing them to the system.
- only when the computing tasks on all nodes that need to be synchronized are ready to run are the computing tasks scheduled to run together, thereby guaranteeing synchronization for the user's tasks.
- because GPU tasks run exclusively, the number of tasks to be synchronized cannot exceed the number of concurrent tasks allowed by the system, and the tasks to be synchronized need to be in the running state at the same time; otherwise, system performance will be damaged.
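The scheduling constraint above amounts to a gang-scheduling admission check, which can be sketched as follows. The function and parameter names are invented for this example; the patent states only the two conditions encoded here.

```python
def can_gang_schedule(sync_task_count, ready_nodes, total_nodes, max_concurrent):
    """Admit a group of synchronized tasks only if both conditions hold.

    1. The group does not exceed the number of concurrent tasks the
       system allows (GPU tasks run exclusively).
    2. Every node hosting a member of the group is ready, so all tasks
       can be in the running state at the same time.
    """
    if sync_task_count > max_concurrent:
        return False                   # would exceed the concurrency limit
    return ready_nodes == total_nodes  # schedule only when all nodes are ready
```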
- a preset GPU communication API is inserted at the points in the kernel program of the GPU of the first node device where intermediate running data needs to be shared; when the kernel program of the GPU runs to the preset GPU communication API, the GPU acquires the intermediate running data of the already-run part of the kernel program, that is, the first communication data; the GPU determines whether the communication operation corresponding to the GPU communication API is a communication operation for sending or a communication operation for receiving, and according to the judgment result the GPU and the CPU of the local node device perform the corresponding processing to complete the communication operation of the GPU, so that the CPU acquires the first communication data and the GPU acquires the second communication data.
- the obtaining module 502 is configured to acquire first communication data when the kernel program of the GPU runs to the preset GPU communication API.
- the determining processing module 503 is configured to determine whether the communication operation corresponding to the preset GPU communication API is a communication operation for sending or a communication operation for receiving; if it is a communication operation for sending, the GPU stores the first communication data in a preset buffer of the memory of the local node device, so that the CPU copies the first communication data from the preset buffer into the memory of the local node device; if it is a communication operation for receiving, the GPU acquires second communication data from the preset buffer, where the second communication data is copied by the CPU into the preset buffer.
- the obtaining module 502 includes an obtaining unit 5021, as shown in FIG. 7; FIG. 7 is a second structural diagram of a graphics processor GPU embodiment according to Embodiment 4 of the present invention.
- the obtaining unit 5021 is configured to acquire communication data of the sub-kernel program.
- the preset buffer includes indication signal bits and communication data buffers; the indication signal bits include a first indication signal bit and a second indication signal bit, and the communication data buffers include a first communication data buffer and a second communication data buffer, where the first indication signal bit and the first communication data buffer are the indication signal bit and communication data buffer with which the CPU receives from the GPU, and the second indication signal bit and the second communication data buffer are the indication signal bit and communication data buffer with which the GPU receives from the CPU.
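The buffer layout just described — one indication-bit/data-buffer pair per direction — can be modelled as a small data structure. The class and field names (`PresetBufferLayout`, `gpu_to_cpu`, `cpu_to_gpu`) are illustrative, not from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Direction:
    """One direction of the preset buffer: a bit plus a data buffer."""
    flag_set: bool = False                       # indication signal bit
    buffer: list = field(default_factory=list)   # communication data buffer

@dataclass
class PresetBufferLayout:
    """Two independent directions, as the embodiment describes."""
    gpu_to_cpu: Direction = field(default_factory=Direction)  # first bit/buffer
    cpu_to_gpu: Direction = field(default_factory=Direction)  # second bit/buffer
```

Keeping the two directions independent is what allows the CPU and GPU to exchange data both ways without one direction's handshake blocking the other.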
- the determining processing module 503 includes a storage setting unit 5031, as shown in FIG. 8; FIG. 8 is a third structural diagram of a graphics processor GPU embodiment according to Embodiment 4 of the present invention.
- the storage setting unit 5031 is configured to store the first communication data in the first communication data buffer of the memory of the local node device and set the state of the first indication signal bit to the set state, so that after the CPU queries that the state of the first indication signal bit is the set state, the CPU copies the first communication data in the first communication data buffer into the memory of the local node device.
- the determining processing module 503 includes:
- FIG. 9 is a fourth structural diagram of a graphics processor GPU embodiment according to Embodiment 4 of the present invention.
- the CPU 40 is configured to start a kernel program of the graphics processor GPU of the node device, copy the first communication data from a preset buffer into the memory of the node device, and copy the second communication data into the preset buffer.
- the CPU 40 is further configured to transmit the first communication data to a GPU of the second node device by using a CPU of the second node device, so that the GPU of the second node device shares the first communication data.
- the CPU 40 is further configured to check whether the first communication data is valid; if it is, the state of the first indication signal bit is set to the reset state; if it is not, the state of the indication signal bit is set to the reception error state.
- a preset GPU communication API is inserted at the points in the kernel program of the GPU of the first node device where intermediate running data needs to be shared; when the kernel program of the GPU runs to the preset GPU communication API, the GPU acquires the intermediate running data of the already-run part of the kernel program, that is, the first communication data; the GPU determines whether the communication operation corresponding to the GPU communication API is a communication operation for sending or a communication operation for receiving, and according to the judgment result the GPU and the CPU of the local node device perform the corresponding processing to complete the communication operation of the GPU, so that the CPU acquires the first communication data and the GPU acquires the second communication data.
- two-way communication between the GPU and the CPU on a single node device is implemented while the kernel program of the GPU is running, and on that basis two-way communication between GPUs running on different node devices in the cluster is realized.
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multi Processors (AREA)
- Computer And Data Communications (AREA)
Abstract
The present invention relates to a data processing method, a graphics processing unit (GPU) and a first node device, which belong to the technical field of communications. The data processing method comprises the following steps: when a CPU starts a kernel program of a GPU of a node device, the GPU runs the kernel program, the kernel program including at least one preset GPU communication API; when the kernel program of the GPU runs to the preset GPU communication API, the GPU acquires first communication data; the GPU then determines whether the communication operation corresponding to the preset GPU communication API is a communication operation for sending or a communication operation for receiving; if it is a communication operation for sending, the GPU stores the first communication data in a preset buffer of a video memory and enables the CPU to copy the first communication data from the preset buffer into a memory of the node device; if it is a communication operation for receiving, the GPU acquires second communication data from the preset buffer. The present invention improves the computational efficiency of the system.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2011/084764 WO2013097098A1 (fr) | 2011-12-27 | 2011-12-27 | Procédé de traitement de données, unité de processeur graphique (gpu) et dispositif de premier nœud |
CN201180003244.XA CN103282888B (zh) | 2011-12-27 | 2011-12-27 | 数据处理方法、图像处理器gpu及第一节点设备 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2011/084764 WO2013097098A1 (fr) | 2011-12-27 | 2011-12-27 | Procédé de traitement de données, unité de processeur graphique (gpu) et dispositif de premier nœud |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013097098A1 true WO2013097098A1 (fr) | 2013-07-04 |
Family
ID=48696189
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2011/084764 WO2013097098A1 (fr) | 2011-12-27 | 2011-12-27 | Procédé de traitement de données, unité de processeur graphique (gpu) et dispositif de premier nœud |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN103282888B (fr) |
WO (1) | WO2013097098A1 (fr) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110969565B (zh) * | 2018-09-28 | 2023-05-16 | 杭州海康威视数字技术股份有限公司 | 图像处理的方法和装置 |
CN113986771B (zh) * | 2021-12-29 | 2022-04-08 | 北京壁仞科技开发有限公司 | 用于调试目标程序代码的方法及装置、电子设备 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1250567A (zh) * | 1997-03-13 | 2000-04-12 | 国际商业机器公司 | 连到计算机网络上的信息站和服务器 |
CN101599009A (zh) * | 2009-04-30 | 2009-12-09 | 浪潮电子信息产业股份有限公司 | 一种异构多处理器上并行执行任务的方法 |
CN101802789A (zh) * | 2007-04-11 | 2010-08-11 | 苹果公司 | 多处理器上的并行运行时执行 |
CN102099788A (zh) * | 2008-06-06 | 2011-06-15 | 苹果公司 | 用于在多处理器上进行数据并行计算的应用编程接口 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5572572A (en) * | 1988-05-05 | 1996-11-05 | Transaction Technology, Inc. | Computer and telephone apparatus with user friendly interface and enhanced integrity features |
- 2011-12-27 WO PCT/CN2011/084764 patent/WO2013097098A1/fr active Application Filing
- 2011-12-27 CN CN201180003244.XA patent/CN103282888B/zh active Active
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103716635A (zh) * | 2013-12-12 | 2014-04-09 | 浙江宇视科技有限公司 | 一种提升智能分析性能的方法和装置 |
CN103716635B (zh) * | 2013-12-12 | 2017-04-19 | 浙江宇视科技有限公司 | 一种提升智能分析性能的方法和装置 |
TWI715613B (zh) * | 2015-09-25 | 2021-01-11 | 美商英特爾股份有限公司 | 用於實施gpu-cpu雙路徑記憶體複製之設備、系統及方法 |
CN107333136A (zh) * | 2017-06-26 | 2017-11-07 | 西安万像电子科技有限公司 | 图像编码方法和装置 |
CN111506420A (zh) * | 2020-03-27 | 2020-08-07 | 北京百度网讯科技有限公司 | 内存同步方法、装置、电子设备及存储介质 |
CN111506420B (zh) * | 2020-03-27 | 2023-09-22 | 北京百度网讯科技有限公司 | 内存同步方法、装置、电子设备及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
CN103282888A (zh) | 2013-09-04 |
CN103282888B (zh) | 2017-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI543073B (zh) | 用於多晶片系統中的工作調度的方法和系統 | |
US7490089B1 (en) | Methods and apparatus facilitating access to shared storage among multiple computers | |
US7668923B2 (en) | Master-slave adapter | |
CN103827829B (zh) | 在中间件机器环境中提供和管理用于多节点应用的消息队列的系统及方法 | |
JP6475625B2 (ja) | コア間通信装置及び方法 | |
US8032892B2 (en) | Message passing with a limited number of DMA byte counters | |
US6826123B1 (en) | Global recovery for time of day synchronization | |
US9367251B2 (en) | System and method of a shared memory hash table with notifications | |
JP6353086B2 (ja) | マルチアイテムトランザクションサポートを有するマルチデータベースログ | |
US7797588B2 (en) | Mechanism to provide software guaranteed reliability for GSM operations | |
JP2018163671A (ja) | 拡張縮小可能なログベーストランザクション管理 | |
US20050081080A1 (en) | Error recovery for data processing systems transferring message packets through communications adapters | |
TWI547870B (zh) | 用於在多節點環境中對i/o 存取排序的方法和系統 | |
WO2013097098A1 (fr) | Procédé de traitement de données, unité de processeur graphique (gpu) et dispositif de premier nœud | |
TW201543218A (zh) | 具有多節點連接的多核網路處理器互連之晶片元件與方法 | |
TWI541649B (zh) | 用於多晶片系統的晶片間互連協定之系統與方法 | |
US8086766B2 (en) | Support for non-locking parallel reception of packets belonging to a single memory reception FIFO | |
US10185681B2 (en) | Hybrid message-based scheduling technique | |
US20050080869A1 (en) | Transferring message packets from a first node to a plurality of nodes in broadcast fashion via direct memory to memory transfer | |
KR20110047753A (ko) | 교착 상태의 방지를 위한 데이터 처리 방법 및 시스템 | |
US20050080920A1 (en) | Interpartition control facility for processing commands that effectuate direct memory to memory information transfer | |
US20090199191A1 (en) | Notification to Task of Completion of GSM Operations by Initiator Node | |
US9830263B1 (en) | Cache consistency | |
US20090199200A1 (en) | Mechanisms to Order Global Shared Memory Operations | |
EP2676203B1 (fr) | Protocole de diffusion pour réseau d'antémémoires |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11879132 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 11879132 Country of ref document: EP Kind code of ref document: A1 |