WO2021196160A1 - Data storage management device and processing core - Google Patents

Data storage management device and processing core

Info

Publication number
WO2021196160A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
ram
random access memory
processing unit
Prior art date
Application number
PCT/CN2020/083208
Other languages
English (en)
Chinese (zh)
Inventor
罗飞
王维伟
Original Assignee
北京希姆计算科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京希姆计算科技有限公司
Priority to CN202080096316.9A (published as CN115380292A)
Priority to PCT/CN2020/083208 (published as WO2021196160A1)
Publication of WO2021196160A1

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology

Definitions

  • The invention relates to the technical field of processing cores, and in particular to a data storage management device and a processing core.
  • The chip is the cornerstone of data processing, and it fundamentally determines people's ability to process data. From the perspective of application areas, there are two main routes for chips: one is the general-purpose chip route, such as the CPU, which provides great flexibility but relatively low effective computing power when processing algorithms in a specific field; the other is the dedicated chip route, such as the TPU, which can deliver high effective computing power in certain specific fields but handles flexible, general-purpose workloads poorly or not at all.
  • Due to the wide variety and huge amount of data in the intelligent era, chips are required to have extremely high flexibility, so as to handle algorithms from different fields that change rapidly, and extremely strong processing capability, so as to quickly process extremely large and rapidly growing volumes of data.
  • The invention provides a data storage management device and a processing core, which can eliminate the loss of computing efficiency caused by cache misses and improve the controllability of program efficiency.
  • The first aspect of the present invention provides a data storage management device, including: at least two random access memories (RAM); a control unit for receiving instructions and for generating and sending control signals according to the instructions; and a direct memory access controller (DMAC) for accessing the data in the RAMs according to the control signals.
  • The data storage management device receives and responds to instructions sent by an external processing unit, and reads data from an external storage unit, so that the external processing unit can read the data required for executing a program directly from the data storage management device while executing the program.
  • The external processing unit therefore does not need to fetch the data from the external storage unit through a cache, which eliminates the loss of computing efficiency caused by cache misses and improves the controllability of program efficiency.
  • The direct memory access controller DMAC being configured to access data in the RAM according to the control signal includes: the DMAC being configured to read data from an external storage unit according to the control signal and store the data in the RAM indicated by the control signal; or the DMAC being configured to read data from the RAM indicated by the control signal and store the data in the external storage unit.
  • The number of RAMs indicated by the control signal may be one or more.
  • all the addresses of the random access memory RAM and the addresses of the external storage unit are uniformly addressed.
  • the addresses of all the random access memory RAMs are uniformly addressed.
  • The access address range of the direct memory access controller DMAC comprises the address segments of the RAMs and the address segment of the external storage unit.
  • The second aspect of the present invention provides a processing core including a processing unit, a storage unit, and the data storage management device provided in the first aspect; the processing unit is configured to send instructions, the instructions being used to instruct the data storage management device to access the data in the storage unit; the processing unit is also used to read the data required for executing a program from any of the RAMs.
  • The instructions include a fetch instruction and a store instruction; the processing unit is used to send a fetch instruction, and the fetch instruction is used to instruct the data storage management device to read data from the storage unit and store the data in the RAM indicated by the fetch instruction.
  • The direct memory access controller DMAC is configured to send a storage completion signal after completing the fetch instruction; the processing unit is configured to issue a new fetch instruction according to the storage completion signal, and to read data from the RAM indicated by the fetch instruction that has just been completed.
  • the processing unit and the direct memory access controller DMAC access different random access memory RAMs.
  • The processing unit and the direct memory access controller DMAC accessing different RAMs includes: the at least two RAMs include a first RAM and a second RAM; at a first time, the processing unit reads first data from the first RAM while the DMAC writes second data retrieved from the storage unit into the second RAM; at a second time, the processing unit reads the second data from the second RAM while the DMAC writes third data retrieved from the storage unit into the first RAM.
  • The first RAM may be a first group of RAMs including a plurality of RAMs, and the second RAM may be a second group of RAMs including a plurality of RAMs.
  • At the same time, the processing unit and the direct memory access controller DMAC may each access every RAM within a single group, or the processing unit and the DMAC may access RAMs belonging to two different groups.
  • a chip including one or more processing cores provided in the second aspect.
  • a card board which includes one or more chips provided in the third aspect.
  • an electronic device including one or more cards provided in the fourth aspect.
  • The sixth aspect of the present invention provides a data storage management method: a control unit receives an instruction and generates and sends a control signal according to the instruction; the direct memory access controller DMAC accesses the data in the RAM according to the control signal.
  • An electronic device is further provided, including: a memory for storing computer-readable instructions; and one or more processors for running the computer-readable instructions, such that, when running, the processors implement any of the data storage management methods of the sixth aspect.
  • A non-transitory computer-readable storage medium is further provided, storing computer instructions for causing a computer to execute any of the data storage management methods of the aforementioned sixth aspect.
  • A computer program product is further provided, which includes computer instructions; when the computer instructions are executed by a computing device, the computing device can execute any of the data storage management methods of the sixth aspect.
  • The data storage management device receives and responds to instructions sent by an external processing unit, and reads data from an external storage unit, so that the external processing unit can read the data required for executing a program directly from the data storage management device while executing the program.
  • The external processing unit therefore does not need to fetch the data from the external storage unit through a cache, which eliminates the loss of computing efficiency caused by cache misses and improves the controllability of program efficiency.
  • Fig. 1 is a schematic diagram of reading data in a processing core in the prior art.
  • Fig. 2 is a schematic structural diagram of a data storage management device according to an embodiment of the present invention.
  • Fig. 3 is a schematic structural diagram of a processing core according to an embodiment of the present invention.
  • Fig. 4 is a schematic structural diagram of a neural network according to an embodiment of the present invention.
  • Fig. 5 is a schematic structural diagram of a processing core according to an embodiment of the present invention.
  • Fig. 6 is a sequence diagram of a neural network calculation performed by a processing core according to an embodiment of the present invention.
  • Fig. 7 is a schematic flowchart of a data storage management method according to an embodiment of the present invention.
  • In order to provide sufficient computing power, multi-core or many-core chips are often used.
  • the cores in the multi-core architecture all have a certain degree of independent processing capability, and have a relatively large internal storage space for storing their own programs, data, and weights.
  • How well the basic computing power of a single core is exploited determines the neural network computing capability of the entire chip.
  • The basic computing power actually delivered by a single core is determined by the ideal computing power of the single core's computing unit and by its storage access efficiency.
  • SRAM: Static Random Access Memory.
  • DDR SDRAM: Double Data Rate Synchronous Dynamic Random Access Memory.
  • In terms of storage access efficiency, the main concern is the access of the processing unit to the memory unit.
  • The processing unit is very fast, with a main frequency generally of several hundred MHz (megahertz) to several GHz (gigahertz), i.e., a cycle time on the picosecond-to-nanosecond level, while the access time of the memory unit is on the level of tens of nanoseconds, so there is a large speed gap between the two.
  • How to solve the speed difference between the processing unit and the memory access, and effectively utilize the computing power of the processing unit, is a difficult point in modern CPU design.
  • Figure 1 is a schematic diagram of reading data in a processing core.
  • A high-speed cache (Cache) is inserted between the processing unit (PU) and the storage unit (Memory).
  • The PU accesses the Memory in a hierarchical, indirect manner: the PU directly accesses the Cache, and accesses the Memory indirectly through the Cache.
  • Cache is a mapping of Memory, and its content is a subset of the memory content.
  • The Cache has no independent address space; the address used to access the Cache is the same as the address of the accessed Memory.
  • When the PU is executing a program, it reads some data from the Memory through the Cache, and the Cache keeps a copy of this data. When the PU needs to use this data again within a short time, it fetches it directly from the Cache.
  • To the program, the Cache is transparent and has no functional significance; the program cannot access the Cache by itself. The program assumes that the PU fetches data from the Memory, while in fact the PU fetches the data from the Cache.
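  • As a rough, non-authoritative illustration of why a Cache miss hurts, the standard average-memory-access-time formula (AMAT = hit time + miss rate x miss penalty) can be applied to the orders of magnitude mentioned above; the numbers in the sketch below are purely hypothetical and are not taken from the patent.

      #include <stdio.h>

      /* Illustrative only: AMAT = hit_time + miss_rate * miss_penalty.
       * The values echo the orders of magnitude above (ns-level PU cycles,
       * tens-of-ns Memory accesses). */
      int main(void) {
          double hit_time_ns     = 1.0;   /* Cache hit, roughly one PU cycle */
          double miss_penalty_ns = 50.0;  /* access to the external Memory   */
          double miss_rate       = 0.05;  /* 5% of accesses miss the Cache   */

          double amat = hit_time_ns + miss_rate * miss_penalty_ns;
          printf("average memory access time: %.1f ns\n", amat);  /* 3.5 ns */
          return 0;
      }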
  • Fig. 2 is a schematic structural diagram of a data storage management device according to an embodiment of the present invention.
  • the data storage management device includes: at least two random access memories RAM, a control unit, and a direct memory access controller (DMAC).
  • the data storage management device may be arranged in the processing core.
  • At least two RAMs include RAM_0, RAM_1...RAM_N.
  • the data storage management device has at least two RAMs, and each RAM can be accessed independently and in parallel.
  • the storage capacity of all RAMs can be the same or different.
  • The control unit is used for receiving instructions and for generating and sending the control signal C_DMAC according to the instructions. The instructions are sent by the processing unit PU located outside the data storage management device.
  • The DMAC is used to access the data in the RAM according to the control signal.
  • The DMAC accessing data in the RAM according to the control signal includes: the DMAC reading data from the external storage unit (Memory) according to the control signal and storing the data in the RAM indicated by the control signal; or the DMAC reading data from the RAM indicated by the control signal according to the control signal and storing the data in the external Memory.
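  • As a purely illustrative sketch (not part of the disclosed device), the information carried by such a control signal can be modelled as a small transfer descriptor; the type and field names below are hypothetical and simply restate the two transfer directions described above.

      #include <stdint.h>
      #include <stddef.h>

      /* Hypothetical transfer directions of the DMAC, as described above. */
      typedef enum {
          MEM_TO_RAM,  /* read from the external Memory, store into a RAM */
          RAM_TO_MEM   /* read from a RAM, store into the external Memory */
      } dma_dir_t;

      /* Hypothetical control-signal descriptor handed from the control unit
       * to the DMAC; all field names are illustrative only. */
      typedef struct {
          dma_dir_t dir;       /* transfer direction                         */
          uint32_t  mem_addr;  /* address in the external Memory             */
          uint32_t  ram_addr;  /* address in the RAM indicated by the signal */
          size_t    length;    /* number of bytes to transfer                */
      } dma_ctrl_t;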
  • the number of RAM indicated by the control signal is one or more.
  • Preferably, the access addresses of all RAMs are uniformly addressed. More preferably, the access addresses of the RAMs are arranged contiguously, to reduce the complexity of program control.
  • the addresses of all RAMs are uniformly addressed with the addresses of the external storage unit.
  • For example, the data storage management device contains two RAMs, RAM_0 and RAM_1: the address range of RAM_0 is 0000H-0FFFH, the address range of RAM_1 is 1000H-1FFFH, and the address range of the external Memory is 2000H-FFFFH.
  • The access address range of the DMAC is the full address range, i.e., all RAM address segments together with the external Memory address segment.
  • The access address range of the PU is limited to the address segments of the RAMs.
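  • The unified address map of this example can be made concrete with the small, non-normative C sketch below; the constants mirror the example ranges given above, and the decode() helper is a hypothetical illustration of how a unified address selects RAM_0, RAM_1, or the external Memory.

      #include <stdint.h>

      /* Example unified address map from the embodiment above. */
      enum {
          RAM0_BASE = 0x0000, RAM0_END = 0x0FFF,
          RAM1_BASE = 0x1000, RAM1_END = 0x1FFF,
          MEM_BASE  = 0x2000, MEM_END  = 0xFFFF
      };

      typedef enum { TARGET_RAM0, TARGET_RAM1, TARGET_MEM } target_t;

      /* Decode which storage a unified address falls into.  The DMAC may use
       * the whole range, while the PU is limited to the RAM segments. */
      static target_t decode(uint16_t addr) {
          if (addr <= RAM0_END) return TARGET_RAM0;
          if (addr <= RAM1_END) return TARGET_RAM1;
          return TARGET_MEM;
      }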
  • After the DMAC reads data from the external Memory according to the control signal and stores the data in the RAM indicated by the control signal, it sends a storage completion signal to the external processing unit. The storage completion signal prompts the external processing unit to read data from the RAM that has just completed storage.
  • Fig. 3 is a schematic structural diagram of a processing core according to an embodiment of the present invention.
  • the processing core includes a processing unit PU, a storage unit memory, and the data storage management device provided in the above-mentioned embodiment.
  • the PU is used to send instructions, and the instructions are used to instruct the data storage management device to implement access to the data in the memory.
  • the instructions include fetch instructions and store instructions.
  • the processing unit is configured to send instructions, where the instructions are used to instruct the data storage management device to implement access to the data in the memory, including:
  • the PU is used to send a fetch instruction, and the fetch instruction is used to instruct the data storage management device to read data from the memory and store the data in the RAM indicated by the fetch instruction.
  • After completing the fetch instruction, the DMAC sends a storage completion signal to the PU.
  • the PU is also used to read data needed to execute the program from any RAM.
  • the PU is used to issue a new fetch instruction every time after receiving a storage completion signal sent by the DMAC, and read data from the RAM that has just completed storage.
  • When the PU receives the storage completion signal sent by the DMAC, it issues a new fetch instruction and then reads the data from the RAM that has just completed storage. In this way, the DMAC reads data from the Memory according to the new fetch instruction and stores it in the corresponding RAM while, in parallel, the PU fetches data from the RAM that has just completed storage to execute the program, which improves operating efficiency.
  • Alternatively, the PU can first read the data from the RAM that has just completed storage, and then issue a new fetch instruction.
  • the PU and the DMAC access different RAMs.
  • the at least two RAMs include a first RAM and a second RAM.
  • At the first time, the PU reads the first data from the first RAM, and the DMAC writes the second data retrieved from the Memory into the second RAM.
  • At the second time, the PU reads the second data from the second RAM, and the DMAC writes the third data retrieved from the Memory into the first RAM.
  • The PU and the DMAC can also access the same RAM at the same time, with the RAM responding to the PU and the DMAC serially.
  • Alternatively, if a dual-port RAM is used, the PU and the DMAC can access the same RAM at the same time, with the dual-port RAM responding to the PU and the DMAC in parallel.
  • The first RAM may be a first group of RAMs including a plurality of RAMs, and the second RAM may be a second group of RAMs including a plurality of RAMs.
  • the number of RAMs in the first group of RAMs may be the same as or different from the number of RAMs in the second group.
  • At the same time, the PU and the DMAC may each access every RAM within a single group, or the PU and the DMAC may access RAMs belonging to two different groups.
  • At the first time, the PU reads the first data from the first group of RAMs, and the DMAC writes the second data retrieved from the Memory into the second group of RAMs; at the second time, the PU reads the second data from the second group of RAMs, and the DMAC writes the third data retrieved from the Memory into the first group of RAMs.
  • The processing unit or the DMAC may also access the individual RAMs within the same group in a time-shared manner.
  • In this way, the PU fetching data from the RAM and the DMAC storing data from the Memory into the RAM can proceed in parallel, which further improves the computing power of the processing core and is well suited to neural network operations.
  • In addition, there is no need to design a complex Cache circuit in the processing core, which saves the cost of the processing core and reduces the difficulty of chip design.
  • The processing unit does not need to fetch data from the external Memory through a high-speed cache, so the loss of computing efficiency caused by cache misses is eliminated; the processing core can call data directly from the RAMs of the data storage management device, which improves the controllability of program efficiency.
  • Fig. 4 is a schematic structural diagram of a neural network according to an embodiment of the present invention.
  • the neural network has two layers, the output of the first layer is used as the input of the second layer, and the output of the second layer is the output of the entire neural network.
  • Fig. 5 is a schematic structural diagram of a processing core according to an embodiment of the present invention.
  • the processing core shown in FIG. 5 is used to realize the calculation of the neural network shown in FIG. 4.
  • the processing core includes a data storage management device, a processing unit, and a storage unit.
  • The data storage management device includes RAM_0, RAM_1, a DMAC and a control unit.
  • RAM_0 and RAM_1 can each hold the parameters and data required for the computation of the next neural network layer.
  • The PU and the DMAC access different RAMs, so that the PU's execution of the program and the DMAC's storing of data can proceed in parallel, thereby optimizing calculation and storage efficiency. For example, at a first time the PU accesses RAM_0 and the DMAC accesses RAM_1; at a second time the PU accesses RAM_1 and the DMAC accesses RAM_0; and so on.
  • The PU sends the instruction lls_dis; the control unit receives the instruction, generates the control signal C_DMAC, and sends it to the DMAC.
  • The DMAC reads the data indicated by the instruction from the Memory and stores the data in the RAM indicated by the instruction, here RAM_0.
  • When the DMAC has finished storing the data, it sends a storage completion signal to the PU; the PU receives the storage completion signal, issues a new instruction instructing the DMAC to read new data from the Memory and store it in RAM_1, and then reads the data from RAM_0.
  • After the DMAC has stored the data in RAM_1, it sends a storage completion signal.
  • the PU issues an instruction again and then reads the data from RAM_1.
  • the instruction issued again instructs the DMAC to read new data from memory and store it in RAM_0.
  • In this way, the DMAC's storing of data and the PU's reading of data proceed in parallel, so that both calculation and storage can run at maximum efficiency.
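  • The sketch below only illustrates the alternating (ping-pong) use of RAM_0 and RAM_1 described above; issue_fetch(), wait_storage_complete() and compute_layer() are hypothetical stand-ins for the lls_dis instruction, the storage completion signal, and the PU's layer computation, and are not part of the disclosed device.

      #include <stdio.h>

      #define NUM_LAYERS 2   /* the example network of Fig. 4 has two layers */

      /* Hypothetical stand-ins for the mechanisms described above. */
      static void issue_fetch(int layer, int ram_id) {        /* like lls_dis */
          printf("DMAC: fetch data of layer %d from Memory into RAM_%d\n", layer, ram_id);
      }
      static void wait_storage_complete(void) {     /* storage completion signal */
          printf("PU:   storage completion signal received\n");
      }
      static void compute_layer(int layer, int ram_id) {      /* PU computation */
          printf("PU:   compute layer %d using data in RAM_%d\n", layer, ram_id);
      }

      int main(void) {
          int ram = 0;                     /* RAM currently holding layer data */
          issue_fetch(0, ram);             /* preload layer 0 data into RAM_0  */
          wait_storage_complete();

          for (int layer = 0; layer < NUM_LAYERS; ++layer) {
              if (layer + 1 < NUM_LAYERS)
                  issue_fetch(layer + 1, 1 - ram);   /* DMAC fills the other RAM */

              compute_layer(layer, ram);             /* PU computes in parallel  */

              if (layer + 1 < NUM_LAYERS) {
                  wait_storage_complete();           /* wait for the DMAC        */
                  ram = 1 - ram;                     /* swap RAM_0 and RAM_1     */
              }
          }
          return 0;
      }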
  • Fig. 6 is a sequence diagram of a neural network calculation performed by a processing core according to an embodiment of the present invention.
  • The program executed by the PU at t2 can be set to be the same as the program executed at t1, i.e., the first-layer calculation of the same neural network is performed again at t2, or the program executed at t2 can be set to be different from the program executed at t1; the present invention is not limited in this respect.
  • a chip including one or more processing cores provided in the foregoing embodiments.
  • a card board which includes one or more chips provided in the foregoing embodiments.
  • an electronic device including one or more of the card boards provided in the foregoing embodiments.
  • Fig. 7 shows a data storage management method provided by an embodiment of the present invention. The method includes steps S101 and S102.
  • Step S101: the control unit receives an instruction, and generates and sends a control signal according to the instruction.
  • Step S102: the direct memory access controller DMAC accesses the data in the RAM according to the control signal.
  • the DMAC implements the access to the data in the RAM according to the control signal, including: the DMAC reads data from the external memory according to the control signal, and stores the data in the RAM indicated by the control signal.
  • the DMAC implements the access to the data in the RAM according to the control signal, including: the DMAC reads data from the RAM indicated by the control signal according to the control signal, and stores the data in an external memory.
  • An embodiment of the present invention further provides a method for a processing core to process data.
  • The method includes steps S201 and S202.
  • Step S201: the processing unit sends a fetch instruction.
  • Step S202: the data storage management device reads data from the storage unit according to the fetch instruction, and stores the data in the RAM of the data storage management device indicated by the fetch instruction.
  • the data storage management device sends a storage completion signal after storing the data in the RAM of the data storage management device indicated by the fetch instruction.
  • When the processing unit receives the storage completion signal sent by the DMAC, it issues a new fetch instruction and reads data from the RAM indicated by the fetch instruction that has just been completed.
  • The processing unit and the direct memory access controller DMAC access different ones of the RAMs.
  • At the first time, the processing unit reads the first data from the first RAM, and the DMAC writes the second data retrieved from the Memory into the second RAM.
  • At the second time, the PU reads the second data from the second RAM, and the DMAC writes the third data retrieved from the Memory into the first RAM.
  • An embodiment of the present invention further provides an electronic device including: a memory for storing computer-readable instructions; and one or more processors for running the computer-readable instructions, such that, when running, the processors implement the data storage management method of the foregoing embodiment.
  • An embodiment of the present invention further provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the data storage management method of the foregoing embodiment.
  • a computer program product which includes computer instructions, and when the computer instructions are executed by a computing device, the computing device can execute the data storage management method of the foregoing embodiment.

Abstract

Data storage management device and processing core. The device comprises: at least two random access memories (RAM); a control unit that receives an instruction and generates and sends a control signal according to the instruction (S101); and a direct memory access controller (DMAC) that accesses data in the RAM according to the control signal (S102). The data storage management device receives and responds to an instruction sent by an external processing unit, and reads data from an external storage unit, so that the data required for executing a program can be read directly from the data storage management device while the external processing unit executes the program; the external processing unit does not need to fetch the data from the external storage unit through a cache, which avoids the drop in computing efficiency caused by cache misses and improves the controllability of program efficiency.
PCT/CN2020/083208 2020-04-03 2020-04-03 Data storage management device and processing core WO2021196160A1

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080096316.9A 2020-04-03 2020-04-03 Data storage management device and processing core
PCT/CN2020/083208 2020-04-03 2020-04-03 Data storage management device and processing core

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/083208 WO2021196160A1 (fr) 2020-04-03 2020-04-03 Appareil de gestion de stockage de données et noyau de traitement

Publications (1)

Publication Number Publication Date
WO2021196160A1 2021-10-07

Family

ID=77927304

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/083208 WO2021196160A1 (fr) 2020-04-03 2020-04-03 Appareil de gestion de stockage de données et noyau de traitement

Country Status (2)

Country Link
CN (1) CN115380292A (fr)
WO (1) WO2021196160A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1811741A (zh) * 2005-01-27 2006-08-02 富士通株式会社 Direct memory access control method, direct memory access controller, information processing system, and program
CN106776360A (zh) * 2017-02-28 2017-05-31 建荣半导体(深圳)有限公司 Chip and electronic device
CN108416422A (zh) * 2017-12-29 2018-08-17 国民技术股份有限公司 FPGA-based convolutional neural network implementation method and apparatus
US10515302B2 (en) * 2016-12-08 2019-12-24 Via Alliance Semiconductor Co., Ltd. Neural network unit with mixed data and weight size computation capability
CN110647722A (zh) * 2019-09-20 2020-01-03 北京中科寒武纪科技有限公司 Data processing method and apparatus, and related products


Also Published As

Publication number Publication date
CN115380292A (zh) 2022-11-22

Similar Documents

Publication Publication Date Title
Lee et al. Decoupled direct memory access: Isolating CPU and IO traffic by leveraging a dual-data-port DRAM
US10198204B2 (en) Self refresh state machine MOP array
US9892058B2 (en) Centrally managed unified shared virtual address space
JP4322259B2 (ja) Method and apparatus for synchronizing data access to local memory in a multiprocessor system
US9141173B2 (en) Thread consolidation in processor cores
US9965222B1 (en) Software mode register access for platform margining and debug
CN104699631A (zh) Multi-level cooperative and shared storage device and memory access method in a GPDSP
US20220076739A1 (en) Memory context restore, reduction of boot time of a system on a chip by reducing double data rate memory training
JP7126136B2 (ja) Reconfigurable cache architecture and method of cache coherency
CN108139994B (zh) Memory access method and memory controller
JP2018136922A (ja) Memory partitioning for a computing system with a memory pool
US20130191587A1 (en) Memory control device, control method, and information processing apparatus
US11914903B2 (en) Systems, methods, and devices for accelerators with virtualization and tiered memory
KR20240004361A (ko) Processing-in-memory concurrent processing system and method
WO2022068149A1 (fr) Data loading and storage system and method
Guoteng et al. Design and Implementation of a DDR3-based Memory Controller
WO2021196160A1 (fr) Data storage management device and processing core
US9720830B2 (en) Systems and methods facilitating reduced latency via stashing in system on chips
US11899970B2 (en) Storage system and method to perform workload associated with a host
EP4060505A1 (fr) Near-data acceleration techniques for a multi-core architecture
Kang et al. An architecture of sparse length sum accelerator in axdimm
CN217588059U (zh) Processor system
US20240004560A1 (en) Efficient memory power control operations
EP4160423A1 (fr) Memory device, method of operating a memory device, and electronic device including a memory device
Haridass Heterogeneous Computing The Future of Systems

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20929409

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20929409

Country of ref document: EP

Kind code of ref document: A1