WO2020103706A1 - Data processing system and data processing method - Google Patents

Data processing system and data processing method

Info

Publication number
WO2020103706A1
WO2020103706A1 (application PCT/CN2019/116653)
Authority
WO
WIPO (PCT)
Prior art keywords
processor
processors
data
data processing
memory
Prior art date
Application number
PCT/CN2019/116653
Other languages
English (en)
French (fr)
Inventor
冯杰
Original Assignee
北京灵汐科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2018-11-21
Filing date: 2019-11-08
Publication date: 2020-05-28
Application filed by 北京灵汐科技有限公司
Publication of WO2020103706A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 15/00 - Digital computers in general; Data processing equipment in general
    • G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F 15/163 - Interprocessor communication
    • G06F 15/167 - Interprocessor communication using a common memory, e.g. mailbox

Definitions

  • the invention relates to the field of artificial intelligence chips, in particular to a data processing system with high computing power and low power consumption.
  • the chip processor (the structure inside the solid frame in Figure 1) is generally composed of an arithmetic unit and a controller, and the memory is external.
  • the controller notifies the input device to transfer the input data to the memory; it then notifies the arithmetic unit to fetch the data from the memory, perform the operation, and store the operation result in the memory; finally, it notifies the output device to receive the output result.
  • the data required during the operation all depends on the external memory, so the computing power is limited by the storage bandwidth and cannot be effectively improved, which is one of the biggest bottlenecks of the traditional architecture; the arithmetic unit can perform various operations alone or in combination, but cannot implement control or other functions; the controller centrally controls all modules, which is inefficient; the parallelism of operations is low, and the number of pipeline stages of a task usually needs to be increased to improve the utilization of the computing unit, which increases the complexity of the circuit.
  • the management of on-chip memory becomes a difficulty and a bottleneck for exploiting computing power.
  • a processor includes a controller, an arithmetic unit, and a memory.
  • when the core starts a calculation, the controller notifies the input device to transfer the input data to the memory; it then notifies the arithmetic unit to fetch the data from the memory, perform the operation, and store the operation result in the memory; finally, it notifies the output device to receive the output result.
  • this kind of processor is generally used in multi-core or many-core architectures, so the function of the arithmetic unit is generally relatively simple: it cannot complete complex calculation functions and its flexibility is limited; the arithmetic unit can only complete simple calculation functions and cannot implement control or other functions; the controller centrally controls all modules, which is inefficient.
  • the present invention provides a high-computing-power, low-power-consumption chip architecture that maintains a very high degree of support for various algorithms, delivers effective computing power efficiently, and saves power under the same process conditions.
  • the present invention provides a processing system.
  • the processing system includes: a memory and a plurality of processors, each of which has an arithmetic function and a control function;
  • each processor of the plurality of processors is connected to the other processors of the plurality of processors and to the memory;
  • the first processor of the plurality of processors is used to receive data sent by a data source and to send data to a data destination.
  • the processor has both an arithmetic function and a control function, that is, the processor can perform corresponding operations according to requirements, and can also control other processors to perform corresponding operations.
  • the processors are connected to each other and can transmit both data and control information, which enables the processors to cooperate with one another and gives the system a very high degree of flexibility.
  • the several processors are all connected to the internal memory and share one internal memory, that is, the instructions and parameters required for operation can be read from the memory built into the processing system, which avoids the power consumption that traditional architectures waste in fetching data from outside, saves the time of reading data from outside, and allows the computing power of the processors to be fully exploited.
  • the several processors in the processing system have the same structure.
  • the several processors in the processing system have different structures.
  • the first processor of the plurality of processors in the processing system is a routing processor.
  • the several processors in the processing system include a neural network processor and a central processor.
  • the neural network processor in the processing system executes the operation of a neural network algorithm.
  • the neural network algorithm in the processing system includes: an artificial neural network algorithm or a neurodynamic algorithm.
  • the central processor in the processing system includes: ARM, X86 or RISCV.
  • the memory of the processing system stores operation instructions and parameters required for calculation of any one of the processors.
  • the several processors in the processing system perform the same function or perform different functions.
  • the present invention provides a many-core system, which is characterized by including an external processing system, and the above-mentioned processing system;
  • the external processing system controls the processing system to perform corresponding operations.
  • the present invention provides a data processing method for a processing system including several processors and memories.
  • the data processing method includes:
  • the first processor of the plurality of processors receives data sent by a data source, performs first data processing on the data, and sends the first data processing result obtained by the first data processing to a second processor;
  • the second processor of the plurality of processors performs second data processing on the first data processing result;
  • the first processor receives the second data processing result of the second data processing and sends the second data processing result to a data destination;
  • a third processor of the plurality of processors controls the operation of each processor, and the third processor sends instructions to the first processor and the second processor to control the operation of the first processor and the second processor;
  • at least one processor of the plurality of processors reads data stored in the memory and/or writes the data processing results of the at least one processor into the memory, and the at least one processor of the plurality of processors reads the running instructions and parameters in the memory and performs corresponding operations.
  • the data processing method includes:
  • the first processor stores the first data processing result in the memory
  • the second processor reads the first data processing result from the memory and performs the second data processing.
  • the data processing method includes:
  • the first processor directly sends the first data processing result to the second processor to perform the second data processing.
  • the second processor directly transmits the second data processing result to the first processor, and the first processor transmits the second data processing result to the data destination.
  • the second processor stores the second data processing result in the memory, and the first processor reads the second data processing result from the memory and transmits it to the data destination.
  • the present invention provides an arithmetic processing device.
  • the arithmetic processing device includes: N processors; a memory shared by the N processors;
  • a computer program instruction is stored on the memory, and when the computer program instruction is executed, the N processors execute any one of the above data processing methods.
  • the invention provides a computer readable and writable storage medium on which computer program instructions are stored.
  • the processor executes the method as described in any one of the above data processing methods.
  • the present invention provides a many-core processing device, which includes: an external processor; N processors;
  • the external memory stores computer program instructions.
  • the N processors execute the method described in any one of the above data processing methods.
  • the term "several" in the present invention means a natural number of 1 or more.
  • the invention removes the limitation of the external storage bandwidth of the traditional architecture.
  • each processor has a high degree of independence and can run in parallel to process different tasks at the same time, thereby greatly improving the effective computing power of the core; the processors in the core can be homogeneous or heterogeneous in structure, so the core has a very high degree of flexibility; when the processors in the core are heterogeneous, a dedicated processor or circuit can be used to control the input and output of data; the processors in the core can exchange data with one another directly;
  • the processors can control and cooperate with each other to complete complex processing functions.
  • FIG. 1 shows a schematic diagram of a first prior art architecture;
  • Figure 3 shows a schematic diagram of the structure of the present invention
  • FIG. 5 shows a schematic diagram of a specific embodiment of the present invention applied in a multi-core network.
  • the processing system in the embodiment of the present invention has a chip structure, for example, it can be used as a single-core processing system or a core in a multi-core system.
  • the processing system of the present invention includes: a plurality of processors 1-N, a memory; a plurality of processors 1-N and a memory are provided on the same chip, and a plurality of processors 1-N share a memory.
  • processors 1-N have basic operation and control functions, and the physical positions of the processors 1-N can be interchanged.
  • each processor in the multiple processors 1-N is connected to the other processors in the multiple processors 1-N and to the memory; the multiple processors 1-N may have the same structure or different structures;
  • the functions of processors 1-N can be the same or different; for example, a processor can serve as a data routing processor for receiving and sending data, can be configured for neural network operations and/or data processing, or can be configured as a central processor that controls other processors, executing instructions and performing data calculations.
  • the memory is used to store data, running instructions of each processor and other parameters used for calculation. These parameters and instructions have been saved when the chip is initialized.
  • Multiple processors 1-N can read the corresponding operating instructions and calculation parameters in the memory, and perform corresponding operations. Multiple processors 1-N can directly exchange data and perform corresponding operations;
  • one of the multiple processors 1-N can be used as a control processor to control the other processors; the multiple processors 1-N can also control and cooperate with one another, and can also control the memory.
  • At least one of the plurality of processors 1-N includes its own memory.
  • one of the processors 1-N may serve as a data routing processor for receiving data and/or instructions sent by a data source and sending processed data and/or instructions to a data destination.
  • this processor is mainly used for sending, receiving, and computing on data: it sends data received from external data sources to other processors or to the memory, sends processed data results received from other processors to the data destination, or reads data processed by other processors from the memory and sends it to the data destination.
  • some of the processors 1-N may be used as arithmetic processing units for performing arithmetic processing functions of data.
  • there may be one or more such processors, which execute the algorithms corresponding to the instructions and calculation parameters read from the memory, so as to realize specific functions.
  • one processor of the plurality of processors 1-N may serve as a central control processor for controlling other processors and coordinating data transmission and storage.
  • the advantage of the present invention is that when a single core performs a calculation, it does not exchange data with external memory and uses only the internal memory of the chip, so the computing power is no longer limited by the external storage bandwidth and the power consumption caused by data exchange with external memory is eliminated; there are multiple processors in the core, each with an arithmetic circuit and a control circuit and a very high degree of autonomy, so each processor can work independently in parallel without interference; the processors in the core can be homogeneous or heterogeneous and can carry out the same or different tasks; homogeneous processors can also execute different programs or instructions to realize different functions.
  • the present invention greatly improves the effective computing power of the core through the flexible configuration and independent operation of the processors in the core; through the mutual control and cooperation of the processors, complex processing functions are completed; the internal memory is large enough to meet the storage needs of the parameters or data required during calculation, and the power consumption caused by data exchange with external memory is eliminated, realizing low-power-consumption, high-computing-power processing.
  • the first processor of the processing system of the present invention may be a routing processor, used for data transmission and reception, and capable of receiving and sending data / instructions.
  • the second processor may be a neural network processor, which is used to perform the operation of the neural network algorithm.
  • the neural network algorithms that the neural network processor can support include various commonly used neural network algorithms such as artificial neural network algorithms (for example, CNN) and neural dynamics algorithms (for example, SNN).
  • the embodiment of the present invention may include multiple second processors, perform different neural network operations according to calculation needs to improve the speed of the operation, or a neural network processor may simultaneously support multiple neural network algorithms.
  • the third processor may be a general-purpose central processor, or may be other processors or circuits with basic arithmetic and control functions.
  • the central processor can be ARM, X86, RISCV or other CPU.
  • the embodiment of the present invention preferably uses RISCV.
  • the data source sends the data to the routing processor; the routing processor receives the data and performs the corresponding processing; the routing processor stores the processed data in the memory; the central processor notifies the neural network processor to perform the calculation; the neural network processor reads from the memory the data and the parameters needed for the neural network calculation, performs the neural network calculation, stores the calculation result in the memory when the calculation is complete, and notifies the routing processor that the calculation has been completed; the routing processor processes the calculation result and sends it to the data destination.
  • the present invention is applied to the field of artificial intelligence.
  • the operating efficiency of the processing system is improved.
  • the computing capability for neural network operations is improved.
  • in the prior art, multiple cores can be homogeneous or heterogeneous, but a single core in a multi-core architecture has only an arithmetic function or only a control function, and whether the multiple cores are homogeneous or heterogeneous, their number is limited, their computing power is limited, and their flexibility is limited.
  • An embodiment of the present invention is a many-core system composed of multiple processing systems of Embodiment 1 and / or Embodiment 2.
  • Each core in the many-core system adopts the processing system described in the present invention, that is, each core is composed of multiple homogeneous and / or heterogeneous processors and a memory.
  • Each processor has computing and control functions, that is, each processor can work independently in parallel, give full play to its computing power, and can cooperate with each other to flexibly complete more complex tasks.
  • a many-core system composed of multiple single cores that use the processing system of the present invention has both powerful computing power and high efficiency in delivering that computing power.
  • the many-core system is composed of multiple processing systems of Embodiment 1 or 2; each processing system serves as a core (Cn) of the many-core system, and the cores are interconnected through an NOC (Network On Chip) to form a multi-core or many-core chip with more powerful processing performance.
  • the present invention provides a processing system.
  • the processing system includes:
  • a first processor, a second processor, a third processor, and a memory.
  • the processing system uses the following data processing methods:
  • the first processor receives data sent by the data source, performs first data processing on the data, and stores the first data processing result obtained by the first data processing in the memory; it reads from the memory the second data processing result obtained by the second data processing and sends the second data processing result to the data destination; the second processor reads the first data processing result obtained by the first data processing from the memory, performs the second data processing, and stores the second data processing result in the memory; the third processor sends instructions to the first processor and the second processor to control the operation of the first processor and the second processor.
  • the processing system adopts the following data processing method: the first processor receives data sent by a data source and performs first data processing on the data; it reads from the second processor the second data processing result obtained by the second data processing and sends the second data processing result to the data destination; the second processor reads the first data processing result from the first processor and performs the second data processing; the third processor sends instructions to the first processor and the second processor to control the operation of the first processor and the second processor.
  • the above two data processing methods can be used in combination, and processors can exchange data with one another through the memory or exchange data directly without going through the memory.
  • a third processor may be used as a control processor to send control instructions to the first processor and the second processor to control the first processor and the second processor to perform data processing; alternatively, control instructions may be sent directly between the processors to realize mutual control and cooperation and to complete data operations more efficiently.
  • the memory stores the running instructions of the first processor, the second processor, and the third processor, as well as the parameters that the first processor, the second processor, and the third processor need for calculation.
  • the first processor in the processing system of the present invention is a routing processor, which is mainly used for organizing data and for input and output.
  • the data source can send data to the routing processor without waiting for a notification signal from a controller, so no complex memory management and control is required; this processor processes the data accordingly and stores it in the memory.
  • the third processor is a central processor, which controls the first processor to read data and controls the second processor to start the corresponding operations; the central processor can be selected from ARM, X86, or RISCV.
  • the second processor in the processing system of the present invention is an arithmetic processor.
  • the arithmetic processor reads instructions, together with the data and parameters required for the operation, from the local memory and performs the specific operation; after the processor finishes the calculation, it stores the calculation result in the memory and notifies the routing processor, or transmits the data directly to the routing processor; the routing processor sends the result to the data destination.
  • the arithmetic processor in the processing system of the present invention may be a neural network processor for executing neural network algorithms, such as artificial neural network algorithms or neural dynamics algorithms.
  • embodiments of the present application may also be computer program products, which include computer program instructions that, when executed by a processor, cause the processor to perform the steps of the data processing method described in the "exemplary method" section of this specification.
  • the computer program product may be written with program code for performing the operations of the embodiments of the present application in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may be executed entirely on the user's computing device, partly on the user's device, as an independent software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
  • an embodiment of the present application may also be a computer-readable storage medium having computer program instructions stored thereon which, when executed by a processor, cause the processor to perform the steps of the data processing method for the recommendation system according to various embodiments of the present application described in the "exemplary method" part of the specification.
  • the computer-readable storage medium may employ any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • the readable storage medium may include, but is not limited to, electrical, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any combination of the above. More specific examples of readable storage media (a non-exhaustive list) include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • each component or each step can be decomposed and / or recombined.
  • decompositions and / or recombinations shall be regarded as equivalent solutions of this application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)
  • Advance Control (AREA)

Abstract

A high-computing-power, low-power-consumption processing method, system, and device. The processing system includes a memory and several processors, each of which has both an arithmetic function and a control function; each processor is connected to every other processor and to the memory; the processors may be homogeneous or heterogeneous; the processors include a routing processor for receiving data sent by a data source and sending data to a data destination, a neural network processor for executing neural network algorithms, and a central processor for controlling the neural network processor to perform operations. Through flexible configuration of the processors, the architecture maintains a very high degree of support for various algorithms while efficiently delivering effective computing power, providing a high-computing-power, low-power-consumption chip architecture that saves power under the same process conditions.

Description

Data processing system and data processing method
Technical Field
The present invention relates to the field of artificial intelligence chips, and in particular to a data processing system with high computing power and low power consumption.
Background Art
In the present era, artificial intelligence technology is developing rapidly, affecting people's production and life in every respect and driving the development and progress of the world. Data, algorithms, and computing power are regarded as the three key elements driving the rapid development of artificial intelligence, and computing power is the core driving force for processing data and running algorithms. As is well known, computing power is provided by chips, so improving the effective computing power of chips for all kinds of artificial intelligence workloads at a reasonable cost, while saving power, has become a common goal of experts in the field of artificial intelligence chips.
At present, a chip processor (the structure inside the solid frame shown in Fig. 1) generally consists mainly of an arithmetic unit and a controller, with the memory placed externally. When the processor starts a computation, the controller notifies the input device to transfer the input data to the memory; it then notifies the arithmetic unit to fetch the data from the memory, perform the operation, and store the operation result in the memory; finally, it notifies the output device to receive the output result. The disadvantages of this arrangement are as follows: all data needed during the operation depends on the external memory, so the computing power is limited by the storage bandwidth and cannot be effectively improved, which is one of the biggest bottlenecks of the traditional architecture; the arithmetic unit can perform various operations alone or in combination, but cannot implement control or other functions; the controller centrally controls all modules, which is inefficient; and the parallelism of operations is low, so the number of pipeline stages of a task usually has to be increased to improve the utilization of the computing unit, which increases circuit complexity, and the management of on-chip memory becomes a difficulty and a bottleneck for exploiting computing power.
In addition, the prior art also uses a built-in memory: a processor (the structure inside the solid frame shown in Fig. 2) contains a controller, an arithmetic unit, and a memory. When the core starts a computation, the controller notifies the input device to transfer the input data to the memory; it then notifies the arithmetic unit to fetch the data from the memory, perform the operation, and store the operation result in the memory; finally, it notifies the output device to receive the output result. This kind of processor is generally used in multi-core or many-core architectures, so the function of the arithmetic unit is usually relatively simple: it cannot complete complex computing functions and its flexibility is limited; the arithmetic unit can only complete simple computing functions and cannot implement control or other functions; and the controller centrally controls all modules, which is inefficient.
The present invention provides a high-computing-power, low-power-consumption chip architecture that maintains a very high degree of support for various algorithms, delivers effective computing power efficiently, and saves power under the same process conditions.
Summary of the Invention
The present invention provides a processing system. The processing system includes a memory and several processors, each of which has both an arithmetic function and a control function; each of the several processors is connected to the other processors of the several processors and to the memory; a first processor of the several processors is used to receive data sent by a data source and to send data to a data destination.
In the present invention, a processor has both an arithmetic function and a control function, that is, the processor can perform corresponding operations as required and can also control other processors to perform corresponding operations. The processors are connected to one another and can transmit both data and control information, so that the processors cooperate with one another and the system has a very high degree of flexibility. The several processors are all connected to the internal memory and share a single internal memory, that is, the instructions and parameters required for operation can be read from the memory built into the processing system, which avoids the power consumption that traditional architectures waste in fetching data from outside, saves the time of reading data from outside, and allows the computing power of the processors to be fully exploited.
According to one embodiment of the present invention, the several processors in the processing system have the same structure.
According to one embodiment of the present invention, the several processors in the processing system have different structures.
According to one embodiment of the present invention, the first processor of the several processors in the processing system is a routing processor.
According to one embodiment of the present invention, the several processors in the processing system include a neural network processor and a central processor.
According to one embodiment of the present invention, the neural network processor in the processing system executes the operations of a neural network algorithm.
According to one embodiment of the present invention, the neural network algorithm in the processing system includes an artificial neural network algorithm or a neurodynamic algorithm.
According to one embodiment of the present invention, the central processor in the processing system includes ARM, X86, or RISCV.
According to one embodiment of the present invention, the memory in the processing system stores the running instructions of any one of the several processors and the parameters it needs for computation.
According to one embodiment of the present invention, the several processors in the processing system perform the same function or perform different functions.
The present invention provides a many-core system, characterized by including an external processing system and the processing system described above;
the external processing system controls the processing system to perform corresponding operations.
The present invention provides a data processing method for use in a processing system including several processors and a memory, the data processing method including:
a first processor of the several processors receives data sent by a data source, performs first data processing on the data, and sends the first data processing result obtained by the first data processing to a second processor;
the second processor of the several processors performs second data processing on the first data processing result;
the first processor receives the second data processing result of the second data processing and sends the second data processing result to a data destination;
a third processor of the several processors controls the operation of the processors, and the third processor sends instructions to the first processor and the second processor to control the operation of the first processor and the second processor;
at least one processor of the several processors reads data stored in the memory and/or writes the data processing result of the at least one processor into the memory, and the at least one processor of the several processors reads the running instructions and parameters in the memory and performs the corresponding operations.
According to one embodiment of the present invention, the data processing method includes:
the first processor stores the first data processing result in the memory;
the second processor reads the first data processing result from the memory and performs the second data processing.
According to one embodiment of the present invention, the data processing method includes:
the first processor sends the first data processing result directly to the second processor for the second data processing.
According to one embodiment of the present invention, in the data processing method, the second processor transmits the second data processing result directly to the first processor, and the first processor transmits the second data processing result to the data destination.
According to one embodiment of the present invention, in the data processing method, the second processor stores the second data processing result in the memory, and the first processor reads the second data processing result from the memory and transmits it to the data destination.
The present invention provides an arithmetic processing device, the arithmetic processing device including: N processors; and a memory shared by the N processors;
computer program instructions are stored on the memory, and when the computer program instructions are executed, the N processors perform any one of the above data processing methods.
The present invention provides a computer readable and writable storage medium on which computer program instructions are stored; when the computer program instructions are executed, a processor performs any one of the above data processing methods.
The present invention provides a many-core processing device, the many-core processing device including: an external processor; and N processors;
the external memory stores computer program instructions, and when the computer program instructions are executed, the N processors perform any one of the above data processing methods.
In the present invention, "several" means a natural number of 1 or more.
The present invention removes the external storage bandwidth limitation of the traditional architecture. There are multiple processors in the core, and each processor has both an arithmetic function and a control function. Each processor has a high degree of independence and can run in parallel, processing different tasks at the same time, thereby greatly improving the effective computing power of the core. The processors in the core may be homogeneous or heterogeneous in structure, so the core has a very high degree of flexibility; when the processors in the core are heterogeneous, a dedicated processor or circuit can be used to control the input and output of data; the processors in the core can exchange data with one another directly; and the processors in the core can control and cooperate with one another to complete complex processing functions.
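As a rough illustration of the architecture summarized above (several processors, each combining an arithmetic function and a control function, sharing one on-chip memory), the following minimal Python sketch models the structure; all names (SharedMemory, Processor, compute, control) are illustrative assumptions and not part of the patent.

```python
# Minimal sketch of a core: several processors with both compute and
# control functions sharing a single on-chip memory (illustrative only).

class SharedMemory:
    """On-chip memory shared by all processors of the core."""
    def __init__(self):
        self.store = {}          # holds data, running instructions and parameters

    def read(self, key):
        return self.store[key]

    def write(self, key, value):
        self.store[key] = value

class Processor:
    """A processor with both an arithmetic function and a control function."""
    def __init__(self, name, memory):
        self.name = name
        self.memory = memory     # every processor is connected to the shared memory
        self.peers = {}          # every processor is connected to every other processor

    def connect(self, other):
        self.peers[other.name] = other
        other.peers[self.name] = self

    def compute(self, key_in, key_out, op):
        """Arithmetic function: read operands from shared memory, write the result back."""
        self.memory.write(key_out, op(self.memory.read(key_in)))

    def control(self, peer_name, key_in, key_out, op):
        """Control function: instruct another processor to perform an operation."""
        self.peers[peer_name].compute(key_in, key_out, op)

# Build a core with three interconnected processors and one shared memory.
mem = SharedMemory()
p1, p2, p3 = (Processor(f"P{i}", mem) for i in (1, 2, 3))
p1.connect(p2); p1.connect(p3); p2.connect(p3)

mem.write("input", [1, 2, 3])
p3.control("P2", "input", "result", lambda xs: [2 * x for x in xs])  # P3 controls P2
print(mem.read("result"))        # [2, 4, 6]
```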
Brief Description of the Drawings
Fig. 1 is a schematic diagram of a first prior art architecture;
Fig. 2 is a schematic diagram of a second prior art architecture;
Fig. 3 is a schematic structural diagram of the present invention;
Fig. 4 is a schematic structural diagram of a specific embodiment of the present invention;
Fig. 5 is a schematic diagram of a specific embodiment of the present invention applied in a multi-core network.
Detailed Description of Embodiments
The present application is described below on the basis of embodiments, but the present application is not limited to these embodiments. In the following detailed description of the present application, some specific details are described exhaustively; the present application can, however, be fully understood by those skilled in the art without these details. In order to avoid obscuring the essence of the present application, well-known methods, procedures, flows, elements, and circuits are not described in detail.
In addition, those of ordinary skill in the art should understand that the drawings provided here are for the purpose of illustration and are not necessarily drawn to scale.
Unless the context clearly requires otherwise, words such as "comprise" and "include" throughout the specification and the claims should be interpreted in an inclusive sense rather than an exclusive or exhaustive sense, that is, in the sense of "including but not limited to".
In the description of the present application, it should be understood that terms such as "first" and "second" are used only for the purpose of description, denote any one of a plurality, and cannot be understood as indicating or implying relative importance or order. In addition, in the description of the present application, "a plurality of" means two or more unless otherwise stated.
The implementation of the present invention is described in detail below with embodiments and drawings, so that the process by which the present invention applies technical means to solve technical problems and achieve technical effects can be fully understood and implemented accordingly.
Embodiment 1
The processing system of this embodiment of the present invention is a chip structure; for example, it can be used as a single-core processing system or as one core in a many-core system.
As shown in Fig. 3, the processing system of the present invention includes multiple processors 1-N and a memory; the multiple processors 1-N and the memory are provided on the same chip, and the multiple processors 1-N share one memory.
The multiple processors 1-N all have basic arithmetic and control functions, and the physical positions of processors 1-N are interchangeable. Each of the multiple processors 1-N is connected to the other processors 1-N and to the memory; the multiple processors 1-N may have the same structure or different structures; and the functions of processors 1-N may be the same or different. For example, a processor may serve as a data routing processor for receiving and sending data, may be configured for neural network operations and/or data processing, or may be configured as a central processor for controlling other processors, executing instructions, and performing data computations.
The memory is used to store data, the running instructions of each processor, and other parameters used for computation; these parameters and instructions have already been saved when the chip is initialized.
The multiple processors 1-N can each read the corresponding running instructions and computation parameters from the memory and perform the corresponding operations. The multiple processors 1-N can also exchange data with one another directly and perform the corresponding operations;
one of the multiple processors 1-N can serve as a control processor to control the other processors; the multiple processors 1-N can also control and cooperate with one another, and can also control the memory.
At least one of the multiple processors 1-N includes its own memory.
According to one embodiment of the present invention, one of the multiple processors 1-N may serve as a data routing processor for receiving data and/or instructions sent by a data source and sending processed data and/or instructions to a data destination. This processor is mainly used for sending, receiving, and computing on data: it sends data received from an external data source to other processors or to the memory, sends processed data results received from other processors to the data destination, or reads data results processed by other processors from the memory and sends them to the data destination.
According to one embodiment of the present invention, some of the multiple processors 1-N may serve as arithmetic processing units for performing arithmetic processing functions on data. There may be one or more such processors, which execute the algorithms corresponding to the instructions and computation parameters read from the memory, so as to implement specific functions.
According to one embodiment of the present invention, one of the multiple processors 1-N may serve as a central control processor for controlling the other processors and coordinating the sending, receiving, and storage of data. The advantage of the present invention is that when a single core performs computation, it does not exchange data with an external memory and uses only the chip's internal memory, so the computing power is no longer limited by the external storage bandwidth, and the power consumption caused by data exchange with an external memory is eliminated; there are multiple processors in the core, each with an arithmetic circuit and a control circuit and a very high degree of autonomy, so the processors can work in parallel, independently and without interfering with one another; the processors in the core can be homogeneous or heterogeneous and can carry out the same or different tasks; and homogeneous processors can also execute different programs or instructions to implement different functions.
Through the flexible configuration and independent operation of the processors in the core, the present invention greatly improves the effective computing power of the core; through the mutual control and cooperation of the processors, complex processing functions are completed; and a sufficiently large internal memory is included to meet the storage needs of the parameters or data required during computation while eliminating the power consumption caused by data exchange with an external memory, realizing low-power-consumption, high-computing-power processing.
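As an informal illustration of Embodiment 1, the sketch below shows processors loading their own running instructions and computation parameters from the shared memory (as saved at chip initialization) and then working in parallel independently; the memory layout, key names, and operations are assumptions made for this example only.

```python
# Illustrative sketch: each processor reads its own instructions/parameters
# from the shared memory at start-up and then works independently in parallel.
import threading

shared_memory = {
    "P1/instr": "scale", "P1/param": 2, "P1/data": [1, 2, 3],
    "P2/instr": "offset", "P2/param": 10, "P2/data": [4, 5, 6],
}

def run_processor(pid):
    instr = shared_memory[f"{pid}/instr"]      # running instructions saved at chip init
    param = shared_memory[f"{pid}/param"]      # calculation parameters saved at chip init
    data = shared_memory[f"{pid}/data"]
    if instr == "scale":
        result = [param * x for x in data]
    elif instr == "offset":
        result = [param + x for x in data]
    shared_memory[f"{pid}/result"] = result    # results stay in the on-chip memory

threads = [threading.Thread(target=run_processor, args=(pid,)) for pid in ("P1", "P2")]
for t in threads: t.start()
for t in threads: t.join()
print(shared_memory["P1/result"], shared_memory["P2/result"])   # [2, 4, 6] [14, 15, 16]
```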
Embodiment 2
As shown in Fig. 4, the first processor of the processing system of the present invention may be a routing processor, used for sending and receiving data and capable of receiving and sending data/instructions. The second processor may be a neural network processor for executing the operations of a neural network algorithm; the neural network algorithms that the neural network processor can support include various commonly used neural network algorithms such as artificial neural network algorithms (for example, CNN) and neurodynamic algorithms (for example, SNN). An embodiment of the present invention may include multiple second processors that perform different neural network operations according to computation needs in order to increase the operation speed, or one neural network processor may support multiple neural network algorithms at the same time.
The third processor may be a general-purpose central processor, or another processor or circuit with basic arithmetic and control functions. The central processor may be an ARM, X86, RISCV, or other CPU. This embodiment of the present invention preferably uses RISCV.
The data source sends data to the routing processor; the routing processor receives the data and performs the corresponding processing; the routing processor stores the processed data in the memory; the central processor notifies the neural network processor to perform the computation; the neural network processor reads from the memory the data and the parameters needed for the neural network computation, performs the neural network computation, stores the computation result in the memory when the computation is finished, and notifies the routing processor that the computation has been completed; and the routing processor processes the computation result and sends it to the data destination.
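The flow just described can be mimicked in a few lines of Python; events stand in for the notification signals, a dictionary stands in for the on-chip memory, and the "neural network computation" is reduced to a toy weighted sum, so this is only a sketch of the sequencing, not the patent's implementation.

```python
# Simplified simulation of the Embodiment 2 data flow (illustrative only).
import threading

memory = {"nn_params": [0.5, 0.25]}       # parameters saved when the chip is initialised
data_ready = threading.Event()            # routing processor -> central processor
start_nn = threading.Event()              # central processor -> neural network processor
nn_done = threading.Event()               # neural network processor -> routing processor
destination = []                          # stands in for the data destination

def routing_processor(input_data):
    memory["input"] = [float(x) for x in input_data]   # receive data, store it in memory
    data_ready.set()                                   # tell the central processor
    nn_done.wait()                                     # wait for "computation completed"
    destination.append(memory["nn_result"])            # forward the result

def central_processor():
    data_ready.wait()
    start_nn.set()                                     # notify the NN processor to compute

def neural_network_processor():
    start_nn.wait()
    data, params = memory["input"], memory["nn_params"]
    memory["nn_result"] = sum(d * p for d, p in zip(data, params))  # toy "NN" operation
    nn_done.set()                                      # notify the routing processor

threads = [
    threading.Thread(target=routing_processor, args=([2, 4],)),
    threading.Thread(target=neural_network_processor),
    threading.Thread(target=central_processor),
]
for t in threads: t.start()
for t in threads: t.join()
print(destination)     # [2.0]  (2*0.5 + 4*0.25)
```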
In this embodiment, the present invention is applied to the field of artificial intelligence. The flexible combination and mutual cooperation of multiple heterogeneous processors improves the operating efficiency of the processing system; the use of multiple arithmetic processors improves the computing capability when neural network operations are performed; and because the processors exchange data with the on-chip memory, the operating power consumption is reduced, realizing high-computing-power, low-power-consumption processing.
Embodiment 3
In the commonly used multi-core processing architectures of the prior art, the multiple cores may be homogeneous or heterogeneous, but a single core in such a multi-core architecture has only an arithmetic function or only a control function, and regardless of whether the multiple cores are homogeneous or heterogeneous, their number is limited, their computing power is limited, and their flexibility is limited.
This embodiment of the present invention is a many-core system composed of multiple processing systems of Embodiment 1 and/or Embodiment 2. Each core in the many-core system adopts the processing system described in the present invention, that is, each core is composed of multiple homogeneous and/or heterogeneous processors together with a memory. Each processor has both arithmetic and control functions, that is, each processor can work independently in parallel, fully exploiting its computing power, and the processors can also cooperate with one another to complete relatively complex tasks flexibly. A many-core system composed of multiple single cores that adopt the processing system of the present invention has both powerful computing power and high efficiency in delivering that computing power.
As shown in Fig. 5, the many-core system is composed of multiple processing systems of Embodiment 1 or 2; each processing system serves as one core (Cn) of the many-core system, and the cores are interconnected through an NOC (Network on Chip) to form a multi-core or many-core chip, so as to achieve more powerful processing performance.
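A toy model of such a many-core arrangement is sketched below, with each core addressed by an identifier and a simple router standing in for the NOC; the packet format, class names, and the trivial "processing" are invented for illustration and do not reflect an actual on-chip network implementation.

```python
# Illustrative sketch: cores of a many-core chip exchanging packets over a NOC.

class Core:
    def __init__(self, core_id, noc):
        self.core_id, self.noc, self.inbox = core_id, noc, []

    def send(self, dst_id, payload):
        self.noc.route({"src": self.core_id, "dst": dst_id, "payload": payload})

    def process(self):
        # each core is itself a full processing system (Embodiment 1 or 2)
        return [p["payload"] * 2 for p in self.inbox]

class Noc:
    """Network on chip: delivers packets to the destination core."""
    def __init__(self):
        self.cores = {}

    def add(self, core):
        self.cores[core.core_id] = core

    def route(self, packet):
        self.cores[packet["dst"]].inbox.append(packet)

noc = Noc()
cores = [Core(f"C{i}", noc) for i in range(4)]
for c in cores:
    noc.add(c)

cores[0].send("C3", 21)          # core C0 sends data to core C3 over the NOC
print(cores[3].process())        # [42]
```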
Exemplary Methods
The present invention provides a processing system. The processing system includes:
a first processor, a second processor, a third processor, and a memory.
The processing system uses the following data processing methods:
Method 1
The first processor receives data sent by a data source, performs first data processing on the data, and stores the first data processing result obtained by the first data processing in the memory; it then reads from the memory the second data processing result obtained by the second data processing and sends the second data processing result to a data destination. The second processor reads the first data processing result obtained by the first data processing from the memory, performs the second data processing, and stores the second data processing result in the memory. The third processor sends instructions to the first processor and the second processor to control the operation of the first processor and the second processor.
Method 2
According to one embodiment of the present invention, the processing system uses the following data processing method: the first processor receives data sent by a data source and performs first data processing on the data; it then reads from the second processor the second data processing result obtained by the second data processing and sends the second data processing result to a data destination; the second processor reads the first data processing result from the first processor and performs the second data processing; and the third processor sends instructions to the first processor and the second processor to control the operation of the first processor and the second processor.
According to one embodiment of the present invention, the above two data processing methods can be used in combination: processors may exchange data with one another through the memory, or may exchange data directly without going through the memory.
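The difference between the two exchange styles, through the shared memory (Method 1) versus directly between processors (Method 2), can be contrasted in the following short sketch; the processing functions are placeholders chosen only so that both paths produce the same result.

```python
# Illustrative contrast of the two exchange styles described above.

memory = {}

def first_processing(data):
    return [x + 1 for x in data]          # the "first data processing"

def second_processing(data):
    return [x * 10 for x in data]         # the "second data processing"

# Method 1: the processors exchange intermediate results through the shared memory.
def method_1(data):
    memory["first_result"] = first_processing(data)                      # first processor writes
    memory["second_result"] = second_processing(memory["first_result"])  # second processor reads/writes
    return memory["second_result"]                                       # first processor reads and forwards

# Method 2: the first processor hands its result directly to the second processor.
def method_2(data):
    first_result = first_processing(data)             # passed directly, never written to memory
    return second_processing(first_result)            # second processor returns the result directly

assert method_1([1, 2, 3]) == method_2([1, 2, 3]) == [20, 30, 40]
```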
According to one embodiment of the present invention, the third processor may serve as a control processor and send control instructions to the first processor and the second processor to control the first processor and the second processor to perform data processing; alternatively, the processors may send control instructions directly to one another, controlling and cooperating with one another so as to complete data operations more efficiently.
According to one embodiment of the present invention, the memory in the processing system of the present invention stores the running instructions of the first processor, the second processor, and the third processor, as well as the parameters that the first processor, the second processor, and the third processor need for computation.
According to one embodiment of the present invention, the first processor in the processing system of the present invention is a routing processor, mainly used for organizing data and for input and output; the data source can send data to the routing processor without waiting for a notification signal from a controller, so no complex memory management and control is required; this processor processes the data accordingly and stores it in the memory.
According to one embodiment of the present invention, the third processor in the processing system of the present invention is a central processor, which controls the first processor to read data and controls the second processor to start the corresponding operations; the central processor may be selected from ARM, X86, or RISCV.
According to one embodiment of the present invention, the second processor in the processing system of the present invention is an arithmetic processor; the arithmetic processor reads instructions, together with the data and parameters required for the operation, from the local memory and performs the specific operation; after the computing processor finishes the computation, it stores the computation result in the memory and notifies the routing processor, or transmits the data directly to the routing processor; and the routing processor sends the result to the data destination.
According to one embodiment of the present invention, the arithmetic processor in the processing system of the present invention may be a neural network processor for executing neural network algorithms, such as artificial neural network algorithms or neurodynamic algorithms.
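As a toy example of the kind of operation such a neural network processor might execute, the sketch below computes a single fully connected layer whose weights and bias are read from the shared on-chip memory; the layer size, values, and key names are assumptions made for illustration only.

```python
# Toy sketch: one fully connected neural-network layer computed from parameters
# held in the shared on-chip memory (illustrative only).

memory = {
    "weights": [[1.0, 2.0], [0.5, -0.5]],   # 2x2 weight matrix saved at initialization
    "bias":    [0.5, 0.0],
    "input":   [1.0, 2.0],                  # written there by the routing processor
}

def relu(x):
    return x if x > 0.0 else 0.0

def dense_layer():
    w, b, x = memory["weights"], memory["bias"], memory["input"]
    y = [relu(sum(wi * xi for wi, xi in zip(row, x)) + bi) for row, bi in zip(w, b)]
    memory["output"] = y                    # result stays in the shared memory
    return y

print(dense_layer())    # [5.5, 0.0]  (1*1 + 2*2 + 0.5 = 5.5 ; 0.5*1 - 0.5*2 + 0 = -0.5 -> 0)
```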
Exemplary Computer Program Product and Computer-Readable Storage Medium
In addition to the above methods and devices, an embodiment of the present application may also be a computer program product, which includes computer program instructions that, when run by a processor, cause the processor to perform the steps of the data processing method for a recommendation system according to the various embodiments of the present application described in the "Exemplary Methods" section above of this specification.
The computer program product may be written with program code for performing the operations of the embodiments of the present application in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computing device, partly on the user's device, as an independent software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
In addition, an embodiment of the present application may also be a computer-readable storage medium having computer program instructions stored thereon that, when run by a processor, cause the processor to perform the steps of the data processing method for a recommendation system according to the various embodiments of the present application described in the "Exemplary Methods" section above of this specification.
The computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, but is not limited to, for example, electrical, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
The basic principles of the present application have been described above in conjunction with specific embodiments. However, it should be pointed out that the advantages, benefits, effects, and the like mentioned in the present application are merely examples and not limitations, and they cannot be regarded as necessary for every embodiment of the present application. In addition, the specific details disclosed above are provided only as examples and for ease of understanding, and are not limiting; the above details do not require that the present application be implemented using these specific details.
The block diagrams of components, apparatuses, devices, and systems involved in the present application are merely illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As those skilled in the art will recognize, these components, apparatuses, devices, and systems may be connected, arranged, and configured in any manner. Words such as "comprise", "include", and "have" are open-ended words that mean "including but not limited to" and can be used interchangeably with that phrase. The words "or" and "and" as used here refer to "and/or" and can be used interchangeably with it, unless the context clearly indicates otherwise. The word "such as" as used here refers to the phrase "such as but not limited to" and can be used interchangeably with it.
It should also be pointed out that in the apparatuses, devices, and methods of the present application, each component or each step can be decomposed and/or recombined. Such decompositions and/or recombinations shall be regarded as equivalent solutions of the present application.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other aspects without departing from the scope of the present application. Therefore, the present application is not intended to be limited to the aspects shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The above description has been given for the purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the present application to the forms disclosed herein. Although multiple exemplary aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions, and sub-combinations thereof.

Claims (19)

  1. A processing system, characterized in that:
    the processing system includes several processors and a memory, the several processors having both an arithmetic function and a control function;
    each of the several processors is connected to the other processors of the several processors and to the memory; and
    a first processor of the several processors is used to receive data sent by a data source and to send data to a data destination.
  2. The processing system according to claim 1, wherein the several processors have the same structure.
  3. The processing system according to claim 1, wherein the several processors have different structures.
  4. The processing system according to any one of claims 1-3, wherein the first processor of the several processors is a routing processor.
  5. The processing system according to any one of claims 1-3, wherein the several processors include a neural network processor and a central processor.
  6. The processing system according to claim 5, wherein the neural network processor executes the operations of a neural network algorithm.
  7. The processing system according to claim 6, wherein the neural network algorithm includes an artificial neural network algorithm or a neurodynamic algorithm.
  8. The processing system according to any one of claims 5-7, wherein the central processor includes ARM, X86, or RISCV.
  9. The processing system according to any one of claims 1-8, wherein the memory stores the running instructions of any one of the several processors and the parameters it needs for computation.
  10. The processing system according to any one of claims 1-3, wherein the several processors perform the same function or perform different functions.
  11. A many-core system, characterized by including an external processing system and the processing system according to any one of claims 1-10;
    the external processing system controls the processing system to perform corresponding operations.
  12. A data processing method for use in a processing system including several processors and a memory, characterized by including:
    a first processor of the several processors receives data sent by a data source, performs first data processing on the data, and sends the first data processing result obtained by the first data processing to a second processor;
    the second processor of the several processors performs second data processing on the first data processing result;
    the first processor receives the second data processing result of the second data processing and sends the second data processing result to a data destination;
    a third processor of the several processors controls the operation of the processors, and the third processor sends instructions to the first processor and the second processor to control the operation of the first processor and the second processor;
    at least one processor of the several processors reads data stored in the memory and/or writes the data processing result of the at least one processor into the memory, and the at least one processor of the several processors reads the running instructions and parameters in the memory and performs the corresponding operations.
  13. The data processing method according to claim 11, characterized by further including:
    the first processor stores the first data processing result in the memory; and
    the second processor reads the first data processing result from the memory and performs the second data processing.
  14. The data processing method according to claim 11, characterized by further including:
    the first processor sends the first data processing result directly to the second processor for the second data processing.
  15. The data processing method according to claim 12 or 13, characterized in that:
    the second processor transmits the second data processing result directly to the first processor, and the first processor transmits the second data processing result to the data destination.
  16. The data processing method according to claim 12 or 13, characterized in that:
    the second processor stores the second data processing result in the memory, and the first processor reads the second data processing result from the memory and transmits the second data processing result to the data destination.
  17. An arithmetic processing device, characterized by including:
    N processors;
    a memory shared by the N processors;
    wherein computer program instructions are stored on the memory, and when the computer program instructions are executed, the N processors perform the method according to any one of claims 12-16.
  18. A computer readable and writable storage medium on which computer program instructions are stored, characterized in that: when the computer program instructions are executed, a processor performs the method according to any one of claims 12-16.
  19. A many-core processing device, characterized by including: an external processor; and N processors;
    wherein the external memory stores computer program instructions, and when the computer program instructions are executed, the N processors perform the method according to any one of claims 12-16.
PCT/CN2019/116653 2018-11-21 2019-11-08 Data processing system and data processing method WO2020103706A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811391431.X 2018-11-21
CN201811391431.XA CN109542830B (zh) 2018-11-21 2018-11-21 Data processing system and data processing method

Publications (1)

Publication Number Publication Date
WO2020103706A1 true WO2020103706A1 (zh) 2020-05-28

Family

ID=65850061

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/116653 WO2020103706A1 (zh) 2018-11-21 2019-11-08 Data processing system and data processing method

Country Status (2)

Country Link
CN (1) CN109542830B (zh)
WO (1) WO2020103706A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109542830B (zh) * 2018-11-21 2022-03-01 北京灵汐科技有限公司 一种数据处理系统及数据处理方法
CN110213165B (zh) * 2019-06-05 2021-04-13 北京灵汐科技有限公司 一种异构协同系统及其通信方法
US20220222513A1 (en) * 2019-09-03 2022-07-14 Agency For Science, Technology And Research Neural network processor system and methods of operating and forming thereof
CN112766470B (zh) * 2019-10-21 2024-05-07 地平线(上海)人工智能技术有限公司 特征数据处理方法、指令序列生成方法、装置及设备
CN112835510B (zh) 2019-11-25 2022-08-26 北京灵汐科技有限公司 一种片上存储资源存储格式的控制方法及装置
CN113449874A (zh) * 2020-03-25 2021-09-28 北京灵汐科技有限公司 样本数据生成方法、系统、电子设备及计算机可读介质
CN111723907B (zh) * 2020-06-11 2023-02-24 浪潮电子信息产业股份有限公司 一种模型训练设备、方法、系统及计算机可读存储介质
CN111723913A (zh) * 2020-06-19 2020-09-29 浪潮电子信息产业股份有限公司 一种数据处理方法、装置、设备及可读存储介质
CN112069324A (zh) * 2020-08-27 2020-12-11 北京灵汐科技有限公司 一种分类标签添加方法、装置、设备及存储介质
CN112259071A (zh) * 2020-09-22 2021-01-22 北京百度网讯科技有限公司 语音处理系统、语音处理方法、电子设备和可读存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080256305A1 (en) * 2007-04-11 2008-10-16 Samsung Electronics Co., Ltd. Multipath accessible semiconductor memory device
CN102567275A (zh) * 2010-12-08 2012-07-11 中国科学院声学研究所 一种多核处理器上多个操作系统间内存访问的方法及系统
CN106933692A (zh) * 2017-03-14 2017-07-07 哈尔滨工业大学 一种基于处理器阵列的航天器星载计算机系统及故障处理方法
CN106980595A (zh) * 2014-12-05 2017-07-25 三星半导体(中国)研究开发有限公司 共享物理内存的多处理器通信系统及其通信方法
CN109542830A (zh) * 2018-11-21 2019-03-29 北京灵汐科技有限公司 一种数据处理系统及数据处理方法

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2797969A1 (fr) * 1999-08-31 2001-03-02 Koninkl Philips Electronics Nv Dispositif a plusieurs processeurs partageant une memoire collective
US7483430B1 (en) * 2003-02-28 2009-01-27 Cisco Technology, Inc. Hierarchical hash method for performing forward route lookup
KR100740635B1 (ko) * 2005-12-26 2007-07-18 엠텍비젼 주식회사 휴대형 단말기 및 휴대형 단말기에서의 공유 메모리 제어방법
US8527709B2 (en) * 2007-07-20 2013-09-03 Intel Corporation Technique for preserving cached information during a low power mode
CN101187908A (zh) * 2007-09-27 2008-05-28 上海大学 单芯片多处理器共享数据存储空间的访问方法
CN101882127B (zh) * 2010-06-02 2011-11-09 湖南大学 一种多核心处理器
CN102497411B (zh) * 2011-12-08 2014-01-15 南京大学 面向密集运算的层次化异构多核片上网络架构
CN103714039B (zh) * 2013-12-25 2017-01-11 中国人民解放军国防科学技术大学 通用计算数字信号处理器
CN107688853B (zh) * 2016-08-05 2020-01-10 中科寒武纪科技股份有限公司 一种用于执行神经网络运算的装置及方法
CN107688551A (zh) * 2016-12-23 2018-02-13 北京国睿中数科技股份有限公司 主处理器与协处理器之间的数据交互控制方法及系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080256305A1 (en) * 2007-04-11 2008-10-16 Samsung Electronics Co., Ltd. Multipath accessible semiconductor memory device
CN102567275A (zh) * 2010-12-08 2012-07-11 中国科学院声学研究所 一种多核处理器上多个操作系统间内存访问的方法及系统
CN106980595A (zh) * 2014-12-05 2017-07-25 三星半导体(中国)研究开发有限公司 共享物理内存的多处理器通信系统及其通信方法
CN106933692A (zh) * 2017-03-14 2017-07-07 哈尔滨工业大学 一种基于处理器阵列的航天器星载计算机系统及故障处理方法
CN109542830A (zh) * 2018-11-21 2019-03-29 北京灵汐科技有限公司 一种数据处理系统及数据处理方法

Also Published As

Publication number Publication date
CN109542830A (zh) 2019-03-29
CN109542830B (zh) 2022-03-01

Similar Documents

Publication Publication Date Title
WO2020103706A1 (zh) 一种数据处理系统及数据处理方法
JP4472339B2 (ja) マルチコアマルチスレッドプロセッサ
US8874681B2 (en) Remote direct memory access (‘RDMA’) in a parallel computer
US9594720B2 (en) Interface between a bus and a inter-thread interconnect
US9329664B2 (en) Power management for a computer system
US8676917B2 (en) Administering an epoch initiated for remote memory access
US11061742B2 (en) System, apparatus and method for barrier synchronization in a multi-threaded processor
US20110041132A1 (en) Elastic and data parallel operators for stream processing
US20070019636A1 (en) Multi-threaded transmit transport engine for storage devices
US11360809B2 (en) Multithreaded processor core with hardware-assisted task scheduling
WO2021139173A1 (zh) 一种ai视频处理方法与装置
US20140143570A1 (en) Thread consolidation in processor cores
US11956156B2 (en) Dynamic offline end-to-end packet processing based on traffic class
WO2021136512A1 (zh) 基于深度学习节点计算的调度方法、设备及存储介质
WO2020082813A1 (zh) 基于pis的存储装置控制器、存储装置、系统及方法
CN115033188B (zh) 一种基于zns固态硬盘的存储硬件加速模块系统
CN109491934A (zh) 一种集成计算功能的存储管理系统控制方法
CN116257471A (zh) 一种业务处理方法及装置
Klenk et al. Analyzing put/get apis for thread-collaborative processors
TWI823655B (zh) 適用於智慧處理器的任務處理系統與任務處理方法
WO2023123453A1 (zh) 运算加速的处理方法、运算加速器的使用方法及运算加速器
US20230259486A1 (en) Neural processing unit synchronization systems and methods
WO2024045580A1 (zh) 用于调度任务的方法及其相关产品
WO2024119869A1 (zh) 一种执行片间通信任务的方法和相关产品
JPH01255036A (ja) マイクロプロセッサ

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19887203; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19887203; Country of ref document: EP; Kind code of ref document: A1)