WO2020107265A1 - Neural network processing device, control method and computing system - Google Patents

Neural network processing device, control method and computing system

Info

Publication number
WO2020107265A1
WO2020107265A1 PCT/CN2018/117960
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
processing device
layer
network processing
calculation circuit
Prior art date
Application number
PCT/CN2018/117960
Other languages
English (en)
French (fr)
Inventor
杨康
李鹏
韩峰
谷骞
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201880038043.5A priority Critical patent/CN110785779A/zh
Priority to PCT/CN2018/117960 priority patent/WO2020107265A1/zh
Publication of WO2020107265A1 publication Critical patent/WO2020107265A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means

Definitions

  • the present application relates to the field of artificial intelligence, and more specifically, to a neural network processing device, control method, and computing system.
  • Some traditional neural network processing devices have poor flexibility and others have poor computing performance, so they cannot meet the performance requirements for neural network computing.
  • the present application provides a neural network processing device, a control method, and a computing system, which can improve the performance of the neural network processing device.
  • a neural network processing device including: a calculation circuit; a control circuit, according to a target instruction, controlling the calculation circuit to perform calculations corresponding to at least two layers of the neural network.
  • a computing system for a neural network including: the neural network processing device as described in the first aspect; and a processor for distributing computing tasks for the neural network processing device.
  • a control method of a neural network processing device including: according to a target instruction, controlling a computing circuit in the neural network processing device to perform calculations corresponding to at least two layers of a neural network.
  • a computer-readable storage medium on which instructions for performing the method of the third aspect are stored.
  • a computer program product including instructions for performing the method of the third aspect.
  • a "single" target instruction is used to implement the calculation of at least two layers of the neural network, which reduces the proportion of control signals and saves the power consumption and area of the neural network processing device while ensuring its flexibility, thereby improving the performance of the neural network processing device.
  • FIG. 1 is a schematic structural diagram of a neural network processing device provided by an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a neural network processing device provided by another embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a control method of a neural network processing device provided by an embodiment of the present application.
  • FIG. 4 is an example diagram of a target instruction provided by an embodiment of the present application.
  • FIG. 5 is another example diagram of the target instruction provided by the embodiment of the present application.
  • the neural network processing device may also be referred to as a neural network processor or a neural network accelerator.
  • the neural network processing device may be a dedicated neural network processing device, for example, it may be a hardware circuit or chip specifically used for neural network calculation.
  • the neural network processing device mentioned in this application can be used to calculate various types of neural networks, such as convolutional neural networks (convolutional neural networks, CNN) or recurrent neural networks (recurrent neural networks, RNN).
  • Neural networks usually have a multilayer structure.
  • the following uses convolutional neural networks as an example to illustrate the multilayer structure of neural networks.
  • the convolutional neural network may include one or more convolutional layers.
  • convolutional neural networks can also include other layers.
  • the other layer may be one or more of the following layers: a pooling layer, an activation layer, an elementwise layer (elementwise layer).
  • The amount of computation in a neural network is usually large, so how to perform high-performance neural network computing has become a focus of attention.
  • before performing neural network calculations, to provide a degree of flexibility, a traditional neural network processing device uses multiple instructions to configure each layer of the neural network. It can be understood that the content of the instructions is related to the desired neural network structure (which can be obtained by pre-training), and different neural network structures can implement different functions. In other words, users can configure different neural network structures through instructions to implement different neural network functions.
  • for example, when a user wants the neural network processing device to be used for image localization, instructions can be used to configure each layer of the neural network so that the configured neural network structure has an image localization function.
  • likewise, when a user wants the neural network processing device to be used for image classification, instructions can be used to configure each layer of the neural network so that the configured neural network structure has an image classification function.
  • each layer of the neural network needs to be configured with one or more instructions.
  • Instructions are usually carried by control signals, so the more instructions there are, the greater the proportion of control signals.
  • An excessive proportion of control signals leads to poor overall performance of the neural network processing device. For example, the greater the proportion of control signals, the greater the power consumption of the neural network processing device; likewise, the greater the proportion of control signals, the more complex the structure of the neural network processing device and the larger the area it occupies.
  • the following describes the neural network processing device provided by the embodiments of the present application.
  • the neural network processing device 1 may include a calculation circuit 2 and a control circuit 4.
  • the calculation circuit 2 can be used to perform calculations corresponding to multiple layers of the neural network.
  • the specific form of the calculation circuit 2 is related to the type of neural network, which is not limited in the embodiments of the present application.
  • the calculation circuit 2 may include a first calculation circuit 21 and a second calculation circuit 22.
  • the first calculation circuit 21 can be used to perform calculations corresponding to the convolution layer of the neural network;
  • the second calculation circuit 22 can be used to perform the calculations corresponding to other layers of the neural network (such as at least one of the pooling layer, the activation layer, or the element-wise operation layer).
  • the first calculation circuit 21 may include, for example, a processing engine (PE) array and a network-on-chip (NOC).
  • the processing engine array may be referred to as a PE array.
  • the PE array may include multiple PEs.
  • the multiple PEs can be used to perform the matrix multiplication operations in convolution operations. Therefore, the PE array can also be called a dedicated convolution accelerator.
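To make the role of the PE array concrete, a convolution can be lowered to a matrix multiplication (the "im2col" lowering below is one common scheme, assumed here only for illustration; the patent does not specify the dataflow of the PE array):

```python
import numpy as np

def im2col(x, k):
    """Unfold each k-by-k patch of an HxW input into one row of a matrix."""
    h, w = x.shape
    out_h, out_w = h - k + 1, w - k + 1
    cols = np.empty((out_h * out_w, k * k))
    for i in range(out_h):
        for j in range(out_w):
            cols[i * out_w + j] = x[i:i + k, j:j + k].ravel()
    return cols

def conv2d_as_matmul(x, kernel):
    """Convolution (valid padding) expressed as the matrix product a PE array accelerates."""
    k = kernel.shape[0]
    cols = im2col(x, k)              # shape: (out_h * out_w, k * k)
    out = cols @ kernel.ravel()      # one matrix-vector product
    out_h = x.shape[0] - k + 1
    return out.reshape(out_h, -1)

x = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((2, 2))
print(conv2d_as_matmul(x, kernel))
```

Each row-times-kernel product corresponds to the kind of multiply-accumulate work the individual PEs would carry out in parallel.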
  • the NOC can be used to implement communication and control between the PE array and the outside world.
  • for example, external logic can control the computation operands and computation timing of each PE in the PE array through the NOC.
  • the second calculation circuit 22 may be used to implement one or more of a bias operation, an activation operation, and a pooling operation.
  • the second calculation circuit 22 may include at least one of the following circuits: a bias circuit for implementing the bias operation, an activation circuit for implementing the activation operation, and a pooling circuit for implementing the pooling operation.
  • the first calculation circuit 21 and the second calculation circuit 22 shown in FIG. 2 are only one possible implementation; the embodiments of the present application are not limited thereto, and other implementations may also be used.
  • the first calculation circuit 21 may also be composed of only a plurality of multiply-accumulate units; the second calculation circuit 22 may include a circuit for implementing calculation corresponding to the element-wise operation layer.
  • the convolutional neural network is used as an example for the above description.
  • when the neural network processing device 1 is used to perform calculations of other types of neural networks (such as recurrent neural networks), the calculation circuit 2 may adopt a completely different implementation.
  • the control circuit 4 can control the calculation circuit 2 to execute the neural network calculation according to the instruction. Different from the control method of the conventional neural network processing device, in the embodiment of the present application, the control circuit 4 may control the calculation circuit 2 to perform calculation corresponding to "at least two layers" of the neural network according to the received "one" target instruction.
  • the embodiment of the present application uses "one" target instruction to realize the calculation of at least two layers of the neural network.
  • the proportion of control signals is reduced, and the power consumption and area of the neural network processing device are saved. Therefore, the neural network processing device provided in the embodiments of the present application is better suited for high-performance neural network computing.
  • the target instruction can be used to configure at least two layers of the neural network. Therefore, the target instruction can include configuration parameters of at least two layers of the neural network.
  • the configuration parameter in the target instruction may be used to indicate the calculation method or implementation manner of the at least two layers.
  • the at least two layers configured by the target instruction may include layers with the same function or a combination of layers with different functions.
  • the at least two layers may be two convolutional layers, or a combination of a convolutional layer and one or more other layers (such as a pooling layer, an activation layer, or an element-wise operation layer).
  • the at least two layers may be any combination of the input layer, hidden layer, and output layer of the recurrent neural network.
  • the target instruction configuring at least two layers of the neural network is, in effect, configuring the calculation circuits that implement the calculations corresponding to those layers. Therefore, the target instruction containing the configuration parameters of the at least two layers can also be understood as follows: the target instruction includes the configuration parameters of the calculation circuits that perform the calculations corresponding to the two layers. These configuration parameters can be used to guide the calculation circuits in performing the calculation of the corresponding neural network layer, for example by specifying the data read mode, the operation mode, the output mode, and so on.
  • the content of the target instruction may specifically include one or more of the following parameters: configuration parameters of the convolutional layer; configuration parameters of the pooling layer; configuration parameters of the activation layer; configuration parameters of the element-wise operation layer; parameters of the bias unit.
  • the configuration parameters of the convolution layer can be used to configure the convolution circuit in the calculation circuit.
  • for example, the computation operands of the calculation units in the convolution circuit and the data transfer method between the calculation units are configured, so that the calculation units can cooperate with each other to complete the calculation of the convolutional layer.
  • the configuration parameters of the pooling layer can be used to configure the pooling circuit in the calculation circuit, for example the pooling type (max pooling or average pooling) and the size of the pooling window.
  • the configuration parameters of the activation layer can be used to configure the activation mode or the type of activation function.
  • the configuration parameters of the element-wise operation layer can be used to configure the operation applied to the data input to the element-wise operation layer.
  • the operation mode may include, for example, element-wise product, element-wise sum, and keeping the largest element.
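The three operation modes named above can be modeled as follows (a minimal behavioral sketch; the mode names are assumptions chosen for illustration, not field encodings from the patent):

```python
import numpy as np

# Hypothetical encodings for the element-wise operation mode.
ELTWISE_OPS = {
    "prod": np.multiply,   # element-wise product
    "sum": np.add,         # element-wise sum
    "max": np.maximum,     # keep the largest element
}

def eltwise(mode, a, b):
    """Apply the configured element-wise operation to two input tensors."""
    return ELTWISE_OPS[mode](a, b)

a = np.array([1, 5, 3])
b = np.array([4, 2, 6])
print(eltwise("sum", a, b))   # [5 7 9]
print(eltwise("max", a, b))   # [4 5 6]
```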
  • the configuration parameters of the bias unit can be used to configure whether a bias is applied to the data and the size of the bias value.
  • the target instruction may include multiple fields (also called domains), each of which can carry one kind of configuration parameter required for the neural network calculation, which is equivalent to integrating multiple instructions into one instruction in a serial manner.
  • the target instruction 40 may include a first configuration field 42 and a second configuration field 44, where the first configuration field 42 may include the configuration parameters of the convolutional layer and the second configuration field 44 may include the configuration parameters of the pooling layer. This can be understood as follows: the embodiment of the present application integrates, into a single target instruction, the convolutional-layer and pooling-layer configuration parameters that would otherwise need to be configured through multiple instructions.
  • the target instruction is actually equivalent to containing a group of instructions, and each instruction in the group of instructions can be arranged in series in the target instruction, respectively carried in different configuration domains of the target instruction.
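As a behavioral sketch of this serial field layout (all field names, widths, and encodings below are hypothetical, chosen only to illustrate one instruction carrying several configuration fields):

```python
import struct

def pack_target_instruction(conv_cfg, pool_cfg):
    """Serially concatenate per-layer configuration fields into one instruction.

    conv_cfg: (kernel_size, out_channels); pool_cfg: (pool_type, window).
    All fields are hypothetical 16-bit little-endian values.
    """
    return struct.pack("<4H", *conv_cfg, *pool_cfg)

def unpack_target_instruction(blob):
    """Split the instruction word back into its configuration fields."""
    k, c, p, w = struct.unpack("<4H", blob)
    return {"conv": {"kernel": k, "channels": c},
            "pool": {"type": p, "window": w}}

instr = pack_target_instruction((3, 64), (0, 2))   # 0 = max pooling (assumed encoding)
print(unpack_target_instruction(instr))
```

The round trip shows the key property: one instruction word configures two layers at once, so only one control transfer is needed instead of several.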
  • the target instruction 50 may include a first configuration field 51 through a fifth configuration field 55.
  • the first configuration field 51 may contain the configuration parameters of the convolutional layer;
  • the second configuration field 52 may contain the configuration parameters of the pooling layer;
  • the third configuration field 53 may contain the configuration parameters of the activation layer;
  • the fourth configuration field 54 may contain the configuration parameters of the element-wise operation layer;
  • the fifth configuration field 55 may contain the configuration parameters of the bias unit.
  • the embodiments of the present application integrate, into a single target instruction, the configuration parameters of the convolutional layer, the pooling layer, the activation layer, the element-wise operation layer, and the bias unit that would otherwise need to be configured through multiple instructions.
  • the target instruction is actually equivalent to containing a group of instructions, and each instruction in the group of instructions can be arranged in series in the target instruction, respectively carried in different configuration domains of the target instruction.
  • the target instruction may include configuration parameters of each layer of the neural network.
  • a target instruction can be used to complete the configuration of the entire neural network, thereby greatly reducing the proportion of control signals in the neural network processing device and improving the performance of the neural network processing device.
  • the target instruction may be an instruction generated inside the neural network processing device 1.
  • the target instruction may be an instruction read from an external memory.
  • the external memory may be a memory device (e.g., double data rate (DDR) memory) in the same system as the neural network processing device 1.
  • the neural network processing device 1 may include an input interface, and use the input interface to read target instructions from an external memory 9.
  • the input interface may be a bus interface.
  • the input interface may be a connection interface between the neural network processing device 1 and the memory interconnect module 7.
  • the memory interconnect module 7 can serve to connect the neural network processing device 1 and the external memory 9; in some embodiments, the memory interconnect module 7 may also be integrated inside the neural network processing device 1 (i.e., integrated on-chip).
  • the neural network processing device 1 may further include a parsing circuit 5.
  • the parsing circuit 5 can be used to parse the target instruction. For example, suppose the target instruction includes multiple fields, each of which is used to configure part of the functionality of the neural network.
  • the parsing circuit 5 can parse each field of the target instruction to obtain the configuration parameters of the neural network, and then distribute the parsed configuration parameters to the corresponding circuits.
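This parse-and-distribute behavior can be sketched as follows (the field names and the circuit interfaces are assumptions for illustration; the patent does not define them):

```python
def parse_and_distribute(fields, circuits):
    """Parse each configuration field and hand it to the matching functional circuit.

    fields: mapping of field name -> parsed configuration parameters.
    circuits: mapping of field name -> callable that configures a circuit.
    """
    for name, params in fields.items():
        circuits[name](params)   # distribute the parsed parameters

# Stand-ins for the functional circuits: each records its configuration.
configured = {}
circuits = {
    "conv": lambda p: configured.setdefault("conv_circuit", p),
    "pool": lambda p: configured.setdefault("pool_circuit", p),
}
parse_and_distribute({"conv": {"kernel": 3}, "pool": {"window": 2}}, circuits)
print(configured)
```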
  • if the configuration parameters in the target instruction are designed as parameters that can be directly recognized by the functional circuits of the neural network device, the neural network processing device 1 may omit the above parsing circuit. After receiving the target instruction, the neural network processing device 1 directly distributes the configuration parameters in the target instruction to each functional circuit of the neural network device.
  • some buffers may be provided inside the neural network processing device 1.
  • the cache of the neural network processing apparatus 1 may include, for example, a first cache 61 of input data and a second cache 62 of weight data.
  • the input data of the neural network may sometimes be called an input feature map. Therefore, the first cache 61 may also be called an input feature map buffer (IF_BUF).
  • the weight data (sometimes called weights) of the neural network can be used to filter the input feature map. Therefore, the second cache 62 can also be called a filter buffer (FILT_BUF).
  • the target instruction may further include configuration parameters of the first cache 61 and the second cache 62.
  • the configuration parameters of the first cache 61 can be used to configure the way the first cache 61 reads the input feature map from the external memory 9 (also called external storage).
  • for example, the configuration parameters of the first cache 61 can be used to configure at least one of the following: the location of the input feature map in the memory 9, the number of input feature maps, the height and width of the input feature map, the way the input feature map is partitioned, and so on.
  • the configuration parameters of the second cache 62 can be used to configure the manner in which the second cache 62 reads the weight data from the external memory 9.
  • for example, the configuration parameters of the second cache 62 can be used to configure at least one of the following: the location of the weight data in the memory 9, the size of the convolution kernel, and so on.
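As an illustration of what a feature-map partitioning parameter might control, the following sketch enumerates the tiles a cache could fetch from external memory (the tiling scheme is an assumption, not taken from the patent):

```python
def tile_feature_map(height, width, tile_h, tile_w):
    """Enumerate the (row, col, h, w) tiles a cache would fetch from external memory."""
    tiles = []
    for r in range(0, height, tile_h):
        for c in range(0, width, tile_w):
            # Edge tiles are clipped to the feature-map boundary.
            tiles.append((r, c, min(tile_h, height - r), min(tile_w, width - c)))
    return tiles

# A 6x6 input feature map split into 4x4 tiles.
print(tile_feature_map(6, 6, 4, 4))
```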
  • the input interface mentioned above may also be used to receive input data (or feature maps) and/or weight data (or weights) of a neural network.
  • the neural network processing device 1 may further include a write control circuit 3.
  • the calculation result of the neural network can be written to the external memory 9 under the control of the write control circuit 3.
  • the convolutional neural network may include element-wise operation layers.
  • FIG. 2 does not show the circuit corresponding to the element-by-element operation layer.
  • as one possible implementation, the circuit corresponding to the element-wise operation layer may be integrated in the second calculation circuit 22; as another possible implementation, it may be integrated in the write control circuit 3.
  • the calculation circuits corresponding to each layer may transmit the intermediate result to an on-chip temporary buffer (such as random access memory (RAM)).
  • no temporary cache may be provided between the two calculation circuits corresponding to two adjacent layers of the neural network processing device 1 (such as the first calculation circuit 21 and the second calculation circuit 22 in FIG. 2).
  • in this case, the control circuit 4 may be used to control the data transfer between the first calculation circuit 21 and the second calculation circuit 22 so that the output result of the first calculation circuit 21 is passed directly to the second calculation circuit 22 without going through a buffer. This control method can further reduce the power consumption and area of the neural network processing device 1.
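The buffer-free handoff between the two calculation circuits can be modeled as a fused pipeline in which each output of the first stage is consumed immediately by the second (a behavioral sketch only; the stage functions below are placeholders, not the patent's circuits):

```python
def first_circuit(inputs):
    """Convolution-stage stand-in: yields one result at a time."""
    for x in inputs:
        yield x * 2          # placeholder for a convolution result

def second_circuit(stream):
    """Activation-stage stand-in (ReLU), consuming each result directly."""
    for y in stream:
        yield max(y, 0)

# No intermediate buffer: values flow element by element between the stages.
print(list(second_circuit(first_circuit([-3, 1, 4]))))   # [0, 2, 8]
```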
  • the embodiments of the present application also provide a neural network computing system.
  • the computing system includes the neural network processing device 1 and the processor 8 mentioned in any of the foregoing embodiments.
  • the processor 8 can be used to allocate computing tasks to the neural network processing device 1.
  • the neural network processing device 1 and the processor 8 can be connected by a bus.
  • the computing system may also include memory 9.
  • the memory 9 can be connected to the neural network processing device 1.
  • the memory 9 can be used to store at least one of the following data of the neural network: input data, weight data, and output data.
  • the embodiments of the present application also provide a control method of a neural network processing device.
  • This control method can be executed by the neural network processing device 1 mentioned above.
  • the control method may include step S34.
  • in step S34, according to a target instruction, the calculation circuit in the neural network processing device is controlled to perform the calculations corresponding to at least two layers of the neural network.
  • the target instruction may contain configuration parameters of at least two layers.
  • the method of FIG. 3 may further include step S32.
  • in step S32, the target instruction is read from the external memory.
  • the target instruction contains configuration parameters of each layer of the neural network.
  • the target instruction may include at least one of the following configuration parameters of the neural network: configuration parameters of the convolutional layer; configuration parameters of the pooling layer; configuration parameters of the activation layer; configuration parameters of the element-wise operation layer; parameters of the bias unit.
  • the target instruction may also include cached configuration parameters in the neural network processing device.
  • the cache may be used to store input data and/or weight data of the neural network.
  • the method of FIG. 3 may further include: parsing the target instruction.
  • the method of FIG. 3 may further include: receiving input data and/or weight data of the neural network; and/or outputting calculation results of the neural network to the outside.
  • step S34 may include: controlling the first calculation circuit to perform the calculation corresponding to the first layer of the neural network; and controlling the second calculation circuit to perform the calculation corresponding to the second layer of the neural network, where the first layer is a convolutional layer and the second layer includes at least one of a pooling layer, an activation layer, or an element-wise operation layer.
  • the method of FIG. 3 may further include: controlling the data transfer between the first calculation circuit and the second calculation circuit, so that the output result of the first calculation circuit is directly transferred to the second calculation circuit without passing through a buffer.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a dedicated computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transferred from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center via wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media.
  • the available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., digital video disc (DVD)), or semiconductor media (e.g., solid state disk (SSD)).
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the units is only a division of logical functions; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)

Abstract

Provided are a neural network processing device, a control method, and a computing system. The neural network processing device includes: a calculation circuit; and a control circuit that, according to a single target instruction, controls the calculation circuit to perform the calculations corresponding to at least two layers of a neural network. Using a "single" target instruction to implement the calculation of at least two layers of the neural network reduces the proportion of control signals and saves the power consumption and area of the neural network processing device while preserving its flexibility, thereby improving the performance of the neural network processing device.

Description

Neural network processing device, control method and computing system
Copyright notice
The disclosure of this patent document contains material that is subject to copyright protection. The copyright is owned by the copyright owner. The copyright owner has no objection to the reproduction by anyone of the patent document or the patent disclosure as it appears in the official records and files of the Patent and Trademark Office.
Technical field
The present application relates to the field of artificial intelligence and, more specifically, to a neural network processing device, a control method, and a computing system.
Background
With the development of neural network technology, neural network processing devices are finding ever wider application.
Some traditional neural network processing devices have poor flexibility and others have poor computing performance, so they cannot meet the performance requirements for neural network computing.
Summary
The present application provides a neural network processing device, a control method, and a computing system that can improve the performance of the neural network processing device.
In a first aspect, a neural network processing device is provided, including: a calculation circuit; and a control circuit that, according to a single target instruction, controls the calculation circuit to perform the calculations corresponding to at least two layers of a neural network.
In a second aspect, a computing system for a neural network is provided, including: the neural network processing device of the first aspect; and a processor configured to allocate computing tasks to the neural network processing device.
In a third aspect, a control method for a neural network processing device is provided, including: according to a single target instruction, controlling a calculation circuit in the neural network processing device to perform the calculations corresponding to at least two layers of a neural network.
In a fourth aspect, a computer-readable storage medium is provided, on which instructions for performing the method of the third aspect are stored.
In a fifth aspect, a computer program product is provided, including instructions for performing the method of the third aspect.
Using a "single" target instruction to implement the calculation of at least two layers of the neural network reduces the proportion of control signals and saves the power consumption and area of the neural network processing device while preserving its flexibility, thereby improving the performance of the neural network processing device.
Brief description of the drawings
FIG. 1 is a schematic structural diagram of a neural network processing device provided by an embodiment of the present application.
FIG. 2 is a schematic structural diagram of a neural network processing device provided by another embodiment of the present application.
FIG. 3 is a schematic flowchart of a control method of a neural network processing device provided by an embodiment of the present application.
FIG. 4 is an example diagram of a target instruction provided by an embodiment of the present application.
FIG. 5 is another example diagram of a target instruction provided by an embodiment of the present application.
Detailed description
The neural network processing device provided in the present application may also be referred to as a neural network processor or a neural network accelerator. The neural network processing device may be a dedicated neural network processing device, for example a hardware circuit or chip specifically used for neural network calculation.
The neural network processing device mentioned in the present application can be used to compute various types of neural networks, such as convolutional neural networks (CNN) or recurrent neural networks (RNN).
Neural networks usually have a multilayer structure. For ease of understanding, the multilayer structure of a neural network is illustrated below using a convolutional neural network as an example.
A convolutional neural network may include one or more convolutional layers. In addition, a convolutional neural network may also include other layers, which may be one or more of the following: a pooling layer, an activation layer, or an element-wise layer.
The amount of computation in a neural network is usually large, so how to perform high-performance neural network computing has become a focus of attention.
To provide a certain degree of flexibility, before performing neural network calculations, a traditional neural network processing device first uses multiple instructions to configure each layer of the neural network. It can be understood that the content of the instructions is related to the desired neural network structure (which can be obtained by pre-training), and different neural network structures can implement different functions. In other words, a user can configure different neural network structures through instructions, thereby implementing different neural network functions.
For example, when a user wants the neural network processing device to be used for image localization, instructions can be used to configure each layer of the neural network so that the configured neural network structure has an image localization function. As another example, when a user wants the neural network processing device to be used for image classification, instructions can be used to configure each layer of the neural network so that the configured neural network structure has an image classification function.
For a traditional neural network processing device, each layer of the neural network needs to be configured with one or more instructions. Instructions are usually carried by control signals, so the more instructions there are, the greater the proportion of control signals. An excessive proportion of control signals leads to poor overall performance of the neural network processing device. For example, the greater the proportion of control signals, the greater the power consumption of the neural network processing device; likewise, the greater the proportion of control signals, the more complex the structure of the neural network processing device and the larger the area it occupies.
To improve the overall performance of the neural network processing device, the neural network processing device provided by the embodiments of the present application is described below.
As shown in FIG. 1, the neural network processing device 1 provided by an embodiment of the present application may include a calculation circuit 2 and a control circuit 4.
The calculation circuit 2 can be used to perform the calculations corresponding to multiple layers of the neural network. The specific form of the calculation circuit 2 is related to the type of neural network, which is not limited in the embodiments of the present application.
Taking a convolutional neural network as an example, as shown in FIG. 2, the calculation circuit 2 may include a first calculation circuit 21 and a second calculation circuit 22. The first calculation circuit 21 can be used to perform the calculation corresponding to the convolutional layer of the neural network; the second calculation circuit 22 can be used to perform the calculations corresponding to other layers of the neural network (such as at least one of the pooling layer, the activation layer, or the element-wise operation layer).
The first calculation circuit 21 may include, for example, a processing engine (PE) array and a network-on-chip (NOC).
The processing engine array may be referred to as a PE array for short. The PE array may include multiple PEs, which can be used to perform the matrix multiplication operations in convolution operations. Therefore, the PE array may also be called a dedicated convolution accelerator.
The NOC can be used to implement communication and control between the PE array and the outside world. For example, external logic can control the computation operands and computation timing of each PE in the PE array through the NOC.
The second calculation circuit 22 can be used to implement one or more of a bias operation, an activation operation, and a pooling operation. As one possible implementation, the second calculation circuit 22 may include at least one of the following circuits: a bias circuit for implementing the bias operation, an activation circuit for implementing the activation operation, and a pooling circuit for implementing the pooling operation.
It should be understood that the first calculation circuit 21 and the second calculation circuit 22 shown in FIG. 2 are only one possible implementation of these circuits; the embodiments of the present application are not limited thereto, and other implementations may also be used. For example, the first calculation circuit 21 may also consist only of multiple multiply-accumulate units; the second calculation circuit 22 may include a circuit for implementing the calculation corresponding to the element-wise operation layer.
It should be noted that the above description takes a convolutional neural network as an example. When the neural network processing device 1 is used to perform calculations of other types of neural networks (such as recurrent neural networks), the calculation circuit 2 may also adopt a completely different implementation.
The control circuit 4 can control the calculation circuit 2 to perform neural network calculations according to instructions. Different from the control method of traditional neural network processing devices, in the embodiments of the present application the control circuit 4 may, according to a single received target instruction, control the calculation circuit 2 to perform the calculations corresponding to "at least two layers" of the neural network.
The embodiments of the present application use a "single" target instruction to implement the calculation of at least two layers of the neural network, which reduces the proportion of control signals and saves the power consumption and area of the neural network processing device while preserving its flexibility. Therefore, the neural network processing device provided by the embodiments of the present application is better suited for high-performance neural network computing.
目标指令可用于对神经网络的至少两个层进行配置,因此,该目标指令可以包含神经网络的至少两个层的配置参数。目标指令中的配置参数可用于指示该至少两个层的计算方式或实现方式。
目标指令配置的至少两个层可以包括功能相同的层,也可以是功能不同的层的组合。以卷积神经网络为例,该至少两个层可以是两个卷积层,也可以是卷积层与其它层(如池化层、激活层、按元素操作层)中的一个或多个层的组合。同理,以递归神经网络为例,该至少两个层可以是递归神经网络的输入层、隐藏层、输出层的任意组合。
目标指令对神经网络的至少两个层进行配置实际上是对实现该至少两个层对应的计算的计算电路进行配置。因此,目标指令包含该至少两个层的配置参数也可理解为:目标指令包括用于执行该两个层对应的计算的计算电路的配置参数。该配置参数可用于指导计算电路执行相应的神经网络层的计算,如指导数据的读取方式,操作方式,输出方式等。
目标指令的内容具体可以包括以下参数中的一种或多种:卷积层的配置 参数;池化层的配置参数;激活层的配置参数;按元素操作层的配置参数;偏置单元的参数。
卷积层的配置参数可用于对计算电路中的卷积电路进行配置。如配置卷积电路中的计算单元的计算对象,计算单元之间的数据传递方式等,使得各个计算单元可以相互配合,完成该卷积层的计算。
池化层的配置参数可用于对计算电路中的池化电路进行配置。如配置池化的类型(最大池化或平均池化),池化窗口的尺寸等。
激活层的配置参数可用于配置激活模式,或激活函数的类型等。
按元素操作层的配置参数可用于配置输入至按元素操作层的数据的操作方式。操作方式例如可以包括按元素乘积,按元素求和,保存最大元素等。
偏置单元的配置参数可用于配置是否对数据施加偏置以及偏置值的大小等。
The target instruction may include multiple fields (or domains), each of which may carry one kind of configuration parameter required for the neural network calculation, which is equivalent to integrating multiple instructions into a single instruction in a serial manner. For example, as shown in FIG. 4, the target instruction 40 may include a first configuration field 42 and a second configuration field 44, where the first configuration field 42 may include the configuration parameters of a convolutional layer, and the second configuration field 44 may include the configuration parameters of a pooling layer. In other words, the embodiments of the present application integrate the convolutional-layer configuration parameters and the pooling-layer configuration parameters, which would otherwise require multiple instructions to configure, into a single target instruction. The target instruction is in effect equivalent to containing a group of instructions, and the individual instructions in the group may be arranged serially within the target instruction, each carried in a different configuration field.
FIG. 5 shows another example of the target instruction. As shown in FIG. 5, the target instruction 50 may include a first configuration field 51 through a fifth configuration field 55. The first configuration field 51 may contain the configuration parameters of a convolutional layer; the second configuration field 52, those of a pooling layer; the third configuration field 53, those of an activation layer; the fourth configuration field 54, those of an element-wise operation layer; and the fifth configuration field 55, those of a bias unit. In other words, the embodiments of the present application integrate the configuration parameters of the convolutional layer, pooling layer, activation layer, element-wise operation layer, and bias unit, which would otherwise require multiple instructions to configure, into a single target instruction. The target instruction is again equivalent to containing a group of instructions arranged serially, each carried in a different configuration field.
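The multi-field layout of FIG. 5 can be modeled as a record whose members correspond to the configuration fields. The specific parameter names and defaults below (`kernel`, `stride`, `mode`, `window`) are illustrative assumptions; the patent only specifies that each field carries one layer's configuration parameters.

```python
from dataclasses import dataclass, field

@dataclass
class ConvConfig:
    kernel: int = 3   # assumed convolution-kernel parameter
    stride: int = 1

@dataclass
class PoolConfig:
    mode: str = "max"  # "max" or "avg", per the pooling-type parameter
    window: int = 2

@dataclass
class TargetInstruction:
    """One target instruction carrying the configuration fields of
    several layers, mirroring fields 51-55 in spirit (two shown)."""
    conv: ConvConfig = field(default_factory=ConvConfig)
    pool: PoolConfig = field(default_factory=PoolConfig)
```

A single `TargetInstruction` value thus stands in for what would otherwise be a sequence of per-layer configuration instructions.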
Optionally, as an embodiment, the target instruction may include the configuration parameters of every layer of the neural network. In other words, a single target instruction can complete the configuration of the entire neural network, thereby greatly reducing the proportion of control signals in the neural network processing device and improving its performance.
The embodiments of the present application do not specifically limit how the target instruction is obtained. As one example, the target instruction may be generated inside the neural network processing device 1. As another example, the target instruction may be read from an external memory. The external memory may be a memory device in the same system as the neural network processing device 1 (such as a double data rate (DDR) memory).
The neural network processing device 1 may include an input interface and use the input interface to read the target instruction from the external memory 9. In some embodiments, the input interface may be a bus interface. Taking FIG. 2 as an example (the input interface is not shown in FIG. 2), the input interface may be the connection interface between the neural network processing device 1 and the memory interconnect module 7. The memory interconnect module 7 may serve to connect the neural network processing device 1 with the external memory 9; in some embodiments, the memory interconnect module 7 may also be integrated inside the neural network processing device 1 (i.e., integrated on-chip).
Optionally, as shown in FIG. 2, in some embodiments the neural network processing device 1 may further include a parsing circuit 5. The parsing circuit 5 may be used to parse the target instruction. For example, suppose the target instruction includes multiple fields, each used to configure part of the functionality of the neural network. The parsing circuit 5 may parse each field of the target instruction to obtain the configuration parameters of the neural network, and then distribute the parsed configuration parameters to the corresponding circuits.
Optionally, in some implementations, if the configuration parameters in the target instruction are designed as parameters that the functional circuits of the neural network device can recognize directly, the parsing circuit described above may be omitted from the neural network processing device 1. After receiving the target instruction, the neural network processing device 1 can directly distribute the configuration parameters in the target instruction to the functional circuits.
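The parse-then-distribute behavior described above can be sketched as a small dispatcher. Representing the instruction as a dictionary of named fields and the circuits as callables is an assumption made for the sketch; a real parsing circuit would decode bit fields and drive configuration registers.

```python
def parse_and_dispatch(instruction, handlers):
    """Sketch of the parsing circuit: split the target instruction
    into its per-layer configuration fields and hand each field's
    parameters to the matching functional circuit (a callable here).
    Returns each handler's result, keyed by field name."""
    results = {}
    for name, params in instruction.items():
        results[name] = handlers[name](params)
    return results
```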
Optionally, in some embodiments, caches may be provided inside the neural network processing device 1. As shown in FIG. 2, the caches of the neural network processing device 1 may include, for example, a first cache 61 for input data and a second cache 62 for weight data.
The input data of a neural network is sometimes also called the input feature map; therefore, the first cache 61 may also be called the input feature map buffer (IF_BUF).
The weight data of the neural network (sometimes also called the weights) may be used to filter the input feature map; therefore, the second cache 62 may also be called the filter buffer (FILT_BUF).
In the above embodiments, the target instruction may further include the configuration parameters of the first cache 61 and the second cache 62.
The configuration parameters of the first cache 61 may be used to configure how the first cache 61 reads input feature maps from the external memory 9 (also called the external main memory). For example, the configuration parameters of the first cache 61 may be used to configure at least one of the following: the location of the input feature maps in the memory 9, the number of input feature maps, the height and width of the input feature maps, the partitioning mode of the input feature maps, and so on.
The configuration parameters of the second cache 62 may be used to configure how the second cache 62 reads weight data from the external memory 9. For example, the configuration parameters of the second cache 62 may be used to configure at least one of the following: the location of the weight data in the memory 9, the size of the convolution kernel, and so on.
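The buffer configuration parameters listed above map naturally onto two small records, one per cache. The field names and the example address values are illustrative assumptions; the patent only enumerates which kinds of information the parameters may carry.

```python
from dataclasses import dataclass

@dataclass
class IfBufConfig:
    """Configuration of the input feature map buffer (IF_BUF)."""
    base_addr: int   # location of the input feature maps in memory 9
    num_maps: int    # number of input feature maps
    height: int      # feature-map height
    width: int       # feature-map width

@dataclass
class FiltBufConfig:
    """Configuration of the filter buffer (FILT_BUF)."""
    base_addr: int    # location of the weight data in memory 9
    kernel_size: int  # size of the convolution kernel
```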
Optionally, in some embodiments, the input interface mentioned above may also be used to receive the input data (also called feature maps) and/or the weight data (also called weights) of the neural network.
Optionally, in some embodiments, as shown in FIG. 2, the neural network processing device 1 may further include a write control circuit 3. The calculation results of the neural network may be written to the external memory 9 under the control of the write control circuit 3.
As pointed out above, a convolutional neural network may include an element-wise operation layer. FIG. 2 does not show the circuit corresponding to this layer. As one possible implementation, the circuit corresponding to the element-wise operation layer may be integrated in the second calculation circuit 22; as another possible implementation, it may be integrated in the write control circuit 3.
Optionally, as an embodiment, in the neural network processing device 1, the calculation circuits corresponding to the respective layers may transfer intermediate results to an on-chip temporary cache (such as a random access memory (RAM)).
Optionally, as another embodiment, as shown in FIG. 2, no temporary cache may be provided between the two calculation circuits corresponding to two adjacent layers (such as the first calculation circuit 21 and the second calculation circuit 22 in FIG. 2). In this case, the control circuit 4 may control the data transfer between the first calculation circuit 21 and the second calculation circuit 22, so that the output of the first calculation circuit 21 is passed directly to the second calculation circuit 22 without going through a cache. This control scheme can further reduce the power consumption and area of the neural network processing device 1.
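The bufferless hand-off between the two circuits can be modeled with chained generators: each value produced by the first stage flows straight into the second stage, with no intermediate array standing in for a temporary cache. The 1-D convolution and ReLU below are illustrative stand-ins for the two circuits, not the patent's concrete datapath.

```python
def conv1d(x, w):
    """First circuit (sketch): 1-D valid convolution, yielding
    outputs one at a time as they are produced."""
    k = len(w)
    for i in range(len(x) - k + 1):
        yield sum(x[i + j] * w[j] for j in range(k))

def relu_stream(values):
    """Second circuit (sketch): consumes the first circuit's outputs
    directly, with no buffer in between."""
    for v in values:
        yield max(0, v)

def fused(x, w):
    # Chaining the generators models the direct transfer: each conv
    # output is handed to the activation stage as soon as it exists.
    return list(relu_stream(conv1d(x, w)))
```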
An embodiment of the present application further provides a neural network computing system. As shown in FIG. 2, the computing system includes the neural network processing device 1 mentioned in any of the preceding embodiments and a processor 8. The processor 8 may be used to assign computing tasks to the neural network processing device 1. The neural network processing device 1 and the processor 8 may be connected via a bus.
Optionally, the computing system may further include a memory 9. The memory 9 may be connected to the neural network processing device 1 and may be used to store at least one of the following data of the neural network: input data, weight data, and output data.
The device embodiments of the present application have been described in detail above with reference to FIG. 1 and FIG. 2; the method embodiments are described in detail below with reference to FIG. 3. It should be understood that the description of the method embodiments corresponds to that of the device embodiments; therefore, for parts not described in detail, reference may be made to the preceding device embodiments.
An embodiment of the present application further provides a control method for a neural network processing device. The control method may be executed by the neural network processing device 1 mentioned above. As shown in FIG. 3, the control method may include step S34.
In step S34, according to a single target instruction, the calculation circuit in the neural network processing device is controlled to perform the calculations corresponding to at least two layers of a neural network.
The target instruction may contain the configuration parameters of the at least two layers.
Optionally, the method of FIG. 3 may further include step S32. In step S32, the target instruction is read from an external memory.
Optionally, the target instruction contains the configuration parameters of every layer of the neural network.
Optionally, the target instruction may contain at least one of the following configuration parameters of the neural network: configuration parameters of a convolutional layer; configuration parameters of a pooling layer; configuration parameters of an activation layer; configuration parameters of an element-wise operation layer; parameters of a bias unit.
Optionally, the target instruction may further include the configuration parameters of a cache in the neural network processing device.
Optionally, the cache may be used to store the input data and/or weight data of the neural network.
Optionally, the method of FIG. 3 may further include: parsing the target instruction.
Optionally, the method of FIG. 3 may further include: receiving the input data and/or weight data of the neural network; and/or outputting the calculation results of the neural network.
Optionally, step S34 may include: controlling a first calculation circuit to perform the calculation corresponding to a first layer of the neural network; and controlling a second calculation circuit to perform the calculation corresponding to a second layer of the neural network; where the first layer is a convolutional layer and the second layer includes at least one of a pooling layer, an activation layer, or an element-wise operation layer.
Optionally, the method of FIG. 3 may further include: controlling the data transfer between the first calculation circuit and the second calculation circuit so that the output of the first calculation circuit is passed directly to the second calculation circuit without going through a cache.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented with software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible to the computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (such as a floppy disk, hard disk, or magnetic tape), an optical medium (such as a digital video disc (DVD)), or a semiconductor medium (such as a solid state disk (SSD)).
A person of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed system, device, and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; for instance, the division of the units is only a logical functional division, and there may be other divisions in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The above descriptions are merely specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed in the present application, which should all be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (23)

  1. A neural network processing device, comprising:
    a calculation circuit; and
    a control circuit configured to control, according to a single target instruction, the calculation circuit to perform calculations corresponding to at least two layers of a neural network.
  2. The neural network processing device according to claim 1, further comprising:
    an input interface configured to read the target instruction from an external memory.
  3. The neural network processing device according to claim 1 or 2, wherein the target instruction contains configuration parameters of every layer of the neural network.
  4. The neural network processing device according to any one of claims 1-3, wherein the target instruction contains at least one of the following configuration parameters of the neural network:
    configuration parameters of a convolutional layer;
    configuration parameters of a pooling layer;
    configuration parameters of an activation layer;
    configuration parameters of an element-wise operation layer;
    parameters of a bias unit.
  5. The neural network processing device according to any one of claims 1-4, wherein the target instruction further includes configuration parameters of a cache in the neural network processing device.
  6. The neural network processing device according to claim 5, wherein the cache is used to store input data and/or weight data of the neural network.
  7. The neural network processing device according to any one of claims 1-6, further comprising:
    a parsing circuit configured to parse the target instruction.
  8. The neural network processing device according to any one of claims 1-7, wherein the input interface is further configured to receive input data and/or weight data of the neural network.
  9. The neural network processing device according to any one of claims 1-8, further comprising:
    a write control circuit configured to write calculation results of the neural network to an external memory.
  10. The neural network processing device according to any one of claims 1-9, wherein the calculation circuit comprises:
    a first calculation circuit configured to perform the calculation corresponding to a first layer of the neural network; and
    a second calculation circuit configured to perform the calculation corresponding to a second layer of the neural network;
    wherein the first layer is a convolutional layer, and the second layer includes at least one of a pooling layer, an activation layer, or an element-wise operation layer.
  11. The neural network processing device according to claim 10, wherein the control circuit is further configured to control data transfer between the first calculation circuit and the second calculation circuit so that the output of the first calculation circuit is passed directly to the second calculation circuit without going through a cache.
  12. A neural network computing system, comprising:
    the neural network processing device according to any one of claims 1-11; and
    a processor configured to assign computing tasks to the neural network processing device.
  13. The computing system according to claim 12, further comprising:
    a memory, connected to the neural network processing device, configured to store at least one of the following data of the neural network: input data, weight data, and output data.
  14. A control method for a neural network processing device, comprising:
    controlling, according to a single target instruction, a calculation circuit in the neural network processing device to perform calculations corresponding to at least two layers of a neural network.
  15. The control method according to claim 14, further comprising:
    reading the target instruction from an external memory.
  16. The control method according to claim 14 or 15, wherein the target instruction contains configuration parameters of every layer of the neural network.
  17. The control method according to any one of claims 14-16, wherein the target instruction contains at least one of the following configuration parameters of the neural network:
    configuration parameters of a convolutional layer;
    configuration parameters of a pooling layer;
    configuration parameters of an activation layer;
    configuration parameters of an element-wise operation layer;
    parameters of a bias unit.
  18. The control method according to any one of claims 14-17, wherein the target instruction further includes configuration parameters of a cache in the neural network processing device.
  19. The control method according to claim 18, wherein the cache is used to store input data and/or weight data of the neural network.
  20. The control method according to any one of claims 14-19, further comprising:
    parsing the target instruction.
  21. The control method according to any one of claims 14-20, further comprising:
    receiving input data and/or weight data of the neural network; and/or
    outputting calculation results of the neural network.
  22. The control method according to any one of claims 14-21, wherein controlling, according to a single target instruction, the calculation circuit in the neural network processing device to perform calculations corresponding to at least two layers of the neural network comprises:
    controlling a first calculation circuit to perform the calculation corresponding to a first layer of the neural network; and
    controlling a second calculation circuit to perform the calculation corresponding to a second layer of the neural network;
    wherein the first layer is a convolutional layer, and the second layer includes at least one of a pooling layer, an activation layer, or an element-wise operation layer.
  23. The control method according to claim 22, further comprising:
    controlling data transfer between the first calculation circuit and the second calculation circuit so that the output of the first calculation circuit is passed directly to the second calculation circuit without going through a cache.
PCT/CN2018/117960 2018-11-28 2018-11-28 Neural network processing device, control method, and computing system WO2020107265A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880038043.5A CN110785779A (zh) 2018-11-28 2018-11-28 Neural network processing device, control method, and computing system
PCT/CN2018/117960 WO2020107265A1 (zh) 2018-11-28 2018-11-28 Neural network processing device, control method, and computing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/117960 WO2020107265A1 (zh) 2018-11-28 2018-11-28 Neural network processing device, control method, and computing system

Publications (1)

Publication Number Publication Date
WO2020107265A1 true WO2020107265A1 (zh) 2020-06-04

Family

ID=69383052

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/117960 WO2020107265A1 (zh) 2018-11-28 2018-11-28 Neural network processing device, control method, and computing system

Country Status (2)

Country Link
CN (1) CN110785779A (zh)
WO (1) WO2020107265A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1995030194A1 (en) * 1994-05-02 1995-11-09 Motorola Inc. Computer utilizing neural network and method of using same
CN108475347A (zh) * 2017-11-30 2018-08-31 深圳市大疆创新科技有限公司 神经网络处理的方法、装置、加速器、系统和可移动设备
CN108701250A (zh) * 2017-10-16 2018-10-23 深圳市大疆创新科技有限公司 数据定点化方法和装置
CN108701015A (zh) * 2017-11-30 2018-10-23 深圳市大疆创新科技有限公司 用于神经网络的运算装置、芯片、设备及相关方法
CN108805270A (zh) * 2018-05-08 2018-11-13 华中科技大学 一种基于存储器的卷积神经网络系统

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8965819B2 (en) * 2010-08-16 2015-02-24 Oracle International Corporation System and method for effective caching using neural networks
CN107016175B (zh) * 2017-03-23 2018-08-31 中国科学院计算技术研究所 适用神经网络处理器的自动化设计方法、装置及优化方法
CN107679621B (zh) * 2017-04-19 2020-12-08 赛灵思公司 人工神经网络处理装置
CN107679620B (zh) * 2017-04-19 2020-05-26 赛灵思公司 人工神经网络处理装置
CN107122826B (zh) * 2017-05-08 2019-04-23 京东方科技集团股份有限公司 用于卷积神经网络的处理方法和系统、和存储介质


Also Published As

Publication number Publication date
CN110785779A (zh) 2020-02-11


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18941391

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18941391

Country of ref document: EP

Kind code of ref document: A1