WO2019232965A1 - Error calibration method and device for an analog neural network processor - Google Patents

Error calibration method and device for an analog neural network processor

Info

Publication number
WO2019232965A1
WO2019232965A1 (PCT application PCT/CN2018/104781)
Authority
WO
WIPO (PCT)
Prior art keywords
processor
weight parameters
error
learning process
trainable
Prior art date
Application number
PCT/CN2018/104781
Other languages
English (en)
French (fr)
Inventor
贾凯歌
乔飞
魏琦
樊子辰
刘辛军
杨华中
Original Assignee
清华大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 清华大学
Publication of WO2019232965A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N 3/065 Analogue means
    • G06N 3/08 Learning methods

Definitions

  • Embodiments of the present invention relate to the technical field of error calibration, and in particular, to an error calibration method and device of an analog neural network processor.
  • Although the analog NN processor is highly energy efficient, the non-linearity of the system and its devices means that its processing accuracy still needs improvement.
  • The errors mainly stem from manufacturing process variations, voltage or temperature drift, and various noise sources.
  • Although the analog NN algorithm can tolerate these defects to a certain extent, error accumulation still causes a non-negligible drop in accuracy and a series of related problems in the analog NN processor.
  • Neural network algorithms have a certain error-tolerant learning ability: they can capture fixed patterns in the input data and even in the computation errors. Retraining the network is therefore an effective way to reduce various errors, such as the impact of process variations (PV) on the system, without adding extra overhead to the inference path.
  • However, retraining all network layers online is a very complex task that requires high-precision computation and a large number of additional computing units to achieve convergence, leading to excessive energy and resource consumption in the analog NN processor and greatly reducing its efficiency.
  • embodiments of the present invention provide an error calibration method and device for an analog neural network processor.
  • an embodiment of the present invention provides an error calibration method for an analog neural network processor, the method includes:
  • A stochastic gradient descent (SGD) algorithm is used to train the trainable weight parameters, where the loss values and gradients in the learning process are logarithmically quantized; the learning process is performed in the digital domain.
  • The learned weight parameters are stored so that the NN can calibrate the error of the processor according to these weight parameters.
  • an embodiment of the present invention provides an error calibration device for an analog neural network processor.
  • the device includes:
  • an obtaining unit configured to, if an algorithm update and/or an error parameter adjustment is detected, analyze the network structure of the NN to obtain the trainable weight parameters of a fully connected layer in the network structure;
  • a quantization unit configured to train the trainable weight parameters with a stochastic gradient descent (SGD) algorithm, where the loss values and gradients in the learning process are logarithmically quantized and the learning process is performed in the digital domain;
  • an arithmetic unit configured to replace, with a shift operation, the multiplication operations used for back propagation and trainable-weight updates during the learning process;
  • a storage unit configured to store the learned weight parameters for the NN to calibrate the error of the processor according to the weight parameters.
  • an embodiment of the present invention provides an electronic device, including: a processor, a memory, and a bus, where:
  • the memory stores program instructions executable by the processor, and the processor invokes the program instructions to execute the following methods:
  • A stochastic gradient descent (SGD) algorithm is used to train the trainable weight parameters, where the loss values and gradients in the learning process are logarithmically quantized; the learning process is performed in the digital domain;
  • the trained weight parameters are stored so that the NN can calibrate the error of the processor according to these weight parameters.
  • an embodiment of the present invention provides a non-transitory computer-readable storage medium, including:
  • the non-transitory computer-readable storage medium stores computer instructions that cause the computer to perform the following methods:
  • A stochastic gradient descent (SGD) algorithm is used to train the trainable weight parameters, where the loss values and gradients in the learning process are logarithmically quantized; the learning process is performed in the digital domain;
  • the learned weight parameters are stored so that the NN can calibrate the error of the processor according to these weight parameters.
  • In the error calibration method and device for an analog neural network processor provided by the embodiments of the present invention, the loss values and gradients are logarithmically quantized while the weight parameters of the fully connected layer are trained, and after the logarithmic quantization a shift operation replaces the multiplication operations used for back propagation and trainable-weight updates. This reduces the energy and resource consumption of the analog NN processor and thereby improves its efficiency.
  • FIG. 1 is a schematic flowchart of an error calibration method of an analog neural network processor according to an embodiment of the present invention
  • FIG. 2(a) is a schematic diagram of back propagation and trainable-weight updating using multiplication in the prior art
  • FIG. 2(b) is a schematic diagram of back propagation and trainable-weight updating using a shift operation according to an embodiment of the present invention
  • FIG. 3 is a schematic structural diagram of an error calibration device for an analog neural network processor according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • FIG. 1 is a schematic flowchart of an error calibration method for an analog neural network processor according to an embodiment of the present invention. As shown in FIG. 1, an error calibration method for an analog neural network processor according to an embodiment of the present invention includes the following steps:
  • If the device detects an algorithm update and/or an error parameter adjustment, it analyzes the network structure of the NN to obtain the trainable weight parameters of a fully connected layer in the network structure.
  • An analog neural network processor refers to a processor designed to execute neural network algorithms in the analog domain.
  • The algorithm update can be understood as an update of the neural network (NN), and the error parameter may be a PV parameter, although neither is specifically limited.
  • The network structure may include connection relationships such as convolutional layers, pooling layers and fully connected layers.
  • The trainable weight parameters in the embodiments of the present invention can be understood as the weight parameters of the fully connected layer, that is:
  • the embodiments of the invention train only the weight parameters of the fully connected layer; the weight parameters of the other layers, such as the convolutional layers, need not be trained.
  • S102: A stochastic gradient descent (SGD) algorithm is used to train the trainable weight parameters, where the loss values and gradients in the learning process are logarithmically quantized; the learning process is performed in the digital domain.
  • Specifically, the device trains the trainable weight parameters with a stochastic gradient descent (SGD) algorithm, where the loss values and gradients in the learning process are logarithmically quantized; the learning process is performed in the digital domain.
  • The neuron values computed by the analog neural network processor can be converted from the analog domain to the digital domain through a low-precision ADC (that is, an ADC below a preset accuracy threshold).
  • At the same bit width, logarithmic quantization covers a wider data range than linear quantization. For example, a base-2 logarithmically quantized 4-bit number can cover the range [-128, 128], whereas an integer representation of no fewer than 8 bits is needed for the range [-128, 127].
  • FIG. 2(a) is a schematic diagram of back propagation and trainable-weight updating using multiplication in the prior art.
  • FIG. 2(b) is a schematic diagram of back propagation and trainable-weight updating using a shift operation according to an embodiment of the present invention.
  • As shown in FIG. 2(a) and FIG. 2(b), the above S102 may use base-2 logarithmic quantization; once the base-2 logarithmic quantization is complete, the multiplications in the back propagation and trainable-weight update stages can be replaced by shift operations.
  • The energy cost of the logarithmic quantization itself is very small, because the exponent can be obtained fairly easily by finding the position of the highest high-level bit (for positive data) or of the lowest high-level bit (for negative data).
  • S104: Store the trained weight parameters for the NN to calibrate the error of the processor according to the weight parameters.
  • the device stores the trained weight parameters for the NN to calibrate the error of the processor according to the weight parameters. It should be noted that after the learning is completed, the new parameters already include the error information of the analog neural network processor, and no additional error compensation is required.
  • In the learning phase, memory can be reduced by about 66.7% on average, as shown in Table 1 (percentage of memory saved). This relieves storage pressure and reduces the data traffic of the embedded terminal.
  • Table 2 (a 32-bit fixed-point number is used as the prior-art baseline; the data in Table 2 were obtained by simulation in the SMIC 180 nm process).
  • The error calibration method for the analog neural network processor logarithmically quantizes the loss values and gradients while training the weight parameters of the fully connected layer, and after the quantization uses a shift operation in place of the multiplication operations used for back propagation and trainable-weight updates; this reduces the energy and resource consumption of the analog NN processor and thereby improves its efficiency.
  • the method further includes:
  • the trainable weight parameter and the weight parameter are stored with a resolution higher than a preset accuracy.
  • the device stores the trainable weight parameter and the weight parameter with a resolution higher than a preset accuracy.
  • the specific value of the preset accuracy can be set independently according to the actual situation.
  • Non-trainable weights (such as those of the convolutional layers) are stored at a low resolution (i.e., a resolution lower than the preset accuracy), which may be a specific bit width, while trainable weights (such as those of the fully connected layers) are stored at a high resolution (i.e., a resolution higher than the preset accuracy), which may also be a specific bit width.
  • The high-resolution trainable weights are used only to update the weights during the learning phase so as to ensure convergence.
  • By storing the trainable weight parameters and the weight parameters at a resolution higher than the preset accuracy, the error calibration method for an analog neural network processor provided by the embodiment of the present invention can ensure the convergence of the algorithm.
  • the method further includes:
  • the device obtains weight parameters in the convolution layer in the network structure, and stores the weight parameters in the convolution layer with a resolution lower than a preset accuracy.
  • The error calibration method for an analog neural network processor provided by the embodiment of the present invention can reduce the energy and resource consumption of the analog NN processor by storing the weight parameters of the convolutional layer at a resolution lower than the preset accuracy, thereby improving the efficiency of the analog NN processor.
  • the use phase of the NN includes an inference phase and a learning phase; correspondingly, the method further includes:
  • the learning phase is frozen and the inference phase is activated.
  • After the step of storing the learned weight parameters, the learning phase is frozen and the inference phase is activated; that is, in most cases only the inference phase is active, so only a small amount of additional energy is consumed.
  • By freezing the learning phase after the step of storing the learned weight parameters, the error calibration method for an analog neural network processor provided by the embodiment of the present invention can further reduce the energy and resource consumption of the analog NN processor, thereby improving its efficiency.
  • the method further includes:
  • the inference phase is frozen and the method described above is performed to activate the learning phase.
  • the device freezes the inference phase and executes the above method to activate the learning phase. That is, in a few cases, only the learning phase is active, so only a small amount of additional energy consumption is required.
  • By freezing the inference phase when an algorithm update and/or an error parameter adjustment is detected, the error calibration method for an analog neural network processor provided by the embodiment of the present invention can further reduce the energy and resource consumption of the analog NN processor, thereby improving its efficiency.
  • the method further includes:
  • The backward phase of the NN is performed in a processor of the terminal other than the processor, where the terminal is equipped with the processor and the other processor.
  • Specifically, the backward phase of the NN in the device is performed in a processor of the terminal other than the processor, where the terminal is equipped with the processor and the other processor.
  • the terminal may be a PC or the like and is included in the scope of the device.
  • Other processors may include CPUs or FPGAs, etc.
  • the above steps such as weight update are transferred to other processors for processing, thereby further reducing the chip area and energy consumption of analog NN processors.
  • The error calibration method for an analog neural network processor provided by the embodiment of the present invention can further reduce the energy and resource consumption of the analog NN processor by transferring steps such as the above weight updates to the other processor, thereby improving the efficiency of the analog NN processor.
  • the other processors include a CPU or an FPGA.
  • the other processors in the device include a CPU or an FPGA. Reference may be made to the foregoing embodiment, and details are not described herein again.
  • The error calibration method for the analog neural network processor ensures that the other processor properly handles the weight-update steps by selecting a CPU or an FPGA as the other processor.
  • FIG. 3 is a schematic structural diagram of an error calibration device for an analog neural network processor according to an embodiment of the present invention.
  • As shown in FIG. 3, an embodiment of the present invention provides an error calibration device for an analog neural network processor, which includes an obtaining unit 301, a quantization unit 302,
  • an arithmetic unit 303 and a storage unit 304, where:
  • the obtaining unit 301 is configured to, if an algorithm update and/or an error parameter adjustment is detected, analyze the network structure of the NN to obtain the trainable weight parameters of a fully connected layer in the network structure; the quantization unit 302 is configured to train the trainable weight parameters with a stochastic gradient descent (SGD) algorithm, where the loss values and gradients in the learning process are logarithmically quantized and the learning process is performed in the digital domain; the arithmetic unit 303 is configured to replace, with a shift operation, the multiplication operations used for back propagation and trainable-weight updates during the learning process; and the storage unit 304 is configured to store the learned weight parameters for the NN to calibrate the error of the processor according to the weight parameters.
  • Specifically, the obtaining unit 301, the quantization unit 302, the arithmetic unit 303 and the storage unit 304 are configured as described above.
  • In the error calibration device for an analog neural network processor provided by the embodiment of the present invention, the loss values and gradients are logarithmically quantized while the weight parameters of the fully connected layer are trained, and after the logarithmic quantization a shift operation replaces the multiplication operations used for back propagation and trainable-weight updates; this reduces the energy and resource consumption of the analog NN processor and thereby improves its efficiency.
  • The error calibration device for an analog neural network processor provided by the embodiment of the present invention may specifically be used to execute the processing flows of the foregoing method embodiments; its functions are not repeated here, and reference may be made to the detailed description of the foregoing method embodiments.
  • the electronic device includes a processor 401, a memory 402, and a bus 403.
  • the processor 401 and the memory 402 complete communication with each other through a bus 403;
  • the processor 401 is configured to call program instructions in the memory 402 to execute the methods provided in the foregoing method embodiments, for example: if an algorithm update and/or an error parameter adjustment is detected, analyzing the network structure of the NN to obtain the trainable weight parameters of the fully connected layer in the network structure; training the trainable weight parameters with a stochastic gradient descent (SGD) algorithm, where the loss values and gradients in the learning process are logarithmically quantized and the learning process is performed in the digital domain; using a shift operation to replace the multiplication operations used for back propagation and trainable-weight updates during the learning process; and storing the trained weight parameters for the NN to calibrate an error of the processor according to the weight parameters.
  • the computer program product includes a computer program stored on a non-transitory computer-readable storage medium.
  • the computer program includes program instructions.
  • When the program instructions are executed by a computer, the computer can execute the methods provided by the above method embodiments, for example including: if an algorithm update and/or an error parameter adjustment is detected, analyzing the network structure of the NN to obtain the trainable weight parameters of the fully connected layer in the network structure; training the trainable weight parameters with a stochastic gradient descent (SGD) algorithm, where the loss values and gradients in the learning process are logarithmically quantized and the learning process is performed in the digital domain; using a shift operation to replace the multiplication operations used for back propagation and trainable-weight updates during the learning process; and storing the learned weight parameters for the NN to calibrate the error of the processor according to the weight parameters.
  • This embodiment provides a non-transitory computer-readable storage medium that stores computer instructions, and the computer instructions cause the computer to execute the methods provided by the foregoing method embodiments, for example including: if an algorithm update and/or an error parameter adjustment is detected, analyzing the network structure of the NN to obtain the trainable weight parameters of the fully connected layer in the network structure; training the trainable weight parameters with a stochastic gradient descent (SGD) algorithm, where the loss values and gradients in the learning process are logarithmically quantized and the learning process is performed in the digital domain; using a shift operation to replace the multiplication operations used for back propagation and trainable-weight updates during the learning process; and storing the learned weight parameters for the NN to calibrate the error of the processor according to the weight parameters.
  • the foregoing program may be stored in a computer-readable storage medium.
  • When the program is executed, it performs the steps of the foregoing method embodiments.
  • The foregoing storage medium includes a ROM, a RAM, a magnetic disk, an optical disc, or other media that can store program code.
  • the embodiments can be implemented by means of software plus a necessary universal hardware platform, and of course, they can also be implemented by hardware.
  • The above technical solution, in essence or in the part contributing to the prior art, can be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk or an optical disc, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the various embodiments or certain parts of the embodiments.

Abstract

An error calibration method and device for an analog neural network processor. The method includes: if an algorithm update and/or an error parameter adjustment is detected, analyzing the network structure of the NN to obtain the trainable weight parameters of the fully connected layer in the network structure (S101); training the trainable weight parameters with a stochastic gradient descent (SGD) algorithm, where the loss values and gradients in the learning process are logarithmically quantized and the learning process is performed in the digital domain (S102); replacing, with a shift operation, the multiplication operations used for back propagation and trainable-weight updates during the learning process (S103); and storing the learned weight parameters for the NN to calibrate the error of the processor according to the weight parameters (S104). The device performs the above method. The method and device can reduce the energy and resource consumption of the analog NN processor, thereby improving the efficiency of the analog NN processor.

Description

Error calibration method and device for an analog neural network processor
Cross-reference
This application claims the benefit of Chinese Patent Application No. 201810580960.8, entitled "Error calibration method and device for an analog neural network processor" and filed on June 7, 2018, which is incorporated herein by reference in its entirety.
Technical field
Embodiments of the present invention relate to the technical field of error calibration, and in particular to an error calibration method and device for an analog neural network processor.
Background
Compared with digital circuits, analog circuits are clockless and therefore feature low power consumption and small chip area, so analog neural network (NN) processors have been widely used.
However, although the analog NN processor is highly energy efficient, the non-linearity of the system and its devices means that its processing accuracy still needs improvement; the errors mainly stem from manufacturing process variations, voltage or temperature drift, and various noise sources. Although the analog NN algorithm can tolerate these defects to a certain extent, error accumulation still causes a non-negligible drop in accuracy and a series of related problems in the analog NN processor.
Neural network algorithms have a certain error-tolerant learning ability: they can capture fixed patterns in the input data and even in the computation errors. Retraining the network is therefore an effective way to reduce various errors, such as the impact of process variations (PV) on the system, without adding extra overhead to the inference path. However, retraining all network layers online is a very complex task that requires high-precision computation and a large number of additional computing units to achieve convergence, leading to excessive energy and resource consumption in the analog NN processor and greatly reducing its efficiency.
Therefore, how to avoid the above defects and reduce the energy and resource consumption of the analog NN processor, thereby improving its efficiency, has become an urgent problem to be solved.
Summary of the invention
In view of the problems in the prior art, embodiments of the present invention provide an error calibration method and device for an analog neural network processor.
In a first aspect, an embodiment of the present invention provides an error calibration method for an analog neural network processor, the method including:
if an algorithm update and/or an error parameter adjustment is detected, analyzing the network structure of the NN to obtain the trainable weight parameters of the fully connected layer in the network structure;
training the trainable weight parameters with a stochastic gradient descent (SGD) algorithm, where the loss values and gradients in the learning process are logarithmically quantized, and the learning process is performed in the digital domain;
replacing, with a shift operation, the multiplication operations used for back propagation and trainable-weight updates during the learning process; and
storing the learned weight parameters for the NN to calibrate the error of the processor according to the weight parameters.
In a second aspect, an embodiment of the present invention provides an error calibration device for an analog neural network processor, the device including:
an obtaining unit configured to, if an algorithm update and/or an error parameter adjustment is detected, analyze the network structure of the NN to obtain the trainable weight parameters of the fully connected layer in the network structure;
a quantization unit configured to train the trainable weight parameters with a stochastic gradient descent (SGD) algorithm, where the loss values and gradients in the learning process are logarithmically quantized, and the learning process is performed in the digital domain;
an arithmetic unit configured to replace, with a shift operation, the multiplication operations used for back propagation and trainable-weight updates during the learning process; and
a storage unit configured to store the learned weight parameters for the NN to calibrate the error of the processor according to the weight parameters.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a memory and a bus, where:
the processor and the memory communicate with each other through the bus;
the memory stores program instructions executable by the processor, and the processor invokes the program instructions to perform the following method:
if an algorithm update and/or an error parameter adjustment is detected, analyzing the network structure of the NN to obtain the trainable weight parameters of the fully connected layer in the network structure;
training the trainable weight parameters with a stochastic gradient descent (SGD) algorithm, where the loss values and gradients in the learning process are logarithmically quantized, and the learning process is performed in the digital domain;
replacing, with a shift operation, the multiplication operations used for back propagation and trainable-weight updates during the learning process; and
storing the trained weight parameters for the NN to calibrate the error of the processor according to the weight parameters.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium, including:
the non-transitory computer-readable storage medium stores computer instructions that cause the computer to perform the following method:
if an algorithm update and/or an error parameter adjustment is detected, analyzing the network structure of the NN to obtain the trainable weight parameters of the fully connected layer in the network structure;
training the trainable weight parameters with a stochastic gradient descent (SGD) algorithm, where the loss values and gradients in the learning process are logarithmically quantized, and the learning process is performed in the digital domain;
replacing, with a shift operation, the multiplication operations used for back propagation and trainable-weight updates during the learning process; and
storing the learned weight parameters for the NN to calibrate the error of the processor according to the weight parameters.
In the error calibration method and device for an analog neural network processor provided by the embodiments of the present invention, the loss values and gradients are logarithmically quantized while the weight parameters of the fully connected layer are trained, and after the logarithmic quantization a shift operation replaces the multiplication operations used for back propagation and trainable-weight updates. This reduces the energy and resource consumption of the analog NN processor and thereby improves its efficiency.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention, and a person of ordinary skill in the art may still derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of an error calibration method for an analog neural network processor according to an embodiment of the present invention;
FIG. 2(a) is a schematic diagram of back propagation and trainable-weight updating using multiplication in the prior art;
FIG. 2(b) is a schematic diagram of back propagation and trainable-weight updating using a shift operation according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an error calibration device for an analog neural network processor according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
FIG. 1 is a schematic flowchart of an error calibration method for an analog neural network processor according to an embodiment of the present invention. As shown in FIG. 1, the error calibration method for an analog neural network processor provided by the embodiment of the present invention includes the following steps:
S101: if an algorithm update and/or an error parameter adjustment is detected, analyze the network structure of the NN to obtain the trainable weight parameters of the fully connected layer in the network structure.
Specifically, if the device detects an algorithm update and/or an error parameter adjustment, it analyzes the network structure of the NN to obtain the trainable weight parameters of the fully connected layer in the network structure. An analog neural network processor refers to a processor designed to execute neural network algorithms in the analog domain. The algorithm update can be understood as an update of the neural network NN, and the error parameter may be a PV parameter, although neither is specifically limited. The network structure may include connection relationships such as convolutional layers, pooling layers and fully connected layers. It should be noted that the trainable weight parameters in the embodiments of the present invention can be understood as the weight parameters of the fully connected layer; that is, the embodiments of the invention train only the weight parameters of the fully connected layer, and the weight parameters of the other layers, such as the convolutional layers, need not be trained.
S102: train the trainable weight parameters with a stochastic gradient descent (SGD) algorithm, where the loss values and gradients in the learning process are logarithmically quantized; the learning process is performed in the digital domain.
Specifically, the device trains the trainable weight parameters with a stochastic gradient descent (SGD) algorithm, where the loss values and gradients in the learning process are logarithmically quantized, and the learning process is performed in the digital domain. The neuron values computed by the analog neural network processor can be converted from the analog domain to the digital domain through a low-precision ADC (i.e., an ADC below a preset accuracy threshold). At the same bit width, logarithmic quantization covers a wider data range than linear quantization: for example, a base-2 logarithmically quantized 4-bit number can cover the range [-128, 128], whereas an integer representation of no fewer than 8 bits is needed for the range [-128, 127]. Considering that the loss values and gradients always vary over a wide range, and that they can be treated approximately in the stochastic gradient descent (SGD) algorithm, a logarithmic quantization method is adopted to quantize the loss values and gradients. Training the trainable weight parameters with the SGD algorithm is a mature technique in the art and is not described further here.
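As a minimal sketch (one plausible reading of the description rather than the embodiment's exact encoding), the Python snippet below quantizes a tensor to a sign and a base-2 exponent, which is essentially the position of the highest set bit of the magnitude:

```python
import numpy as np

def log2_quantize(x, e_min=-8, e_max=7):
    """Sketch of base-2 logarithmic quantization of a tensor.

    Each value is approximated by sign(x) * 2**e with an integer exponent e
    (roughly the position of the highest set bit of the magnitude).  With a
    1-bit sign and a 3-bit unsigned exponent, a 4-bit code already reaches
    magnitudes up to 2**7 = 128 (the [-128, 128] range mentioned in the
    description), whereas a linear integer code needs at least 8 bits for
    [-128, 127].
    """
    sign = np.sign(x)
    mag = np.abs(x)
    e = np.round(np.log2(np.where(mag > 0, mag, 1.0)))
    e = np.clip(e, e_min, e_max).astype(np.int8)
    return sign.astype(np.int8), e

def log2_dequantize(sign, e):
    return sign * np.exp2(e.astype(np.float64))

s, e = log2_quantize(np.array([0.013, -90.0, 0.0]))
print(log2_dequantize(s, e))   # approx. [0.015625, -64.0, 0.0]
```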
S103: replace, with a shift operation, the multiplication operations used for back propagation and trainable-weight updates during the learning process.
Specifically, the device replaces, with a shift operation, the multiplication operations used for back propagation and trainable-weight updates during the learning process. FIG. 2(a) is a schematic diagram of back propagation and trainable-weight updating using multiplication in the prior art, and FIG. 2(b) is a schematic diagram of back propagation and trainable-weight updating using a shift operation according to an embodiment of the present invention. As shown in FIG. 2(a) and FIG. 2(b), the above S102 may use base-2 logarithmic quantization; once the base-2 logarithmic quantization is complete, the multiplications in the back propagation and trainable-weight update stages can be replaced by shift operations. The energy cost of the logarithmic quantization itself is very small, because the exponent can be obtained fairly easily by finding the position of the highest high-level bit (for positive data) or of the lowest high-level bit (for negative data).
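The following Python sketch contrasts a conventional multiplicative SGD update with a shift-based update in which the log-quantized gradient and learning rate are combined by adding exponents (a bit shift in hardware); the concrete update rule and learning-rate value are assumptions made for illustration, not the embodiment's exact formulas:

```python
import numpy as np

# Conventional update: w -= lr * grad  (one multiplication per weight)
def update_multiply(w, grad, lr):
    return w - lr * grad

# Shift-based update: grad and lr are base-2 log-quantized, so the product
# lr * grad becomes sign * 2**(e_lr + e_grad), i.e. an exponent addition
# (a bit shift) instead of a multiplication.
def update_shift(w, grad_sign, grad_exp, lr_exp):
    step = grad_sign * np.exp2(grad_exp + lr_exp)
    return w - step

w = np.array([0.50, -0.25])
grad = np.array([0.30, -0.12])

# log-quantized gradient: sign and nearest power-of-two exponent
grad_sign = np.sign(grad).astype(np.int8)
grad_exp = np.round(np.log2(np.abs(grad))).astype(np.int8)   # [-2, -3]
lr_exp = -4                                                  # lr = 2**-4 = 0.0625

print(update_multiply(w, grad, 2.0 ** lr_exp))      # [0.48125, -0.2425]
print(update_shift(w, grad_sign, grad_exp, lr_exp))  # [0.484375, -0.2421875]
```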
S104: store the trained weight parameters for the NN to calibrate the error of the processor according to the weight parameters.
Specifically, the device stores the trained weight parameters for the NN to calibrate the error of the processor according to the weight parameters. It should be noted that, after the learning is completed, the new parameters already contain the error information of the analog neural network processor, and no additional error compensation is required.
In the learning phase, memory can be reduced by about 66.7% on average compared with the prior art, as shown in Table 1 (percentage of memory saved). This relieves storage pressure and reduces the data traffic of the embedded terminal.
Table 1
                                      Neurons & loss   Gradients   Weights   Overall
Prior art                             32 bit           32 bit      32 bit    32 bit
Embodiment of the present invention   6 bit            6 bit       20 bit    10.67 bit
Memory saved                          81.25%           81.25%      37.5%     66.7%
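As a sanity check on Table 1 (the arithmetic below is ours, not text from the original): the overall bit width appears to be the plain average of the three columns, (6 + 6 + 20) / 3 ≈ 10.67 bit, and the corresponding savings are 1 - 6/32 = 81.25%, 1 - 20/32 = 37.5% and 1 - 10.67/32 ≈ 66.7%, consistent with the figures in the table.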
After the logarithmic quantization of the embodiment of the present invention, almost all multiplications in the learning process can be replaced by shift operations. If the learning process is implemented with an existing FPGA or ASIC integrated in the system, power consumption drops significantly, as shown in Table 2 (energy calculation), where PDP (Power Delay Product) is the product of power and delay. Compared with the prior art, the combination of logarithmic quantization and shift operations yields roughly a 50x improvement in energy efficiency.
Table 2 (the prior art uses a 32-bit fixed-point number as the baseline; the data in Table 2 were obtained by simulation in the SMIC 180 nm process)
                                      Operation               Delay (ns)   Power (mW)   PDP (pJ)
Prior art                             32-bit multiplication   6.53         15.7         102.52
Embodiment of the present invention   32-bit shift            1.62         1.22         1.97
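For reference (our arithmetic, not part of the original text): PDP = power x delay, e.g. 15.7 mW x 6.53 ns ≈ 102.5 pJ for the 32-bit multiplication and 1.22 mW x 1.62 ns ≈ 1.98 pJ for the 32-bit shift, and 102.52 / 1.97 ≈ 52, consistent with the roughly 50x energy-efficiency gain stated above.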
In the error calibration method for an analog neural network processor provided by the embodiment of the present invention, the loss values and gradients are logarithmically quantized while the weight parameters of the fully connected layer are trained, and after the logarithmic quantization a shift operation replaces the multiplication operations used for back propagation and trainable-weight updates. This reduces the energy and resource consumption of the analog NN processor and thereby improves its efficiency.
On the basis of the above embodiment, the method further includes:
storing the trainable weight parameters and the weight parameters at a resolution higher than a preset accuracy.
Specifically, the device stores the trainable weight parameters and the weight parameters at a resolution higher than a preset accuracy. The specific value of the preset accuracy can be set independently according to the actual situation. Non-trainable weights (such as those of the convolutional layers) are stored at a low resolution (i.e., a resolution lower than the preset accuracy), which may be a specific bit width, while trainable weights (such as those of the fully connected layers) are stored at a high resolution (i.e., a resolution higher than the preset accuracy), which may also be a specific bit width. The high-resolution trainable weights are used only to update the weights during the learning phase so as to ensure convergence.
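One way such a dual-resolution scheme could be organized is sketched below in Python; the specific bit widths, fractional bits and rounding rule are illustrative assumptions (the description leaves them open), with the high-resolution copy used only for accumulating learning updates:

```python
import numpy as np

def quantize_fixed(w, bits, frac_bits):
    """Round w onto a signed fixed-point grid with `bits` total bits."""
    scale = 2.0 ** frac_bits
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return np.clip(np.round(w * scale), lo, hi) / scale

class FcWeights:
    """High-resolution master copy for learning, low-resolution copy for use."""

    def __init__(self, w, hi_bits=20, hi_frac=16, lo_bits=6, lo_frac=3):
        self.hi_bits, self.hi_frac = hi_bits, hi_frac
        self.lo_bits, self.lo_frac = lo_bits, lo_frac
        self.master = quantize_fixed(np.asarray(w, float), hi_bits, hi_frac)

    def apply_update(self, step):
        # small SGD steps accumulate in the high-resolution copy (convergence)
        self.master = quantize_fixed(self.master - step, self.hi_bits, self.hi_frac)

    def deployed(self):
        # low-resolution weights actually stored/loaded for inference
        return quantize_fixed(self.master, self.lo_bits, self.lo_frac)

w = FcWeights([0.37, -1.52])
w.apply_update(np.array([0.004, -0.004]))
print(w.master)      # high-resolution copy moves: approx. [0.366, -1.516]
print(w.deployed())  # low-resolution copy may not: [0.375, -1.5]
```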
The error calibration method for an analog neural network processor provided by the embodiment of the present invention can ensure the convergence of the algorithm by storing the trainable weight parameters and the weight parameters at a resolution higher than the preset accuracy.
On the basis of the above embodiment, the method further includes:
obtaining the weight parameters of the convolutional layer in the network structure, and storing the weight parameters of the convolutional layer at a resolution lower than the preset accuracy.
Specifically, the device obtains the weight parameters of the convolutional layer in the network structure and stores them at a resolution lower than the preset accuracy. Reference may be made to the foregoing embodiment, and details are not repeated here.
The error calibration method for an analog neural network processor provided by the embodiment of the present invention can reduce the energy and resource consumption of the analog NN processor by storing the weight parameters of the convolutional layer at a resolution lower than the preset accuracy, thereby improving the efficiency of the analog NN processor.
On the basis of the above embodiment, the use phase of the NN includes an inference phase and a learning phase; correspondingly, the method further includes:
after the step of storing the learned weight parameters, freezing the learning phase and activating the inference phase.
Specifically, after the step of storing the learned weight parameters, the device freezes the learning phase and activates the inference phase. That is, in most cases only the inference phase is active, so only a small amount of additional energy is consumed.
The error calibration method for an analog neural network processor provided by the embodiment of the present invention can further reduce the energy and resource consumption of the analog NN processor by freezing the learning phase after the step of storing the learned weight parameters, thereby improving the efficiency of the analog NN processor.
On the basis of the above embodiment, the method further includes:
if an algorithm update and/or an error parameter adjustment is detected, freezing the inference phase and performing the above method to activate the learning phase.
Specifically, if the device detects an algorithm update and/or an error parameter adjustment, it freezes the inference phase and performs the above method to activate the learning phase. That is, only in a few cases is the learning phase active, so only a small amount of additional energy is consumed.
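A minimal Python sketch of this phase gating is given below; the controller class and its method names are hypothetical and only illustrate the sequence described above (detect an algorithm update or error-parameter adjustment, freeze inference, run the learning steps, store the weights, then freeze learning and reactivate inference):

```python
class CalibrationController:
    """Toy controller that gates the inference and learning phases."""

    def __init__(self, train_fc_weights, store_weights):
        self.train_fc_weights = train_fc_weights   # runs steps S101-S103
        self.store_weights = store_weights         # runs step S104
        self.phase = "inference"                   # normally only inference is active

    def on_event(self, algorithm_updated=False, error_params_adjusted=False):
        if not (algorithm_updated or error_params_adjusted):
            return self.phase
        self.phase = "learning"                    # freeze inference, activate learning
        new_weights = self.train_fc_weights()
        self.store_weights(new_weights)
        self.phase = "inference"                   # freeze learning, reactivate inference
        return self.phase

ctrl = CalibrationController(lambda: {"fc1": [0.1, -0.2]}, lambda w: None)
print(ctrl.on_event(error_params_adjusted=True))   # -> "inference"
```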
The error calibration method for an analog neural network processor provided by the embodiment of the present invention can further reduce the energy and resource consumption of the analog NN processor by freezing the inference phase when an algorithm update and/or an error parameter adjustment is detected, thereby improving the efficiency of the analog NN processor.
On the basis of the above embodiment, the method further includes:
performing the backward phase of the NN in a processor of the terminal other than the processor, where the terminal is equipped with the processor and the other processor.
Specifically, the backward phase of the NN in the device is performed in a processor of the terminal other than the processor, where the terminal is equipped with the processor and the other processor. The terminal may be a PC or the like and is included in the scope of the device. The other processor may include a CPU, an FPGA, or the like; transferring steps such as the above weight updates to the other processor further reduces the chip area and energy consumption of the analog NN processor.
The error calibration method for an analog neural network processor provided by the embodiment of the present invention can further reduce the energy and resource consumption of the analog NN processor by transferring steps such as the above weight updates to the other processor, thereby improving the efficiency of the analog NN processor.
On the basis of the above embodiment, the other processor includes a CPU or an FPGA.
Specifically, the other processor in the device includes a CPU or an FPGA. Reference may be made to the foregoing embodiment, and details are not repeated here.
The error calibration method for an analog neural network processor provided by the embodiment of the present invention ensures that the other processor properly handles steps such as the above weight updates by selecting a CPU or an FPGA as the other processor.
FIG. 3 is a schematic structural diagram of an error calibration device for an analog neural network processor according to an embodiment of the present invention. As shown in FIG. 3, an embodiment of the present invention provides an error calibration device for an analog neural network processor, including an obtaining unit 301, a quantization unit 302, an arithmetic unit 303 and a storage unit 304, where:
the obtaining unit 301 is configured to, if an algorithm update and/or an error parameter adjustment is detected, analyze the network structure of the NN to obtain the trainable weight parameters of the fully connected layer in the network structure; the quantization unit 302 is configured to train the trainable weight parameters with a stochastic gradient descent (SGD) algorithm, where the loss values and gradients in the learning process are logarithmically quantized and the learning process is performed in the digital domain; the arithmetic unit 303 is configured to replace, with a shift operation, the multiplication operations used for back propagation and trainable-weight updates during the learning process; and the storage unit 304 is configured to store the learned weight parameters for the NN to calibrate the error of the processor according to the weight parameters.
Specifically, the obtaining unit 301 is configured to, if an algorithm update and/or an error parameter adjustment is detected, analyze the network structure of the NN to obtain the trainable weight parameters of the fully connected layer in the network structure; the quantization unit 302 is configured to train the trainable weight parameters with a stochastic gradient descent (SGD) algorithm, where the loss values and gradients in the learning process are logarithmically quantized and the learning process is performed in the digital domain; the arithmetic unit 303 is configured to replace, with a shift operation, the multiplication operations used for back propagation and trainable-weight updates during the learning process; and the storage unit 304 is configured to store the learned weight parameters for the NN to calibrate the error of the processor according to the weight parameters.
In the error calibration device for an analog neural network processor provided by the embodiment of the present invention, the loss values and gradients are logarithmically quantized while the weight parameters of the fully connected layer are trained, and after the logarithmic quantization a shift operation replaces the multiplication operations used for back propagation and trainable-weight updates. This reduces the energy and resource consumption of the analog NN processor and thereby improves its efficiency.
The error calibration device for an analog neural network processor provided by the embodiment of the present invention may specifically be used to execute the processing flows of the foregoing method embodiments; its functions are not repeated here, and reference may be made to the detailed description of the foregoing method embodiments.
FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in FIG. 4, the electronic device includes a processor 401, a memory 402 and a bus 403,
where the processor 401 and the memory 402 communicate with each other through the bus 403;
the processor 401 is configured to invoke program instructions in the memory 402 to perform the methods provided by the foregoing method embodiments, for example including: if an algorithm update and/or an error parameter adjustment is detected, analyzing the network structure of the NN to obtain the trainable weight parameters of the fully connected layer in the network structure; training the trainable weight parameters with a stochastic gradient descent (SGD) algorithm, where the loss values and gradients in the learning process are logarithmically quantized and the learning process is performed in the digital domain; replacing, with a shift operation, the multiplication operations used for back propagation and trainable-weight updates during the learning process; and storing the trained weight parameters for the NN to calibrate the error of the processor according to the weight parameters.
This embodiment discloses a computer program product. The computer program product includes a computer program stored on a non-transitory computer-readable storage medium, the computer program includes program instructions, and when the program instructions are executed by a computer, the computer can perform the methods provided by the foregoing method embodiments, for example including: if an algorithm update and/or an error parameter adjustment is detected, analyzing the network structure of the NN to obtain the trainable weight parameters of the fully connected layer in the network structure; training the trainable weight parameters with a stochastic gradient descent (SGD) algorithm, where the loss values and gradients in the learning process are logarithmically quantized and the learning process is performed in the digital domain; replacing, with a shift operation, the multiplication operations used for back propagation and trainable-weight updates during the learning process; and storing the learned weight parameters for the NN to calibrate the error of the processor according to the weight parameters.
This embodiment provides a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium stores computer instructions that cause the computer to perform the methods provided by the foregoing method embodiments, for example including: if an algorithm update and/or an error parameter adjustment is detected, analyzing the network structure of the NN to obtain the trainable weight parameters of the fully connected layer in the network structure; training the trainable weight parameters with a stochastic gradient descent (SGD) algorithm, where the loss values and gradients in the learning process are logarithmically quantized and the learning process is performed in the digital domain; replacing, with a shift operation, the multiplication operations used for back propagation and trainable-weight updates during the learning process; and storing the learned weight parameters for the NN to calibrate the error of the processor according to the weight parameters.
A person of ordinary skill in the art will understand that all or some of the steps of the above method embodiments may be implemented by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium; when the program is executed, it performs the steps of the above method embodiments. The foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk or an optical disc.
The embodiments of the electronic device and the like described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. A person of ordinary skill in the art can understand and implement them without creative effort.
From the description of the above implementations, a person skilled in the art can clearly understand that the implementations can be realized by software plus a necessary general-purpose hardware platform, or of course by hardware. Based on this understanding, the above technical solutions, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate, rather than to limit, the technical solutions of the embodiments of the present invention. Although the embodiments of the present invention have been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some or all of the technical features therein, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. An error calibration method for an analog neural network processor, the method being performed in an analog neural network (NN) processor, characterized by comprising:
    if an algorithm update and/or an error parameter adjustment is detected, analyzing the network structure of the NN to obtain the trainable weight parameters of a fully connected layer in the network structure;
    training the trainable weight parameters with a stochastic gradient descent (SGD) algorithm, wherein the loss values and gradients in the learning process are logarithmically quantized, and the learning process is performed in the digital domain;
    replacing, with a shift operation, the multiplication operations used for back propagation and trainable-weight updates during the learning process; and
    storing the learned weight parameters for the NN to calibrate the error of the processor according to the weight parameters.
  2. The method according to claim 1, characterized in that the method further comprises:
    storing the trainable weight parameters and the weight parameters at a resolution higher than a preset accuracy.
  3. The method according to claim 1, characterized in that the method further comprises:
    obtaining the weight parameters of a convolutional layer in the network structure, and storing the weight parameters of the convolutional layer at a resolution lower than the preset accuracy.
  4. The method according to claim 1, characterized in that the use phase of the NN comprises an inference phase and a learning phase; and correspondingly, the method further comprises:
    after the step of storing the learned weight parameters, freezing the learning phase and activating the inference phase.
  5. The method according to claim 4, characterized in that the method further comprises:
    if an algorithm update and/or an error parameter adjustment is detected, freezing the inference phase and performing the method according to claim 1 to activate the learning phase.
  6. The method according to claim 1, characterized in that the method further comprises:
    performing the backward phase of the NN in a processor of a terminal other than the processor, wherein the terminal is equipped with the processor and the other processor.
  7. The method according to claim 6, characterized in that the other processor comprises a CPU or an FPGA.
  8. An error calibration device for an analog neural network processor, the device comprising an analog neural network (NN) processor, characterized by comprising:
    an obtaining unit configured to, if an algorithm update and/or an error parameter adjustment is detected, analyze the network structure of the NN to obtain the trainable weight parameters of a fully connected layer in the network structure;
    a quantization unit configured to train the trainable weight parameters with a stochastic gradient descent (SGD) algorithm, wherein the loss values and gradients in the learning process are logarithmically quantized, and the learning process is performed in the digital domain;
    an arithmetic unit configured to replace, with a shift operation, the multiplication operations used for back propagation and trainable-weight updates during the learning process; and
    a storage unit configured to store the learned weight parameters for the NN to calibrate the error of the processor according to the weight parameters.
  9. An electronic device, characterized by comprising a processor, a memory and a bus, wherein:
    the processor and the memory communicate with each other through the bus; and
    the memory stores program instructions executable by the processor, and the processor invokes the program instructions to perform the method according to any one of claims 1 to 7.
  10. A non-transitory computer-readable storage medium, characterized in that the non-transitory computer-readable storage medium stores computer instructions, and the computer instructions cause the computer to perform the method according to any one of claims 1 to 7.
PCT/CN2018/104781 2018-06-07 2018-09-10 Error calibration method and device for an analog neural network processor WO2019232965A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810580960.8 2018-06-07
CN201810580960.8A CN110580523B (zh) 2018-06-07 2018-06-07 一种模拟神经网络处理器的误差校准方法及装置

Publications (1)

Publication Number Publication Date
WO2019232965A1 true WO2019232965A1 (zh) 2019-12-12

Family

ID=68769907

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/104781 WO2019232965A1 (zh) 2018-06-07 2018-09-10 一种模拟神经网络处理器的误差校准方法及装置

Country Status (2)

Country Link
CN (1) CN110580523B (zh)
WO (1) WO2019232965A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113705776A (zh) * 2021-08-06 2021-11-26 山东云海国创云计算装备产业创新中心有限公司 Method, system, device and storage medium for implementing an activation function based on an ASIC

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105279554A (zh) * 2015-09-29 2016-01-27 东方网力科技股份有限公司 Training method and device for a deep neural network based on a hash coding layer
CN107395211A (zh) * 2017-09-12 2017-11-24 郑州云海信息技术有限公司 Data processing method and device based on a convolutional neural network model
WO2018034682A1 (en) * 2016-08-13 2018-02-22 Intel Corporation Apparatuses, methods, and systems for neural networks
CN107944458A (zh) * 2017-12-08 2018-04-20 北京维大成科技有限公司 Image recognition method and device based on a convolutional neural network
CN107992842A (zh) * 2017-12-13 2018-05-04 深圳云天励飞技术有限公司 Living-body detection method, computer device and computer-readable storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8352400B2 (en) * 1991-12-23 2013-01-08 Hoffberg Steven M Adaptive pattern recognition based controller apparatus and method and human-factored interface therefore
US7409372B2 (en) * 2003-06-20 2008-08-05 Hewlett-Packard Development Company, L.P. Neural network trained with spatial errors
CN104899641B (zh) * 2015-05-25 2018-07-13 杭州朗和科技有限公司 Deep neural network learning method, processor and deep neural network learning system
US11222263B2 (en) * 2016-07-28 2022-01-11 Samsung Electronics Co., Ltd. Neural network method and apparatus
CN106355248A (zh) * 2016-08-26 2017-01-25 深圳先进技术研究院 Deep convolutional neural network training method and device
US10255910B2 (en) * 2016-09-16 2019-04-09 Apptek, Inc. Centered, left- and right-shifted deep neural networks and their combinations
CN107092960A (zh) * 2017-04-17 2017-08-25 中国民航大学 Improved parallel-channel convolutional neural network training method
CN108009640B (zh) * 2017-12-25 2020-04-28 清华大学 Training device for a memristor-based neural network and training method thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105279554A (zh) * 2015-09-29 2016-01-27 东方网力科技股份有限公司 Training method and device for a deep neural network based on a hash coding layer
WO2018034682A1 (en) * 2016-08-13 2018-02-22 Intel Corporation Apparatuses, methods, and systems for neural networks
CN107395211A (zh) * 2017-09-12 2017-11-24 郑州云海信息技术有限公司 Data processing method and device based on a convolutional neural network model
CN107944458A (zh) * 2017-12-08 2018-04-20 北京维大成科技有限公司 Image recognition method and device based on a convolutional neural network
CN107992842A (zh) * 2017-12-13 2018-05-04 深圳云天励飞技术有限公司 Living-body detection method, computer device and computer-readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113705776A (zh) * 2021-08-06 2021-11-26 山东云海国创云计算装备产业创新中心有限公司 Method, system, device and storage medium for implementing an activation function based on an ASIC
CN113705776B (zh) * 2021-08-06 2023-08-08 山东云海国创云计算装备产业创新中心有限公司 Method, system, device and storage medium for implementing an activation function based on an ASIC

Also Published As

Publication number Publication date
CN110580523B (zh) 2022-08-02
CN110580523A (zh) 2019-12-17

Similar Documents

Publication Publication Date Title
US11216719B2 (en) Methods and arrangements to quantize a neural network with machine learning
US20220374688A1 (en) Training method of neural network based on memristor and training device thereof
Lotrič et al. Applicability of approximate multipliers in hardware neural networks
CN105024704B (zh) 一种低复杂度的列分层ldpc译码器实现方法
US20210027195A1 (en) Systems and Methods for Compression and Distribution of Machine Learning Models
JP2017515205A5 (zh)
JP2019003546A (ja) 多層ニューラルネットワークのニューロンの出力レベル調整方法
CN113887845B (zh) 极端事件预测方法、装置、设备和存储介质
US20220236909A1 (en) Neural Network Computing Chip and Computing Method
US11294763B2 (en) Determining significance levels of error values in processes that include multiple layers
CN111158912A (zh) 云雾协同计算环境下一种基于深度学习的任务卸载决策方法
WO2019232965A1 (zh) 一种模拟神经网络处理器的误差校准方法及装置
CN104503847A (zh) 一种数据中心节能方法和装置
WO2020042832A1 (zh) 神经网络节点的自增减方法、装置及存储介质
TW202137075A (zh) 基於記憶體內運算電路架構之量化方法及其系統
Eldebiky et al. Correctnet: Robustness enhancement of analog in-memory computing for neural networks by error suppression and compensation
US20210374509A1 (en) Modulo Operation Unit
WO2021151324A1 (zh) 基于迁移学习的医疗数据处理方法、装置、设备及介质
CN116629342A (zh) 模型旁路调优方法及装置
US20230024977A1 (en) Method of processing data, data processing device, data processing program, and method of generating neural network model
US20220358183A1 (en) Matrix multiplier and operation method thereof
US20210081785A1 (en) Information processing device and method, and recording medium storing information processing program
CN111476356B (zh) 忆阻神经网络的训练方法、装置、设备及存储介质
WO2019208248A1 (ja) 学習装置、学習方法及び学習プログラム
US8041992B2 (en) Input compensated and/or overcompensated computing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18921568; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18921568; Country of ref document: EP; Kind code of ref document: A1)