EP4097646A1 - Hardware-accelerated computation of convolutions - Google Patents

Hardware-accelerated computation of convolutions

Info

Publication number
EP4097646A1
Authority
EP
European Patent Office
Prior art keywords
input
convolution
memory
data
convolution kernel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP21701465.3A
Other languages
German (de)
English (en)
Inventor
Armin Runge
Taha Ibrahim Ibrahim SOLIMAN
Leonardo Luiz Ecco
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Robert Bosch GmbH
Original Assignee
Robert Bosch GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Bosch GmbH
Publication of EP4097646A1
Legal status: Withdrawn

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/06: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks

Definitions

  • the present invention relates to the computation of the convolution of input data with a convolution kernel by means of a hardware accelerator.
  • Convolutional neural networks (CNN)
  • the convolution kernel is guided in a predetermined grid of positions within the input tensor.
  • the distance between adjacent positions in this grid is also known as the “stride”.
  • the convolution kernel is applied in each of the positions by forming a sum, weighted with the values of the convolution kernel, from the input data in the area of the input tensor covered by the convolution kernel at its current position. In the convolution, this weighted sum is assigned to the current position of the convolution kernel.
  • if the convolution kernel is used to identify a certain searched feature in the input data, the weighted sum is greatest for those positions of the convolution kernel at which there is the greatest correspondence between the searched feature embodied in this convolution kernel and the input data.
  • the result of the convolution with a convolution kernel is therefore also referred to as a feature map in relation to this convolution kernel.
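The sliding-window computation described in the points above can be sketched in a few lines. This is an illustrative model only, not the patented hardware; the function and variable names are chosen freely:

```python
# Plain 2-D convolution: at every kernel position on a stride grid, form the
# sum of the covered input values weighted by the kernel values.
def convolve2d(inp, kernel, stride=1):
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(inp), len(inp[0])
    out = []
    for i in range(0, h - kh + 1, stride):
        row = []
        for j in range(0, w - kw + 1, stride):
            weighted_sum = sum(inp[i + di][j + dj] * kernel[di][dj]
                               for di in range(kh) for dj in range(kw))
            row.append(weighted_sum)  # assigned to the current kernel position
        out.append(row)
    return out  # the feature map for this kernel

feature_map = convolve2d([[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[1, 0], [0, 1]])
```

With stride 1 on this 3x3 input and 2x2 kernel the result is the 2x2 feature map [[6, 8], [12, 14]]; a stride of 2 would keep only the top-left position here.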
  • the weighted sum is calculated using at least one hardware accelerator.
  • This hardware accelerator has an input memory and a fixed number of multipliers, which fetch their operands, that is to say here input data and values of the convolution kernel, from predetermined storage locations in the input memory.
  • an inner product arithmetic unit that calculates the inner product of two vectors with a fixed length typically contains as many multipliers as the vectors have elements. This means that all the multiplications required to calculate the inner product can be carried out at the same time. The resulting products then only have to be accumulated with adders. Overall, the inner product can be calculated in fewer clock cycles.
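A simple software model of such an inner-product unit, as a sketch under the assumption of a power-of-two width (this is not the patent's actual circuit):

```python
# Model of a fixed-width inner-product unit: all multipliers fire in parallel,
# then an adder tree accumulates the products in log2(width) stages.
def inner_product_unit(a, b, width=8):
    assert len(a) == len(b) == width and width & (width - 1) == 0
    products = [x * y for x, y in zip(a, b)]   # one multiplier per vector element
    while len(products) > 1:                   # one adder-tree stage per pass
        products = [products[i] + products[i + 1]
                    for i in range(0, len(products), 2)]
    return products[0]
```

For example, `inner_product_unit(list(range(1, 9)), [1] * 8)` returns 36, the same value as a sequential dot product, but with all multiplications performed simultaneously.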
  • such hardware accelerators are usually operated in such a way that a maximum of as many summands are processed in each operation as corresponds to the depth of the input tensor, i.e. the extension of the input tensor, measured in the number of elements, in one dimension.
  • if the input data include RGB image data, the input tensor has a depth of 3, because each image pixel is assigned three intensity values for red, green and blue.
  • these three intensity values assigned to a specific pixel are then always summed, weighted with values of the convolution kernel. This calculation is repeated for all pixels currently covered by the convolution kernel, and then all inner products obtained are added.
  • This procedure is modified within the scope of the method so that more summands are processed in at least one work step of the hardware accelerator than corresponds to the depth of the input tensor.
  • ideally, the hardware accelerator is then operated with a completely filled input memory. Since the pixel-wise intermediate results calculated up to now are all added anyway in order to obtain the final result of the convolution, it is irrelevant for the result if the calculation for several or ideally all pixels is combined in one operation of the hardware accelerator. However, this result is delivered much faster because, overall, significantly fewer operations of the hardware accelerator are required.
  • the time saving is particularly great if a convolution is calculated in a layer of a CNN that has a large lateral extent and, at the same time, a shallow depth.
  • for example, the aforementioned RGB image can have a Full HD resolution (1,920 x 1,080 pixels) with a depth of only 3. If an inner product computing unit for vectors with a length of 128 elements is used, in the conventional operating mode this arithmetic unit performs only three multiplications per operation instead of 128. Almost 98% of the available computing capacity is therefore idle. With the method proposed here, the hardware accelerator is utilized much better.
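The utilization figures in this paragraph can be checked with a short calculation; the 3x3 kernel footprint used for the "packed" case is an assumed example, not a value from the patent:

```python
width = 128            # multipliers in the inner-product unit
depth = 3              # depth of the RGB input tensor
kernel_pixels = 3 * 3  # assumed 3x3 kernel footprint

conventional = depth / width                        # one depth column per operation
packed = min(kernel_pixels * depth, width) / width  # pack all covered pixels

print(f"conventional utilization: {conventional:.1%}")  # 2.3%, i.e. ~98% idle
print(f"packed utilization:       {packed:.1%}")
```

With these numbers the conventional mode uses about 2.3% of the multipliers, matching the "almost 98% idle" statement in the text.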
  • the input data can be loaded into the input memory of the hardware accelerator according to a rule that is the same for all positions of the convolution kernel, and this mere reorganization is sufficient to obtain the same result much faster than before.
  • each value of the input data must be at that position in the list of input data in the input memory of the hardware accelerator at which the matching value of the convolution kernel stands in the list of values of the convolution kernel.
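This alignment requirement can be illustrated as follows; the row-major flattening order is an assumption for illustration, and all names are invented:

```python
# For one kernel position, input values and kernel values are flattened in the
# same order, so that multiplier k always sees a matching pair of operands.
def flatten_window(inp, kernel, top, left):
    kh, kw = len(kernel), len(kernel[0])
    inputs  = [inp[top + di][left + dj] for di in range(kh) for dj in range(kw)]
    weights = [kernel[di][dj]           for di in range(kh) for dj in range(kw)]
    return inputs, weights  # index k in both lists belongs to multiplier k

xs, ws = flatten_window([[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[1, 0], [0, 1]], 0, 1)
weighted_sum = sum(x * w for x, w in zip(xs, ws))  # 2*1 + 3*0 + 5*0 + 6*1 = 8
```

As long as both lists are produced in the same order, each multiplier reads matching operands from the same index in the two memories.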
  • the first possibility is to vary the assignment between operands and storage locations of at least one input memory and / or the assignment between operands and storage locations of at least one parameter memory for values of the convolution kernel during the calculation of the convolution.
  • a multiplexer can in particular be connected between at least one multiplier and at least one input memory.
  • a multiplexer can, for example, also be connected between at least one multiplier and at least one parameter memory for values of the convolution kernel.
  • one and the same multiplexer can have both access to the at least one input memory and access to the at least one parameter memory. In this way, a specific operand for a specific multiplication can optionally be read from one of several possible memory locations.
  • in this way, the hardware accelerator is given at least restricted random access to the input memory and/or the parameter memory.
  • this freedom of choice is sufficient to be able to reuse input data that are already in the correct place in the input memory of the hardware accelerator for a first position of the convolution kernel also for a subsequent position of the convolution kernel that occurs during the convolution.
  • the circuitry effort is significantly lower than, for example, for a bus system or a “Network on Chip”.
  • a 4:1 multiplexer has turned out to be an optimal compromise between freedom of choice, and thus efficiency, on the one hand and hardware costs on the other.
  • the multiplexing can be applied to the input data, to the values of the convolution kernel, or also to both the input data and the values of the convolution kernel.
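A 4:1 multiplexer of this kind can be modelled in software roughly as follows; the choice of every other memory location mirrors the example locations 51a, 51c, 51e, 51g discussed with FIG. 3, but the model itself is an assumption:

```python
# Restricted random access: each multiplier may fetch its operand from one of
# four fixed memory locations instead of a single hard-wired one.
def mux4(memory, base, select):
    assert 0 <= select < 4
    return memory[base + 2 * select]  # e.g. locations 51a, 51c, 51e or 51g

operand = mux4(list(range(8)), base=0, select=2)  # reads the third of the four sources
```

A 4:1 multiplexer needs only a 2-bit select signal per multiplier, which is why its circuitry cost stays far below that of a bus system or a network on chip.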
  • the second possibility, which can be used alternatively or in combination with the first, is to store input data and/or values of the convolution kernel multiple times in the input memory.
  • a separate copy can then be stored in the input memory, for example, for each intended use of a specific value from the input data in the course of the convolution.
  • the collection of the values from the input data to be processed in this work step can be stored in the correct order in the input memory for each intended work step of the hardware accelerator.
  • an input memory with at least one separate memory or memory area can be selected for each multiplier.
  • Those input data or values of the convolution kernel that the respective multiplier needs in the course of the calculation of the convolution can then be loaded into this memory or memory area.
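The replication strategy outlined in the points above resembles the well-known im2col transformation; here is a minimal sketch, where names and the row layout are assumptions:

```python
# For every intended kernel position (work step), one correctly ordered copy of
# the covered input values is stored, so each work step just reads its own row.
def build_replicated_input(inp, kh, kw):
    h, w = len(inp), len(inp[0])
    rows = []
    for i in range(h - kh + 1):
        for j in range(w - kw + 1):
            rows.append([inp[i + di][j + dj]
                         for di in range(kh) for dj in range(kw)])
    return rows  # individual input values now appear in several rows

rows = build_replicated_input([[1, 2, 3], [4, 5, 6], [7, 8, 9]], 2, 2)
# rows[0] == [1, 2, 4, 5]; the value 2 is replicated into rows[0] and rows[1]
```

The replication trades memory for speed: every value covered by several kernel positions is stored once per position, but each accelerator operation then reads a contiguous, correctly ordered block.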
  • for example, these separate memories or memory areas (partitions) can be implemented as shift registers.
  • an inner product arithmetic unit for vectors with a length between 16 and 128 elements is selected as the hardware accelerator.
  • however, with increasing vector length, the effort for providing the correct input data to the correct multipliers also increases.
  • the inventors' investigations have shown that the range between 16 and 128 elements is an optimal compromise.
  • a main application for CNNs is the processing of measurement data into output variables relevant for the respective application.
  • better utilization of the hardware accelerator means that lower costs are incurred for the hardware of a corresponding evaluation system and energy consumption is also reduced accordingly.
  • the invention therefore generally also relates to a method for evaluating measurement data recorded with at least one sensor, and / or realistic synthetic measurement data from this at least one sensor, for one or more output variables with at least one neural network.
  • Realistic synthetic measurement data can be used, for example, instead of or in combination with actually physically recorded measurement data in order to train the evaluation system.
  • a data set with realistic, synthetic measurement data from a sensor is difficult to distinguish from measurement data actually recorded physically with this sensor.
  • the neural network has at least one convolution layer. In this convolution layer, a convolution of a tensor of input data with at least one predefined convolution kernel is determined. This convolution is calculated using the method described above. As explained above, this means that the desired output variables can be evaluated particularly quickly from the input variables given the hardware resources. With a given processing speed, the evaluation can be carried out with less use of hardware resources and thus also with less energy consumption.
  • the convolution in the first convolution layer through which the measurement data pass is calculated using the method described above, while this method is not used in at least one convolution layer passed through later.
  • the gain in speed through the method described above is greatest in those layers of the CNN which are laterally greatly expanded, but only have a shallow depth.
  • the corresponding circuitry for the at least restricted random access of the hardware accelerator to the input memory, or the corresponding storage space in the input memory of the hardware accelerator, should therefore preferably be used on such layers.
  • the measurement data include image data of at least one optical camera or thermal camera, and/or audio data, and/or measurement data obtained by querying a spatial area with ultrasound, radar radiation or LIDAR. Precisely these data are, in the state in which they are entered into the CNN, laterally very extensive and highly resolved, but of comparatively shallow depth. The lateral resolution is successively reduced by the convolution from layer to layer, while the depth can increase.
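The shape progression described here can be made concrete with illustrative numbers; these layer sizes are assumed, not taken from the patent:

```python
# (width, height, depth) per layer: lateral extent shrinks, depth grows.
shapes = [(1920, 1080, 3), (960, 540, 32), (480, 270, 64)]

# Ratio of lateral size to depth: largest in the first layer, which is where
# packing many pixels into one accelerator operation pays off most.
ratios = [w * h / d for (w, h, d) in shapes]
assert ratios[0] > ratios[1] > ratios[2]
```

This is why the method, or the circuitry supporting it, is preferably applied to the early, laterally extensive layers.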
  • the output variables sought can in particular include, for example, a semantic segmentation of the measurement data in relation to classes and/or objects. These are output variables which CNNs are preferably used to obtain from high-dimensional input variables.
  • a control signal is formed from the output variable or variables.
  • the use of the previously described method for calculating the convolution means that these systems, given the hardware resources for the evaluation, react more quickly to measurement data recorded by sensors. If, on the other hand, the response time is specified, hardware resources can be saved.
  • the methods can be implemented in whole or in part by a computer.
  • the invention therefore also relates to a computer program with machine-readable instructions which, when they are executed on one or more computers, cause the computer or computers to carry out one of the described methods.
  • control devices for vehicles and embedded systems for technical devices, which are likewise able to execute machine-readable instructions, are also to be regarded as computers.
  • the invention also relates to a machine-readable data carrier and / or to a download product with the parameter set and / or with the computer program.
  • a download product is a digital product that can be transmitted via a data network, i.e. downloaded by a user of the data network, and that can be offered for sale in an online shop, for example, for immediate download.
  • a computer can be equipped with the computer program, with the machine-readable data carrier or with the download product.
  • FIG. 1 exemplary embodiment of the method 100 for calculating a convolution 4
  • FIG. 2 an illustration of the basic mechanism of action that accelerates the calculation
  • FIG. 3 change in the assignment of operands 52a, 52b to memory locations 51a-51h in the input memory 51 of a hardware accelerator 5 with a multiplexer 53,
  • FIG. 4 multiple storage of input data 1a and/or values 2a of a convolution kernel 2 in the input memory 51 for more efficient processing;
  • FIG. 5 exemplary embodiment of the method 200 for evaluating measurement data 61, 62.
  • FIG. 1 is a schematic flow diagram of an exemplary embodiment of the method 100 with which the convolution 4 of an input tensor 1 of input data 1a with a tensorial convolution kernel 2 is calculated.
  • the convolution kernel 2 is guided in a predetermined grid of positions 21, 22 within the input tensor 1.
  • the convolution kernel 2 is applied in each of these positions 21, 22 by forming a sum 3, weighted with the values 2a of the convolution kernel 2, from the input data 1a in the area of the input tensor 1 covered by the convolution kernel 2 at its current position 21, 22.
  • in the convolution 4, this weighted sum 3 is assigned to the current position 21, 22 of the convolution kernel 2.
  • a hardware accelerator 5 is used in accordance with block 121, with an inner product arithmetic unit for vectors with a length between 16 and 128 elements being selected here in accordance with block 125, for example.
  • in accordance with block 122, more summands are processed in at least one operation of the hardware accelerator 5 than corresponds to a depth 11 of the input tensor 1.
  • the assignment between operands 52a, 52b and memory locations 51a-51h of input memory 51 can be varied during the calculation of convolution 4 in order to give multipliers 52 at least limited random access to input memory 51.
  • a multiplexer 53 can be used according to block 123a; this is explained in more detail in FIG. 3.
  • input data 1a and/or values 2a of the convolution kernel 2 can be stored in the input memory 51 several times.
  • the input data 1a and values 2a can thus be fed to the hardware accelerator 5 at each position 21, 22 of the convolution kernel 2 in an arrangement with respect to one another which ensures that the hardware accelerator 5 actually calculates summands occurring in the weighted sum 3. This is explained in more detail in FIG. 4.
  • an input memory 51 with at least one separate memory or memory area can be selected for each multiplier 52.
  • those input data la and values 2a of the convolution kernel 2 that the respective multiplier 52 needs in the course of the calculation of the convolution 4 can then be loaded into this memory or memory area.
  • FIG. 2 explains the basic principle of improved utilization of a hardware accelerator 5.
  • the input tensor 1 has a depth 11 of 3 in this example.
  • FIGS. 3 and 4 illustrate the previously explained ways in which this can be ensured.
  • FIG. 3 illustrates the use of a multiplexer 53 in order to give a multiplier 52 in a hardware accelerator 5 at least restricted random access to the input memory 51 of the hardware accelerator 5.
  • eight storage locations 51a-51h of the input memory 51 are shown.
  • the 4:1 multiplexer 53 can be used to select whether a value 1a is retrieved from the memory location 51a, 51c, 51e or 51g of the input memory 51 and fed to the multiplier 52 as the first operand 52a.
  • FIG. 3 shows two exemplary possible sources from which the second operand 52b can originate.
  • the second operand 52b can be fed to the multiplier 52 from the memory location 51b of the input memory 51, for example.
  • the multiplexer 53, or a further multiplexer, can also, for example, have access to different storage locations 55a-55d of the parameter memory 55, each of which stores a different value 2a of the convolution kernel 2.
  • the multiplexer 53 can then optionally supply one of these values 2a as a second operand 52b to the multiplier 52. This option is shown in dashed lines in FIG. 3.
  • the multiplier 52 multiplies the two operands 52a and 52b and delivers the product 52c as the result.
  • a further multiplier 52', shown by way of example, also supplies such a product 52c, which it has formed from other operands 52a and 52b.
  • Products 52c that have been supplied by different multipliers 52 are added with adders 54 to give intermediate results 54a.
  • the intermediate results 54a are again accumulated with further adders 54 (not shown in FIG. 4) until the weighted sum 3, or at least a part thereof, is finally calculated.
  • the maximum increase in efficiency results when the complete weighted sum 3 can be calculated for a position 21, 22 of the convolution kernel 2 with just one operation of the hardware accelerator 5.
  • FIG. 4 illustrates the replication of input data 1a in the input memory 51 of the hardware accelerator 5, with the aim of being able to retrieve the correct input data 1a as operands 52a for the multiplier 52 for each position 21, 22 of the convolution kernel 2.
  • the input tensor 1 comprises three levels, that is to say has a depth 11 of 3. Some different positions of input data 1a in these levels are identified by different hatching.
  • one value is selected for illustration and denoted by the reference character 1a.
  • this value 1a is in the fourth position from the top in the input memory 51, since initially, starting from the upper left corner of the levels of the input tensor 1, a "column" is processed in the direction of the depth 11 of the input tensor 1, and the value 1a forms the beginning of the second such "column". If, however, the convolution kernel 2 advances to position 22, the value 1a must be multiplied by the first value 2a of the convolution kernel 2.
  • the value 1a is therefore required in the first place in the input memory 51 for this position 22.
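The depth-first layout described in the preceding points can be reproduced in a few lines; the exact traversal order of the spatial positions is an assumption for illustration:

```python
depth = 3
spatial_order = [(0, 0), (0, 1), (0, 2)]  # assumed traversal of spatial positions

# Each spatial position contributes one depth "column" of consecutive values.
memory_layout = [(pos, ch) for pos in spatial_order for ch in range(depth)]

# The first value of the second column sits in the fourth place (index 3),
# matching the position of the value 1a described in the text.
assert memory_layout.index(((0, 1), 0)) == 3
```

Replicating the data as in FIG. 4 then lets the same value also appear at index 0 of the copy prepared for the next kernel position.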
  • the input data la are replicated in the input memory 51 as shown in FIG.
  • FIG. 5 is a schematic flow diagram of an exemplary embodiment of the method 200 for evaluating measurement data.
  • This can be any mixture of measurement data 61, which were physically recorded with at least one sensor 6, and realistic synthetic measurement data 62 from this at least one sensor 6.
  • the measurement data 61, 62 are processed with a neural network 8 to form output variables 7.
  • the neural network 8 comprises a plurality of convolution layers 81-83, through which the measurement data 61, 62 pass one after the other. That is, the measurement data 61, 62 are processed by the layer 81 to an intermediate result ("feature map"), which is then processed by the layer 82 to a further intermediate result and by the layer 83 to the final output variables 7.
  • in each convolution layer 81-83, a convolution 4 of a tensor 1 of input data 1a with at least one predefined convolution kernel 2 is determined.
  • at least one such convolution 4 is calculated using the method 100 described above.
  • for example, the convolution 4 in the first convolution layer 81, through which the measurement data 61, 62 pass, can be calculated with the method 100, while this method is not used in at least one convolution layer 82, 83 passed through later.
  • a control signal 220a is formed from the output variables 7.
  • with this control signal, a robot 91, and/or a vehicle 92, and/or a classification system 93, and/or a system 94 for monitoring areas, and/or a system 95 for the quality control of mass-produced products, and/or a system 96 for medical imaging, is controlled.


Abstract

Method (100) for calculating a convolution (4) of an input tensor (1) of input data (1a) with a tensorial convolution kernel (2), wherein • the convolution kernel (2) is guided (110) in a predefined grid of positions (21, 22) within the input tensor (1), • the convolution kernel (2) is applied in each of these positions (21, 22) by forming (120), from the input data (1a), a sum (3) weighted with the values (2a) of the convolution kernel (2) in that region of the input tensor (1) which is covered by the convolution kernel (2) at its current position (21, 22), and • this weighted sum (3) is assigned (130) in the convolution (4) to the current position (21, 22) of the convolution kernel (2), wherein the weighted sum (3) is calculated (121) using at least one hardware accelerator (5) which comprises an input memory (51) and a fixed number of multipliers (52), each of which retrieves its operands (52a, 52b) from predefined memory locations (51a-51h) of the input memory (51), and • more operands than correspond (122) to a depth (11) of the input tensor (1) are processed in at least one operation of the hardware accelerator (5), wherein • the assignment between operands (52a, 52b) and memory locations (51a-51h) of the input memory (51) is varied (123) during the calculation of the convolution (4), and/or • input data (1a) and/or values (2a) of the convolution kernel (2) are stored multiple times (124) in the input memory (51).
EP21701465.3A 2020-01-31 2021-01-20 Calcul accéléré par matériel de convolutions Withdrawn EP4097646A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102020201182.6A DE102020201182A1 (de) 2020-01-31 2020-01-31 Hardwarebeschleunigte Berechnung von Faltungen
PCT/EP2021/051143 WO2021151749A1 (fr) 2020-01-31 2021-01-20 Calcul accéléré par matériel de convolutions

Publications (1)

Publication Number Publication Date
EP4097646A1 true EP4097646A1 (fr) 2022-12-07

Family

ID=74215928

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21701465.3A Withdrawn EP4097646A1 (fr) 2020-01-31 2021-01-20 Calcul accéléré par matériel de convolutions

Country Status (4)

Country Link
EP (1) EP4097646A1 (fr)
JP (1) JP2023513064A (fr)
DE (1) DE102020201182A1 (fr)
WO (1) WO2021151749A1 (fr)

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10546211B2 (en) * 2016-07-01 2020-01-28 Google Llc Convolutional neural network on programmable two dimensional image processor
CN110073359B (zh) * 2016-10-04 2023-04-04 奇跃公司 用于卷积神经网络的有效数据布局
US10175980B2 (en) * 2016-10-27 2019-01-08 Google Llc Neural network compute tile
CN110050267B (zh) * 2016-12-09 2023-05-26 北京地平线信息技术有限公司 用于数据管理的系统和方法
GB2568776B (en) * 2017-08-11 2020-10-28 Google Llc Neural network accelerator with parameters resident on chip
US11386644B2 (en) * 2017-10-17 2022-07-12 Xilinx, Inc. Image preprocessing for generalized image processing
WO2019094843A1 (fr) * 2017-11-10 2019-05-16 Nvidia Corporation Systèmes et procédés pour véhicules autonomes sûrs et fiables
CN107844827B (zh) * 2017-11-28 2020-05-26 南京地平线机器人技术有限公司 执行卷积神经网络中的卷积层的运算的方法和装置

Also Published As

Publication number Publication date
WO2021151749A1 (fr) 2021-08-05
DE102020201182A1 (de) 2021-08-05
JP2023513064A (ja) 2023-03-30


Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20220831

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20230321