WO2021026768A1 - Automatic driving method and apparatus based on data stream, and electronic device and storage medium - Google Patents
- Publication number
- WO2021026768A1 (PCT/CN2019/100382)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- automatic driving
- neural network
- parameters
- data stream
- data flow
- Prior art date
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
Definitions
- This application relates to the field of artificial intelligence, and more specifically, to a data stream-based automatic driving method, device, electronic equipment, and storage medium.
- An artificial neural network (ANN), also called a neural network (NN), is a mathematical or computational model used to estimate or approximate functions.
- the neural network is mainly composed of an input layer, hidden layers, and an output layer.
- such a network is a two-layer neural network: since the input layer performs no transformation, it is not counted as a separate layer.
- each neuron in the input layer represents a feature, and the number of output neurons equals the number of classification labels (for binary classification, a sigmoid classifier uses 1 output neuron, while a softmax classifier uses 2); the number of hidden layers and the number of neurons per hidden layer are set manually.
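The layer-sizing convention just described can be sketched in a few lines of Python; this is only an illustration, and the helper names `output_layer_size` and `layer_shapes` are hypothetical, not from the patent:

```python
def output_layer_size(classifier: str) -> int:
    # binary classification: a sigmoid classifier uses 1 output neuron,
    # a softmax classifier uses 2 (one per class label)
    sizes = {"sigmoid": 1, "softmax": 2}
    return sizes[classifier]

def layer_shapes(n_features: int, n_hidden: int, classifier: str = "sigmoid"):
    # weight-matrix shapes of a "two-layer" network: input -> hidden -> output;
    # the input layer is not counted because it applies no transformation
    n_out = output_layer_size(classifier)
    return [(n_features, n_hidden), (n_hidden, n_out)]

print(layer_shapes(4, 8, "sigmoid"))  # [(4, 8), (8, 1)]
print(layer_shapes(4, 8, "softmax"))  # [(4, 8), (8, 2)]
```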
- Logistic regression (LR) and linear SVMs are better suited to linearly separable data. When the data is not linearly separable (as it mostly is in real life), LR usually relies on feature engineering such as feature mapping, Gaussian terms, or combination terms, while an SVM must select a kernel; adding Gaussian or combination terms produces many useless dimensions and increases the amount of computation. GBDT can combine weak linear classifiers into a strong classifier, but it may perform poorly when the dimensionality is high. A neural network with three or more layers handles nonlinearly separable data well.
- Deep learning is now applied in many fields; its application scenarios fall roughly into three categories: object recognition, target detection, and natural language processing.
- Target detection can be understood as the combination of object recognition and object localization: it must not only identify the category an object belongs to but, more importantly, determine the object's specific location in the image.
- target detection models are divided into two categories.
- One type is two-stage, which divides object recognition and object positioning into two steps and completes them separately.
- Typical representatives of this type are the R-CNN, fast R-CNN, and faster R-CNN family. They have low recognition error rates and low missed-detection rates, but they are slow and cannot meet real-time detection scenarios.
- The other type completes recognition and localization in a single stage. These models are very fast, can meet real-time requirements, and their accuracy can basically reach the level of faster R-CNN.
- the first class of solutions suffers from insufficient recognition accuracy, difficulty adapting to multiple scenes, and low robustness; the second class uses neural networks for target detection and segmentation, which overcomes the first class's low accuracy, but its heavier computation leads to higher power consumption, demanding hardware, high cost, difficult heat dissipation, and poor real-time performance.
- in view of the above defects in the prior art, the purpose of this application is to provide a data stream-based automatic driving method, device, electronic equipment, and storage medium, which solve the problems of high cost, high power consumption, and difficult heat dissipation caused by the large amount of computation required when existing neural networks are used for target detection in automatic driving decisions.
- a data stream-based deep network acceleration method includes:
- the image processing result is sent to the driving decision module to form a driving decision.
- FIG. 5 is a schematic diagram of a specific structure of a configuration module 402 provided by an embodiment of the application.
- the neural network diagram and parameters of the automatic driving model are configured on the data flow architecture to obtain a data flow automatic driving model corresponding to the automatic driving model.
- the above neural network diagram includes the connection relationships among the data flow engines, the first data flow storage module, and the global data flow network.
- the above connection relationships may include the number of data flow engine connections, their connection order, and so on; the data flow engines connect to the global data flow network through an interconnect to form the corresponding autonomous driving model.
- different neural networks can be formed according to different neural network diagrams.
- the above-mentioned parameters correspond to the individual neural network layers: different data stream buffer areas are allocated in the first data stream storage module, and each layer's parameters are placed in its own buffer area for reading.
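This per-layer buffering can be sketched as follows; a hedged illustration only, with `allocate_param_buffers` as a hypothetical helper standing in for the storage module's buffer allocation:

```python
def allocate_param_buffers(layer_params):
    # give each neural network layer its own buffer area in the storage
    # module, so each layer reads its pre-trained parameters from its
    # own region rather than a shared one
    buffers = {}
    for idx, params in enumerate(layer_params):
        buffers[f"layer_{idx}"] = list(params)
    return buffers

bufs = allocate_param_buffers([[0.1, 0.2], [0.3], [0.4, 0.5, 0.6]])
print(sorted(bufs))     # ['layer_0', 'layer_1', 'layer_2']
print(bufs["layer_2"])  # [0.4, 0.5, 0.6]
```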
- the above-mentioned data flow model is not based on an instruction set, so there is no instruction idle overhead, which improves the hardware acceleration efficiency of the neural network.
- the above-mentioned image processing result may be the abstract result produced by the data flow automatic driving model.
- the abstract result is sent to a driving decision module deployed on a cloud server for processing; the processed result includes the object category, probability, coordinates, and other data. In this way, the post-processing and decision-making modules and their hardware can be deployed on the cloud server, reducing the power consumption of the in-vehicle autonomous driving system.
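A decision module consuming such results might look like the following; this is a loudly hypothetical sketch (the `make_driving_decision` function, the "obstacle" category, and the 0.5 threshold are all invented for illustration, not specified by the patent):

```python
def make_driving_decision(detections):
    # hypothetical rule: if any detection is a high-probability obstacle,
    # decide to brake; otherwise continue driving
    for det in detections:
        if det["category"] == "obstacle" and det["probability"] > 0.5:
            return "brake"
    return "continue"

dets = [
    {"category": "lane", "probability": 0.9, "coords": (0, 0, 10, 10)},
    {"category": "obstacle", "probability": 0.8, "coords": (5, 5, 8, 8)},
]
print(make_driving_decision(dets))  # brake
```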
- the neural network diagram includes the parallel or serial relationships between multiple neural network layers, and the parallel or serial arrangement of those layers under the data flow is configured according to these relationships.
- the parallel or serial arrangement under the data flow is embodied by the data flow engines, which provide the computing resources for the corresponding neural network layers.
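The serial and parallel arrangements can be sketched abstractly; this is a simplified illustration with generic callables standing in for layers, and `run_serial`/`run_parallel` are hypothetical names:

```python
def run_serial(layers, x):
    # serial arrangement: each layer's output streams into the next layer
    for layer in layers:
        x = layer(x)
    return x

def run_parallel(branches, x):
    # parallel arrangement: the same data stream feeds every branch's engine
    return [branch(x) for branch in branches]

def double(v):
    return v * 2

def inc(v):
    return v + 1

print(run_serial([double, inc], 3))    # 7
print(run_parallel([double, inc], 3))  # [6, 4]
```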
- the above-mentioned first data stream storage module may be a cache, DDR, or high-speed-access DDR; in the embodiments of the present application it is preferably a cache.
- a controllable read-write address generating unit may be provided in the cache. Depending on the input data format and the calculations required in the data path, the address generation unit will generate an adapted address sequence to index the data in the cache.
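Such an address generation unit might emit a simple strided sequence; a minimal sketch, assuming a linear base-plus-stride layout (the function name and parameters are illustrative, not from the patent):

```python
def address_sequence(base, count, stride):
    # generate an address sequence adapted to the data layout: `count`
    # addresses starting at `base`, separated by `stride` bytes
    return [base + i * stride for i in range(count)]

# e.g. indexing 4 two-byte elements starting at address 0x100
print(address_sequence(0x100, 4, 2))  # [256, 258, 260, 262]
```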
- the data stream is stored through the first data stream storage module, and the data is controlled to flow through multiple neural network layers for calculation, so that processing proceeds through the data stream model like a pipeline, with no instruction idle time, improving image processing efficiency.
- the configuration unit 4021 is configured to set up the parallel or serial arrangement between multiple neural network layers according to the neural network diagram;
- the allocation unit 4022 is configured to allocate data flow memory corresponding to each neural network layer according to the parameters, the data flow memory being used to store the parameters of the corresponding neural network layer;
- the first path unit 4023 is configured to form data flow paths between the multiple neural network layers based on the parallel or serial arrangement and the allocated data flow memory;
- the second path unit 4024 is configured to form the data flow automatic driving model according to the data flow path.
- the allocating unit 4022 includes:
- the address subunit 40221 is used to specify a starting memory address for the parameter data block preloaded in each neural network layer;
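The address subunit's job, assigning each preloaded parameter block a start address and a region exactly its size, resembles a bump allocator; a hedged sketch (`assign_start_addresses` is a hypothetical name, and contiguous packing from a base address is an assumption):

```python
def assign_start_addresses(block_sizes, base=0):
    # for each layer's preloaded parameter data block, pick a starting
    # memory address and reserve a region of exactly the block's size;
    # the next block starts where the previous one ends
    regions, addr = [], base
    for size in block_sizes:
        regions.append((addr, size))  # (start address, region size)
        addr += size
    return regions

print(assign_start_addresses([128, 64, 256]))  # [(0, 128), (128, 64), (192, 256)]
```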
- the apparatus 400 includes:
- the post-processing module 407 is configured to perform post-processing on the result obtained after processing the data stream automatic driving model to obtain an image processing result.
- an embodiment of the present application provides an electronic device, including: a memory, a processor, and a computer program stored on the memory and capable of running on the processor.
- when the processor executes the computer program, the steps in the data stream-based automatic driving method provided in the embodiments of this application are implemented.
- an embodiment of the present application provides a computer-readable storage medium with a computer program stored on the computer-readable storage medium.
- when the computer program is executed by a processor, the steps in the data stream-based automatic driving method are implemented. That is, in the specific embodiments of the present invention, when the computer program on the computer-readable storage medium is executed by the processor, the steps of the above data stream-based processing method are realized.
- the computer program in the computer-readable storage medium includes computer program code
- the computer program code may be in the form of source code, object code, executable file, or some intermediate form.
- the computer-readable medium may include any entity or device capable of carrying the computer program code, such as:
- read-only memory (ROM)
- random access memory (RAM)
- the disclosed device may be implemented in other ways.
- the device embodiments described above are only illustrative.
- the division of units is only a logical function division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the program can be stored in a computer-readable memory, and the memory may include: a flash disk, read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disc, etc.
Landscapes
- Engineering & Computer Science (AREA)
- Aviation & Aerospace Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Image Analysis (AREA)
Claims (10)
- A data stream-based automatic driving method, characterized in that the method comprises: acquiring a neural network diagram and parameters of an automatic driving model, where the parameters are pre-trained parameters; configuring, on a data flow architecture and according to the neural network diagram and parameters of the automatic driving model, a data flow automatic driving model corresponding to the automatic driving model; acquiring image information for automatic driving; inputting the image information into the data flow automatic driving model for processing to obtain an image processing result; and sending the image processing result to a driving decision module to form a driving decision.
- The method according to claim 1, characterized in that configuring, on the data flow architecture, the data flow automatic driving model corresponding to the automatic driving model according to the neural network diagram and parameters comprises: configuring the parallel or serial arrangement between multiple neural network layers according to the neural network diagram; allocating data flow memory corresponding to each neural network layer according to the parameters, the data flow memory being used to store the parameters of the corresponding neural network layer; forming data flow paths between the multiple neural network layers based on the parallel or serial arrangement and the allocated data flow memory; and forming the data flow automatic driving model according to the data flow paths.
- The method according to claim 2, characterized in that allocating data flow memory corresponding to each neural network layer according to the parameters comprises: specifying a starting memory address for the parameter data block preloaded in each neural network layer; and, starting from the specified starting memory address, opening up a memory space of the same size as the parameter data block and allocating it to the parameter data block for loading.
- The method according to claim 1, characterized in that acquiring image information for automatic driving comprises: acquiring image information from an image source and storing the acquired image information in an image memory; and reading the image information from the image memory.
- The method according to claim 4, characterized in that the method further comprises: if reading the image information from the image memory fails, reading again within a predetermined time.
- The method according to claim 1, characterized in that, after inputting the image information into the data flow automatic driving model for processing, the method further comprises: post-processing the result obtained from the data flow automatic driving model to obtain the image processing result.
- The method according to claim 1, characterized in that the image processing result includes the category and coordinate data of object features, and sending the image processing result to the driving decision module to form a driving decision comprises: sending the category and coordinate data of the object features to the driving decision module to form a driving decision.
- A data stream-based automatic driving apparatus, characterized in that the apparatus comprises: a first acquisition module, configured to acquire a neural network diagram and parameters of an automatic driving model, where the parameters are pre-trained parameters; a configuration module, configured to configure, on a data flow architecture and according to the neural network diagram and parameters of the automatic driving model, a data flow automatic driving model corresponding to the target automatic driving model; a second acquisition module, configured to acquire image information for automatic driving; a processing module, configured to input the image information into the data flow automatic driving model for processing to obtain an image processing result; and a sending module, configured to send the image processing result to a driving decision module to form a driving decision.
- An electronic device, characterized by comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the computer program, implements the steps in the data stream-based automatic driving method according to any one of claims 1 to 7.
- A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps in the data stream-based automatic driving method according to any one of claims 1 to 7 are implemented.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/100382 WO2021026768A1 (en) | 2019-08-13 | 2019-08-13 | Automatic driving method and apparatus based on data stream, and electronic device and storage medium |
CN201980066986.3A CN112840284A (en) | 2019-08-13 | 2019-08-13 | Automatic driving method and device based on data stream, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/100382 WO2021026768A1 (en) | 2019-08-13 | 2019-08-13 | Automatic driving method and apparatus based on data stream, and electronic device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021026768A1 true WO2021026768A1 (en) | 2021-02-18 |
Family
ID=74570847
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/100382 WO2021026768A1 (en) | 2019-08-13 | 2019-08-13 | Automatic driving method and apparatus based on data stream, and electronic device and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112840284A (en) |
WO (1) | WO2021026768A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108021395A (en) * | 2017-12-27 | 2018-05-11 | 北京金山安全软件有限公司 | Data parallel processing method and system for neural network |
CN108803604A (en) * | 2018-06-06 | 2018-11-13 | 深圳市易成自动驾驶技术有限公司 | Vehicular automatic driving method, apparatus and computer readable storage medium |
US20180373263A1 (en) * | 2017-06-23 | 2018-12-27 | Uber Technologies, Inc. | Collision-avoidance system for autonomous-capable vehicles |
CN109583462A (en) * | 2017-09-28 | 2019-04-05 | 幻视互动(北京)科技有限公司 | Data flow processing method, apparatus and system based on deep neural network |
CN109901574A (en) * | 2019-01-28 | 2019-06-18 | 华为技术有限公司 | Automatic Pilot method and device |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106126481B (en) * | 2016-06-29 | 2019-04-12 | 华为技术有限公司 | A kind of computing system and electronic equipment |
CN107392189B (en) * | 2017-09-05 | 2021-04-30 | 百度在线网络技术(北京)有限公司 | Method and device for determining driving behavior of unmanned vehicle |
CN108012156B (en) * | 2017-11-17 | 2020-09-25 | 深圳市华尊科技股份有限公司 | Video processing method and control platform |
CN108520296B (en) * | 2018-03-20 | 2020-05-15 | 福州瑞芯微电子股份有限公司 | Deep learning chip-based dynamic cache allocation method and device |
CN110046704B (en) * | 2019-04-09 | 2022-11-08 | 深圳鲲云信息科技有限公司 | Deep network acceleration method, device, equipment and storage medium based on data stream |
-
2019
- 2019-08-13 WO PCT/CN2019/100382 patent/WO2021026768A1/en active Application Filing
- 2019-08-13 CN CN201980066986.3A patent/CN112840284A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180373263A1 (en) * | 2017-06-23 | 2018-12-27 | Uber Technologies, Inc. | Collision-avoidance system for autonomous-capable vehicles |
CN109583462A (en) * | 2017-09-28 | 2019-04-05 | 幻视互动(北京)科技有限公司 | Data flow processing method, apparatus and system based on deep neural network |
CN108021395A (en) * | 2017-12-27 | 2018-05-11 | 北京金山安全软件有限公司 | Data parallel processing method and system for neural network |
CN108803604A (en) * | 2018-06-06 | 2018-11-13 | 深圳市易成自动驾驶技术有限公司 | Vehicular automatic driving method, apparatus and computer readable storage medium |
CN109901574A (en) * | 2019-01-28 | 2019-06-18 | 华为技术有限公司 | Automatic Pilot method and device |
Also Published As
Publication number | Publication date |
---|---|
CN112840284A (en) | 2021-05-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111797893B (en) | Neural network training method, image classification system and related equipment | |
KR102595399B1 (en) | Detection of unknown classes and initialization of classifiers for unknown classes | |
EP3289529B1 (en) | Reducing image resolution in deep convolutional networks | |
JP6859332B2 (en) | Selective backpropagation | |
US10892050B2 (en) | Deep image classification of medical images | |
WO2019228358A1 (en) | Deep neural network training method and apparatus | |
KR20170140214A (en) | Filter specificity as training criterion for neural networks | |
US9906704B2 (en) | Managing crowd sourced photography in a wireless network | |
US10902288B2 (en) | Training set sufficiency for image analysis | |
CN110309856A (en) | Image classification method, the training method of neural network and device | |
CN110084281A (en) | Image generating method, the compression method of neural network and relevant apparatus, equipment | |
CN111325664B (en) | Style migration method and device, storage medium and electronic equipment | |
CN107690659A (en) | A kind of image identification system and image-recognizing method | |
CN111275107A (en) | Multi-label scene image classification method and device based on transfer learning | |
KR20170140228A (en) | Merging top-down information in deep neural networks through bias terms | |
CN111738403B (en) | Neural network optimization method and related equipment | |
KR20200078214A (en) | Image processing apparatus and method for transfering style | |
CN116739071A (en) | Model training method and related device | |
CN111126501B (en) | Image identification method, terminal equipment and storage medium | |
CN114187465A (en) | Method and device for training classification model, electronic equipment and storage medium | |
WO2021026768A1 (en) | Automatic driving method and apparatus based on data stream, and electronic device and storage medium | |
US20220383073A1 (en) | Domain adaptation using domain-adversarial learning in synthetic data systems and applications | |
Kang et al. | Inception network-based weather image classification with pre-filtering process | |
US20230085127A1 (en) | Electronic device and control method thereof | |
CN112149836B (en) | Machine learning program updating method, device and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19941003 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19941003 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05.08.2022.) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19941003 Country of ref document: EP Kind code of ref document: A1 |