WO2021026768A1 - Automatic driving method and apparatus based on data stream, and electronic device and storage medium - Google Patents

Automatic driving method and apparatus based on data stream, and electronic device and storage medium

Info

Publication number
WO2021026768A1
Authority
WO
WIPO (PCT)
Prior art keywords
automatic driving
neural network
parameters
data stream
data flow
Prior art date
Application number
PCT/CN2019/100382
Other languages
French (fr)
Chinese (zh)
Inventor
姜浩
蔡权雄
牛昕宇
Original Assignee
深圳鲲云信息科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳鲲云信息科技有限公司
Priority to PCT/CN2019/100382 priority Critical patent/WO2021026768A1/en
Priority to CN201980066986.3A priority patent/CN112840284A/en
Publication of WO2021026768A1 publication Critical patent/WO2021026768A1/en

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions


Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract

An automatic driving method and apparatus based on a data stream, an electronic device, and a storage medium. The method comprises: acquiring a neural network graph and parameters of an automatic driving model, wherein the parameters are pre-trained parameters (201); configuring a data flow architecture according to the neural network graph and parameters of the automatic driving model to obtain a data-flow automatic driving model corresponding to the automatic driving model (202); acquiring image information used for automatic driving (203); inputting the image information into the data-flow automatic driving model for processing to obtain an image processing result (204); and sending the image processing result to a driving decision module to form a driving decision (205). The method increases the speed of image recognition and detection during automatic driving and thereby improves its real-time performance; in addition, the data-flow automatic driving model places low demands on hardware, so the cost and power consumption of automatic driving can be reduced.

Description

Automatic driving method and apparatus based on data flow, electronic device, and storage medium
Technical Field
This application relates to the field of artificial intelligence, and more specifically to a data-flow-based automatic driving method, apparatus, electronic device, and storage medium.
Background
An artificial neural network (ANN), or simply a neural network (NN), is a mathematical or computational model that imitates the structure and function of biological neural networks (an animal's central nervous system, in particular the brain) and is used to estimate or approximate functions.
A neural network mainly consists of an input layer, hidden layers, and an output layer. When there is only one hidden layer, the network is a two-layer neural network; since the input layer performs no transformation, it need not be counted as a separate layer. In practice, each neuron of the input layer represents one feature, and the number of output-layer neurons corresponds to the number of classification labels (for binary classification, the output layer has one neuron if a sigmoid classifier is used and two neurons if a softmax classifier is used), while the number of hidden layers and of hidden-layer neurons is set manually.
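For illustration only, the following minimal sketch (Python with NumPy; the layer sizes, variable names, and random weights are assumptions, not part of this application) shows such a two-layer network whose output layer has one sigmoid neuron, or two softmax neurons, for binary classification:

```python
import numpy as np

def two_layer_net(x, w1, b1, w2, b2, use_softmax=False):
    """Minimal two-layer network: one hidden layer plus an output layer."""
    h = np.maximum(0.0, x @ w1 + b1)      # hidden layer with ReLU activation
    z = h @ w2 + b2                       # output layer pre-activation
    if use_softmax:                       # 2 output neurons -> softmax classifier
        e = np.exp(z - z.max())
        return e / e.sum()
    return 1.0 / (1.0 + np.exp(-z))       # 1 output neuron -> sigmoid classifier

# Arbitrary example sizes: 4 input features, 8 hidden neurons (set manually).
rng = np.random.default_rng(0)
x = rng.normal(size=4)
probability = two_layer_net(
    x,
    rng.normal(size=(4, 8)), np.zeros(8),   # hidden-layer weights and biases
    rng.normal(size=(8, 1)), np.zeros(1),   # output layer: one sigmoid neuron
)
```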
Neural networks work well in classification problems, which are the most common problems in industry. Logistic regression (LR) or a linear SVM is better suited to linear classification. If the data are not linearly separable (as is usually the case in practice), LR typically relies on feature engineering for feature mapping, adding Gaussian or combination terms, while an SVM requires choosing a kernel; adding Gaussian and combination terms produces many useless dimensions and increases the amount of computation. GBDT can combine weak linear classifiers into a strong classifier, but it may not perform well when the dimensionality is high. A neural network with three or more layers, by contrast, handles non-linearly separable data well.
Deep learning has now been applied in many fields; its application scenarios fall roughly into three categories: object recognition, object detection, and natural language processing.
Object detection can be understood as the combination of object recognition and object localization: it must not only identify which category an object belongs to but, more importantly, obtain the specific position of the object in the image. To accomplish these two tasks, detection models fall into two categories. One is the two-stage approach, which splits recognition and localization into two separate steps; typical representatives are the R-CNN, Fast R-CNN, and Faster R-CNN family. These models have low recognition error rates and low miss rates, but they are slow and cannot satisfy real-time detection scenarios. To solve this problem, the one-stage approach appeared, with typical representatives such as YOLO, SSD, and YOLOv2. These models are very fast, can meet real-time requirements, and achieve accuracy close to that of Faster R-CNN.
Perception modules for autonomous driving currently on the market include camera-based solutions and solutions based on other sensors such as lidar. Camera-based solutions mainly take two forms: one is based on traditional feature-extraction algorithms accelerated on hardware such as DSPs; the other is based on deep-learning algorithms, mainly accelerated on GPUs. The first solution suffers from insufficient recognition accuracy, difficulty in adapting to multiple scenes, and low robustness. The second solution uses neural networks for object detection and segmentation and overcomes the low accuracy of the first, but its higher computational load leads to higher power consumption, demanding hardware requirements, high cost, difficult heat dissipation, and poor real-time performance.
Summary
The purpose of this application is to address the above defects of the prior art by providing a data-flow-based automatic driving method, apparatus, electronic device, and storage medium, which solve the problem that, when an existing neural network is used for automatic driving decisions, the object detection and segmentation process requires a large amount of computation, resulting in high cost, high power consumption, and difficult heat dissipation.
The purpose of this application is achieved through the following technical solutions:
In a first aspect, a data-flow-based deep network acceleration method is provided, the method including:
acquiring a neural network graph and parameters of an automatic driving model, where the parameters are pre-trained parameters;
configuring, according to the neural network graph and parameters of the automatic driving model, a data flow architecture to obtain a data-flow automatic driving model corresponding to the automatic driving model;
acquiring image information used for automatic driving;
inputting the image information into the data-flow automatic driving model for processing to obtain an image processing result; and
sending the image processing result to a driving decision module to form a driving decision.
Optionally, configuring the data flow architecture according to the neural network graph and parameters of the automatic driving model to obtain the data-flow automatic driving model corresponding to the automatic driving model includes:
configuring, according to the neural network graph, parallel or serial connections among multiple neural network layers;
allocating, according to the parameters, data-flow memory corresponding to each neural network layer, where the data-flow memory is used to store the parameters of the corresponding neural network layer;
forming data flow paths among the multiple neural network layers based on the parallel or serial connections among the layers and the data-flow memory allocated to each layer; and
forming the data-flow automatic driving model according to the data flow paths.
Optionally, allocating data-flow memory corresponding to each neural network layer according to the parameters includes:
designating a starting memory address for the parameter data block to be preloaded for each neural network layer; and
starting from the designated starting memory address, opening up a memory space of the same size as the parameter data block and allocating it to the parameter data block for loading.
Optionally, acquiring the image information used for automatic driving includes:
acquiring image information from an image source and storing the acquired image information in an image memory; and
reading the image information from the image memory.
Optionally, the method further includes:
if reading the image information from the image memory fails, reading again within a predetermined time.
Optionally, after the image information is input into the data-flow automatic driving model for processing, the method further includes:
post-processing the result obtained after processing by the data-flow automatic driving model to obtain the image processing result.
Optionally, the image processing result includes the category and coordinate data of object features, and sending the image processing result to the driving decision module to form the driving decision includes:
sending the category and coordinate data of the object features to the driving decision module to form the driving decision.
In a second aspect, a data-flow-based automatic driving apparatus is also provided, the apparatus including:
a first acquisition module, configured to acquire a neural network graph and parameters of an automatic driving model, where the parameters are pre-trained parameters;
a configuration module, configured to configure, according to the neural network graph and parameters of the automatic driving model, a data flow architecture to obtain a data-flow automatic driving model corresponding to the automatic driving model;
a second acquisition module, configured to acquire image information used for automatic driving;
a processing module, configured to input the image information into the data-flow automatic driving model for processing to obtain an image processing result; and
a sending module, configured to send the image processing result to a driving decision module to form a driving decision.
In a third aspect, an electronic device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the data-flow-based automatic driving method provided in the embodiments of this application.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the data-flow-based automatic driving method provided in the embodiments of this application.
Beneficial effects of this application: a corresponding data-flow automatic driving model is constructed from the neural network graph and parameters. Because the data-flow automatic driving model is not an instruction-set model, there is no idle overhead from instructions, which speeds up image recognition and detection during automatic driving and thus improves its real-time performance. In addition, the data-flow automatic driving model places low demands on hardware, which reduces the cost and power consumption of automatic driving.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an optional implementation architecture for a data-flow-based automatic driving method provided by an embodiment of this application;
FIG. 2 is a schematic flowchart of a data-flow-based automatic driving method provided by an embodiment of this application;
FIG. 3 is a schematic flowchart of another data-flow-based automatic driving method provided by an embodiment of this application;
FIG. 4 is a schematic diagram of a data-flow-based automatic driving apparatus provided by an embodiment of this application;
FIG. 5 is a schematic diagram of the specific structure of the configuration module 402 provided by an embodiment of this application;
FIG. 6 is a schematic diagram of the specific structure of the allocation unit 4022 provided by an embodiment of this application;
FIG. 7 is a schematic diagram of the specific structure of the first acquisition module 401 provided by an embodiment of this application;
FIG. 8 is a schematic diagram of another data-flow-based automatic driving apparatus provided by an embodiment of this application;
FIG. 9 is a schematic diagram of another data-flow-based automatic driving apparatus provided by an embodiment of this application.
Detailed Description
Preferred embodiments of this application are described below. Those of ordinary skill in the art will be able to implement them using related techniques in the field in light of the following description, and will better understand the innovations and benefits of this application.
To further describe the technical solution of this application, refer to FIG. 1, which is a schematic diagram of an optional implementation architecture for a data-flow-based automatic driving method provided by an embodiment of this application. As shown in FIG. 1, the architecture 103 is connected to an off-chip storage module (DDR) 101 and to a CPU through an interconnect. The architecture 103 includes a first storage module 104, a global data flow network 105, and data flow engines 106. The first storage module 104 is connected to the off-chip storage module 101 through the interconnect and is also connected to the global data flow network 105 through the interconnect; the data flow engines 106 are connected to the global data flow network 105 through the interconnect so that they can operate in parallel or in series. A data flow engine 106 may include computation kernels (also called computation modules), a second storage module 108, and a local data flow network 107. The computation kernels may include kernels used for computation, such as a convolution kernel 109, a pooling kernel 110, and an activation-function kernel 111; of course, other computation kernels besides these examples may also be included, which is not limited here, and all kernels used for computation in a neural network may be included. The first storage module 104 and the second storage module 108 may be on-chip cache modules, or DDR or high-speed DDR storage modules. The data flow engine 106 can be understood as a computing engine that supports data-flow processing, or as a computing engine dedicated to data-flow processing. The data flow architecture may be implemented on an FPGA (field-programmable gate array).
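The arrangement of FIG. 1 can be summarized, purely as an illustrative software sketch (all class and field names below are assumptions, not identifiers from this application), as a set of nested components:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical software mirror of the FIG. 1 architecture, for orientation only.

@dataclass
class DataFlowEngine:                                 # element 106
    kernels: List[str]                                # e.g. conv (109), pool (110), activation (111)
    local_network: str = "local data flow network"    # element 107
    second_storage: str = "on-chip buffer"            # element 108

@dataclass
class DataFlowArchitecture:                           # element 103
    first_storage: str                                # element 104 (cache / DDR)
    global_network: str                               # element 105
    engines: List[DataFlowEngine] = field(default_factory=list)

arch = DataFlowArchitecture(
    first_storage="on-chip cache",
    global_network="global data flow network",
    engines=[DataFlowEngine(kernels=["conv", "pool", "activation"])],
)
# The architecture additionally connects to off-chip DDR (101) and a CPU over an interconnect.
```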
This application provides a data-flow-based automatic driving method, apparatus, device, and storage medium.
The purpose of this application is achieved through the following technical solutions:
In a first aspect, refer to FIG. 2, which is a schematic flowchart of a data-flow-based automatic driving method provided by an embodiment of this application. As shown in FIG. 2, the method includes the following steps:
201. Acquire a neural network graph and parameters of an automatic driving model, where the parameters are pre-trained parameters.
The neural network graph can be understood as a neural network structure, and further as the neural network structure used for the automatic driving model. The neural network structure takes layers as its computation units, including but not limited to convolutional layers, pooling layers, ReLU, and fully connected layers. The parameters refer to the parameters corresponding to each layer of the neural network structure, and may be weight parameters, bias parameters, and so on. The automatic driving model may be a pre-trained automatic driving model; because the model is pre-trained, the attributes of its parameters are also already trained, so the configured data-flow automatic driving model can be used directly with the configured parameters and does not need to be trained again. The pre-trained automatic driving model can be described uniformly by its neural network graph and parameters. The neural network graph and parameters of the automatic driving model may be acquired locally or from a cloud server; for example, they may be stored as sets locally and selected automatically or by the user at run time, or they may be uploaded to a cloud server and downloaded over the network when needed.
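As a minimal sketch of step 201, the following routine loads a pre-trained graph and parameter set either from local storage or from a cloud server; the file layout, file names, and URL scheme are assumptions for illustration and are not specified by this application.

```python
import json
import pathlib
import urllib.request

def load_model_description(name, cloud_base_url=None):
    """Return (neural_network_graph, parameters) for a pre-trained model.

    Tries a local model store first; if the set is not present locally and a
    cloud base URL is given, downloads the description from the cloud server.
    """
    local_dir = pathlib.Path("models") / name
    graph_path = local_dir / "graph.json"     # layer types and connection relationships
    params_path = local_dir / "params.bin"    # pre-trained weights and biases

    if graph_path.exists() and params_path.exists():
        return json.loads(graph_path.read_text()), params_path.read_bytes()

    if cloud_base_url is not None:
        graph = json.loads(urllib.request.urlopen(
            f"{cloud_base_url}/{name}/graph.json").read())
        params = urllib.request.urlopen(
            f"{cloud_base_url}/{name}/params.bin").read()
        return graph, params

    raise FileNotFoundError(f"model description for '{name}' not found")
```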
202. Configure, according to the neural network graph and parameters of the automatic driving model, the data flow architecture to obtain a data-flow automatic driving model corresponding to the automatic driving model.
The neural network graph includes the connection relationships among the data flow engines, the first data-flow storage module, and the global data flow network. These connection relationships may include the number of data flow engine connections, the connection order, and so on; the data flow engines are connected to the global data flow network through the interconnect to form the corresponding automatic driving model. In addition, different neural networks can be formed according to different neural network graphs. The parameters correspond to the individual neural network layers: by allocating different data-flow buffer areas in the first data-flow storage module, the parameters of each neural network layer are placed in different buffer areas, ready to be read. The data-flow model is not based on an instruction set, so there is no idle overhead from instructions, which improves the hardware acceleration efficiency of the neural network. For example, a convolution step over an image can be written as y_i = x_i * c + y_{i-1}. An instruction-set implementation issues MULT(x_i, c, r) and then ADD(r, y_{i-1}, y_i): the instruction r = x_i * c is executed first and its result stored, after which the first result is read back to execute y_i = r + y_{i-1}; while MULT is executing, ADD must wait for and then read MULT's result, so ADD sits idle. A data-flow model instead reads x_i and c from memory into the multiplication kernel and, at the same time, reads y_{i-1} from memory into the addition kernel, which adds it to the result of the multiplication; no instruction is idle. The data flow engine includes the second data-flow storage module and computation kernels corresponding to the neural network operators; the second data-flow storage module stores the parameters of the corresponding kernels in partitions, and by invoking the parameters in the second data-flow storage module, the second data-flow storage module and the multiple computation kernels form a data flow path, thereby forming the data flow engine.
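The contrast between the two execution styles for y_i = x_i * c + y_{i-1} can be illustrated with the following software analogy (a sketch only, not the hardware described in this application):

```python
def instruction_style(xs, c):
    """MULT then ADD as separate instructions: ADD waits for MULT's stored result."""
    y_prev, ys = 0.0, []
    for x in xs:
        r = x * c             # MULT(x_i, c, r): result written back to storage
        y = r + y_prev        # ADD(r, y_{i-1}, y_i): must wait for and re-read r
        ys.append(y)
        y_prev = y
    return ys

def dataflow_style(xs, c):
    """Multiply and add kernels connected as a pipeline over the data stream."""
    def mult_kernel(stream):
        for x in stream:
            yield x * c       # x_i and c flow straight into the multiply kernel
    def add_kernel(stream):
        y_prev = 0.0
        for r in stream:
            y_prev = r + y_prev   # y_{i-1} is held locally; nothing sits idle
            yield y_prev
    return list(add_kernel(mult_kernel(iter(xs))))

assert instruction_style([1.0, 2.0, 3.0], 0.5) == dataflow_style([1.0, 2.0, 3.0], 0.5)
```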
203. Acquire image information used for automatic driving.
In this step, the image information may be picture data captured in real time by a camera, picture data read from a local folder, or picture data transmitted through a transmission protocol such as TCP.
204. Input the image information into the data-flow automatic driving model for processing to obtain an image processing result.
In this step, the image information obtained in step 203 is input into the data-flow automatic driving model for object detection and image segmentation; abstract features of objects are extracted, and from these abstract features results such as object category, probability, and coordinates are obtained.
205. Send the image processing result to the driving decision module to form a driving decision.
The image processing result may include results such as object category, probability, and coordinates. The driving decision module may be an existing driving decision module, and it may be a local driving decision module or a driving decision endpoint in a cloud server.
In a possible embodiment, the image processing result may be the abstract result produced by the data-flow automatic driving model. The abstract result is sent to a driving decision module located in a cloud server for further processing, which yields results such as object category, probability, and coordinates. In this way, hardware such as the post-processing and decision-making modules can be deployed on the cloud server, reducing the power consumption of the automatic driving system in the vehicle.
In this embodiment, the corresponding data-flow automatic driving model is constructed from the neural network graph and parameters. Because the data-flow automatic driving model is not an instruction-set model, there is no idle overhead from instructions, which speeds up the recognition and detection of objects in images during automatic driving, allows the driving decision module to obtain image processing results and make driving decisions faster, and improves the real-time performance of automatic driving. In addition, the data-flow automatic driving model places low demands on hardware, which reduces the cost and power consumption of automatic driving.
It should be noted that the data-flow-based automatic driving method provided in the embodiments of this application can be applied to devices that perform data-flow-based automatic driving, such as computers, servers, mobile phones, and vehicle head units.
Refer to FIG. 3, which is a schematic flowchart of another data-flow-based automatic driving method provided by an embodiment of this application. As shown in FIG. 3, the method includes the following steps:
301. Acquire a neural network graph and parameters of an automatic driving model, where the parameters are pre-trained parameters.
302. Configure, according to the neural network graph, parallel or serial connections among multiple neural network layers.
303. Allocate, according to the parameters, data-flow memory corresponding to each neural network layer, where the data-flow memory is used to store the parameters of the corresponding neural network layer.
304. Form data flow paths among the multiple neural network layers based on the parallel or serial connections among the layers and the data-flow memory allocated to each layer.
305. Form the data-flow automatic driving model according to the data flow paths.
306. Acquire image information used for automatic driving.
307. Input the image information into the data-flow automatic driving model for processing to obtain an image processing result.
308. Send the image processing result to the driving decision module to form a driving decision.
In this embodiment, the neural network graph includes the parallel or serial relationships among the multiple neural network layers, and the parallel or serial connections among the layers under the data flow are configured according to those relationships. Parallelism or serialism under the data flow is embodied by the data flow engines, which provide the computing resources for the corresponding neural network layers. The first data-flow storage module may be a cache, DDR, or high-speed-access DDR; in the embodiments of this application it is preferably a cache. Specifically, a controllable read/write address-generation unit may be provided in the cache: depending on the input data format and the computation required in the data path, the address-generation unit generates an adapted address sequence to index the data in the cache, as sketched below. The first data-flow storage module stores the data stream and controls how the data flow into the multiple neural network layers for computation, so that data processing proceeds through the data-flow model like a pipeline with no idle instructions, improving the efficiency of image processing.
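The following is a minimal sketch of how such an address-generation unit might emit an address sequence adapted to the input data format; the row-major layout and stride are assumptions chosen purely for illustration.

```python
def address_sequence(base, rows, cols, row_stride):
    """Yield cache addresses for a rows x cols data block starting at `base`."""
    for r in range(rows):
        for c in range(cols):
            yield base + r * row_stride + c

# Index a 2 x 3 tile stored with a row stride of 8, starting at address 0x100.
addrs = list(address_sequence(0x100, rows=2, cols=3, row_stride=8))
# -> [0x100, 0x101, 0x102, 0x108, 0x109, 0x10A]
```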
Optionally, allocating data-flow memory corresponding to each neural network layer according to the parameters includes:
designating a starting memory address for the parameter data block to be preloaded for each neural network layer; and
starting from the designated starting memory address, opening up a memory space of the same size as the parameter data block and allocating it to the parameter data block for loading.
In this implementation, the parameter data block may be a single parameter or a group of parameters. The starting memory address may be generated by the address-generation unit in the first data-flow storage module and assigned to the corresponding parameter data block. The opened-up memory space is used to store the parameter data block that the corresponding neural network layer needs to preload. Storing the parameter data blocks in this memory space speeds up data I/O, and the inputs and outputs between neural network layers do not need to pass through off-chip memory, which improves the efficiency of image-data processing.
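A minimal sketch of this allocation step follows: each layer's preloaded parameter block is given a designated starting address, and a region of exactly the block's size is opened up after it. The class name, base address, and contiguous layout are assumptions for illustration.

```python
class ParameterMemory:
    """Toy allocator for per-layer parameter data blocks."""

    def __init__(self, base_address=0):
        self.next_address = base_address
        self.regions = {}                       # layer name -> (start_address, size)

    def allocate(self, layer_name, param_block: bytes):
        start = self.next_address               # designated starting memory address
        size = len(param_block)                 # space equal to the parameter block size
        self.regions[layer_name] = (start, size)
        self.next_address = start + size        # next block follows contiguously
        return start

mem = ParameterMemory(base_address=0x1000)
mem.allocate("conv1", b"\x00" * 1024)           # e.g. conv1 weights occupy 1 KiB
mem.allocate("fc1",   b"\x00" * 4096)
```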
Optionally, acquiring the image information used for automatic driving includes:
acquiring image information from an image source and storing the acquired image information in an image memory; and
reading the image information from the image memory.
In this implementation, the image source may be a camera, a local folder, or a picture library transmitted through a transmission protocol such as TCP. Storing the image information obtained from the image source in the image memory makes reading the image information more efficient. It should also be noted that the image memory may be the first data-flow storage module, a part of the storage area of the first data-flow storage module, or a separate memory storage module. By storing the image information in the image memory in advance, the image information can be read quickly from the image memory for processing once the data-flow automatic driving model has been configured according to the neural network graph and parameters.
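The acquire-and-buffer pattern can be sketched as follows; the image-memory capacity and the `grab()` interface of the source are hypothetical, not part of this application.

```python
from collections import deque

class ImageMemory:
    """Toy image memory: a bounded buffer of frames."""

    def __init__(self, capacity=8):
        self.frames = deque(maxlen=capacity)    # oldest frames are dropped when full

    def store(self, frame):
        self.frames.append(frame)

    def read(self):
        """Return the next frame, or None if the memory is currently empty."""
        return self.frames.popleft() if self.frames else None

def acquisition_loop(source, image_memory, max_frames):
    """Pull frames from the image source (camera, folder, TCP) into image memory."""
    for _ in range(max_frames):
        frame = source.grab()                   # hypothetical image-source API
        if frame is not None:
            image_memory.store(frame)
```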
Optionally, the method further includes:
if reading the image information from the image memory fails, reading again within a predetermined time.
In this implementation, a failure to read image information from the image memory means that the data-flow automatic driving model has failed to obtain its input, so the read can be attempted again from the image memory; the predetermined time for re-reading may be on the order of single-digit milliseconds. In a possible embodiment, if the number of consecutive read failures from the image memory exceeds a certain threshold, the image information is fetched from the image source instead. When a read fails, the image memory is simply read again; because reads from memory are fast, the retry is also fast.
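A sketch of this retry behaviour, reusing the hypothetical `ImageMemory` and source interface from the previous sketch; the concrete delay and failure threshold are assumptions.

```python
import time

def read_with_retry(image_memory, source, retry_delay_s=0.005, max_failures=5):
    """Re-read from image memory after a short delay; fall back to the source."""
    failures = 0
    while True:
        frame = image_memory.read()
        if frame is not None:
            return frame
        failures += 1
        if failures >= max_failures:
            return source.grab()        # too many consecutive failures: go to the image source
        time.sleep(retry_delay_s)       # retry within the predetermined (millisecond-scale) time
```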
Optionally, after the image information is input into the data-flow automatic driving model for processing, the method further includes:
post-processing the result obtained after processing by the data-flow automatic driving model to obtain the image processing result.
The post-processing can be understood as turning the neural network's output into a data representation. The neural network outputs feature values, which can be understood as an abstract representation of the input picture or data; post-processing converts this abstract representation into meaningful output through computation, for example the object categories and corresponding probabilities in a classification problem, or the categories, probabilities, and coordinates of the objects contained in a picture in a detection problem.
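A minimal post-processing sketch follows for a detection-style output; the row layout (class scores followed by a box), the category names, and the threshold are assumptions chosen for illustration, not the format used by this application.

```python
import numpy as np

CLASS_NAMES = ["vehicle", "person", "road sign"]          # illustrative categories

def postprocess(raw_output, score_threshold=0.5):
    """Turn abstract feature values into (category, probability, coordinates) results."""
    results = []
    for row in np.atleast_2d(raw_output):
        scores, box = row[:len(CLASS_NAMES)], row[len(CLASS_NAMES):]
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()                               # softmax over class scores
        best = int(np.argmax(probs))
        if probs[best] >= score_threshold:
            results.append({"category": CLASS_NAMES[best],
                            "probability": float(probs[best]),
                            "coordinates": box.tolist()})
    return results
```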
Optionally, the image processing result includes the coordinate data of object features, and sending the image processing result to the driving decision module to form the driving decision includes:
sending the category and coordinate data of the object features to the driving decision module to form the driving decision.
The categories of object features include vehicles, animals, people, stones, trees, lane markings, road signs, and so on. The coordinate data may be image-based coordinates or coordinates based on the vehicle's environment. The driving decision module may be a local driving decision module or a driving decision module deployed in a cloud server. The driving decision is computed by the driving decision module from the categories and coordinate data of the object features.
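Handing the result to a local or cloud-hosted decision module could look like the sketch below; the endpoint URL, payload format, and function names are placeholders invented for illustration only.

```python
import json
import urllib.request

def send_to_decision_module(detections, local_module=None,
                            cloud_url="http://decision.example/api/decide"):
    """Forward object categories and coordinates and return the driving decision."""
    payload = [{"category": d["category"],
                "coordinates": d["coordinates"]} for d in detections]
    if local_module is not None:
        return local_module(payload)               # local driving decision module
    req = urllib.request.Request(cloud_url,
                                 data=json.dumps(payload).encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:      # decision module deployed in the cloud
        return json.loads(resp.read())
```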
The optional implementations described above can implement the data-flow-based automatic driving method of the embodiments corresponding to FIG. 1 and FIG. 2 and achieve the same effects, which will not be repeated here.
In a second aspect, refer to FIG. 4, which is a schematic structural diagram of a data-flow-based automatic driving apparatus provided by an embodiment of this application. As shown in FIG. 4, the apparatus 400 includes:
a first acquisition module 401, configured to acquire a neural network graph and parameters of an automatic driving model, where the parameters are pre-trained parameters;
a configuration module 402, configured to configure, according to the neural network graph and parameters of the automatic driving model, the data flow architecture to obtain a data-flow automatic driving model corresponding to the automatic driving model;
a second acquisition module 403, configured to acquire image information used for automatic driving;
a processing module 404, configured to input the image information into the data-flow automatic driving model for processing to obtain an image processing result; and
a sending module 405, configured to send the image processing result to the driving decision module to form a driving decision.
Optionally, as shown in FIG. 5, the configuration module 402 includes:
a configuration unit 4021, configured to configure, according to the neural network graph, parallel or serial connections among multiple neural network layers;
an allocation unit 4022, configured to allocate, according to the parameters, data-flow memory corresponding to each neural network layer, where the data-flow memory is used to store the parameters of the corresponding neural network layer;
a first path unit 4023, configured to form data flow paths among the multiple neural network layers based on the parallel or serial connections among the layers and the data-flow memory allocated to each layer; and
a second path unit 4024, configured to form the data-flow automatic driving model according to the data flow paths.
Optionally, as shown in FIG. 6, the allocation unit 4022 includes:
an address sub-unit 40221, configured to designate a starting memory address for the parameter data block to be preloaded for each neural network layer; and
an allocation sub-unit 40222, configured to open up, starting from the designated starting memory address, a memory space of the same size as the parameter data block and allocate it to the parameter data block for loading.
Optionally, as shown in FIG. 7, the first acquisition module 401 includes:
a storage unit 4011, configured to acquire image information from an image source and store the acquired image information in an image memory; and
a reading unit 4012, configured to read the image information from the image memory.
Optionally, as shown in FIG. 8, the apparatus 400 includes:
a third acquisition module 406, configured to read again within a predetermined time if reading the image information from the image memory fails.
Optionally, as shown in FIG. 9, the apparatus 400 further includes:
a post-processing module 407, configured to post-process the result obtained after processing by the data-flow automatic driving model to obtain the image processing result.
Optionally, the sending module 405 is further configured to send the category and coordinate data of the object features to the driving decision module to form the driving decision.
In a third aspect, an embodiment of this application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the data-flow-based automatic driving method provided in the embodiments of this application.
In a fourth aspect, an embodiment of this application provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the data-flow-based automatic driving method provided in the embodiments of this application. That is, in the specific embodiments of the present invention, when the computer program on the computer-readable storage medium is executed by the processor, the steps of the above data-flow-based neural network processing method are implemented, which can reduce the nonlinearity of the digital-circuit-controlled capacitance.
Exemplarily, the computer program on the computer-readable storage medium includes computer program code, and the computer program code may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on.
需要说明的是,由于计算机可读存储介质的计算机程序被处理器执行时实现上述的基于数据流的神经网络处理方法的步骤,因此上述基于数据流的神经网络处理方法的所有实施例均适用于该计算机可读存储介质,且均能达到相同或相似的有益效果。It should be noted that, since the computer program of the computer-readable storage medium is executed by the processor to implement the steps of the aforementioned neural network processing method based on data flow, all the embodiments of the aforementioned neural network processing method based on data flow are applicable to The computer-readable storage medium can achieve the same or similar beneficial effects.
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的程序可存储于一计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。A person of ordinary skill in the art can understand that all or part of the processes in the above-mentioned embodiment methods can be implemented by instructing relevant hardware through a computer program. The program can be stored in a computer readable storage medium. During execution, it may include the procedures of the above-mentioned method embodiments.
In addition, a specific embodiment of the present invention further provides an acceleration hardware board 303 that can interact with the processor 301 for data stream acceleration of neural networks, applied to algorithm acceleration of the automatic driving perception module.
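A minimal sketch of how the host processor 301 might drive such an acceleration board 303 is given below. The `AcceleratorBoard` driver API (`load_model`, `infer`), the dummy result shape, and the frame format are assumptions for illustration; they do not describe the actual board interface.

```python
import numpy as np

class AcceleratorBoard:
    """Hypothetical host-side wrapper for the data stream acceleration board 303."""
    def load_model(self, graph, params):
        # A real driver would lay the layers out as a data stream pipeline on the card.
        self.graph, self.params = graph, params
    def infer(self, frame: np.ndarray) -> np.ndarray:
        # A real driver would transfer the frame to the card and stream it through the
        # configured layers; here we just return a dummy result of fixed shape.
        return np.zeros((1, 10), dtype=np.float32)

def run_perception(board: AcceleratorBoard, frames):
    """Host processor 301 side: stream frames to the board and collect raw results."""
    return [board.infer(frame) for frame in frames]

board = AcceleratorBoard()
board.load_model(graph={"layers": ["conv1", "conv2", "fc"]}, params={})
results = run_perception(board, [np.zeros((224, 224, 3), dtype=np.uint8)])
```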
It should be noted that, for the foregoing method embodiments, for simplicity of description, they are all expressed as a series of action combinations; however, those skilled in the art should know that this application is not limited by the described sequence of actions, because, according to this application, some steps may be performed in another order or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all optional embodiments, and the actions and modules involved are not necessarily required by this application.
In the above embodiments, the description of each embodiment has its own emphasis. For parts that are not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be in electrical or other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place, or they may be distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software program module.
If the integrated unit is implemented in the form of a software program module and sold or used as an independent product, it may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes a number of instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
A person of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments can be completed by instructing relevant hardware through a program. The program may be stored in a computer-readable memory, and the memory may include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The above content is a further detailed description of this application in conjunction with specific preferred implementations, and it cannot be considered that the specific implementations of this application are limited to these descriptions. For a person of ordinary skill in the technical field to which this application belongs, a number of simple deductions or substitutions can be made without departing from the concept of this application, and all of them should be regarded as falling within the protection scope of this application.

Claims (10)

1. A data stream-based automatic driving method, characterized in that the method comprises:
    acquiring a neural network graph and parameters of an automatic driving model, wherein the parameters are pre-trained parameters;
    configuring, on a data stream architecture according to the neural network graph and parameters of the automatic driving model, a data stream automatic driving model corresponding to the automatic driving model;
    acquiring image information for automatic driving;
    inputting the image information into the data stream automatic driving model for processing to obtain an image processing result; and
    sending the image processing result to a driving decision module to form a driving decision.
2. The method according to claim 1, wherein the configuring, on the data stream architecture according to the neural network graph and parameters of the automatic driving model, the data stream automatic driving model corresponding to the automatic driving model comprises:
    configuring, according to the neural network graph, parallel or serial connections between multiple neural network layers;
    allocating, according to the parameters, data stream memory corresponding to each neural network layer, wherein the data stream memory is used to store the parameters of the corresponding neural network layer;
    forming data stream paths between the multiple neural network layers based on the parallel or serial connections between the multiple neural network layers and the data stream memory allocated for each neural network layer; and
    forming the data stream automatic driving model according to the data stream paths.
3. The method according to claim 2, wherein the allocating, according to the parameters, the data stream memory corresponding to each neural network layer comprises:
    specifying a starting memory address for the parameter data block preloaded for each neural network layer; and
    starting from the specified starting memory address, opening up a memory space of the same size as the parameter data block and allocating it to the parameter data block for loading.
4. The method according to claim 1, wherein the acquiring image information for automatic driving comprises:
    acquiring image information from an image source and storing the acquired image information in an image memory; and
    reading the image information from the image memory.
5. The method according to claim 4, wherein the method further comprises:
    if reading the image information from the image memory fails, performing the reading again within a predetermined time.
6. The method according to claim 1, wherein, after the inputting the image information into the data stream automatic driving model for processing, the method further comprises:
    post-processing the result obtained after processing by the data stream automatic driving model to obtain the image processing result.
7. The method according to claim 1, wherein the image processing result includes categories and coordinate data of object features, and the sending the image processing result to the driving decision module to form a driving decision comprises:
    sending the categories and coordinate data of the object features to the driving decision module to form the driving decision.
8. A data stream-based automatic driving apparatus, characterized in that the apparatus comprises:
    a first acquisition module, configured to acquire a neural network graph and parameters of an automatic driving model, wherein the parameters are pre-trained parameters;
    a configuration module, configured to configure, on a data stream architecture according to the neural network graph and parameters of the automatic driving model, a data stream automatic driving model corresponding to the target automatic driving model;
    a second acquisition module, configured to acquire image information for automatic driving;
    a processing module, configured to input the image information into the data stream automatic driving model for processing to obtain an image processing result; and
    a sending module, configured to send the image processing result to a driving decision module to form a driving decision.
9. An electronic device, characterized by comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the data stream-based automatic driving method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the data stream-based automatic driving method according to any one of claims 1 to 7.
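To make the configuration step recited in claims 2 and 3 concrete, the following Python sketch assigns each layer's preloaded parameter data block a starting memory address and a region of exactly the block's size, then chains the layers into a serial data stream path. The layer names, parameter sizes, and base address are invented for illustration; parallel branches and the actual on-chip allocation mechanism are omitted.

```python
def allocate_stream_memory(layer_param_sizes, base_address=0x1000):
    """Give each layer's parameter data block a starting address and a same-sized region."""
    allocation, next_address = {}, base_address
    for layer_name, size in layer_param_sizes.items():
        allocation[layer_name] = {"start": next_address, "size": size}
        next_address += size                      # the next block starts right after this one
    return allocation

def build_stream_paths(layer_order, allocation):
    """Chain the layers serially into a data stream path (parallel branches omitted)."""
    return [(src, dst, allocation[dst]["start"]) for src, dst in zip(layer_order, layer_order[1:])]

# Made-up parameter sizes (in bytes) for a three-layer network.
layers = {"conv1": 4096, "conv2": 8192, "fc": 2048}
alloc = allocate_stream_memory(layers)
paths = build_stream_paths(list(layers), alloc)
print(hex(alloc["conv2"]["start"]))   # 0x1000 + 4096
```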
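The overall flow of claims 1, 4 and 5 could look roughly like the sketch below: configure the data stream automatic driving model from the neural network graph and pre-trained parameters, read a frame from image memory with a retry within a predetermined time on failure, run the model, and hand the result to the driving decision module. Every class and function name here is an assumption; the real data stream architecture is only stubbed.

```python
import time

def configure_stream_model(graph, params):
    """Stand-in for configuring the data stream automatic driving model on the architecture."""
    def model(frame):
        return {"category": "car", "box": (0, 0, 10, 10)}   # dummy image processing result
    return model

class DecisionModule:
    """Stand-in for the driving decision module."""
    def form_decision(self, result):
        return "slow_down" if result["category"] == "car" else "keep_lane"

def read_frame(image_memory, retries=3, wait_s=0.01):
    """Read image information from image memory, retrying within a predetermined time on failure."""
    for _ in range(retries):
        if image_memory:
            return image_memory.pop(0)
        time.sleep(wait_s)                        # wait, then try the read again
    raise RuntimeError("no image information available")

def autonomous_driving_step(graph, params, image_memory, decision_module):
    model = configure_stream_model(graph, params)  # configure on the data stream architecture
    frame = read_frame(image_memory)               # acquire image information (with retry)
    result = model(frame)                          # run the data stream automatic driving model
    return decision_module.form_decision(result)   # form the driving decision

decision = autonomous_driving_step({"layers": ["conv", "fc"]}, {}, ["frame_0"], DecisionModule())
print(decision)   # -> "slow_down"
```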
PCT/CN2019/100382 2019-08-13 2019-08-13 Automatic driving method and apparatus based on data stream, and electronic device and storage medium WO2021026768A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/100382 WO2021026768A1 (en) 2019-08-13 2019-08-13 Automatic driving method and apparatus based on data stream, and electronic device and storage medium
CN201980066986.3A CN112840284A (en) 2019-08-13 2019-08-13 Automatic driving method and device based on data stream, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/100382 WO2021026768A1 (en) 2019-08-13 2019-08-13 Automatic driving method and apparatus based on data stream, and electronic device and storage medium

Publications (1)

Publication Number Publication Date
WO2021026768A1 true WO2021026768A1 (en) 2021-02-18

Family

ID=74570847

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/100382 WO2021026768A1 (en) 2019-08-13 2019-08-13 Automatic driving method and apparatus based on data stream, and electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN112840284A (en)
WO (1) WO2021026768A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108021395A (en) * 2017-12-27 2018-05-11 北京金山安全软件有限公司 Data parallel processing method and system for neural network
CN108803604A (en) * 2018-06-06 2018-11-13 深圳市易成自动驾驶技术有限公司 Vehicular automatic driving method, apparatus and computer readable storage medium
US20180373263A1 (en) * 2017-06-23 2018-12-27 Uber Technologies, Inc. Collision-avoidance system for autonomous-capable vehicles
CN109583462A (en) * 2017-09-28 2019-04-05 幻视互动(北京)科技有限公司 Data flow processing method, apparatus and system based on deep neural network
CN109901574A (en) * 2019-01-28 2019-06-18 华为技术有限公司 Automatic Pilot method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106126481B (en) * 2016-06-29 2019-04-12 华为技术有限公司 A kind of computing system and electronic equipment
CN107392189B (en) * 2017-09-05 2021-04-30 百度在线网络技术(北京)有限公司 Method and device for determining driving behavior of unmanned vehicle
CN108012156B (en) * 2017-11-17 2020-09-25 深圳市华尊科技股份有限公司 Video processing method and control platform
CN108520296B (en) * 2018-03-20 2020-05-15 福州瑞芯微电子股份有限公司 Deep learning chip-based dynamic cache allocation method and device
CN110046704B (en) * 2019-04-09 2022-11-08 深圳鲲云信息科技有限公司 Deep network acceleration method, device, equipment and storage medium based on data stream

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180373263A1 (en) * 2017-06-23 2018-12-27 Uber Technologies, Inc. Collision-avoidance system for autonomous-capable vehicles
CN109583462A (en) * 2017-09-28 2019-04-05 幻视互动(北京)科技有限公司 Data flow processing method, apparatus and system based on deep neural network
CN108021395A (en) * 2017-12-27 2018-05-11 北京金山安全软件有限公司 Data parallel processing method and system for neural network
CN108803604A (en) * 2018-06-06 2018-11-13 深圳市易成自动驾驶技术有限公司 Vehicular automatic driving method, apparatus and computer readable storage medium
CN109901574A (en) * 2019-01-28 2019-06-18 华为技术有限公司 Automatic Pilot method and device

Also Published As

Publication number Publication date
CN112840284A (en) 2021-05-25

Similar Documents

Publication Publication Date Title
CN111797893B (en) Neural network training method, image classification system and related equipment
KR102595399B1 (en) Detection of unknown classes and initialization of classifiers for unknown classes
EP3289529B1 (en) Reducing image resolution in deep convolutional networks
JP6859332B2 (en) Selective backpropagation
US10892050B2 (en) Deep image classification of medical images
WO2019228358A1 (en) Deep neural network training method and apparatus
KR20170140214A (en) Filter specificity as training criterion for neural networks
US9906704B2 (en) Managing crowd sourced photography in a wireless network
US10902288B2 (en) Training set sufficiency for image analysis
CN110309856A (en) Image classification method, the training method of neural network and device
CN110084281A (en) Image generating method, the compression method of neural network and relevant apparatus, equipment
CN111325664B (en) Style migration method and device, storage medium and electronic equipment
CN107690659A (en) A kind of image identification system and image-recognizing method
CN111275107A (en) Multi-label scene image classification method and device based on transfer learning
KR20170140228A (en) Merging top-down information in deep neural networks through bias terms
CN111738403B (en) Neural network optimization method and related equipment
KR20200078214A (en) Image processing apparatus and method for transfering style
CN116739071A (en) Model training method and related device
CN111126501B (en) Image identification method, terminal equipment and storage medium
CN114187465A (en) Method and device for training classification model, electronic equipment and storage medium
WO2021026768A1 (en) Automatic driving method and apparatus based on data stream, and electronic device and storage medium
US20220383073A1 (en) Domain adaptation using domain-adversarial learning in synthetic data systems and applications
Kang et al. Inception network-based weather image classification with pre-filtering process
US20230085127A1 (en) Electronic device and control method thereof
CN112149836B (en) Machine learning program updating method, device and equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19941003

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19941003

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05.08.2022.)

122 Ep: pct application non-entry in european phase

Ref document number: 19941003

Country of ref document: EP

Kind code of ref document: A1