CN112840284A - Automatic driving method and device based on data stream, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112840284A
CN112840284A (application CN201980066986.3A)
Authority
CN
China
Prior art keywords
automatic driving
neural network
parameters
data flow
data
Prior art date
Legal status
Pending
Application number
CN201980066986.3A
Other languages
Chinese (zh)
Inventor
姜浩
蔡权雄
牛昕宇
Current Assignee
Shenzhen Corerain Technologies Co Ltd
Original Assignee
Shenzhen Corerain Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Corerain Technologies Co Ltd filed Critical Shenzhen Corerain Technologies Co Ltd
Publication of CN112840284A (legal status: pending)

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions

Abstract

A data flow-based automatic driving method, apparatus, electronic device, and storage medium are provided. The method includes the following steps: acquiring a neural network diagram and parameters of an automatic driving model, where the parameters are pre-trained parameters (201); configuring, on a data flow architecture, a data flow automatic driving model corresponding to the automatic driving model according to the neural network diagram and the parameters (202); acquiring image information for automatic driving (203); inputting the image information into the data flow automatic driving model for processing to obtain an image processing result (204); and sending the image processing result to a driving decision module to form a driving decision (205). The method can accelerate image recognition and detection in automatic driving, thereby improving its real-time performance; in addition, the data flow automatic driving model places only modest demands on hardware, which reduces the cost and power consumption of automatic driving.

Description

Automatic driving method and device based on data stream, electronic equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence, and more particularly, to an automatic driving method and apparatus based on data stream, an electronic device, and a storage medium.
Background
An Artificial Neural Network (ANN), abbreviated as neural network (NN), is a mathematical or computational model that mimics the structure and function of a biological neural network (the central nervous system of an animal, especially the brain) and is used to estimate or approximate functions.
A neural network mainly consists of an input layer, hidden layers, and an output layer. When there is only one hidden layer, the network is a two-layer neural network: the input layer performs no transformation and is not counted as a layer. In practice, each neuron of the input layer represents a feature, and the number of output neurons represents the number of classification labels (for binary classification, a sigmoid classifier needs one output neuron, while a softmax classifier needs two); the number of hidden layers and the number of neurons in each hidden layer are set manually.
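The layer structure described above can be sketched in a few lines of Python; the weights below are arbitrary illustrative values, not taken from any trained model:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a two-layer network: one hidden layer, one output neuron.

    Each input feature feeds every hidden neuron; the single sigmoid output
    neuron matches the binary-classification case described above.
    """
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(w_row, x)) + b)
              for w_row, b in zip(w_hidden, b_hidden)]
    return sigmoid(sum(wo * h for wo, h in zip(w_out, hidden)) + b_out)

# Two input features, three hidden neurons (hand-picked illustrative weights).
w_hidden = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
b_hidden = [0.0, 0.1, -0.1]
w_out = [0.7, -0.5, 0.3]
score = forward([1.0, 2.0], w_hidden, b_hidden, w_out, 0.05)
```

The single output in (0, 1) can be read as the probability of the positive class; a softmax classifier would instead use two output neurons.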
Neural networks work well on classification problems, which are common in industry. LR (logistic regression) or a linear SVM is better suited to linear classification. If the data are not linearly separable (as is usually the case in practice), LR typically needs feature engineering to construct feature mappings, adding Gaussian or combination terms, while an SVM requires selecting a kernel; adding Gaussian and combination terms can produce many useless dimensions and increase the amount of computation. GBDT can combine weak linear classifiers into a strong classifier, but may not work well in high dimensions. A neural network with three or more layers, by contrast, can perform nonlinear division well.
Deep learning has been applied in many fields; its application scenarios generally fall into three categories: object recognition, object detection, and natural language processing.
Object detection can be understood as a combination of object recognition and object localization: it must not only identify which class an object belongs to but, more importantly, obtain the object's specific location in the picture. To accomplish these two tasks, object detection models fall into two categories. One is the two-stage approach, which completes object recognition and object localization in two separate steps; its typical representatives are the R-CNN, Fast R-CNN, and Faster R-CNN family. These have low recognition error and miss rates, but are slow and cannot meet real-time detection scenarios. To solve this problem, another class of approaches, called one-stage, emerged; typical representatives are YOLO, SSD, and YOLO v2. Their recognition speed is high enough to meet real-time requirements, and their accuracy can basically reach the level of Faster R-CNN.
Perception modules for automatic driving currently on the market include camera-based solutions and solutions based on other sensors such as lidar. Camera-based solutions mainly take two forms: one is based on traditional feature-extraction algorithms, using hardware such as a DSP (digital signal processor) for computational acceleration; the other is based on deep learning algorithms, mainly using a GPU for computational acceleration. The first has problems such as low recognition accuracy, difficulty adapting to varied scenes, and low robustness. The second, which uses a neural network for object detection and segmentation, remedies the first's low accuracy, but its larger amount of computation leads to higher power consumption and demanding hardware requirements, bringing problems of high cost, high power consumption, difficult heat dissipation, and poor real-time performance.
Content of application
The present application aims to overcome the above drawbacks of the prior art by providing an automatic driving method, an automatic driving device, an electronic device, and a storage medium based on data streams, which solve the problems of high cost, high power consumption, and difficult heat dissipation caused by the large amount of computation required for object detection and segmentation when an existing neural network is used for automatic driving decisions.
The purpose of the application is realized by the following technical scheme:
in a first aspect, a data flow-based automatic driving method is provided, where the method includes:
acquiring a neural network diagram and parameters of an automatic driving model, wherein the parameters are pre-trained parameters;
according to the neural network diagram and the parameters of the automatic driving model, configuring a data flow automatic driving model corresponding to the automatic driving model on a data flow architecture;
acquiring image information for automatic driving;
inputting the image information into the data flow automatic driving model for processing to obtain an image processing result;
and sending the image processing result to a driving decision module to form a driving decision.
Optionally, the configuring, on a data flow architecture, a data flow automatic driving model corresponding to the automatic driving model according to the neural network diagram and the parameters of the automatic driving model includes:
configuring parallel or serial connections among a plurality of neural network layers according to the neural network diagram;
distributing data stream memories corresponding to the neural network layers according to the parameters, wherein the data stream memories are used for storing the parameters of the corresponding neural network layers;
forming a data flow path among the plurality of neural network layers based on the parallel or serial connections among them and the allocated data flow memories corresponding to the respective neural network layers;
and forming the data flow automatic driving model according to the data flow path.
Optionally, the allocating, according to the parameter, a data stream memory corresponding to each neural network layer includes:
assigning a starting memory address to each parameter data block to be preloaded by the neural network layer;
and, starting from the assigned starting memory address, opening up a memory space of the same size as the parameter data block and allocating it to the parameter data block for loading.
Optionally, the acquiring image information for automatic driving includes:
acquiring image information from an image source, and storing the acquired image information into an image memory;
and reading image information from the image memory.
Optionally, the method further includes:
and if reading the image information from the image memory fails, reading again within a preset time.
Optionally, after the inputting the image information into the data stream automatic driving model for processing, the method further includes:
and post-processing the result obtained after the data stream automatic driving model processing to obtain an image processing result.
Optionally, the image processing result includes a category of the object feature and coordinate data, and the sending of the image processing result to the driving decision module forms a driving decision, including:
and sending the class and the coordinate data of the object characteristics to a driving decision module to form a driving decision.
In a second aspect, there is also provided a data-stream based autopilot apparatus, the apparatus comprising:
the first acquisition module is used for acquiring a neural network diagram and parameters of the automatic driving model, wherein the parameters are pre-trained parameters;
the configuration module is used for configuring a data flow automatic driving model corresponding to the automatic driving model on a data flow architecture according to the neural network diagram and the parameters of the automatic driving model;
the second acquisition module is used for acquiring image information for automatic driving;
the processing module is used for inputting the image information into the data flow automatic driving model for processing to obtain an image processing result;
and the sending module is used for sending the image processing result to the driving decision module to form a driving decision.
In a third aspect, an electronic device is provided, including a memory, a processor, and a computer program stored on the memory and runnable on the processor, where the processor, when executing the computer program, implements the steps of the data stream-based automatic driving method provided by the embodiments of the present application.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps in the data stream-based automatic driving method provided by the embodiments of the present application.
The beneficial effects brought by the application: a corresponding data flow automatic driving model is constructed from the neural network diagram and the parameters. Because the data flow automatic driving model is a non-instruction-set model, it has no instruction idle overhead, so it can accelerate image recognition and detection in automatic driving and thereby improve its real-time performance; in addition, the data flow automatic driving model places only modest demands on hardware, which can reduce the cost and power consumption of automatic driving.
Drawings
Fig. 1 is a schematic diagram of an alternative implementation architecture of an automatic driving method based on data flow according to an embodiment of the present application;
FIG. 2 is a schematic flow chart illustrating a data flow-based automatic driving method according to an embodiment of the present disclosure;
FIG. 3 is a flow chart of another data flow-based automatic driving method provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of an automatic steering apparatus based on data flow according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a configuration module 402 according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a distribution unit 4022 provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a first obtaining module 401 according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of another data-flow-based autopilot system according to an embodiment of the present application;
fig. 9 is a schematic view of another data flow-based automatic driving device according to an embodiment of the present application.
Detailed Description
The following describes preferred embodiments of the present application; on the basis of this description, those skilled in the art can implement the application and realize its advantages using the related art.
To further describe the technical solution of the present application, please refer to fig. 1, a schematic diagram of an alternative implementation architecture of the data flow-based automatic driving method provided in an embodiment of the present application. As shown in fig. 1, an architecture 103 is connected to an off-chip memory module (DDR) 101 and a CPU through interconnects. The architecture 103 includes a first storage module 104, which is connected to the off-chip storage module 101 through an interconnect and is also connected to a global data stream network 105 through an interconnect, and a data stream engine 106, which is connected to the global data stream network 105 through an interconnect so that data stream engines 106 can be connected in parallel or in series. The data stream engine 106 may include a computing kernel (also called a computing module), a second storage module 108, and a local data stream network 107. The computing kernel may include kernels used for computation, such as a convolution kernel 109, a pooling kernel 110, and an activation function kernel 111; of course, it may also include computing kernels other than these examples, which is not limited herein, up to all kernels used for computation in a neural network. The first storage module 104 and the second storage module 108 may be on-chip cache modules, or DDR or high-speed DDR memory modules. The data stream engine 106 may be understood as a computing engine that supports, or is dedicated to, data stream processing. The data flow architecture described above may be implemented on an FPGA (field-programmable gate array).
The application provides an automatic driving method, device and equipment based on data flow and a storage medium.
The purpose of the application is realized by the following technical scheme:
in a first aspect, please refer to fig. 2, fig. 2 is a schematic flowchart of an automatic driving method based on data flow according to an embodiment of the present application, and as shown in fig. 2, the method includes the following steps:
201. and acquiring a neural network diagram and parameters of the automatic driving model, wherein the parameters are pre-trained parameters.
The neural network diagram may be understood as a neural network structure, and further, as the neural network structure of the automatic driving model. The neural network structure takes layers as its computing units, including but not limited to: convolutional layers, pooling layers, ReLU layers, fully connected layers, and the like. The parameters are the parameters corresponding to each layer in the neural network structure, and may be weight parameters, bias parameters, and the like. The automatic driving model may be pre-trained; because it is pre-trained, its parameters are trained as well, so the configured data flow automatic driving model can be used directly with the configured parameters, without further training, and the pre-trained automatic driving model can be described uniformly by the neural network diagram and the parameters. The neural network diagram and the parameters of the automatic driving model may be obtained locally or from a cloud server; for example, they may be stored locally as a set and selected automatically or by the user at use time, or uploaded to a cloud server and downloaded over the network when needed.
202. And configuring a data flow automatic driving model corresponding to the automatic driving model on a data flow architecture according to the neural network diagram and the parameters of the automatic driving model.
The neural network diagram includes the connection relationships among the data stream engines, the first data stream storage module, and the global data stream network; the connection relationships may include the number of connected data stream engines, their connection order, and so on, and the data stream engines may be connected with the global data stream network through interconnects to form the corresponding automatic driving model. In addition, different neural networks may be formed according to different neural network diagrams. The parameters correspond to the neural network layers; by allocating different data stream cache regions in the first data stream storage module, the parameters of each neural network layer are placed into their own cache region to be read. The data flow model is based on a non-instruction-set model, so the overhead of instruction idling is avoided and the hardware acceleration efficiency of the neural network can be improved. For example, the convolution algorithm for one image is y_i = x_i * c + y_{i-1}. The instruction-set-based version is MULT(x_i, c, r), ADD(r, y_{i-1}, y_i): it first executes r = x_i * c, stores the result, then reads it back to execute y_i = r + y_{i-1}. While MULT executes, ADD must wait for and then read MULT's result, leaving ADD idle. The data flow model instead reads x_i and c out of memory into the multiplication core and, at the same time, reads y_{i-1} out of memory into the addition core, where it is added to the result of the multiplication; no instruction sits idle.
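The pipelined, no-idle behaviour of the running accumulation y_i = x_i * c + y_{i-1} can be approximated with Python generators; this is a conceptual sketch only, since on the real architecture the multiplication and addition cores run concurrently in hardware:

```python
def multiply_core(xs, c):
    # Streams x_i * c as each input arrives, analogous to the multiplication core.
    for x in xs:
        yield x * c

def add_core(products):
    # Accumulates y_i = r_i + y_{i-1}, consuming each product as soon as it
    # is produced, so there is no stored-then-reloaded intermediate result.
    y = 0
    for r in products:
        y = r + y
        yield y

xs = [1, 2, 3, 4]
results = list(add_core(multiply_core(xs, 10)))
# results == [10, 30, 60, 100]
```

Each product flows straight from one stage into the next, mirroring how the data flow model avoids the store/reload round trip of the instruction-set version.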
The data stream engine comprises a second data stream storage module and a computing core corresponding to the neural network operator, wherein parameters corresponding to the computing core are stored in the second data stream storage module in a partitioning mode, and the second data stream storage module and the multiple computing cores form a data stream path by calling the parameters in the second data stream storage module, so that the data stream engine is formed.
203. Image information for automatic driving is acquired.
In this step, the image information may be picture data shot by a camera in real time, picture data read from a local folder, or picture data transmitted via a transmission protocol such as TCP.
204. And inputting the image information into the data flow automatic driving model for processing to obtain an image processing result.
In this step, the image information obtained in step 203 is input into the data stream autopilot model to perform object detection and image segmentation, extracting abstract features of objects and obtaining results such as object type, probability, and coordinates from those features.
205. And sending the image processing result to a driving decision module to form a driving decision.
The image processing result may include results of object type, probability, coordinates, and the like, the driving decision module may be an existing driving decision module, and the driving decision module may be a local driving decision module or a driving decision terminal in a cloud server.
In a possible embodiment, the image processing result may be an abstract result obtained by a data stream automatic driving model, and the abstract result is sent to a driving decision module arranged in a cloud server for further processing, so as to obtain results including object types, probabilities, coordinates and the like, and thus, hardware such as a post-processing and decision module can be deployed on the cloud server, thereby reducing power consumption of an automatic driving system in a vehicle.
In this embodiment, a corresponding data flow automatic driving model is constructed through a neural network diagram and parameters, and the data flow automatic driving model is a non-instruction set model and has no instruction idle overhead, so that the speed of identifying and detecting objects in an image in automatic driving can be accelerated, an automatic driving decision module can acquire an image processing result more quickly, a driving decision can be made more quickly, the real-time performance of automatic driving is improved, in addition, the requirement of the data flow automatic driving model on hardware is not high, and the cost and the power consumption of automatic driving can be reduced.
It should be noted that the data stream-based automatic driving method provided by the embodiments of the present application may be applied to a device for data stream-based automatic driving, for example: a computer, a server, a mobile phone, a vehicle central control unit, or other devices that can perform automatic driving based on data flow.
Referring to fig. 3, fig. 3 is a schematic flow chart of another data flow-based automatic driving method according to an embodiment of the present application, and as shown in fig. 3, the method includes the following steps:
301. and acquiring a neural network diagram and parameters of the automatic driving model, wherein the parameters are pre-trained parameters.
302. Configuring parallel or serial connections among multiple neural network layers according to the neural network diagram.
303. And distributing data stream memories corresponding to the neural network layers according to the parameters, wherein the data stream memories are used for storing the parameters of the corresponding neural network layers.
304. And forming a data flow path among the plurality of neural network layers based on the parallel or serial connections among them and the allocated data flow memories corresponding to the respective neural network layers.
305. And forming the data flow automatic driving model according to the data flow path.
306. Image information for automatic driving is acquired.
307. And inputting the image information into the data flow automatic driving model for processing to obtain an image processing result.
308. And sending the image processing result to a driving decision module to form a driving decision.
In this embodiment, the neural network diagram includes the parallel or serial relationships among a plurality of neural network layers, and these relationships govern how the data stream flows. Under the data flow architecture, parallelism or seriality between neural network layers is embodied as parallelism or seriality between data flow engines, each data flow engine providing computing resources for its corresponding neural network layer. The first data stream storage module may be a cache, DDR, or high-speed-access DDR; in this embodiment of the application a cache is preferred, and specifically a controllable read/write address generation unit may be disposed in the cache. Depending on the input data format and the computations required in the data path, the address generation unit generates an adapted address sequence to index the data in the buffer. The data stream is stored through the first data stream storage module, and its flow through the plurality of neural network layers is scheduled so that data processing proceeds through the data flow model like a pipeline, with no instruction idling, which improves image processing efficiency.
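As an illustration of the address generation unit's role, a minimal sketch might generate an address sequence adapted to the data format; the 2-D row-major layout and all names here are assumptions for illustration, not details from the patent:

```python
def address_sequence(base, rows, cols, stride):
    """Yield buffer addresses for a rows x cols tile stored with a row stride.

    A simplified stand-in for the controllable read/write address generation
    unit: the emitted sequence adapts to the data layout so the computing
    cores can index the buffer without per-element instructions.
    """
    for r in range(rows):
        for c in range(cols):
            yield base + r * stride + c

# A 2 x 3 tile inside a buffer whose rows are 8 addresses apart.
addrs = list(address_sequence(base=100, rows=2, cols=3, stride=8))
# addrs == [100, 101, 102, 108, 109, 110]
```

A different input format (e.g. a different stride or tile shape) would simply yield a different precomputed sequence, which is what makes the indexing controllable.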
Optionally, the allocating, according to the parameter, a data stream memory corresponding to each neural network layer includes:
assigning a starting memory address to each parameter data block to be preloaded by the neural network layer;
and, starting from the assigned starting memory address, opening up a memory space of the same size as the parameter data block and allocating it to the parameter data block for loading.
In this embodiment, a parameter data block may be one parameter or a set of parameters, and the starting memory address may be generated by the address generation unit in the first data stream storage module and assigned to the corresponding parameter data block. The memory space stores the parameter data block that the corresponding neural network layer needs to preload. Storing the parameter data block in this memory space speeds up data I/O: the inputs and outputs between neural network layers no longer need to pass through an off-chip memory, which improves the efficiency of image data processing.
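The allocation scheme described above (a starting address per parameter data block, plus a reserved region of exactly the block's size) can be sketched as follows; block names and sizes are invented for illustration:

```python
def allocate_parameter_blocks(block_sizes, start=0):
    """Assign each preloaded parameter data block a starting address and a
    memory space exactly its own size, packed contiguously.

    Returns an allocation table of name -> (start address, reserved size)
    and the total memory consumed. Addresses are in abstract units.
    """
    table = {}
    addr = start
    for name, size in block_sizes:
        table[name] = (addr, size)  # region [addr, addr + size) reserved
        addr += size                # next block starts where this one ends
    return table, addr

layout, total = allocate_parameter_blocks(
    [("conv1_weights", 64), ("conv1_bias", 8), ("fc_weights", 128)])
```

Each layer's parameters can then be loaded straight into its reserved region and read by the corresponding computing core without touching off-chip memory.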
Optionally, the acquiring image information for automatic driving includes:
acquiring image information from an image source, and storing the acquired image information into an image memory;
and reading image information from the image memory.
In this embodiment, the image source may be a camera, a local folder, or a picture library transmitted through a transmission protocol such as TCP. The image information acquired from the image source is stored in the image memory, so that the reading of the image information can be more efficient. In addition, the image memory may be the first data stream storage module, or a part of the storage area in the first data stream storage module, or may be another memory storage module. Therefore, the image information is stored in the image memory in advance, and after the data flow automatic driving model is configured according to the neural network diagram and the parameters, the image information can be quickly read from the image memory for processing.
Optionally, the method further includes:
and if the image information is failed to be read from the image memory, reading again within preset time.
In this embodiment, when the image information cannot be read from the image memory, indicating that the data stream autopilot model failed to acquire its input, the image information may be read again from the image memory; the preset time before re-reading may be on the order of single-digit milliseconds. In a possible embodiment, if the number of consecutive read failures from the image memory exceeds a certain number, the image information is obtained from the image source instead. When a read fails, the information is re-read from the image memory, and because reads from the image memory are fast, the re-read is also fast.
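A minimal sketch of this read-with-retry behaviour, assuming single-digit-millisecond waits and a fallback to the image source after consecutive failures; all class and function names are illustrative, not from the patent:

```python
import time

def read_image(image_memory, image_source, retries=3, delay_s=0.002):
    """Read image information from the image memory, re-reading after a short
    preset wait on failure; once the consecutive-failure limit is reached,
    fall back to fetching from the image source directly."""
    for _ in range(retries):
        frame = image_memory.read()
        if frame is not None:
            return frame
        time.sleep(delay_s)  # preset wait (single-digit ms) before re-reading
    return image_source.fetch()  # too many consecutive failures: use the source

class FakeImageMemory:
    """Stand-in image memory that fails (returns None) until a frame arrives."""
    def __init__(self, frames):
        self.frames = list(frames)
    def read(self):
        return self.frames.pop(0) if self.frames else None

class FakeImageSource:
    """Stand-in image source (camera, local folder, or TCP stream)."""
    def fetch(self):
        return "frame_from_source"

got = read_image(FakeImageMemory([None, "frame_7"]), FakeImageSource())
fallback = read_image(FakeImageMemory([]), FakeImageSource())
```

The first call succeeds on the second (fast) re-read; the second exhausts its retries and falls back to the image source.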
Optionally, after the inputting the image information into the data stream automatic driving model for processing, the method further includes:
and post-processing the result obtained after the data stream automatic driving model processing to obtain an image processing result.
Post-processing converts the result output by the neural network into usable data. The network's output is a set of feature values, an abstract characterization of the input image or data, and post-processing converts this abstract characterization into meaningful output by some calculation, such as the object classes and corresponding probabilities in a classification problem, or the object classes, probabilities, and coordinates contained in the image in a detection problem.
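A sketch of such post-processing for a detection-style output, assuming a softmax over raw class scores; the label set and all numbers are invented for illustration:

```python
import math

def postprocess(raw_scores, raw_box, labels):
    """Turn raw network outputs (feature values) into a meaningful result:
    a softmax over the class scores gives the class and its probability,
    and the predicted box supplies the coordinates."""
    m = max(raw_scores)
    exps = [math.exp(s - m) for s in raw_scores]  # shift for numeric stability
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return {"class": labels[best],
            "probability": probs[best],
            "coordinates": raw_box}

result = postprocess([2.0, 0.5, 0.1], [12, 40, 80, 96],
                     ["vehicle", "person", "tree"])
```

The resulting class, probability, and coordinate data are what would be forwarded to the driving decision module.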
Optionally, the image processing result includes the category and coordinate data of object features, and sending the image processing result to the driving decision module to form a driving decision includes:
and sending the class and the coordinate data of the object characteristics to a driving decision module to form a driving decision.
The categories of the object features include vehicles, animals, people, stones, trees, road signs, and the like, and the coordinate data may be image-based coordinate data or vehicle environment-based coordinate data. The driving decision module can be a local driving decision module, and can also be a driving decision module deployed in a cloud server. The driving decision is obtained by performing decision calculation by the driving decision module according to the type and the coordinate data of the object characteristics.
The above optional implementation manner can implement the data flow-based automatic driving method according to the embodiment corresponding to fig. 1 and fig. 2, and achieve the same effect, which is not described herein again.
In a second aspect, please refer to fig. 4, fig. 4 is a schematic structural diagram of an automatic driving apparatus based on data flow according to an embodiment of the present application, and as shown in fig. 4, the apparatus 400 includes:
a first obtaining module 401, configured to obtain a neural network map of an automatic driving model and parameters, where the parameters are pre-trained parameters;
a configuration module 402, configured to configure, on a data flow architecture, a data flow automatic driving model corresponding to the automatic driving model according to the neural network diagram and the parameters of the automatic driving model;
a second acquisition module 403 for acquiring image information for automatic driving;
a processing module 404, configured to input the image information into the data stream automatic driving model for processing, so as to obtain an image processing result;
and a sending module 405, configured to send the image processing result to a driving decision module to form a driving decision.
Optionally, as shown in fig. 5, the configuration module 402 includes:
a configuration unit 4021, configured to configure parallel or serial connections between a plurality of neural network layers according to the neural network diagram;
an allocating unit 4022, configured to allocate a data stream memory corresponding to each neural network layer according to the parameter, where the data stream memory is used to store the parameter of the corresponding neural network layer;
a first path unit 4023, configured to form a data flow path between the plurality of neural network layers based on the parallel or serial connections between the plurality of neural network layers and the allocated data flow memories corresponding to the respective neural network layers;
the second path unit 4024 is configured to form the data flow automatic driving model according to the data flow path.
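The four units above can be sketched in miniature as follows. This is an assumption-laden illustration: layers are plain callables, the "graph" is a list of stages, and parallel branches are merged by summation (the patent does not specify a merge rule):

```python
def build_dataflow_path(stages):
    """Chain stages serially; inside a stage, run layers in parallel and
    merge their outputs by summation (an assumed merge rule)."""
    def run(x):
        for stage in stages:
            if len(stage) == 1:                       # serial: a single layer
                x = stage[0](x)
            else:                                     # parallel branches, merged by sum
                x = sum(layer(x) for layer in stage)
        return x
    return run

# One serial layer followed by two parallel branches:
model = build_dataflow_path([[lambda x: x + 1],
                             [lambda x: x * 2, lambda x: x * 3]])
# model(1) == (1 + 1) * 2 + (1 + 1) * 3 == 10
```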
Optionally, as shown in fig. 6, the allocating unit 4022 includes:
the address subunit 40221, configured to specify a starting memory address for the parameter data block pre-loaded by each neural network layer;
the allocating subunit 40222, configured to open up, starting from the specified starting memory address, a memory space of the same size as the parameter data block, and allocate the memory space to the parameter data block for loading.
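The address and allocating subunits amount to packing each layer's parameter block into its own region of data flow memory. A sketch under assumed block sizes and a hypothetical base address:

```python
def allocate_parameter_blocks(block_sizes, base_address=0x1000):
    """Specify a starting memory address for each pre-loaded parameter
    block and reserve a region of exactly the block's size, placing the
    blocks back to back from the base address."""
    layout = {}
    address = base_address
    for layer, size in block_sizes.items():
        layout[layer] = (address, size)      # (starting address, region size)
        address += size
    return layout

# Hypothetical per-layer parameter block sizes in bytes:
layout = allocate_parameter_blocks({"conv1": 64, "conv2": 128, "fc": 32})
# layout["conv2"] starts right after conv1's 64-byte region
```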
Optionally, as shown in fig. 7, the first obtaining module 401 includes:
the storage unit 4011, configured to acquire image information from an image source and store the acquired image information in an image memory;
and the reading unit 4012 is configured to read image information from the image memory.
Optionally, as shown in fig. 8, the apparatus 400 includes:
a third obtaining module 406, configured to, if reading of the image information from the image memory fails, perform reading again within a predetermined time.
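The retry behaviour of the third obtaining module — reading again within a predetermined time if the first read fails — could be sketched as follows; the timeout and polling interval are illustrative values:

```python
import time

def read_with_retry(read_fn, timeout_s=0.5, interval_s=0.05):
    """Try to read image information; on failure (None), retry within a
    predetermined time window, then give up and return None."""
    deadline = time.monotonic() + timeout_s
    while True:
        frame = read_fn()
        if frame is not None:
            return frame
        if time.monotonic() >= deadline:
            return None
        time.sleep(interval_s)

# Hypothetical image memory that fails twice before a frame is ready:
attempts = iter([None, None, "frame-42"])
result = read_with_retry(lambda: next(attempts))
# result is "frame-42"
```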
Optionally, as shown in fig. 9, the apparatus 400 further includes:
and the post-processing module 407 is configured to perform post-processing on a result obtained after the data stream automatic driving model processing is performed, so as to obtain an image processing result.
Optionally, the sending module 405 is further configured to send the category of the object feature and the coordinate data to a driving decision module to form a driving decision.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the data-flow-based automatic driving method provided by the embodiments of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the data-flow-based automatic driving method provided by the present application.
Illustratively, the computer program of the computer-readable storage medium includes computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
It should be noted that, since the computer program of the computer-readable storage medium is executed by the processor to implement the steps of the data-flow-based automatic driving method, all the embodiments of the data-flow-based automatic driving method are applicable to the computer-readable storage medium and can achieve the same or similar beneficial effects.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above.
In addition, an embodiment of the present invention further provides an acceleration hardware board 303 that can interact with the processor 301 and is used for data flow acceleration of a neural network, where the acceleration hardware board is applied to algorithm acceleration of the automatic driving perception module.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing is a more detailed description of the present application in connection with specific preferred embodiments, and it is not intended that the present application be limited to the specific embodiments shown. For those skilled in the art to which the present application pertains, several simple deductions or substitutions may be made without departing from the concept of the present application, and all should be considered as belonging to the protection scope of the present application.

Claims (10)

  1. A data-flow-based autopilot method, the method comprising:
    acquiring a neural network diagram and parameters of an automatic driving model, wherein the parameters are pre-trained parameters;
    according to the neural network diagram and the parameters of the automatic driving model, configuring a data flow automatic driving model corresponding to the automatic driving model on a data flow architecture;
    acquiring image information for automatic driving;
    inputting the image information into the data flow automatic driving model for processing to obtain an image processing result;
    and sending the image processing result to a driving decision module to form a driving decision.
  2. The method of claim 1, wherein configuring a data flow autopilot model corresponding to the autopilot model on a data flow architecture according to the neural network map and the parameters of the autopilot model comprises:
    configuring parallel or serial connections among a plurality of neural network layers according to the neural network diagram;
    distributing data stream memories corresponding to the neural network layers according to the parameters, wherein the data stream memories are used for storing the parameters of the corresponding neural network layers;
    forming a data flow path between the plurality of neural network layers based on the parallel or serial connections among the plurality of neural network layers and the allocated data flow memories corresponding to the respective neural network layers;
    and forming the data flow automatic driving model according to the data flow path.
  3. The method of claim 2, wherein said allocating memory for data streams corresponding to respective neural network layers based on said parameters comprises:
    specifying a starting memory address for the parameter data block pre-loaded by each neural network layer;
    and starting from the specified initial memory address, opening up a memory space with the same size as the parameter data block, and allocating the memory space to the parameter data block for loading.
  4. The method of claim 1, wherein said obtaining image information for autonomous driving comprises:
    acquiring image information from an image source, and storing the acquired image information into an image memory;
    and reading image information from the image memory.
  5. The method of claim 4, wherein the method further comprises:
    and if the image information is failed to be read from the image memory, reading again within preset time.
  6. The method of claim 1, wherein after said inputting said image information into said data-flow autopilot model for processing, said method further comprises:
    and post-processing the result obtained after the data stream automatic driving model processing to obtain an image processing result.
  7. The method of claim 1, wherein the image processing results include class and coordinate data of object features, and wherein sending the image processing results to a driving decision module to form a driving decision comprises:
    and sending the class and the coordinate data of the object characteristics to a driving decision module to form a driving decision.
  8. A dataflow-based autopilot device, the device including:
    the first acquisition module is used for acquiring a neural network diagram and parameters of the automatic driving model, wherein the parameters are pre-trained parameters;
    the configuration module is used for configuring and obtaining a data flow automatic driving model corresponding to the automatic driving model on a data flow architecture according to the neural network diagram and the parameters of the automatic driving model;
    the second acquisition module is used for acquiring image information for automatic driving;
    the processing module is used for inputting the image information into the data flow automatic driving model for processing to obtain an image processing result;
    and the sending module is used for sending the image processing result to the driving decision module to form a driving decision.
  9. An electronic device, comprising: memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the data stream based autopilot method according to one of claims 1 to 7 when executing the computer program.
  10. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, carries out the steps in the data-stream based autopilot method according to one of the claims 1 to 7.
CN201980066986.3A 2019-08-13 2019-08-13 Automatic driving method and device based on data stream, electronic equipment and storage medium Pending CN112840284A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/100382 WO2021026768A1 (en) 2019-08-13 2019-08-13 Automatic driving method and apparatus based on data stream, and electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN112840284A (en) 2021-05-25

Family

ID=74570847

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980066986.3A Pending CN112840284A (en) 2019-08-13 2019-08-13 Automatic driving method and device based on data stream, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN112840284A (en)
WO (1) WO2021026768A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106126481A (en) * 2016-06-29 2016-11-16 华为技术有限公司 A kind of computing engines and electronic equipment
CN107392189A (en) * 2017-09-05 2017-11-24 百度在线网络技术(北京)有限公司 For the method and apparatus for the driving behavior for determining unmanned vehicle
CN108012156A (en) * 2017-11-17 2018-05-08 深圳市华尊科技股份有限公司 A kind of method for processing video frequency and control platform
CN108520296A (en) * 2018-03-20 2018-09-11 福州瑞芯微电子股份有限公司 A kind of method and apparatus based on the distribution of deep learning chip dynamic cache
CN109583462A (en) * 2017-09-28 2019-04-05 幻视互动(北京)科技有限公司 Data flow processing method, apparatus and system based on deep neural network
CN109901574A (en) * 2019-01-28 2019-06-18 华为技术有限公司 Automatic Pilot method and device
CN110046704A (en) * 2019-04-09 2019-07-23 深圳鲲云信息科技有限公司 Depth network accelerating method, device, equipment and storage medium based on data flow

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10007269B1 (en) * 2017-06-23 2018-06-26 Uber Technologies, Inc. Collision-avoidance system for autonomous-capable vehicle
CN108021395B (en) * 2017-12-27 2022-04-29 北京金山安全软件有限公司 Data parallel processing method and system for neural network
CN108803604A (en) * 2018-06-06 2018-11-13 深圳市易成自动驾驶技术有限公司 Vehicular automatic driving method, apparatus and computer readable storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106126481A (en) * 2016-06-29 2016-11-16 华为技术有限公司 A kind of computing engines and electronic equipment
CN107392189A (en) * 2017-09-05 2017-11-24 百度在线网络技术(北京)有限公司 For the method and apparatus for the driving behavior for determining unmanned vehicle
CN109583462A (en) * 2017-09-28 2019-04-05 幻视互动(北京)科技有限公司 Data flow processing method, apparatus and system based on deep neural network
CN108012156A (en) * 2017-11-17 2018-05-08 深圳市华尊科技股份有限公司 A kind of method for processing video frequency and control platform
CN108520296A (en) * 2018-03-20 2018-09-11 福州瑞芯微电子股份有限公司 A kind of method and apparatus based on the distribution of deep learning chip dynamic cache
CN109901574A (en) * 2019-01-28 2019-06-18 华为技术有限公司 Automatic Pilot method and device
CN110046704A (en) * 2019-04-09 2019-07-23 深圳鲲云信息科技有限公司 Depth network accelerating method, device, equipment and storage medium based on data flow

Also Published As

Publication number Publication date
WO2021026768A1 (en) 2021-02-18

Similar Documents

Publication Publication Date Title
EP3289529B1 (en) Reducing image resolution in deep convolutional networks
US20220092351A1 (en) Image classification method, neural network training method, and apparatus
CN111797893B (en) Neural network training method, image classification system and related equipment
US10902302B2 (en) Stacked neural network framework in the internet of things
KR102591961B1 (en) Model training method and device, and terminal and storage medium for the same
CN107533669B (en) Filter specificity as a training criterion for neural networks
CN107430705B (en) Sample selection for retraining classifiers
KR102582194B1 (en) Selective backpropagation
US9906704B2 (en) Managing crowd sourced photography in a wireless network
US11423323B2 (en) Generating a sparse feature vector for classification
KR20180037192A (en) Detection of unknown classes and initialization of classifiers for unknown classes
AU2016256315A1 (en) Incorporating top-down information in deep neural networks via the bias term
US11176427B2 (en) Overlapping CNN cache reuse in high resolution and streaming-based deep learning inference engines
CN113330450A (en) Method for identifying objects in an image
CN110222718B (en) Image processing method and device
KR102140805B1 (en) Neural network learning method and apparatus for object detection of satellite images
US11568543B2 (en) Attention masks in neural network video processing
US20220012502A1 (en) Activity detection device, activity detection system, and activity detection method
CN112418327A (en) Training method and device of image classification model, electronic equipment and storage medium
CN109977875A (en) Gesture identification method and equipment based on deep learning
CN113743426A (en) Training method, device, equipment and computer readable storage medium
CN114168768A (en) Image retrieval method and related equipment
CN111126501A (en) Image identification method, terminal equipment and storage medium
CN112840284A (en) Automatic driving method and device based on data stream, electronic equipment and storage medium
CN112419249B (en) Special clothing picture conversion method, terminal device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination