CN116331251A - End-to-end automatic driving method and system under complex road conditions - Google Patents

End-to-end automatic driving method and system under complex road conditions

Info

Publication number
CN116331251A
Authority
CN
China
Prior art keywords
complex road
road conditions
training
network model
circuit network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211717444.8A
Other languages
Chinese (zh)
Inventor
孙国梁
王洪剑
郑四发
陈涛
林江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Suzhou Automotive Research Institute of Tsinghua University
Original Assignee
Tsinghua University
Suzhou Automotive Research Institute of Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University and Suzhou Automotive Research Institute of Tsinghua University
Priority to CN202211717444.8A
Publication of CN116331251A
Legal status: Pending

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00: Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001: Planning or execution of driving tasks
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/10: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to vehicle motion
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449: Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451: Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454: Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2556/00: Input parameters relating to data
    • B60W2556/10: Historical data
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Mechanical Engineering (AREA)
  • Transportation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Human Computer Interaction (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an end-to-end automatic driving method and system for complex road conditions. The method comprises the following steps: acquiring historical vehicle driving data under complex road conditions and building a data set; dividing the data set into a training set and a test set; inputting the training set into a feature extraction network to obtain high-dimensional feature vectors; establishing a spiking neural network and using it to perform spike mapping on the extracted high-dimensional feature vectors; inputting the spike-mapped high-dimensional feature vectors into a neural circuit network model to obtain predicted values; calculating the loss between the predicted values and the test set; adjusting the training parameters of the neural circuit network model according to the loss and training the model to obtain a prediction model; and acquiring a real-time vehicle front image under complex road conditions and inputting it into the prediction model to obtain a decision result.

Description

End-to-end automatic driving method and system under complex road conditions
Technical Field
The invention relates to the technical field of automatic driving, in particular to an end-to-end automatic driving method and system under complex road conditions.
Background
In recent years, convolutional neural networks (Convolutional Neural Network, CNN) have achieved tremendous success in computer vision, for example in image classification, object detection, and semantic segmentation, where CNNs driven by huge amounts of data have produced surprising results. End-to-end automatic driving mainly comprises the following steps: first, a large amount of human driving data is collected with a vehicle-mounted camera, radar, and other data sensors; the data comprise the foreground image from the driver's viewpoint and the control actions the driver took for that image, such as steering wheel angle, accelerator, and brake. The foreground images and the control information are made into a data set as input images and their corresponding labels. An algorithm is then designed to learn and generalize over the data set, and finally the trained model predicts the corresponding control actions on a new road, that is, for a newly input foreground image.
Most end-to-end automatic driving algorithms are based on CNNs, which are computationally expensive and lack interpretability, whereas interpretability is exactly what automatic driving needs. In addition, a CNN can produce good predictive control results when the input data are ideal, but when the data are noisy or the input is corrupted, for example by sudden direct sunlight, it will produce very unstable predictions.
However, end-to-end automatic driving under complex road conditions involves more complex tasks: a plain CNN cannot extract enough useful information, and the 19 LTC control neurons of the existing neural circuit policy cannot cope with the more complex, higher-dimensional information.
Disclosure of Invention
In view of the above, the present invention provides an end-to-end automatic driving method and system under complex road conditions to solve the problem of automatic driving under complex road conditions.
The first aspect of the present invention provides an end-to-end automatic driving method under complex road conditions, the method comprising: acquiring historical vehicle driving data under complex road conditions and building a data set; dividing the data set into a training set and a test set; inputting the training set into a feature extraction network to obtain high-dimensional feature vectors; establishing a spiking neural network and using it to perform spike mapping on the extracted high-dimensional feature vectors; inputting the spike-mapped high-dimensional feature vectors into a neural circuit network model to obtain predicted values; calculating the loss between the predicted values and the test set; adjusting the training parameters of the neural circuit network model according to the loss and training the model to obtain a prediction model; and acquiring a real-time vehicle front image under complex road conditions, inputting it into the prediction model to obtain steering wheel angle, vehicle speed, accelerator pedal depth, and brake pedal depth information, and taking this information as the decision result.
Further, the historical vehicle driving data comprises historical vehicle front images and corresponding control information; the control information at least comprises steering wheel angle, vehicle speed, accelerator pedal depth and brake pedal depth data under complex road conditions.
Further, a data set is made using the historical vehicle front images and the corresponding control information; the historical vehicle front images in the data set form the training set, and the control information corresponding to the historical vehicle front images forms the test set.
Further, the neural circuit network model comprises a perception layer, a middle layer, a command layer, and an action layer; the perception layer receives the spike-mapped high-dimensional feature vector, the middle layer and the command layer process the feature vector, and the action layer outputs the predicted value.
Further, the calculation method of the loss between the predicted value and the test set is as follows:
$$\mathrm{Loss}=\frac{1}{n}\sum_{i=1}^{n}\left(y_{pre,i}-y_{real,i}\right)^{2}$$
where n is the number of samples in the data set, $y_{pre}$ is the predicted value of the control information, and $y_{real}$ is the control information in the test set.
Further, adjusting the training parameters in the neural circuit network model according to the loss and training the model to obtain the prediction model comprises: feeding the loss back to the neural circuit network model and adjusting its training parameters; training the neural circuit network model according to the loss until the loss falls below a preset threshold, then stopping training; and determining the neural circuit network model with the trained parameters as the prediction model.
A second aspect of the present invention provides an end-to-end automatic driving system under complex road conditions, the system comprising: a memory for storing a computer program; and a processor for implementing the above end-to-end automatic driving method under complex road conditions when executing the computer program.
The end-to-end automatic driving method and system under complex road conditions solve the problem of the lack of biological interpretability of CNN-based algorithms under complex road conditions, and allow different feature extraction networks, and neural circuit network models with different neuron counts and different sparsity, to be selected according to the complexity of the road conditions.
Drawings
For purposes of illustration and not limitation, the invention will now be described in accordance with its preferred embodiments, particularly with reference to the accompanying drawings, in which:
Fig. 1 is a flowchart of the end-to-end automatic driving method under complex road conditions according to an embodiment of the present application;
Fig. 2 is a schematic diagram of the PINet network selected in an embodiment of the present application for feature extraction from the input image;
Fig. 3 is a schematic diagram of the feature maps extracted by the PINet network selected in an embodiment of the present application;
Fig. 4 is a network architecture diagram provided in an embodiment of the present application;
Fig. 5 is a schematic diagram of the end-to-end automatic driving system under complex road conditions according to an embodiment of the present application.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will be more clearly understood, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. It should be noted that, without conflict, the embodiments of the present invention and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, and the described embodiments are merely some, rather than all, embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
The following describes in detail an end-to-end automatic driving method under complex road conditions according to the embodiments of the present application with reference to the accompanying drawings.
Referring to fig. 1, the end-to-end automatic driving method under the complex road condition includes:
step 101: and acquiring historical vehicle driving data under complex road conditions, and making a data set.
In this embodiment of the present application, the historical vehicle driving data includes a historical vehicle front image and corresponding control information, where the control information includes at least steering wheel angle, vehicle speed, accelerator pedal depth, and brake pedal depth data.
In the embodiment of the application, the vehicle front image under complex road conditions is acquired by the vehicle-mounted camera, while the lidar and other on-board sensors acquire the steering wheel angle, vehicle speed, accelerator pedal depth, brake pedal depth, and other data under complex road conditions.
In some embodiments, the historical vehicle front images are used as input images, the steering wheel angle, vehicle speed, accelerator pedal depth, and brake pedal depth data are used as the labels corresponding to the input images, and a data set is created from the input images and their labels. The historical vehicle front images in the data set form the training set, and the control information corresponding to the historical vehicle front images forms the test set, as sketched below.
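For illustration only, the data set described above could be implemented as a PyTorch Dataset along the following lines; the file layout, the CSV column names (frame, steer, speed, throttle, brake), and the 640×512 input size are assumptions made for the sketch, not details fixed by the patent.

```python
# Hypothetical layout: frames stored as image files plus a CSV of control
# labels with columns [frame, steer, speed, throttle, brake].
import csv
import torch
from torch.utils.data import Dataset
from torchvision import io, transforms

class DrivingDataset(Dataset):
    def __init__(self, csv_path, image_dir):
        with open(csv_path) as f:
            self.rows = list(csv.DictReader(f))
        self.image_dir = image_dir
        # Resize to the 640x512 input assumed for the feature network below.
        self.resize = transforms.Resize((512, 640), antialias=True)

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, idx):
        row = self.rows[idx]
        img = io.read_image(f"{self.image_dir}/{row['frame']}").float() / 255.0
        img = self.resize(img)
        # Label: steering wheel angle, vehicle speed, accelerator pedal depth,
        # brake pedal depth.
        label = torch.tensor([float(row[k])
                              for k in ("steer", "speed", "throttle", "brake")])
        return img, label
```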
Step 102: inputting the historical vehicle front images in the data set into the feature extraction network to obtain the corresponding high-dimensional feature vectors.
In the embodiment of the application, a CNN is adopted as the feature extraction network, and the vehicle front images in the data set are input into it to extract the corresponding high-dimensional feature vectors.
In the embodiment of the present application, the CNN used to extract the features may be any network that achieves top performance on computer vision tasks, for example ResNet, RetinaNet, PINet, or HRNet; only the feature extraction part of the CNN is kept, and the final decision part is discarded, as in the sketch below.
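With a torchvision backbone, for instance, discarding the decision part amounts to replacing the classification head with an identity; a minimal sketch (ResNet-18 is an arbitrary choice here, not one made by the patent):

```python
import torch
import torchvision

backbone = torchvision.models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()                  # drop the decision head
features = backbone(torch.randn(1, 3, 512, 640))   # (1, 512) feature vector
```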
Fig. 2 is a schematic diagram of the PINet network selected in an embodiment of the present application for feature extraction from the input image. Referring to Fig. 2, the PINet network mainly comprises five convolution modules. The first convolution layer consists of three convolution blocks with stride 2, each followed by batch normalization and an activation operation, which reduces the size of the original large image while extracting features; its output is 80×64 pixels with 128 channels. Layer 1 is a funnel-shaped (hourglass) structure in the PINet network, consisting of four downsampling convolution layers and four upsampling convolution layers, so the input size and channel count are unchanged after passing through it. In this embodiment, the smallest feature map, produced by the fourth downsampling, is retained; it is 5×4 pixels with 128 channels. Layer 2, Layer 3, and Layer 4 are funnel-shaped convolution modules identical to Layer 1, and each likewise retains its smallest feature map. The four retained feature maps are concatenated along the channel dimension into a 5×4-pixel feature map with 512 channels, reduced by a convolution layer to a 5×4-pixel feature map with 128 channels, and flattened into a 2560-dimensional vector; the amount of computation is then reduced by spike mapping through the spiking neural network, and the vector is input into the neural circuit network for prediction.
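The following is a rough PyTorch sketch of the feature-extraction path just described, not the original PINet implementation: a stride-2 stem, four hourglass modules whose smallest (5×4) feature maps are retained, channel-wise concatenation, a fusion convolution, and flattening to 2560 dimensions. The stem channel widths are assumptions.

```python
import torch
import torch.nn as nn

def conv_bn(cin, cout, stride=1):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride, 1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class Hourglass(nn.Module):
    """Four stride-2 down-convolutions and four upsampling convolutions;
    size and channels are unchanged end-to-end, and the smallest
    (bottleneck) feature map is returned as well."""
    def __init__(self, ch=128):
        super().__init__()
        self.down = nn.ModuleList([conv_bn(ch, ch, 2) for _ in range(4)])
        self.up = nn.ModuleList(
            [nn.Sequential(nn.Upsample(scale_factor=2), conv_bn(ch, ch))
             for _ in range(4)])

    def forward(self, x):
        for d in self.down:
            x = d(x)
        bottleneck = x                      # 5x4 pixels, 128 channels
        for u in self.up:
            x = u(x)
        return x, bottleneck

class FeatureExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        # Stem: three stride-2 blocks, 640x512 input -> 80x64, 128 channels.
        self.stem = nn.Sequential(conv_bn(3, 32, 2), conv_bn(32, 64, 2),
                                  conv_bn(64, 128, 2))
        self.hourglasses = nn.ModuleList([Hourglass() for _ in range(4)])
        self.fuse = conv_bn(512, 128)       # 5x4x512 -> 5x4x128

    def forward(self, img):                 # img: (batch, 3, 512, 640)
        x = self.stem(img)
        bottlenecks = []
        for hg in self.hourglasses:
            x, b = hg(x)
            bottlenecks.append(b)
        fused = self.fuse(torch.cat(bottlenecks, dim=1))
        return fused.flatten(1)             # (batch, 2560)
```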
Fig. 3 shows the feature maps extracted by the selected PINet network. In this embodiment, 20 feature maps are taken for one randomly selected input image and enlarged to 640×412 pixels for display. It can be seen that as the number of network layers increases, the feature maps carry more and more information and become increasingly abstract.
Step 103: establishing a spiking neural network and using it to perform spike mapping on the high-dimensional feature vector extracted in step 102.
In the embodiment of the application, the spiking neural network (SNN) is built on the high-dimensional feature vector obtained from the feature extraction network.
In the embodiment of the application, the SNN performs spike mapping on the high-dimensional feature vector extracted in step 102, which reduces the amount of computation. The SNN is a third-generation neural network that uses spiking neuron models to simulate and interpret the information processing of biological neurons.
The high-dimensional feature vector extracted in step 102 is input into the SNN and processed by its spiking neuron model to obtain the spike-mapped feature vector, as sketched below.
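The patent does not give the internals of the SNN, so the following is only one plausible sketch of spike mapping: a leaky integrate-and-fire encoder that turns the 2560-dimensional float vector into sparse binary spike trains. The number of timesteps, decay factor, and firing threshold are assumed values.

```python
import torch

def lif_spike_encode(features, T=8, tau=0.5, v_th=1.0):
    """features: (batch, 2560) floats -> spikes: (T, batch, 2560) in {0, 1}."""
    v = torch.zeros_like(features)      # membrane potential
    spikes = []
    for _ in range(T):
        v = tau * v + features          # leaky integration of input current
        s = (v >= v_th).float()         # fire where the threshold is crossed
        v = v * (1.0 - s)               # hard reset after a spike
        spikes.append(s)
    return torch.stack(spikes)

# The mean firing rate over T steps gives a sparse representation that can
# be passed on to the neural circuit network:
# rate = lif_spike_encode(features).mean(dim=0)
```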
Step 104: inputting the spike-mapped high-dimensional feature vector into the neural circuit network model to obtain the predicted value of the control information.
In the embodiment of the application, the neural circuit network model NCP is adopted to predict the control information for the input image. The main function of the neural circuit network (Neural Circuit Policies, NCP) is to learn a mapping from high-dimensional inputs to steering commands; with 19 control neurons and 253 synapses, it can decide the steering command from 32 packaged high-dimensional input features.
Fig. 4 is the network architecture diagram provided in an embodiment of the present application. Referring to Fig. 4, the neural circuit network model NCP mainly comprises four layers: a perception layer (sensory neurons), a middle layer (interneurons), a command layer (command neurons), and an action layer (motor neurons). The perception layer receives the spike-mapped high-dimensional feature vector, the middle layer and the command layer process the feature vector, and the action layer outputs the predicted value of the control information. In the embodiment of the application, signal transmission among the perception layer, the middle layer, the command layer, and the action layer of the NCP is based on LTC (liquid time-constant) neurons, whose dynamics are given by a continuous ordinary differential equation, so the NCP has a nonlinear, time-varying synaptic transmission mechanism that greatly enriches its ability to express temporal sequences. Compared with black-box CNNs of far larger orders of magnitude, the network shows superior generalization, interpretability, and robustness, and the resulting neural agents give high-fidelity autonomy to task-specific parts of a complex autonomous system.
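As a concrete illustration, the open-source ncps package published by the NCP authors provides such four-layer wirings; assuming its documented PyTorch interface, the policy could be instantiated roughly as follows. All neuron counts and fan-outs below are illustrative choices, not values fixed by the patent (the classic NCP used 19 control neurons; here motor_neurons=4 matches the four control outputs).

```python
import torch
from ncps.wirings import NCP
from ncps.torch import LTC

wiring = NCP(
    inter_neurons=12,               # middle (interneuron) layer
    command_neurons=6,              # command layer
    motor_neurons=4,                # steer, speed, throttle, brake
    sensory_fanout=4,               # sparse sensory -> inter connectivity
    inter_fanout=4,                 # sparse inter -> command connectivity
    recurrent_command_synapses=4,   # recurrence inside the command layer
    motor_fanin=6,                  # command -> motor connectivity
)
policy = LTC(2560, wiring, batch_first=True)   # 2560-d spike-mapped input

x = torch.randn(1, 16, 2560)        # (batch, time, features)
controls, _ = policy(x)             # controls: (1, 16, 4)
```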
The number of neurons in the NCP network is dynamically adjusted according to the complexity of the road condition data, and the sparsity of the NCP network can also be dynamically adjusted, until an ideal decision result is achieved. Signals are conducted by LTC neurons: the state dynamics of each neuron $x_i(t)$, when connected to neuron j by a synapse, are given by
$$\dot{x}_i(t)=-\left[\frac{1}{\tau_i}+\frac{w_{ij}}{C_{m,i}}\,f\big(x_j(t)\big)\right]x_i(t)+\left[\frac{x_{leak,i}}{\tau_i}+\frac{w_{ij}}{C_{m,i}}\,f\big(x_j(t)\big)\,E_{ij}\right]$$
where $\tau_i = C_{m,i}/g_{l,i}$ is the time constant of neuron i with leakage conductance $g_{l,i}$ and membrane capacitance $C_{m,i}$, $w_{ij}$ is the synaptic weight from neuron j to neuron i, $f$ is a sigmoidal nonlinearity, $x_{leak,i}$ is the resting potential, and $E_{ij}$ is the reversal synaptic potential that defines the polarity of the synapse.
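To make the dynamics concrete, the state update above can be integrated with an explicit Euler step; the hand-rolled sketch below implements that equation directly, with placeholder parameter values.

```python
import torch

def ltc_step(x, W, E, g_leak, C_m, x_leak, dt=0.01):
    """One Euler update of the LTC states.

    x: (N,) neuron states; W[i, j] = synaptic weight w_ij from neuron j to
    neuron i (0 where no synapse); E[i, j] = reversal potential E_ij;
    g_leak, C_m, x_leak: per-neuron (N,) parameters."""
    f = torch.sigmoid(x)                            # presynaptic nonlinearity
    tau = C_m / g_leak                              # time constants tau_i
    g_syn = (W * f.unsqueeze(0)).sum(dim=1)         # sum_j w_ij f(x_j)
    g_syn_e = (W * f.unsqueeze(0) * E).sum(dim=1)   # sum_j w_ij f(x_j) E_ij
    dx = -(1.0 / tau + g_syn / C_m) * x + (x_leak / tau + g_syn_e / C_m)
    return x + dt * dx

# Example with 19 neurons and a random sparse synapse mask:
N = 19
W = torch.rand(N, N) * (torch.rand(N, N) < 0.3).float()
x = ltc_step(torch.zeros(N), W, torch.ones(N, N),
             g_leak=torch.ones(N), C_m=torch.ones(N), x_leak=torch.zeros(N))
```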
Step 105: calculating the loss between the predicted value of the control information from step 104 and the control information in the data set from step 101.
In the embodiment of the application, the control information in the data set comprises steering wheel angle, vehicle speed, accelerator pedal depth, and brake pedal depth, all given as numeric values. This embodiment therefore uses the mean squared error to compute the loss between the predicted control information and the control information in the data set:
$$\mathrm{Loss}=\frac{1}{n}\sum_{i=1}^{n}\left(y_{pre,i}-y_{real,i}\right)^{2}$$
where n is the number of samples in the data set, $y_{pre}$ is the predicted value of the control information, and $y_{real}$ is the control information in the data set.
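A direct sketch of this formula (tensor shapes assumed):

```python
import torch

def control_loss(y_pre: torch.Tensor, y_real: torch.Tensor) -> torch.Tensor:
    """y_pre, y_real: (n, 4) predicted vs. recorded control vectors."""
    return ((y_pre - y_real) ** 2).mean()   # mean of squared errors

# Equivalent to torch.nn.functional.mse_loss(y_pre, y_real).
```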
Step 106: adjusting the training parameters in the neural circuit network model according to the loss, and training the neural circuit network model to obtain the prediction model.
In some embodiments, adjusting the training parameters in the neural circuit network model according to the loss and training the model to obtain the prediction model comprises:
feeding the loss back to the neural circuit network model and adjusting its training parameters;
training the neural circuit network model according to the loss until the loss falls below a preset threshold, then stopping training;
and determining the neural circuit network model with the trained parameters as the prediction model.
In the embodiment of the application, when the loss meets a preset requirement during training, the performance of the neural circuit network model is considered high enough to meet the usage requirement. For example, a preset threshold may be set; when the loss is smaller than the preset threshold, the requirement is satisfied, training ends, and the neural circuit network model with the trained parameters is determined as the prediction model.
In the embodiment of the application, after the loss between the predicted control information and the control information in the data set is calculated, the neural circuit network model is updated by back-propagation to obtain the optimal prediction model. A sketch of such a training loop follows.
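A minimal training loop matching steps 105 and 106, with threshold-based stopping, might look like the sketch below; the model, data loader, optimizer choice, and learning rate are assumptions.

```python
import torch
import torch.nn as nn

def train(model, train_loader, device="cuda", threshold=1e-3):
    criterion = nn.MSELoss()                    # the loss defined above
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    model.to(device).train()
    while True:
        epoch_loss = 0.0
        for images, controls in train_loader:
            images, controls = images.to(device), controls.to(device)
            preds = model(images)               # predicted control vectors
            loss = criterion(preds, controls)
            optimizer.zero_grad()
            loss.backward()                     # feed the loss back
            optimizer.step()                    # adjust training parameters
            epoch_loss += loss.item() * images.size(0)
        epoch_loss /= len(train_loader.dataset)
        if epoch_loss < threshold:              # stop below the threshold
            return model                        # the prediction model
```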
Step 107: acquiring a real-time vehicle front image under complex road conditions, inputting it into the prediction model to obtain the steering wheel angle, vehicle speed, accelerator pedal depth, and brake pedal depth information, and taking this information as the decision result.
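At runtime, step 107 could be sketched as follows, assuming an OpenCV camera capture and a model object that wraps the whole pipeline (feature extraction, spike mapping, and the neural circuit network) to map one front image to the four control values; the camera index, preprocessing, and output ordering are assumptions.

```python
import cv2
import torch

@torch.no_grad()
def decide(model, cap, device="cuda"):
    ok, frame = cap.read()                      # real-time front image
    if not ok:
        return None
    img = cv2.resize(frame, (640, 512))         # match the training input size
    img = torch.from_numpy(img).permute(2, 0, 1).float().div(255)
    steer, speed, throttle, brake = model(img.unsqueeze(0).to(device))[0]
    return steer.item(), speed.item(), throttle.item(), brake.item()

# cap = cv2.VideoCapture(0); model.eval(); decision = decide(model, cap)
```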
The end-to-end automatic driving method under complex road conditions solves the problem of the lack of biological interpretability of CNN-based algorithms under complex road conditions, and different feature extraction networks, and neural circuit network models with different neuron counts and different sparsity, can be selected according to the complexity of the road conditions.
The end-to-end automatic driving method under complex road conditions fuses the strong feature extraction of CNNs with the temporal modeling of RNNs; the RNN neurons, built on the LTC neuron model, imitate a biologically inspired network wiring diagram, so the constructed network has better biological interpretability and sparsity and mitigates the adverse effects of conventional RNN temporal processing, while the spiking neural network (SNN) is used to reduce computational complexity and increase computation speed.
Corresponding to the above method embodiment, referring to fig. 5, the embodiment of the present application further provides an end-to-end automatic driving system under complex road conditions. The system 200 may include:
a memory 201 for storing a computer program;
the processor 202 is configured to execute the computer program stored in the memory 201, and implement the following steps:
acquiring historical vehicle driving data under complex road conditions and building a data set; dividing the data set into a training set and a test set; inputting the training set into a feature extraction network to obtain high-dimensional feature vectors; establishing a spiking neural network and using it to perform spike mapping on the extracted high-dimensional feature vectors; inputting the spike-mapped high-dimensional feature vectors into a neural circuit network model to obtain predicted values; calculating the loss between the predicted values and the test set; adjusting the training parameters of the neural circuit network model according to the loss and training the model to obtain a prediction model; and acquiring a real-time vehicle front image under complex road conditions, inputting it into the prediction model to obtain steering wheel angle, vehicle speed, accelerator pedal depth, and brake pedal depth information, and taking this information as the decision result.
For the description of the apparatus provided in the embodiment of the present application, reference is made to the above method embodiment, and the description is omitted herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives can occur depending upon design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (7)

1. An end-to-end automatic driving method under complex road conditions is characterized by comprising the following steps:
acquiring historical vehicle driving data under complex road conditions, and manufacturing a data set;
dividing the data set into a training set and a test set;
inputting the training set into a feature extraction network to obtain a high-dimensional feature vector;
establishing a spiking neural network, and performing spike mapping on the extracted high-dimensional feature vector with the spiking neural network;
inputting the spike-mapped high-dimensional feature vector into a neural circuit network model to obtain a predicted value;
calculating a loss between the predicted value and the test set;
adjusting training parameters in the neural circuit network model according to the loss and training the neural circuit network model to obtain a prediction model;
acquiring a real-time vehicle front image under complex road conditions, inputting the real-time vehicle front image into the prediction model to obtain steering wheel angle, vehicle speed, accelerator pedal depth, and brake pedal depth information, and taking this information as the decision result.
2. The end-to-end automatic driving method under complex road conditions according to claim 1, wherein the historical vehicle driving data comprises historical vehicle front images and corresponding control information; the control information at least comprises steering wheel angle, vehicle speed, accelerator pedal depth and brake pedal depth data under complex road conditions.
3. The end-to-end automatic driving method under complex road conditions according to claim 2, wherein a data set is made using the historical vehicle front images and the corresponding control information; the historical vehicle front images in the data set form the training set, and the control information corresponding to the historical vehicle front images forms the test set.
4. The end-to-end automatic driving method under complex road conditions according to claim 1, wherein the neural circuit network model comprises a perception layer, a middle layer, a command layer, and an action layer; the perception layer receives the spike-mapped high-dimensional feature vector, the middle layer and the command layer process the feature vector, and the action layer outputs the predicted value.
5. The end-to-end automatic driving method under complex road conditions according to claim 3, wherein the method for calculating the loss between the predicted value and the test set is as follows:
$$\mathrm{Loss}=\frac{1}{n}\sum_{i=1}^{n}\left(y_{pre,i}-y_{real,i}\right)^{2}$$
where n is the number of samples in the data set, $y_{pre}$ is the predicted value of the control information, and $y_{real}$ is the control information in the test set.
6. The end-to-end automatic driving method under complex road conditions according to claim 1, wherein adjusting the training parameters in the neural circuit network model according to the loss and training the neural circuit network model to obtain the prediction model comprises:
feeding back the loss to the neural circuit network model and adjusting training parameters in the neural circuit network model;
training the neural circuit network model according to the loss until the loss is lower than a preset threshold value, and stopping training;
the neural circuit network model including training parameters is determined as a predictive model.
7. An end-to-end autopilot system under complex road conditions, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the end-to-end autopilot method under complex road conditions according to any one of claims 1 to 6 when executing said computer program.
CN202211717444.8A 2022-12-29 2022-12-29 End-to-end automatic driving method and system under complex road conditions Pending CN116331251A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211717444.8A CN116331251A (en) 2022-12-29 2022-12-29 End-to-end automatic driving method and system under complex road conditions

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211717444.8A CN116331251A (en) 2022-12-29 2022-12-29 End-to-end automatic driving method and system under complex road conditions

Publications (1)

Publication Number Publication Date
CN116331251A true CN116331251A (en) 2023-06-27

Family

ID=86893663

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211717444.8A Pending CN116331251A (en) 2022-12-29 2022-12-29 End-to-end automatic driving method and system under complex road conditions

Country Status (1)

Country Link
CN (1) CN116331251A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination