CN112364774A - Unmanned vehicle brain autonomous obstacle avoidance method and system based on impulse neural network - Google Patents

Unmanned vehicle brain autonomous obstacle avoidance method and system based on impulse neural network

Info

Publication number
CN112364774A
CN112364774A (application CN202011258252.6A)
Authority
CN
China
Prior art keywords
unmanned vehicle
neural network
obstacle avoidance
processor
impulse
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011258252.6A
Other languages
Chinese (zh)
Inventor
杨双鸣
张靖轩
胡植才
王江
邓斌
李会艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202011258252.6A
Publication of CN112364774A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/061Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using biological neurons, e.g. biological neurons connected to an integrated circuit
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Neurology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an unmanned vehicle brain autonomous obstacle avoidance method and system based on a spiking neural network. The method comprises the following steps: S1, collecting video data from the camera carried by the unmanned vehicle during simulated driving as training data; S2, preprocessing the training data; S3, constructing a convolutional neural network using a convolutional architecture, converting each neuron into an LIF spiking neuron, and inputting the preprocessed training data into the spiking neural network for training; S4, deploying the trained spiking neural network on the processor of the unmanned vehicle, where video frames collected in real time are passed through the network to obtain the obstacle-collision probability, and the processor adjusts the current forward speed and turning gain of the unmanned vehicle according to that probability. The beneficial effects of the invention are that computational efficiency is improved, video-frame processing time is reduced, and autonomous obstacle avoidance of the unmanned vehicle is realized; the method is also applicable to environments with dynamic obstacles.

Description

Unmanned vehicle brain autonomous obstacle avoidance method and system based on impulse neural network
Technical Field
The invention relates to the technical field of autonomous obstacle avoidance for unmanned vehicle brains, and in particular to an autonomous obstacle avoidance method and system for the unmanned vehicle brain based on a spiking neural network.
Background
In recent years, with the continuous progress of unmanned-driving technology, demand for unmanned vehicles keeps growing. From the earliest remote-controlled driving to line-tracking driving, the ultimate aim is fully autonomous driving. In the unmanned-driving process, autonomously avoiding various obstacles is the most important part: if the unmanned vehicle cannot reliably identify the obstacles on a road, serious safety accidents can occur, causing huge losses and injuries to people and the economy.
Therefore, how to realize autonomous obstacle avoidance has become a major problem that unmanned-driving technology must solve.
Disclosure of Invention
The invention aims to provide an unmanned vehicle brain autonomous obstacle avoidance method based on a spiking neural network, so as to realize autonomous obstacle avoidance in the unmanned-driving process. The spiking neural network has better biological interpretability and bionic properties and can greatly improve generalization ability, so the artificial neurons are replaced with LIF spiking neurons.
In order to solve the technical problem, the invention provides an unmanned vehicle brain autonomous obstacle avoidance method based on a spiking neural network, comprising the following steps:
S1: collecting video data from the camera carried by the unmanned vehicle during simulated driving as training data;
S2: preprocessing the training data of S1;
S3: first constructing a convolutional neural network using the convolutional architecture AlexNet, then converting each neuron into an LIF spiking neuron, and inputting the training data preprocessed in S2 into the spiking neural network for training;
S4: deploying the spiking neural network trained in S3 on the processor of the unmanned vehicle; a camera on the unmanned vehicle streams video frame data acquired in real time to the processor, the frames are passed through the spiking neural network to output the obstacle-collision probability, and the processor adjusts the current forward speed and turning gain of the unmanned vehicle according to that probability, decelerating and turning left or right to realize autonomous obstacle avoidance.
In this technical scheme, a large number of video sequences collected by the unmanned vehicle serve as training data and are first preprocessed, which improves the generalization ability of the trained spiking neural network and avoids overfitting. Combined with a lightweight spiking neural network, video sequences acquired in real time are fed into the network to obtain the corresponding collision probability; the processor then issues commands controlling the speed and turning gain of the unmanned vehicle according to that probability, realizing autonomous obstacle avoidance.
Preferably, in step S1, a monocular camera is fixed on the unmanned vehicle to collect video data, and a large number of obstacle pictures in different terrain environments are collected, so that training data cover different areas and variable obstacle environments.
Preferably, in step S2, preprocessing the training data comprises:
S21: manually labeling the video data frame by frame, where a video frame more than 2 m from the obstacle is labeled 0 and a video frame at 2 m or less from the obstacle is labeled 1;
S22: adding random noise to the images in the labeled video frames, further expanding the data set with augmentation operations such as flipping and random cropping, and finally obtaining the preprocessed training data.
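Steps S21–S22 can be sketched as follows (a minimal NumPy sketch; the noise amplitude, crop size, and function names are illustrative assumptions, not from the patent):

```python
import numpy as np

def label_frame(distance_m):
    # S21: frames more than 2 m from the obstacle are labeled 0 (no obstacle
    # ahead); frames at 2 m or closer are labeled 1 (obstacle ahead).
    return 0 if distance_m > 2.0 else 1

def augment_frame(frame, rng, crop=200):
    # S22: add random noise, randomly flip horizontally, then random-crop.
    noisy = frame + rng.normal(0.0, 5.0, size=frame.shape)
    flipped = noisy[:, ::-1] if rng.random() < 0.5 else noisy
    h, w = flipped.shape[:2]
    y = int(rng.integers(0, h - crop + 1))
    x = int(rng.integers(0, w - crop + 1))
    return flipped[y:y + crop, x:x + crop]
```

Each labeled frame would then be expanded into several augmented copies before training.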
Preferably, in step S3, the spiking neural network is built on the AlexNet architecture; AlexNet has 60 million parameters and 650,000 neurons, with 8 layers in total: 5 convolutional layers and 3 fully connected layers. The final output layer is replaced with a 2-way softmax classifier. Each convolutional layer consists of a pointwise convolution layer, a BN (batch-normalization) layer and a ReLU activation layer; dropout is applied in each convolutional layer, the output of the dropout-equipped convolutional layers is connected to the input of the fully connected layers, and the softmax classifier finally yields the two class probabilities for each frame, determining whether an obstacle lies ahead. Using the ReLU activation function in the convolutional layers effectively avoids the problems of gradient explosion and vanishing gradients.
Preferably, the dropout value in the convolution layer using the dropout method is preset to 0.5 to prevent overfitting.
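The dropout setting can be illustrated with a minimal inverted-dropout sketch (the inverted-dropout rescaling by 1/(1−p) is a standard implementation choice, not stated in the patent):

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    # Inverted dropout: during training, zero each unit with probability p
    # and rescale the survivors by 1/(1-p); at inference, pass x through.
    if not training or p == 0.0:
        return x
    rng = rng if rng is not None else np.random.default_rng()
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)
```

With p preset to 0.5, roughly half of the activations are zeroed per training pass, which regularizes the network against overfitting.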
Preferably, the channel-wise convolution layers use 5 × 5 and 3 × 3 convolution kernels.
Preferably, step S3 further comprises: optimizing the parameters of each layer of the spiking neural network with a softmax classifier and a cross-entropy loss function, calculated as:
L = −[y·log(ŷ) + (1 − y)·log(1 − ŷ)]

where ŷ represents the obstacle-collision probability output by the network and y represents the label of the video frame input to the spiking neural network.
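The binary cross-entropy loss above can be written as a one-line function (a minimal sketch; the clipping epsilon is an added numerical-safety assumption):

```python
import math

def binary_cross_entropy(y_hat, y, eps=1e-12):
    # L = -[ y*log(y_hat) + (1 - y)*log(1 - y_hat) ]
    y_hat = min(max(y_hat, eps), 1.0 - eps)  # guard against log(0)
    return -(y * math.log(y_hat) + (1 - y) * math.log(1.0 - y_hat))
```

The loss falls as the predicted probability ŷ approaches the label y, which is what drives the layer-wise parameter optimization.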
Preferably, in step S4, the step in which the processor modulates the current running speed of the unmanned vehicle according to the output collision probability comprises modulating the forward speed of the unmanned vehicle according to that probability to realize autonomous obstacle avoidance; the forward speed modulation formula of the unmanned vehicle is:
v_k = (1 − α)·v_{k−1} + α·(1 − p_t)·V_max
where v_k represents the modulated speed, p_t the collision probability, V_max the maximum forward speed of the unmanned vehicle, and α the modulation coefficient, with 0 ≤ α ≤ 1.
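The modulation formula can be implemented directly (a sketch; the default values follow the embodiment's V_max = 2 m/s and α = 0.7):

```python
def modulated_speed(v_prev, p_t, v_max=2.0, alpha=0.7):
    # v_k = (1 - alpha) * v_{k-1} + alpha * (1 - p_t) * V_max
    assert 0.0 <= alpha <= 1.0
    return (1.0 - alpha) * v_prev + alpha * (1.0 - p_t) * v_max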
The invention also provides an unmanned vehicle brain autonomous obstacle avoidance system based on a lightweight spiking neural network, comprising an unmanned vehicle carrying a monocular camera, a motion control system, and a corresponding FPGA hardware implementation platform, wherein: the unmanned vehicle acquires current real-time video sequence data through the monocular camera and transmits it to the processor; the graphics card is used to train the spiking neural network, after which the trained network is ported to the processor for deployment; the processor obtains the collision probability corresponding to the current video sequence from the output of the ported spiking neural network, derives the speed of the unmanned vehicle from a preset modulation formula, and sends control commands to the motion control system; and the motion control system adjusts the speed and turning gain of the unmanned vehicle according to the commands from the processor, realizing autonomous obstacle avoidance.
The beneficial effects of the invention are as follows: based on the convolutional architecture AlexNet, ordinary neurons are replaced with more biologically plausible spiking neurons, implemented on an FPGA hardware platform. While obstacles are identified accurately, computational efficiency is greatly improved and video-frame processing time is reduced, effectively realizing autonomous obstacle avoidance for the unmanned vehicle; the method is also applicable to environments with dynamic obstacles.
Drawings
FIG. 1 is a flow chart of an unmanned vehicle brain autonomous obstacle avoidance method based on a pulse neural network according to the present invention;
FIG. 2 is a schematic diagram of the architecture of the spiking neural network of the present invention;
FIG. 3 is an equivalent circuit diagram of the LIF pulse neuron model used in the present invention;
fig. 4 is a schematic structural diagram of an unmanned vehicle brain autonomous obstacle avoidance system based on a pulse neural network.
Detailed Description
The invention is described in further detail below with reference to the figures and specific embodiments. The drawings are for illustrative purposes only and are not to be construed as limiting the patent; for better illustration, certain features in the drawings may be omitted, enlarged or reduced and do not represent the size of an actual product. It will be understood by those skilled in the art that certain well-known structures and their descriptions may be omitted from the drawings.
Example 1
The present embodiment 1 provides an autonomous obstacle avoidance method for an unmanned vehicle based on a spiking neural network, and as shown in fig. 1, is a flowchart of the autonomous obstacle avoidance method for an unmanned vehicle based on a spiking neural network in the present embodiment 1.
The method for autonomous obstacle avoidance of the unmanned vehicle brain based on the impulse neural network provided by the embodiment 1 comprises the following steps:
S1: collecting video data from the camera carried by the unmanned vehicle during simulated driving as training data;
In this embodiment 1, video data are acquired by a monocular camera mounted on the unmanned vehicle, realizing collection of video data during simulated operation and yielding training data covering different terrain environments and obstacles of different sizes and shapes.
As shown in fig. 2, the training data image of the present embodiment 1 is shown.
S2: and preprocessing the training data.
In this step, preprocessing the training data comprises:
S21: manually labeling the video data frame by frame, where a video frame more than 2 m from the obstacle is labeled 0, indicating no obstacle ahead, and a video frame at 2 m or less from the obstacle is labeled 1, indicating an obstacle ahead;
S22: adding random noise to the images in the labeled video frames, further expanding the data set with augmentation operations such as flipping and random cropping, and finally obtaining the preprocessed training data.
S3: first construct a convolutional neural network using the convolutional architecture AlexNet, then convert each neuron into an LIF spiking neuron, and input the preprocessed training data into the spiking neural network for training;
In this step, the spiking neural network is built on the AlexNet architecture; AlexNet has 60 million parameters and 650,000 neurons, with 8 layers in total: 5 convolutional layers and 3 fully connected layers. The final output layer is replaced with a 2-way softmax classifier. Each convolutional layer consists of a pointwise convolution layer, a BN (batch-normalization) layer and a ReLU activation layer; dropout is applied in each convolutional layer, the output of the dropout-equipped convolutional layers is connected to the input of the fully connected layers, and the softmax classifier finally yields the two class probabilities for each frame, determining whether an obstacle lies ahead. Using the ReLU activation function in the convolutional layers effectively avoids the problems of gradient explosion and vanishing gradients.
Fig. 2 is a schematic structural diagram of the spiking neural network of this embodiment 1; it takes AlexNet as the main framework and converts the neurons in AlexNet into LIF spiking neurons.
Fig. 3 shows the equivalent circuit of the LIF spiking neuron model used in this embodiment 1.
In this embodiment 1, the dropout value in the convolutional layers using dropout is preset to 0.5; the channel-wise convolution layers use 5 × 5 and 3 × 3 convolution kernels.
The method also includes a convolutional-network optimization step: the parameters of each layer of the convolutional neural network are optimized with a binary cross-entropy loss function, calculated as:
L = −[y·log(ŷ) + (1 − y)·log(1 − ŷ)]

where ŷ represents the collision probability output by the convolutional neural network and y represents the label of the video frame input to the network.
The convolutional-network training in this embodiment 1 uses stochastic gradient descent (SGD) as the optimizer, with a learning rate of 0.001, a batch_size of 64, and 20 epochs.
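The training configuration (SGD, minibatches, a fixed epoch count) can be illustrated with a minimal NumPy sketch on a logistic-regression stand-in for the network; the surrogate model, the synthetic data, and the demo learning rate are assumptions for illustration, not the patent's network:

```python
import numpy as np

def sgd_train(w, data, labels, lr=0.001, batch_size=64, epochs=20, rng=None):
    # Plain minibatch SGD: w <- w - lr * grad, with the batch size and
    # epoch count taken from the text (lr may be raised for a quick demo).
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(data)
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            x, y = data[idx], labels[idx]
            p = 1.0 / (1.0 + np.exp(-x @ w))   # sigmoid probability
            grad = x.T @ (p - y) / len(idx)    # binary cross-entropy gradient
            w = w - lr * grad
    return w
```

On linearly separable toy data this drives the weights toward the separating direction within the stated 20 epochs.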
Here the generic neuron model is converted into the LIF neuron model. The membrane current satisfies I(t) = I_R + I_C, where I_R is the current through the resistor R, u_R the voltage across the resistor, I_C the current through the capacitor, u_C the voltage across the capacitor, and u_rest the resting potential. From Ohm's law and Kirchhoff's current law:

I(t) = (u − u_rest)/R + C·du/dt

Defining the membrane time constant τ_m = RC, an identity substitution gives:

τ_m·du/dt = −(u − u_rest) + R·I(t)

Assume that at the initial time t = 0 the membrane potential is u_rest + Δu; after a sufficient amount of time the neuron returns to the resting state, during which (ignoring the absolute refractory period) the membrane potential decays exponentially. The solution of the above equation with I(t) = 0 is:

u(t) − u_rest = Δu·exp(−t/τ_m)
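The exponential decay of the membrane potential can be checked numerically; this sketch compares forward-Euler integration of the membrane equation (with I(t) = 0) against the closed-form solution — the step size and τ_m value are illustrative assumptions:

```python
import math

def lif_decay(delta_u, tau_m, t):
    # Analytic solution: u(t) - u_rest = delta_u * exp(-t / tau_m)
    return delta_u * math.exp(-t / tau_m)

def simulate_lif_decay(delta_u, tau_m=0.02, dt=1e-4, steps=2000):
    # Forward-Euler integration of tau_m * du/dt = -(u - u_rest), I(t) = 0,
    # tracking the offset u - u_rest; it should follow the exponential above.
    u = delta_u
    trace = []
    for _ in range(steps):
        u += dt * (-u / tau_m)
        trace.append(u)
    return trace
```

The simulated trace decays monotonically and matches the analytic exponential to within the Euler discretization error.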
S4: the trained spiking neural network is deployed on the processor of the unmanned vehicle; a camera on the unmanned vehicle streams video frame data acquired in real time to the processor, the frames are passed through the spiking neural network to output the obstacle-collision probability, and the processor adjusts the current forward speed and turning gain of the unmanned vehicle according to that probability, decelerating and turning left or right to realize autonomous obstacle avoidance.
In this step, the specific procedure by which the processor modulates the speed of the current unmanned vehicle according to the output collision probability comprises modulating the forward speed of the unmanned vehicle according to that probability to realize autonomous obstacle avoidance; the forward speed modulation formula of the unmanned vehicle is:
v_k = (1 − α)·v_{k−1} + α·(1 − p_t)·V_max
where v_k represents the modulated speed, p_t the collision probability, V_max the maximum forward speed of the unmanned vehicle, and α the modulation coefficient, with 0 ≤ α ≤ 1.
In this embodiment 1, the maximum forward speed V_max of the unmanned vehicle is set to 2 m/s, the running height of the unmanned vehicle is controlled at about 2 m, the modulation coefficient α is set to 0.7, and the minimum speed V_min of the unmanned vehicle is set to 0.01 m/s.
In the specific implementation, when the unmanned vehicle encounters an obstacle, the video frame currently acquired by the onboard monocular camera is processed by the trained spiking convolutional neural network, which outputs a collision probability p_t; the corresponding modulated speed v_k is obtained from p_t and the speed modulation formula. As the unmanned vehicle approaches the obstacle, v_k is gradually reduced by the modulation; when v_k falls to the preset value V_min = 0.01 m/s, the unmanned vehicle translates along the y-axis of its body. Once it has translated to a position where no obstacle lies in front of the monocular camera, the collision probability p_t output by the network falls, v_k rises again, and the unmanned vehicle continues forward.
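The decelerate-then-sidestep behaviour described above can be sketched as a single control step (the function name, return convention, and the "translate_y" action label are illustrative assumptions):

```python
def avoidance_step(v_prev, p_t, v_max=2.0, v_min=0.01, alpha=0.7):
    # One control tick: modulate the forward speed from the collision
    # probability; once the speed has decayed to v_min, sidestep along
    # the body y-axis instead of driving forward.
    v_k = (1.0 - alpha) * v_prev + alpha * (1.0 - p_t) * v_max
    if v_k <= v_min:
        return v_min, "translate_y"
    return v_k, "forward"
```

Repeated high collision probabilities decay the speed to V_min and trigger the sidestep; as soon as the view clears and p_t drops, the speed recovers and forward motion resumes.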
Example 2
Fig. 4 is a schematic structural diagram of the unmanned vehicle brain autonomous obstacle avoidance system based on the lightweight neural network of this embodiment 2.
The unmanned vehicle brain autonomous obstacle avoidance system based on the lightweight neural network proposed in this embodiment 2 comprises an unmanned vehicle equipped with a monocular camera, a motion control system, and a corresponding FPGA hardware implementation platform, wherein:
the unmanned vehicle acquires a current video sequence through a monocular camera carried by the unmanned vehicle and transmits the current video sequence to an NVIDIA GPU hardware platform; the GPU hardware platform is used for training the pulse convolution neural network and then transplanting the trained pulse neural network into an ARM CPU processor for application;
the ARM CPU processor obtains the collision probability corresponding to the current video sequence according to the output of the pulse neural network, obtains the running modulation speed of the unmanned vehicle according to a preset modulation formula, and sends a modulation command to a running control system; and the operation control system adjusts the operation speed of the unmanned vehicle according to the modulation command sent from the processor to realize autonomous obstacle avoidance.
In this embodiment 2, an FPGA hardware platform is used for training, and the evaluation metrics adopted are accuracy and F1-score, where:
F1 = (2 × precision × recall) / (precision + recall)
where precision denotes the precision and recall denotes the recall.
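As a sketch, the F1 computation from confusion-matrix counts (taking true positives, false positives, and false negatives as inputs is an assumption; the patent gives only the formula):

```python
def f1_score(tp, fp, fn):
    # F1 = (2 * precision * recall) / (precision + recall)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```

F1 is the harmonic mean of precision and recall, so it penalizes a classifier that is strong on only one of the two.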
The trained spiking convolutional neural network is then ported to the mobile development platform of the unmanned vehicle for inference.
In this embodiment 2, the maximum forward speed V_max of the unmanned vehicle 1 is set to 2 m/s, the running height of the unmanned vehicle 1 is controlled at about 2 m, the modulation coefficient α is set to 0.7, and V_min is set to 0.01 m/s.
In the specific implementation, when the unmanned vehicle 1 encounters an obstacle, the monocular camera 2 mounted on it transmits the currently acquired video frame to the processor 4 for processing. The processor 4 is preloaded with the convolutional network trained on the graphics card 3; the network outputs the current collision probability p_t of the unmanned vehicle 1, and the processor 4 derives the corresponding modulated speed v_k from p_t and the speed modulation formula and transmits it to the motion control platform 5 to control the running speed of the unmanned vehicle 1.
As the unmanned vehicle 1 approaches the obstacle, v_k is gradually reduced by the modulation; when v_k falls to the preset minimum speed V_min, the unmanned vehicle 1 translates along the y-axis of its body. Once it has translated to a position where no obstacle lies in front of the monocular camera, the collision probability p_t output by the network falls, v_k rises, and the unmanned vehicle 1 continues forward, realizing the autonomous obstacle avoidance function.
In the present invention, the same or similar reference numerals correspond to the same or similar parts. The terms describing positional relationships in the drawings are for illustrative purposes only and are not to be construed as limiting the patent. It should be understood that the above embodiments are merely examples for clearly illustrating the invention and do not limit its embodiments; other variations and modifications will be apparent to persons skilled in the art in light of the above description, and it is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (9)

1. An unmanned vehicle brain autonomous obstacle avoidance method based on an impulse neural network, comprising the following steps:
S1: collecting video data from the camera carried by the unmanned vehicle during simulated driving as training data;
S2: preprocessing the training data obtained in step S1;
S3: first constructing a convolutional neural network using the convolutional architecture AlexNet, then converting each neuron into an LIF impulse neuron, and inputting the training data preprocessed in step S2 into the impulse neural network for training;
S4: deploying the impulse neural network trained in step S3 on the processor of the unmanned vehicle; a camera on the unmanned vehicle streams video frame data acquired in real time to the processor, the frames are passed through the impulse neural network to output the obstacle-collision probability, and the processor adjusts the current forward speed and turning gain of the unmanned vehicle according to that probability, decelerating and turning left or right to realize autonomous obstacle avoidance.
2. The unmanned vehicle brain autonomous obstacle avoidance method based on the impulse neural network as claimed in claim 1, characterized in that: in step S1, a monocular camera fixed on the unmanned vehicle acquires the video data, and a large number of obstacle pictures in different terrain environments are collected, so that training data cover different areas and variable obstacle environments.
3. The unmanned vehicle brain autonomous obstacle avoidance method based on the impulse neural network as claimed in claim 1, characterized in that in step S2, preprocessing the training data comprises:
S21: manually labeling the video data frame by frame, where a video frame more than 2 m from the obstacle is labeled 0 and a video frame at 2 m or less from the obstacle is labeled 1;
S22: adding random noise to the images in the labeled video frames, further expanding the data set with augmentation operations such as flipping and random cropping, and finally obtaining the preprocessed training data.
4. The unmanned vehicle brain autonomous obstacle avoidance method based on the impulse neural network as claimed in claim 1, characterized in that: in step S3, the impulse neural network is built on the AlexNet architecture; AlexNet has 60 million parameters and 650,000 neurons, with 8 layers in total: 5 convolutional layers and 3 fully connected layers; the final output layer is replaced with a 2-way softmax classifier; each convolutional layer consists of a pointwise convolution layer, a BN normalization layer and a ReLU activation layer; dropout is applied in each convolutional layer, the output of the dropout-equipped convolutional layers is connected to the input of the fully connected layers, and the softmax classifier finally yields the two class probabilities for each frame, determining whether an obstacle lies ahead; the ReLU activation function is adopted in the convolutional layers to effectively avoid gradient explosion and vanishing gradients.
5. The unmanned vehicle brain autonomous obstacle avoidance method based on the impulse neural network as claimed in claim 4, characterized in that: the dropout value in the convolutional layers using dropout is preset to 0.5 to prevent overfitting.
6. The unmanned vehicle brain autonomous obstacle avoidance method based on the impulse neural network as claimed in claim 4, characterized in that: the channel-wise convolution layers use 5 × 5 and 3 × 3 convolution kernels.
7. The unmanned vehicle brain autonomous obstacle avoidance method based on the impulse neural network as claimed in claim 1, characterized in that: the step S3 further comprises: optimizing the parameters of each layer of the impulse neural network by adopting a softmax classifier and a cross entropy loss function, calculated as:

L = -[y·log(ŷ) + (1 - y)·log(1 - ŷ)]

wherein ŷ represents the collision probability output by the network, and y represents the label of the video frame input to the impulse neural network.
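The cross entropy loss of claim 7 can be sketched in plain Python; the `eps` clamping is a numerical-safety assumption to keep log() finite at probabilities of exactly 0 or 1:

```python
import math

def binary_cross_entropy(y_hat, y, eps=1e-12):
    """Binary cross entropy between the network's output collision
    probability y_hat and the frame label y (0 = clear, 1 = obstacle
    within 2 m). eps guards log() against arguments of 0."""
    y_hat = min(max(y_hat, eps), 1.0 - eps)
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

print(binary_cross_entropy(0.99, 1) < 0.02)  # True: confident and correct
print(binary_cross_entropy(0.99, 0) > 4.0)   # True: confident but wrong
```

The loss is near zero when a confident prediction matches the label and grows without bound as a confident prediction contradicts it, which is what drives the layer-wise parameter optimization.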
8. The unmanned vehicle brain autonomous obstacle avoidance method based on the impulse neural network as claimed in claim 1, characterized in that: in the step S4, the processor modulates the current running speed of the unmanned vehicle according to the output collision probability, so as to realize autonomous obstacle avoidance of the unmanned vehicle; the forward speed of the unmanned vehicle is modulated by the formula:

v_k = (1 - α)·v_{k-1} + α·(1 - p_t)·V_max

wherein v_k represents the modulated speed, p_t represents the collision probability, V_max represents the maximum forward speed of the unmanned vehicle, and α represents a modulation coefficient satisfying 0 ≤ α ≤ 1.
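The speed modulation formula of claim 8 is a first-order low-pass update toward (1 − p_t)·V_max. A minimal sketch, with illustrative values for V_max and α:

```python
def modulate_speed(v_prev, p_collision, v_max, alpha):
    """One step of the claimed forward-speed update:
    v_k = (1 - alpha) * v_{k-1} + alpha * (1 - p_t) * V_max.
    alpha in [0, 1] trades smoothness (small alpha) against
    responsiveness (large alpha)."""
    assert 0.0 <= alpha <= 1.0
    return (1 - alpha) * v_prev + alpha * (1 - p_collision) * v_max

# With alpha = 1 the vehicle jumps straight to (1 - p_t) * V_max:
print(modulate_speed(2.0, 0.25, 4.0, 1.0))  # 3.0
# With a sustained collision probability of 1 the speed decays toward 0:
v = 3.0
for _ in range(20):
    v = modulate_speed(v, 1.0, 4.0, 0.5)
print(v < 0.01)  # True
```

Because the update blends the previous speed with the probability-scaled target, the vehicle slows smoothly as obstacles approach rather than braking discontinuously.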
9. A system for the unmanned vehicle brain autonomous obstacle avoidance method based on the impulse neural network as claimed in claim 1, comprising: an unmanned vehicle equipped with a monocular camera, a motion control system and a corresponding FPGA hardware implementation platform, characterized in that: the unmanned vehicle acquires current real-time video sequence data through the monocular camera and transmits it to the processor; a graphics card is used to train the impulse neural network, after which the trained network is ported to the processor for deployment; the processor obtains the collision probability corresponding to the current video sequence from the output of the ported impulse neural network, derives the speed of the unmanned vehicle according to the preset modulation formula, and sends control instructions to the motion control system; the motion control system adjusts the speed and turning gain of the unmanned vehicle according to the instructions from the processor, thereby realizing autonomous obstacle avoidance.
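One iteration of the claimed camera-to-controller pipeline can be sketched as follows; `stub_network` stands in for the ported impulse neural network, and V_max = 4.0, α = 0.5 are illustrative values:

```python
def obstacle_avoidance_step(frame, network, v_prev, v_max=4.0, alpha=0.5):
    """One control-loop iteration: camera frame -> network inference ->
    collision probability -> modulated speed command sent to the motion
    control system."""
    p_collision = network(frame)  # processor runs the ported network
    v_cmd = (1 - alpha) * v_prev + alpha * (1 - p_collision) * v_max
    return v_cmd                  # forwarded to the motion controller

# Stub network: "sees" an obstacle whenever the mean pixel value is high.
def stub_network(frame):
    mean = sum(frame) / len(frame)
    return 1.0 if mean > 128 else 0.0

clear_frame = [10] * 100
blocked_frame = [200] * 100
v = 4.0
v = obstacle_avoidance_step(clear_frame, stub_network, v)    # stays fast
print(v)  # 4.0
v = obstacle_avoidance_step(blocked_frame, stub_network, v)  # slows down
print(v)  # 2.0
```

In the claimed system this loop runs on the processor, with the monocular camera supplying frames on one side and the motion control system consuming the speed and turning-gain commands on the other.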
CN202011258252.6A 2020-11-12 2020-11-12 Unmanned vehicle brain autonomous obstacle avoidance method and system based on impulse neural network Pending CN112364774A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011258252.6A CN112364774A (en) 2020-11-12 2020-11-12 Unmanned vehicle brain autonomous obstacle avoidance method and system based on impulse neural network


Publications (1)

Publication Number Publication Date
CN112364774A true CN112364774A (en) 2021-02-12

Family

ID=74516033

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011258252.6A Pending CN112364774A (en) 2020-11-12 2020-11-12 Unmanned vehicle brain autonomous obstacle avoidance method and system based on impulse neural network

Country Status (1)

Country Link
CN (1) CN112364774A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113705115A (en) * 2021-11-01 2021-11-26 北京理工大学 Ground unmanned vehicle chassis motion and target striking cooperative control method and system
CN114037050A (en) * 2021-10-21 2022-02-11 大连理工大学 Robot degradation environment obstacle avoidance method based on internal plasticity of pulse neural network
CN116080688A (en) * 2023-03-03 2023-05-09 北京航空航天大学 Brain-inspiring-like intelligent driving vision assisting method, device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107169956A (en) * 2017-04-28 2017-09-15 西安工程大学 Yarn dyed fabric defect detection method based on convolutional neural networks
CN108133188A (en) * 2017-12-22 2018-06-08 武汉理工大学 A kind of Activity recognition method based on motion history image and convolutional neural networks
CN110908399A (en) * 2019-12-02 2020-03-24 广东工业大学 Unmanned aerial vehicle autonomous obstacle avoidance method and system based on light weight type neural network
CN111709967A (en) * 2019-10-28 2020-09-25 北京大学 Target detection method, target tracking device and readable storage medium


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114037050A (en) * 2021-10-21 2022-02-11 大连理工大学 Robot degradation environment obstacle avoidance method based on internal plasticity of pulse neural network
CN114037050B (en) * 2021-10-21 2022-08-16 大连理工大学 Robot degradation environment obstacle avoidance method based on internal plasticity of pulse neural network
CN113705115A (en) * 2021-11-01 2021-11-26 北京理工大学 Ground unmanned vehicle chassis motion and target striking cooperative control method and system
CN113705115B (en) * 2021-11-01 2022-02-08 北京理工大学 Ground unmanned vehicle chassis motion and target striking cooperative control method and system
CN116080688A (en) * 2023-03-03 2023-05-09 北京航空航天大学 Brain-inspiring-like intelligent driving vision assisting method, device and storage medium

Similar Documents

Publication Publication Date Title
CN112364774A (en) Unmanned vehicle brain autonomous obstacle avoidance method and system based on impulse neural network
CN110908399B (en) Unmanned aerial vehicle autonomous obstacle avoidance method and system based on lightweight neural network
Kaiser et al. Towards a framework for end-to-end control of a simulated vehicle with spiking neural networks
US11561544B2 (en) Indoor monocular navigation method based on cross-sensor transfer learning and system thereof
JP7258137B2 (en) Fast CNN Classification of Multiframe Semantic Signals
US20220156576A1 (en) Methods and systems for predicting dynamic object behavior
CN111860269B (en) Multi-feature fusion series RNN structure and pedestrian prediction method
CN110850877A (en) Automatic driving trolley training method based on virtual environment and deep double Q network
CN110490136A (en) A kind of human body behavior prediction method of knowledge based distillation
CN114067166A (en) Apparatus and method for determining physical properties of a physical object
CN108288038A (en) Night robot motion's decision-making technique based on scene cut
CN111881802A (en) Traffic police gesture recognition method based on double-branch space-time graph convolutional network
Dong et al. Real-time survivor detection in UAV thermal imagery based on deep learning
CN108921044A (en) Driver's decision feature extracting method based on depth convolutional neural networks
Dong et al. A vision-based method for improving the safety of self-driving
Prasetyo et al. Spatial Based Deep Learning Autonomous Wheel Robot Using CNN
Li A hierarchical autonomous driving framework combining reinforcement learning and imitation learning
Dangskul et al. Real-Time Control Using Convolution Neural Network for Self-Driving Cars
CN111552294A (en) Outdoor robot path-finding simulation system and method based on time dependence
CN116080688A (en) Brain-inspiring-like intelligent driving vision assisting method, device and storage medium
DE102022109385A1 (en) Reward feature for vehicles
AU2019100967A4 (en) An environment perception system for unmanned driving vehicles based on deep learning
Yu et al. MAVRL: Learn to Fly in Cluttered Environments with Varying Speed
Aphiratsakun et al. PID control for a path-following error-producing neural network
Yin Design of Deep Learning Based Autonomous Driving Control Algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Yang Shuangming

Inventor after: Zhang Jingxuan

Inventor after: Hu Zhicai

Inventor after: Wang Jiang

Inventor after: Cheng Jian

Inventor after: Deng Bin

Inventor after: Li Huiyan

Inventor before: Yang Shuangming

Inventor before: Zhang Jingxuan

Inventor before: Hu Zhicai

Inventor before: Wang Jiang

Inventor before: Deng Bin

Inventor before: Li Huiyan

CB03 Change of inventor or designer information
RJ01 Rejection of invention patent application after publication

Application publication date: 20210212

RJ01 Rejection of invention patent application after publication