CN109960278B - LGMD-based bionic obstacle avoidance control system and method for unmanned aerial vehicle - Google Patents


Info

Publication number
CN109960278B
CN109960278B
Authority
CN
China
Prior art keywords
lgmd
membrane potential
layer
neuron
obstacle avoidance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910281845.5A
Other languages
Chinese (zh)
Other versions
CN109960278A (en)
Inventor
马兴灶
赵剑楠
岳士岗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dragon Totem Technology Hefei Co ltd
Original Assignee
Lingnan Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lingnan Normal University
Priority to CN201910281845.5A
Publication of CN109960278A
Application granted
Publication of CN109960278B

Classifications

    • G05B13/027: Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion, electric, the criterion being a learning criterion using neural networks only
    • G05B13/042: Adaptive control systems involving the use of models or simulators in which a parameter or coefficient is automatically adjusted to optimise the performance
    • G05B13/048: Adaptive control systems involving the use of models or simulators using a predictor
    • G05D1/0088: Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot, characterized by the autonomous decision making process, e.g. artificial intelligence, predefined behaviours
    • G05D1/101: Simultaneous control of position or course in three dimensions specially adapted for aircraft

Abstract

The invention provides an LGMD (Lobula Giant Movement Detector)-based bionic obstacle avoidance control system for an unmanned aerial vehicle, which comprises a flight control subsystem, an optical flow sensor, a driving motor, an embedded LGMD detector, a camera, a wireless communication module and a ground station PC. The optical flow sensor and the embedded LGMD detector are electrically connected with the flight control subsystem; the wireless communication module and the driving motor are electrically connected with the output end of the flight control subsystem; the input end of the embedded LGMD detector is in signal connection with the output end of the camera; and the wireless communication module is in wireless communication connection with the ground station PC. The invention further provides an LGMD-based bionic obstacle avoidance control method for the unmanned aerial vehicle: by building an LGMD neural network and segmenting the field-of-view image, spatial direction selection and scene prediction are realized during flight, enabling real-time and efficient obstacle avoidance flight of the unmanned aerial vehicle in unknown environments.

Description

LGMD-based bionic obstacle avoidance control system and method for unmanned aerial vehicle
Technical Field
The invention relates to the technical field of unmanned aerial vehicles, and in particular to an LGMD (Lobula Giant Movement Detector)-based bionic obstacle avoidance control system and an LGMD-based bionic obstacle avoidance control method for an unmanned aerial vehicle.
Background
Unmanned aerial vehicles (UAVs) have wide application prospects in scenarios such as geographic surveying, agricultural aviation and hazard detection, and safety has always been a focus of attention, particularly in complex environments. Traditional UAVs use GPS and optical flow for path planning and collision detection, combined with sensors such as ultrasonic, infrared and laser rangefinders; however, such methods depend heavily on the material, texture and background complexity of obstacles and can only be used in simple, specific environments. In recent years, obstacle avoidance based on biological vision has become a research hotspot for scholars at home and abroad owing to its efficiency and flexibility. Foreign scholars have carried out a large number of biological experiments on insect visual navigation and obstacle avoidance mechanisms, and their in-depth studies have produced a large body of reusable results. Anatomical experiments on the locust visual system have proved that the Lobula Giant Movement Detector (LGMD) is the main neuron responsible for collision early warning. Rind F.C., Bramwell D.I. and others proposed the classic 4-layer LGMD input neural network structure, which detects approaching objects through an excitation-inhibition mechanism, initiating research on LGMD-based collision early warning. Yue S. et al. introduced new neurons to enhance the LGMD collision early warning capability, proved the validity and robustness of the LGMD network for collision early warning, and transplanted the LGMD network onto micro-robots to study collision early warning under swarm conditions.
However, at present, application research of the LGMD on unmanned aerial vehicles remains scarce: the obstacle avoidance direction is still selected at random, and prediction is lacking in unknown scenes.
Disclosure of Invention
The invention provides an LGMD-based unmanned aerial vehicle bionic obstacle avoidance control system for overcoming the technical defects that the traditional unmanned aerial vehicle collision detection method depends on the complexity of barrier materials, textures and backgrounds and can only be used in simple and specific environments, and improving the flexibility and efficiency of unmanned aerial vehicle obstacle avoidance.
The invention further provides an LGMD-based bionic obstacle avoidance control method for the unmanned aerial vehicle.
In order to solve the technical problems, the technical scheme of the invention is as follows:
an LGMD-based bionic obstacle avoidance control system of an unmanned aerial vehicle comprises a flight control subsystem, an optical flow sensor, a driving motor, an embedded LGMD detector, a camera, a wireless communication module and a ground station PC; wherein:
the optical flow sensor and the embedded LGMD detector are electrically connected with the flight control subsystem for information interaction;
the wireless communication module and the driving motor are electrically connected with the output end of the flight control subsystem;
the input end of the embedded LGMD detector is in signal connection with the output end of the camera;
the wireless communication module is in wireless communication connection with the ground station PC.
The embedded LGMD detector is provided with an LGMD neural network; the video information collected by the camera is processed through the LGMD neural network to obtain an obstacle avoidance control instruction, which is output to the flight control subsystem to realize obstacle avoidance control of the unmanned aerial vehicle.
Wherein the LGMD neural network comprises P-layer neurons, E-layer neurons, I-layer neurons, S-layer neurons, G-layer neurons, LGMD neurons, and feed-forward inhibition (FFI) neurons; wherein:
the P-layer neuron acquires the field image information of an input video and responds to frame difference to obtain a P-layer neuron membrane potential;
the P layer neuronal membrane potential acts directly as the excitatory membrane potential of the E layer neurons;
the I-layer neurons receive the P-layer output delayed by one frame and perform local inhibition to obtain an inhibitory membrane potential;
the S-layer neurons converge the output within the corresponding receptive field of the I-layer neurons and suppress the E-layer neurons through the inhibitory membrane potential to obtain the S-layer neuron membrane potential;
the G-layer neurons are used for enhancing and extracting colliding objects against a complex background; the G-layer neuron membrane potential is calculated from the S-layer neuron membrane potential;
the LGMD neuron diagonally segments the field-of-view image information to obtain 4 orientation regions; the membrane potentials of the 4-orientation C-LGMD are calculated from the G-layer neuron membrane potential and the 4 orientation regions, and are added together to obtain the LGMD neuron membrane potential;
the feed-forward inhibition FFI neuron directly acquires the field-of-view image information from the P-layer neurons and suppresses the LGMD neuron membrane potential, so that an obstacle avoidance control instruction is obtained and output to the flight control subsystem.
The system further comprises an inertial sensor, wherein the inertial sensor is integrated on the flight control subsystem and is electrically connected with the flight control subsystem.
The system further comprises a laser sensor, and the laser sensor is electrically connected with the input end of the optical flow sensor.
In this scheme, the core of the flight control subsystem is an STM32F407V microcontroller; the inertial sensor main body is an MPU6050 chip; the main body of the optical flow sensor is a PX4Flow module; the main body of the wireless communication module is an nRF24L01. The inertial sensor is used for recording attitude information of the unmanned aerial vehicle, the optical flow sensor serves as a horizontal-plane position and speed feedback device, and the camera collects video information in real time. The flight control subsystem calculates the PWM value corresponding to each driving motor according to the instruction output by the embedded LGMD detector, and finally outputs the PWM values to the four driving motors of the unmanned aerial vehicle to realize obstacle avoidance control; the wireless communication module returns real-time data to the ground station PC.
An LGMD-based bionic obstacle avoidance control method for an unmanned aerial vehicle comprises the following steps:
s1: performing real-time video acquisition through a camera to obtain an input video;
s2: the embedded LGMD detector acquires the view field image information of an input video, obtains an obstacle avoidance control instruction through LGMD neural network calculation, and outputs the obstacle avoidance control instruction to the flight control subsystem to realize the obstacle avoidance control of the unmanned aerial vehicle.
The specific process of calculating the obstacle avoidance control instruction by the LGMD neural network comprises the following steps:
S21: the P-layer neurons acquire the field-of-view image information of the input video and respond to the frame difference to obtain the P-layer neuron membrane potential P_f(x, y), specifically:

P_f(x, y) = L_f(x, y) - L_{f-1}(x, y);    (1)

wherein: f denotes the f-th frame in the video sequence, (x, y) is the position of the pixel in the network layer, and L_f(x, y) is the pixel value of the input field-of-view image;
S22: the output of the P-layer neurons serves as the input of the E-layer and I-layer neurons; the E-layer neurons directly receive the P-layer output, while the I-layer neurons receive the P-layer output delayed by one frame and perform local inhibition, specifically expressed as:

E_f(x, y) = P_f(x, y);    (2)

I_f(x, y) = Σ_i Σ_j P_{f-1}(x + i, y + j) w_i(i, j),  (i, j) ≠ (0, 0);    (3)

[the local inhibition weight matrix w_i is given as an image in the original patent and is not reproduced here]    (4)

wherein: E_f(x, y) is the excitatory membrane potential, i.e. the E-layer neuron membrane potential; I_f(x, y) is the inhibitory membrane potential, i.e. the I-layer neuron membrane potential; w_i(i, j) is the local inhibition weight; i and j are not both 0;
S23: the S-layer neurons converge the output within the corresponding receptive field of the I-layer neurons, and the E-layer neurons are suppressed through the inhibitory membrane potential to obtain the S-layer neuron membrane potential, specifically:

S_f(x, y) = E_f(x, y) - I_f(x, y) W_I;    (5)

wherein: S_f(x, y) is the S-layer neuron membrane potential; W_I is the inhibition weight matrix;
S24: the G-layer neurons are used for enhancing and extracting colliding objects against the complex background; the G-layer neuron membrane potential is calculated from the S-layer neuron membrane potential, specifically (the original equations are given as images; they are reconstructed here following the classical LGMD passing-kernel formulation):

Ce_f(x, y) = Σ_{i=-r..r} Σ_{j=-r..r} S_f(x + i, y + j) w_e(i, j);    (6)

G_f(x, y) = S_f(x, y) Ce_f(x, y);    (7)

wherein: G_f(x, y) is the G-layer neuron membrane potential; [w_e] is the convolution kernel; r is the convolution kernel radius, with r = 1;

a threshold T_de is set to filter out weak excitation points, specifically:

G̃_f(x, y) = G_f(x, y), if G_f(x, y) C_de ≥ T_de;  G̃_f(x, y) = 0, otherwise;    (8)

wherein: G̃_f(x, y) is the filtered G-layer neuron membrane potential; C_de is a decay coefficient taking values in [0, 1]; T_de is the filtering threshold;
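The G-layer enhancement and weak-excitation filtering can be sketched as below; the uniform 3×3 kernel and the values of C_de and T_de are illustrative assumptions, and the convolution output is used directly as G (a simplification of the passing-kernel step):

```python
import numpy as np

W_E = np.full((3, 3), 1.0 / 9.0)   # assumed 3x3 kernel [w_e], radius r = 1
C_DE = 0.5                          # assumed decay coefficient in [0, 1]
T_DE = 1.0                          # assumed filtering threshold

def g_layer(s):
    """Smooth the S output with [w_e] and filter weak excitation (eq. 8)."""
    h, w = s.shape
    padded = np.pad(s, 1, mode="edge")
    g = np.zeros((h, w))
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            g += W_E[di + 1, dj + 1] * padded[1 + di:1 + di + h,
                                              1 + dj:1 + dj + w]
    # keep a point only if its decayed value reaches the threshold
    return np.where(g * C_DE >= T_DE, g, 0.0)
```

Isolated low-amplitude responses are removed, while spatially clustered excitation (an expanding edge) survives the threshold.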
S25: when the field-of-view image information reaches the LGMD neuron, it is diagonally segmented; the y-axis coordinates of the two diagonal lines are expressed as:

Diag1(x) = (h / w) x;    (9)

Diag2(x) = h - (h / w) x;    (10)

wherein: Diag1 and Diag2 are the y coordinates corresponding to x on the two diagonal lines; w is the width of the image; h is the height of the image; the two diagonals split the field of view into upper, lower, left and right triangular regions Ω_U, Ω_D, Ω_L and Ω_R, from which the membrane potentials of the 4-orientation C-LGMD are obtained:

U_LGMD = Σ_{(x,y)∈Ω_U} G̃_f(x, y);    (11)
D_LGMD = Σ_{(x,y)∈Ω_D} G̃_f(x, y);    (12)
L_LGMD = Σ_{(x,y)∈Ω_L} G̃_f(x, y);    (13)
R_LGMD = Σ_{(x,y)∈Ω_R} G̃_f(x, y);    (14)

wherein: U_LGMD, D_LGMD, L_LGMD and R_LGMD are the membrane potentials of the upper, lower, left and right orientations of the image respectively; the membrane potentials of the 4-orientation C-LGMD are added to obtain the LGMD neuron membrane potential K_f:

K_f = U_LGMD + D_LGMD + L_LGMD + R_LGMD;    (15)

K_f is normalized and mapped to the range [0, 255] (the original equation is given as an image; a sigmoid mapping following the classical LGMD model is assumed):

k_f = 255 (1 + e^{-K_f / n_cell})^{-1};    (16)

wherein n_cell is the total number of pixels of the image;
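The diagonal segmentation and per-orientation summation can be sketched as follows, assuming image coordinates with y growing downwards; pixels lying exactly on a diagonal are assigned to the left/right regions here, an arbitrary boundary choice:

```python
import numpy as np

def c_lgmd_potentials(g):
    """Split g by the two image diagonals and sum per region (eqs. 9-14).

    Returns (U, D, L, R): the 4-orientation C-LGMD membrane potentials."""
    h, w = g.shape
    ys, xs = np.mgrid[0:h, 0:w]
    diag1 = h / w * xs            # Diag1: y = (h/w) x   (eq. 9)
    diag2 = h - h / w * xs        # Diag2: y = h - (h/w) x   (eq. 10)
    up    = (ys < diag1) & (ys < diag2)    # top triangle of the image
    down  = (ys > diag1) & (ys > diag2)
    left  = (ys >= diag1) & (ys <= diag2)
    right = (ys <= diag1) & (ys >= diag2)
    return (g[up].sum(), g[down].sum(), g[left].sum(), g[right].sum())
```

Summing the four results gives K_f (eq. 15), which is then squashed into [0, 255] to yield k_f.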
the mapped membrane potentials of the 4 orientations are obtained according to the proportion of each orientation's C-LGMD membrane potential in the whole image, specifically:

u_f = k_f U_LGMD / K_f;    (17)
d_f = k_f D_LGMD / K_f;    (18)
l_f = k_f L_LGMD / K_f;    (19)
r_f = k_f R_LGMD / K_f;    (20)

wherein u_f, d_f, l_f and r_f are the mapped membrane potentials of the upper, lower, left and right orientations of the image respectively; when k_f exceeds its threshold T_s, an LGMD spike S^spike_f is generated, specifically:

S^spike_f = 1, if k_f ≥ T_s;  S^spike_f = 0, otherwise;    (21)

if the number of spikes within n_ts consecutive frames is not less than n_sp, it is determined that a collision is about to occur, specifically expressed as:

C_final = TRUE, if Σ_{i=f-n_ts+1..f} S^spike_i ≥ n_sp;  C_final = FALSE, otherwise;    (22)
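A minimal sketch of the spike generation and collision decision (eqs. 16, 21-22); the sigmoid form of the mapping and the values of T_s, n_ts and n_sp are assumptions for illustration:

```python
import math
from collections import deque

T_S = 200.0    # assumed spike threshold on k_f
N_TS = 5       # assumed sliding-window length (frames)
N_SP = 4       # assumed spike count required for a collision decision

def membrane_to_kf(K_f, n_cell):
    """Map the summed potential K_f to [0, 255] (assumed sigmoid, eq. 16)."""
    return 255.0 / (1.0 + math.exp(-K_f / n_cell))

class CollisionDetector:
    """Fire a spike when k_f >= T_S (eq. 21) and flag a collision when
    at least N_SP spikes occur within the last N_TS frames (eq. 22)."""

    def __init__(self):
        self.spikes = deque(maxlen=N_TS)

    def step(self, k_f):
        self.spikes.append(1 if k_f >= T_S else 0)
        return sum(self.spikes) >= N_SP   # C_final
```

The sliding window makes the decision robust to a single noisy frame: one isolated spike never triggers avoidance.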
S26: the feed-forward inhibition FFI neuron directly acquires the field-of-view image information from the P-layer neurons, specifically expressed as:

F_f = Σ_x Σ_y |P_{f-1}(x, y)| / n_cell;    (23)

wherein F_f is the membrane potential of the feed-forward inhibition FFI neuron, and T_FFI is a preset threshold; when F_f exceeds the threshold T_FFI, the LGMD neuron membrane potential is immediately suppressed;
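The feed-forward inhibition of step S26 can be sketched as below; the value of T_FFI is an illustrative assumption:

```python
import numpy as np

T_FFI = 25.0   # assumed FFI threshold

def ffi_potential(p_prev):
    """F_f: mean absolute P-layer excitation of the previous frame (eq. 23)."""
    return float(np.abs(p_prev).sum()) / p_prev.size

def suppress(lgmd_spike, f_f):
    """Veto the LGMD output while the whole field of view is changing,
    e.g. during fast ego-rotation of the drone."""
    return False if f_f > T_FFI else lgmd_spike
```

This prevents false collision alarms when the drone itself turns quickly and the entire image moves at once.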
S27: direction selection: when the current feed-forward inhibition FFI membrane potential F_f is greater than the threshold T_FFI, all obstacle avoidance instructions are invalid; when C_final is TRUE and F_f is less than the threshold T_FFI, it is determined that an obstacle has appeared, the mapped membrane potentials u_f, d_f, l_f and r_f of the 4 orientations are compared, and the orientation with the minimum membrane potential among the 4 is taken as the safest obstacle avoidance direction;
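The direction-selection logic of step S27 can be sketched as a single function; the dictionary of mapped potentials and the threshold value are illustrative assumptions:

```python
def select_direction(c_final, f_f, potentials, t_ffi=25.0):
    """Return the safest avoidance direction, or None when no command is issued.

    potentials: mapped membrane potentials, e.g.
    {'up': u_f, 'down': d_f, 'left': l_f, 'right': r_f}."""
    if f_f > t_ffi:     # whole-field motion: all avoidance commands invalid
        return None
    if not c_final:     # no imminent collision flagged
        return None
    # least-excited orientation = fewest looming edges = safest escape route
    return min(potentials, key=potentials.get)
```
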
S28: flight scene prediction: when C_final is FALSE and the feed-forward inhibition FFI membrane potential F_f is less than the threshold T_FFI, the unmanned aerial vehicle flies normally, and its unknown flight scene is predicted by acquiring the FFI membrane potentials of the 4 orientations;
s29: and forming an obstacle avoidance control command by using the signals obtained in the steps S25, S27 and S28 as obstacle avoidance control signals.
The flight scene prediction process in step S28 is specifically as follows:

a scene prediction threshold T_FFII is set; when the average FFI membrane potential F̄_f of the previous N frames is less than the threshold T_FFII, the unmanned aerial vehicle flies normally; when F̄_f is greater than the threshold T_FFII and less than the threshold T_FFI, the orientation with the minimum average membrane potential is found by comparing Ū_f, D̄_f, L̄_f and R̄_f, thereby predicting the future flight direction with the fewest obstacles in the scene ahead;

wherein the average FFI membrane potential over the previous N frames is expressed as:

F̄_f = (1 / N) Σ_{i=f-N..f-1} F_i;    (24)

and the average membrane potentials of the 4 orientations are respectively expressed as:

Ū_f = (1 / N) Σ_{i=f-N..f-1} U^FFI_i;    (25)
D̄_f = (1 / N) Σ_{i=f-N..f-1} D^FFI_i;    (26)
L̄_f = (1 / N) Σ_{i=f-N..f-1} L^FFI_i;    (27)
R̄_f = (1 / N) Σ_{i=f-N..f-1} R^FFI_i;    (28)

in the formulas, U^FFI_i, D^FFI_i, L^FFI_i and R^FFI_i denote the FFI membrane potentials of the upper, lower, left and right orientations in frame i, and Ū_f, D̄_f, L̄_f and R̄_f represent their averages over the previous N frames.
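The scene prediction of step S28 can be sketched as follows; the window length N, both thresholds, and the per-orientation FFI histories passed in are illustrative assumptions:

```python
N = 10            # assumed averaging window (frames)
T_FFI = 25.0      # assumed whole-field-motion threshold
T_FFII = 8.0      # assumed scene prediction threshold (T_FFII < T_FFI)

def predict_direction(ffi_history, region_histories):
    """Predict the clearest future flight direction (eqs. 24-28).

    ffi_history: recent global F_f values; region_histories: dict mapping
    'up'/'down'/'left'/'right' to recent per-orientation FFI values."""
    window = ffi_history[-N:]
    f_avg = sum(window) / len(window)          # eq. 24
    if not (T_FFII < f_avg < T_FFI):           # quiet scene, or too much
        return None                            # whole-field motion
    # eqs. 25-28: per-orientation N-frame averages
    means = {k: sum(v[-N:]) / len(v[-N:]) for k, v in region_histories.items()}
    return min(means, key=means.get)           # fewest predicted obstacles
```
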
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the LGMD-based bionic obstacle avoidance control system and method for the unmanned aerial vehicle provided by the invention build an LGMD neural network and segment the field-of-view image, so that spatial direction selection and scene prediction are realized during flight, achieving real-time and efficient obstacle avoidance flight of the unmanned aerial vehicle in unknown environments.
Drawings
FIG. 1 is a schematic diagram of the structural connections of the system of the present invention;
FIG. 2 is a schematic diagram of an LGMD neural network;
FIG. 3 is a schematic flow diagram of the process of the present invention;
FIG. 4 is a schematic diagram of the field-of-view image segmentation;
wherein: 1. a flight control subsystem; 2. an optical flow sensor; 3. a drive motor; 4. an embedded LGMD detector; 5. a camera; 6. a wireless communication module; 7. a ground station PC; 8. a laser sensor.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
As shown in fig. 1, an LGMD-based unmanned aerial vehicle bionic obstacle avoidance control system includes a flight control subsystem 1, an optical flow sensor 2, a driving motor 3, an embedded LGMD detector 4, a camera 5, a wireless communication module 6 and a ground station PC 7; wherein:
the optical flow sensor 2 and the embedded LGMD detector 4 are electrically connected with the flight control subsystem 1 for information interaction;
the wireless communication module 6 and the driving motor 3 are electrically connected with the output end of the flight control subsystem 1;
the input end of the embedded LGMD detector 4 is in signal connection with the output end of the camera 5;
the wireless communication module 6 is in wireless communication connection with a ground station PC 7.
More specifically, the embedded LGMD detector 4 is provided with an LGMD neural network, video information acquired by the camera 5 is calculated through the LGMD neural network, and an obstacle avoidance control instruction is obtained by the LGMD neural network and output to the flight control subsystem 1, so that obstacle avoidance control of the unmanned aerial vehicle is realized.
More specifically, as shown in fig. 2, the LGMD neural network includes P-layer neurons, E-layer neurons, I-layer neurons, S-layer neurons, G-layer neurons, LGMD neurons, and feed-forward inhibitory FFI neurons; wherein:
the P-layer neuron acquires the field image information of an input video and responds to frame difference to obtain a P-layer neuron membrane potential;
the P layer neuronal membrane potential acts directly as the excitatory membrane potential of the E layer neurons;
the I-layer neurons receive the P-layer output delayed by one frame and perform local inhibition to obtain an inhibitory membrane potential;
the S-layer neurons converge the output within the corresponding receptive field of the I-layer neurons and suppress the E-layer neurons through the inhibitory membrane potential to obtain the S-layer neuron membrane potential;
the G-layer neurons are used for enhancing and extracting colliding objects against a complex background; the G-layer neuron membrane potential is calculated from the S-layer neuron membrane potential;
the LGMD neuron diagonally segments the field-of-view image information to obtain 4 orientation regions; the membrane potentials of the 4-orientation C-LGMD are calculated from the G-layer neuron membrane potential and the 4 orientation regions, and are added together to obtain the LGMD neuron membrane potential;
the feed-forward inhibition FFI neuron directly acquires the field-of-view image information from the P-layer neurons and suppresses the LGMD neuron membrane potential, so that an obstacle avoidance control instruction is obtained and output to the flight control subsystem 1.
More specifically, the system further comprises an inertial sensor integrated on the flight control subsystem and electrically connected with the flight control subsystem.
More specifically, the system further comprises a laser sensor 8; the laser sensor 8 is electrically connected with the input end of the optical flow sensor 2 and is used in cooperation with the optical flow sensor 2, mainly for altitude measurement and altitude holding of the unmanned aerial vehicle.
In a specific implementation process, the core of the flight control subsystem 1 is an STM32F407V microcontroller; the inertial sensor main body is an MPU6050 chip; the main body of the optical flow sensor 2 is a PX4Flow module; the main body of the wireless communication module 6 is an nRF24L01. The inertial sensor is used for recording attitude information of the unmanned aerial vehicle, the optical flow sensor 2 serves as a horizontal-plane position and speed feedback device, and the camera 5 collects video information in real time. The flight control subsystem 1 calculates the PWM value corresponding to each driving motor 3 according to the instruction output by the embedded LGMD detector 4, and finally outputs the PWM values to the four driving motors 3 of the unmanned aerial vehicle to realize obstacle avoidance control; the wireless communication module 6 returns real-time data to the ground station PC 7.
Example 2
More specifically, on the basis of embodiment 1, an LGMD-based unmanned aerial vehicle bionic obstacle avoidance control method is provided, which includes the following steps:
s1: performing real-time video acquisition through a camera 5 to obtain an input video;
s2: the embedded LGMD detector 4 acquires the view field image information of an input video, obtains an obstacle avoidance control instruction through LGMD neural network calculation, and outputs the obstacle avoidance control instruction to the flight control subsystem 1 to realize the obstacle avoidance control of the unmanned aerial vehicle.
More specifically, as shown in fig. 3, the specific process of calculating the obstacle avoidance control instruction by the LGMD neural network includes:
S21: the P-layer neurons acquire the field-of-view image information of the input video and respond to the frame difference to obtain the P-layer neuron membrane potential P_f(x, y), specifically:

P_f(x, y) = L_f(x, y) - L_{f-1}(x, y);    (1)

wherein: f denotes the f-th frame in the video sequence, (x, y) is the position of the pixel in the network layer, and L_f(x, y) is the pixel value of the input field-of-view image;

S22: the output of the P-layer neurons serves as the input of the E-layer and I-layer neurons; the E-layer neurons directly receive the P-layer output, while the I-layer neurons receive the P-layer output delayed by one frame and perform local inhibition, specifically expressed as:

E_f(x, y) = P_f(x, y);    (2)

I_f(x, y) = Σ_i Σ_j P_{f-1}(x + i, y + j) w_i(i, j),  (i, j) ≠ (0, 0);    (3)

[the local inhibition weight matrix w_i is given as an image in the original patent and is not reproduced here]    (4)

wherein: E_f(x, y) is the excitatory membrane potential, i.e. the E-layer neuron membrane potential; I_f(x, y) is the inhibitory membrane potential, i.e. the I-layer neuron membrane potential; w_i(i, j) is the local inhibition weight; i and j are not both 0; the local inhibition weight matrix of this embodiment is only one possible form.

S23: the S-layer neurons converge the output within the corresponding receptive field of the I-layer neurons, and the E-layer neurons are suppressed through the inhibitory membrane potential to obtain the S-layer neuron membrane potential, specifically:

S_f(x, y) = E_f(x, y) - I_f(x, y) W_I;    (5)

wherein: S_f(x, y) is the S-layer neuron membrane potential; W_I is the inhibition weight matrix;
S24: the G-layer neurons are used for enhancing and extracting colliding objects against the complex background; the G-layer neuron membrane potential is calculated from the S-layer neuron membrane potential, specifically (the original equations are given as images; they are reconstructed here following the classical LGMD passing-kernel formulation):

Ce_f(x, y) = Σ_{i=-r..r} Σ_{j=-r..r} S_f(x + i, y + j) w_e(i, j);    (6)

G_f(x, y) = S_f(x, y) Ce_f(x, y);    (7)

wherein: G_f(x, y) is the G-layer neuron membrane potential; [w_e] is the convolution kernel; r is the convolution kernel radius, with r = 1; the convolution kernel and its radius in this embodiment are only one possible form;

a threshold T_de is set to filter out weak excitation points, specifically:

G̃_f(x, y) = G_f(x, y), if G_f(x, y) C_de ≥ T_de;  G̃_f(x, y) = 0, otherwise;    (8)

wherein: G̃_f(x, y) is the filtered G-layer neuron membrane potential; C_de is a decay coefficient taking values in [0, 1]; T_de is the filtering threshold;
s25: when the field image information reaches the LGMD neurons, the field image information is diagonally segmented, as shown in fig. 4, and the coordinates on the y-axis of the two diagonal lines are obtained as specifically expressed as:
Figure BDA0002021925780000101
Figure BDA0002021925780000102
wherein: diag1, Diag2 are y coordinates of x corresponding to two diagonal lines, respectively; w is the width of the image; h is the height of the image; thereby obtaining the membrane potential of the 4-direction C-LGMD, which is specifically as follows:
Figure BDA0002021925780000103
Figure BDA0002021925780000104
Figure BDA0002021925780000105
Figure BDA0002021925780000106
wherein: u shapeLGMD,DLGMD,LLGMD,RLGMDMembrane potentials of 4 directions of upper, lower, left and right of the image respectively; adding the membrane potentials of the 4 orientations C-LGMD to obtain the LGMD neuron membrane potential KfThe method specifically comprises the following steps:
K_f = U_LGMD + D_LGMD + L_LGMD + R_LGMD; 15)
K_f is normalized and mapped to the range [0, 255], specifically:
k_f = 255·(1 + e^(-K_f/n_cell))^(-1); 16)
wherein n_cell is the total number of pixels in the image;
The mapped membrane potentials of the 4 orientations are obtained from the proportion of each orientation's C-LGMD membrane potential within the whole image, specifically:
Û_LGMD = k_f·U_LGMD/K_f; 17)
D̂_LGMD = k_f·D_LGMD/K_f; 18)
L̂_LGMD = k_f·L_LGMD/K_f; 19)
R̂_LGMD = k_f·R_LGMD/K_f; 20)
wherein Û_LGMD, D̂_LGMD, L̂_LGMD, R̂_LGMD are the mapped membrane potentials of the upper, lower, left and right orientations of the image, respectively; when k_f exceeds its threshold T_s, an LGMD spike S_f^spike is generated, specifically:
S_f^spike = 1, if k_f ≥ T_s; otherwise S_f^spike = 0; 21)
If within n_ts successive frames the number of spikes is not less than n_sp, it is determined that a collision is about to occur, specifically expressed as:
C_final = TRUE, if Σ_{i=f-n_ts+1..f} S_i^spike ≥ n_sp; otherwise C_final = FALSE; 22)
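The diagonal segmentation and the spike/collision decision of step S25 can be sketched as follows. Image coordinates are assumed with y growing downward, so "upper" means small y; the thresholds `T_s`, `n_ts`, `n_sp` and both function names are illustrative assumptions, not the patent's values.

```python
import numpy as np

def quadrant_potentials(G_tilde):
    """Split |G~_f| along the two image diagonals into the upper/lower/left/
    right sums, i.e. the 4-orientation C-LGMD membrane potentials."""
    h, w = G_tilde.shape
    y, x = np.mgrid[0:h, 0:w]
    diag1 = (h / w) * x        # diagonal from top-left to bottom-right
    diag2 = h - (h / w) * x    # diagonal from bottom-left to top-right
    a = np.abs(G_tilde)
    U = a[(y < diag1) & (y < diag2)].sum()
    D = a[(y > diag1) & (y > diag2)].sum()
    L = a[(y > diag1) & (y < diag2)].sum()
    R = a[(y < diag1) & (y > diag2)].sum()
    return U, D, L, R

def spike_and_collision(k_history, T_s=200.0, n_ts=5, n_sp=4):
    """Collision flag C_final: within the last n_ts frames at least n_sp
    LGMD spikes occurred. k_history holds mapped potentials k_f in [0, 255];
    threshold values are illustrative assumptions."""
    recent = list(k_history)[-n_ts:]
    spikes = sum(1 for k in recent if k >= T_s)   # eq. 21 per frame
    return spikes >= n_sp                          # eq. 22
```

Pixels lying exactly on a diagonal are excluded by the strict inequalities, so the four quadrant sums never double-count any excitation.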
S26: the feed-forward inhibition FFI neuron acquires the field-of-view image information directly from the P-layer neurons, specifically expressed as:
F_f = ( Σ_{x,y} |P_{f-1}(x,y)| ) / n_cell; 23)
wherein F_f is the feed-forward inhibition FFI neuron membrane potential; T_FFI is a preset threshold; when F_f exceeds the threshold T_FFI, the LGMD neuron membrane potential is immediately suppressed;
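The FFI computation and its gating of the LGMD output can be sketched as below. The averaging form follows the LGMD model cited as [1] (the patent's own equation is an image), and the threshold value is an illustrative assumption.

```python
import numpy as np

def ffi_potential(P_prev, n_cell=None):
    """Feed-forward inhibition: mean absolute P-layer excitation of the
    previous frame (form assumed from the cited LGMD model, eq. 23)."""
    if n_cell is None:
        n_cell = P_prev.size
    return np.abs(P_prev).sum() / n_cell

def gate_lgmd(k_f, F_f, T_FFI=25.0):
    """Whole-field change (F_f > T_FFI), e.g. from a sharp turn, suppresses
    the LGMD output; the threshold value is an illustrative assumption."""
    return 0.0 if F_f > T_FFI else k_f
```

The point of the FFI pathway is that ego-motion excites the whole field at once, whereas an approaching obstacle excites an expanding but localized region; gating on the whole-field mean separates the two cases.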
S27: direction selection: when the current feed-forward inhibition FFI membrane potential F_f is greater than the threshold T_FFI, all obstacle avoidance instructions are invalid; when C_final is TRUE and the feed-forward inhibition FFI membrane potential F_f is less than the threshold T_FFI, an obstacle is judged to be present, the magnitudes of the 4 orientation membrane potentials Û_LGMD, D̂_LGMD, L̂_LGMD, R̂_LGMD are compared, and the orientation with the minimum membrane potential among the 4 is taken as the safest obstacle avoidance direction;
In the specific implementation, the orientation with the minimum membrane potential among the 4 orientations is taken as the safest obstacle avoidance direction, and an output function U_f constructed from the C-LGMD membrane potential, its rate of change and its acceleration in the obstacle avoidance direction gives the obstacle avoidance speed; the motion decision neuron collects the computation results of the LGMD neural network model, outputs them as the embedded LGMD detector output, and issues the corresponding obstacle avoidance signal.
In the implementation, when an obstacle avoidance signal is generated, the orientation whose mapped membrane potential is minimum is the optimal obstacle avoidance direction, and the output function U_f constructed from the membrane potential, its rate of change and its acceleration gives the obstacle avoidance speed. In the special case where the obstacle is exactly in the middle of the image, i.e. the 4 orientation membrane potentials are equal, the vehicle preferentially avoids to the left; if 2 or 3 orientation membrane potentials are equal, obstacle avoidance is preferentially carried out in the direction opposite to that of the strongest membrane potential.
More specifically, the output function is expressed as:
U_f = k_1·X̂_LGMD + k_2·v + k_3·α; 24)
wherein: k_1, k_2, k_3 are proportionality coefficients; X̂_LGMD, v and α are the mapped C-LGMD membrane potential, its rate of change and its acceleration in the obstacle avoidance direction;
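The direction selection of step S27, including the tie rules stated above, together with the output function can be sketched as follows. The coefficient values `k1`, `k2`, `k3` and both function names are illustrative assumptions.

```python
def choose_direction(potentials):
    """Pick the safest avoidance direction from the mapped 4-orientation
    potentials. Tie rules follow the text: all four equal means the obstacle
    is dead centre, so fly left; two or three equal means avoid opposite to
    the strongest orientation; otherwise take the minimum."""
    opposite = {"up": "down", "down": "up", "left": "right", "right": "left"}
    values = list(potentials.values())
    if len(set(values)) == 1:              # obstacle exactly in the middle
        return "left"
    if len(set(values)) < len(values):     # 2 or 3 orientations tied
        strongest = max(potentials, key=potentials.get)
        return opposite[strongest]
    return min(potentials, key=potentials.get)

def avoidance_speed(x_hat, v, alpha, k1=1.0, k2=0.5, k3=0.25):
    """Output function U_f = k1*X + k2*v + k3*alpha built from the mapped
    C-LGMD membrane potential, its rate of change and its acceleration in
    the avoidance direction; coefficient values are assumptions."""
    return k1 * x_hat + k2 * v + k3 * alpha
```

The weaker the excitation in an orientation, the fewer expanding edges it contains, which is why the minimum-potential orientation is treated as the clearest escape route.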
S28: flight scene prediction: when C_final is FALSE and the feed-forward inhibition FFI membrane potential F_f is less than the threshold T_FFI, the unmanned aerial vehicle flies normally, and the unknown flight scene of the unmanned aerial vehicle is predicted by collecting the FFI membrane potentials of the 4 orientations;
S29: the signals obtained in steps S25, S27 and S28 are taken as obstacle avoidance control signals to form obstacle avoidance control commands, which are output to the flight control subsystem.
More specifically, the flight scenario prediction process in step S28 specifically includes:
A scene prediction threshold T_FFII is set. When the average FFI membrane potential F̄_f over the previous N frames is less than the threshold T_FFII, the unmanned aerial vehicle flies normally; when the average FFI membrane potential over the previous N frames is greater than the threshold T_FFII but less than the threshold T_FFI, the 4 orientation averages F̄_U, F̄_D, F̄_L, F̄_R are compared to find the orientation with the minimum membrane potential, thereby predicting the future flight direction with the fewest obstacles in the scene ahead;
wherein the FFI average membrane potential over the previous N frames, F̄_f, is specifically expressed as:
F̄_f = (1/N)·Σ_{i=f-N+1..f} F_i; 25)
The average membrane potentials of the 4 orientations, F̄_U, F̄_D, F̄_L, F̄_R, are respectively expressed as:
F̄_U = (1/N)·Σ_{i=f-N+1..f} F_i^U; 26)
F̄_D = (1/N)·Σ_{i=f-N+1..f} F_i^D; 27)
F̄_L = (1/N)·Σ_{i=f-N+1..f} F_i^L; 28)
F̄_R = (1/N)·Σ_{i=f-N+1..f} F_i^R; 29)
wherein F̄_U, F̄_D, F̄_L, F̄_R represent the averages over the previous N frames of the FFI membrane potentials in the upper, lower, left and right orientations, respectively.
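The sliding-window averaging and the prediction rule of step S28 can be sketched as below. The window length `N`, the thresholds `T_FFII` and `T_FFI`, and the class name are illustrative assumptions.

```python
from collections import deque

class ScenePredictor:
    """Sketch of flight scene prediction: keep the last N FFI values
    (overall and per orientation); below T_FFII fly normally, in the band
    between T_FFII and T_FFI suggest the orientation with the smallest
    average as the clearest heading. Threshold values are assumptions."""

    def __init__(self, N=10, T_FFII=10.0, T_FFI=25.0):
        self.T_FFII, self.T_FFI = T_FFII, T_FFI
        keys = ("all", "up", "down", "left", "right")
        self.hist = {k: deque(maxlen=N) for k in keys}

    def update(self, F_all, F_up, F_down, F_left, F_right):
        """Push one frame's FFI readings into the N-frame windows."""
        for key, val in zip(("all", "up", "down", "left", "right"),
                            (F_all, F_up, F_down, F_left, F_right)):
            self.hist[key].append(val)

    def predict(self):
        mean = lambda d: sum(d) / len(d)
        m_all = mean(self.hist["all"])           # F-bar_f over the window
        if m_all < self.T_FFII:
            return "normal flight"
        if m_all < self.T_FFI:
            per_dir = {k: mean(self.hist[k])
                       for k in ("up", "down", "left", "right")}
            return min(per_dir, key=per_dir.get)  # clearest future heading
        return "obstacle response"                # handled by step S27
```

Averaging over N frames smooths out single-frame flicker, so the prediction reacts to a sustained build-up of excitation rather than to noise.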
In the specific implementation, the LGMD neural network model [1] consists of 5 layers of neuron cells (P-layer, E-layer, I-layer, S-layer and G-layer neurons) together with the LGMD neuron and the feed-forward inhibition FFI neuron. It is highly sensitive to rapidly approaching objects: as an object approaches, its continuously expanding edges stimulate the network so that excitation grows rapidly and the LGMD neuron membrane potential rises sharply, and this change gives the network its collision early-warning function. By dividing the field-of-view image into 4 orientations (upper, lower, left and right), 4-orientation competitive LGMD neurons, i.e. C-LGMD, are formed, and the optimal obstacle avoidance direction is obtained by comparing the obstacle avoidance responses of the LGMD in different orientations, so that the unmanned aerial vehicle avoids obstacles more efficiently and flexibly.
In the implementation, the invention records the 4-orientation C-LGMD membrane potentials Û_LGMD, D̂_LGMD, L̂_LGMD, R̂_LGMD in real time; from the signal of each orientation, the first derivatives v_U, v_D, v_L, v_R and the second derivatives α_U, α_D, α_L, α_R are obtained, representing the velocity and acceleration of the image signal change in each orientation, respectively. Meanwhile, the FFI membrane potentials of the most recent N frames are accumulated to obtain the overall average F̄_f and the 4 orientation averages F̄_U, F̄_D, F̄_L, F̄_R.
It should be understood that the embodiments described above are merely examples given to illustrate the present invention clearly and are not intended to limit its embodiments; it is neither necessary nor possible to enumerate all embodiments exhaustively. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the claims of the present invention.
[1] Yue S, Rind F C. Collision detection in complex dynamic scenes using an LGMD-based visual neural network with feature enhancement [J]. IEEE Transactions on Neural Networks, 2006, 17(3): 705-716.

Claims (6)

1. An LGMD-based bionic obstacle avoidance control system for an unmanned aerial vehicle, characterized in that: the system comprises a flight control subsystem (1), an optical flow sensor (2), a driving motor (3), an embedded LGMD detector (4), a camera (5), a wireless communication module (6) and a ground station PC (7); wherein:
the optical flow sensor (2) and the embedded LGMD detector (4) are electrically connected with the flight control subsystem (1) for information interaction;
the wireless communication module (6) and the driving motor (3) are electrically connected with the output end of the flight control subsystem (1);
the input end of the embedded LGMD detector (4) is in signal connection with the output end of the camera (5);
the wireless communication module (6) is in wireless communication connection with a ground station PC (7);
an LGMD neural network is arranged on the embedded LGMD detector (4), video information acquired by the camera (5) is calculated through the LGMD neural network, an obstacle avoidance control instruction is obtained through the LGMD neural network and output to the flight control subsystem (1), and obstacle avoidance control of the unmanned aerial vehicle is achieved;
the LGMD neural network comprises P layer neurons, E layer neurons, I layer neurons, S layer neurons, G layer neurons, LGMD neurons and feed forward inhibitory FFI neurons; wherein:
the P-layer neuron acquires the field image information of an input video and responds to frame difference to obtain a P-layer neuron membrane potential;
the P layer neuronal membrane potential acts directly as the excitatory membrane potential of the E layer neurons;
the I layer neuron receives the output of a frame on the P layer neuron and performs local inhibition to obtain an inhibition membrane potential;
the neuron of the S layer gathers the output in the corresponding position field of the neuron of the I layer, and the neuron of the E layer is inhibited through inhibiting membrane potential to obtain membrane potential of the neuron of the S layer;
the G-layer neuron is used for enhancing and extracting collision objects under a complex background, and the G-layer neuron membrane potential is obtained through calculation according to the S-layer neuron membrane potential;
the LGMD neuron carries out diagonal segmentation on the image information of the visual field to obtain 4 pieces of azimuth information; calculating according to the G-layer neuron membrane potential and the 4 azimuth information to obtain the membrane potential of 4 azimuth C-LGMD, and adding the membrane potentials of the 4 azimuth C-LGMD to obtain the LGMD neuron membrane potential;
the feed-forward inhibition FFI neuron directly acquires field image information from the P layer neuron, and inhibits the LGMD neuron membrane potential, so that an obstacle avoidance control instruction is obtained and output to the flight control subsystem (1);
when the field-of-view image information reaches the LGMD neuron, it is segmented along the two image diagonals, and the y coordinates of the two diagonal lines are obtained, specifically expressed as:
Diag1(x) = (h/w)·x;
Diag2(x) = h - (h/w)·x;
wherein: Diag1 and Diag2 are the y coordinates at x of the two diagonal lines, respectively; w is the width of the image; h is the height of the image; the membrane potentials of the 4-orientation C-LGMD are thereby obtained, specifically:
U_LGMD = Σ |G̃_f(x,y)| over {(x,y): y < Diag1(x) and y < Diag2(x)};
D_LGMD = Σ |G̃_f(x,y)| over {(x,y): y > Diag1(x) and y > Diag2(x)};
L_LGMD = Σ |G̃_f(x,y)| over {(x,y): y > Diag1(x) and y < Diag2(x)};
R_LGMD = Σ |G̃_f(x,y)| over {(x,y): y < Diag1(x) and y > Diag2(x)};
wherein: G̃_f(x,y) is the filtered G-layer neuron membrane potential; U_LGMD, D_LGMD, L_LGMD, R_LGMD are the membrane potentials of the upper, lower, left and right orientations of the image, respectively, and the orientation with the minimum membrane potential among the 4 is taken as the safest obstacle avoidance direction.
2. The LGMD-based bionic unmanned aerial vehicle obstacle avoidance control system according to claim 1, characterized in that: it further comprises an inertial sensor, the inertial sensor being integrated on the flight control subsystem (1) and electrically connected with the flight control subsystem (1).
3. The LGMD-based bionic unmanned aerial vehicle obstacle avoidance control system according to claim 1, characterized in that: it further comprises a laser sensor (8), the laser sensor (8) being electrically connected with the input end of the optical flow sensor (2).
4. An LGMD-based bionic unmanned aerial vehicle obstacle avoidance control method, applied to the LGMD-based bionic unmanned aerial vehicle obstacle avoidance control system of claim 3, characterized in that it comprises the following steps:
s1: real-time video acquisition is carried out through a camera (5) to obtain an input video;
s2: the embedded LGMD detector (4) acquires the view field image information of an input video, obtains an obstacle avoidance control command through LGMD neural network calculation, and outputs the obstacle avoidance control command to the flight control subsystem (1) to realize the obstacle avoidance control of the unmanned aerial vehicle.
5. The LGMD-based bionic unmanned aerial vehicle obstacle avoidance control method according to claim 4, characterized in that the specific process by which the LGMD neural network computes the obstacle avoidance control instruction comprises:
S21: the P-layer neurons acquire the field-of-view image information of the input video and respond to the frame difference to obtain the P-layer neuron membrane potential P_f(x,y), specifically:
P_f(x,y) = L_f(x,y) - L_{f-1}(x,y); 1)
wherein: f denotes the f-th frame in the video sequence; (x,y) is the position of the pixel in the network layer; L_f(x,y) is the pixel value of the input field-of-view image;
S22: the output of the P-layer neurons serves as the input of the E-layer and I-layer neurons; the E-layer neurons receive the P-layer output directly, while the I-layer neurons receive the P-layer output of the previous frame and apply local inhibition, specifically expressed as:
E_f(x,y) = P_f(x,y); 2)
I_f(x,y) = Σ_i Σ_j P_{f-1}(x+i, y+j)·w_j(i, j), with i and j not both 0; 3)
w_j(i, j): the local inhibition weight matrix (given as an image in the original); 4)
wherein: E_f(x,y) is the excitatory membrane potential, i.e. the E-layer neuron membrane potential; I_f(x,y) is the inhibitory membrane potential, i.e. the I-layer neuron membrane potential; w_j(i, j) is the local inhibition weight; i and j are not both 0;
S23: the S-layer neurons sum the outputs within the corresponding receptive fields of the I-layer neurons, and the E-layer neurons are suppressed through the inhibitory membrane potential to obtain the S-layer neuron membrane potential, specifically:
S_f(x,y) = E_f(x,y) - I_f(x,y)·W_I; 5)
wherein: S_f(x,y) is the S-layer neuron membrane potential; W_I is the inhibition weight matrix;
S24: the G-layer neurons enhance and extract colliding objects against complex backgrounds; the G-layer neuron membrane potential is computed from the S-layer neuron membrane potential, specifically:
Ce_f(x,y) = Σ_{i=-r..r} Σ_{j=-r..r} S_f(x+i, y+j)·[w_e](i, j); 6)
G_f(x,y) = S_f(x,y)·Ce_f(x,y); 7)
wherein: G_f(x,y) is the G-layer neuron membrane potential; [w_e] is the convolution kernel; r is the convolution kernel radius, here r = 1;
A threshold T_de is set to filter out weak excitation points, specifically:
G̃_f(x,y) = G_f(x,y), if G_f(x,y)·C_de ≥ T_de; otherwise G̃_f(x,y) = 0; 8)
wherein: G̃_f(x,y) is the filtered G-layer neuron membrane potential; C_de is a decay coefficient in [0, 1]; T_de is the filtering threshold;
S25: when the field-of-view image information reaches the LGMD neuron, it is segmented along the two image diagonals, and the y coordinates of the two diagonal lines are obtained, specifically expressed as:
Diag1(x) = (h/w)·x; 9)
Diag2(x) = h - (h/w)·x; 10)
wherein: Diag1 and Diag2 are the y coordinates at x of the two diagonal lines, respectively; w is the width of the image; h is the height of the image; the membrane potentials of the 4-orientation C-LGMD are thereby obtained, specifically:
U_LGMD = Σ |G̃_f(x,y)| over {(x,y): y < Diag1(x) and y < Diag2(x)}; 11)
D_LGMD = Σ |G̃_f(x,y)| over {(x,y): y > Diag1(x) and y > Diag2(x)}; 12)
L_LGMD = Σ |G̃_f(x,y)| over {(x,y): y > Diag1(x) and y < Diag2(x)}; 13)
R_LGMD = Σ |G̃_f(x,y)| over {(x,y): y < Diag1(x) and y > Diag2(x)}; 14)
wherein: U_LGMD, D_LGMD, L_LGMD, R_LGMD are the membrane potentials of the upper, lower, left and right orientations of the image, respectively; adding the membrane potentials of the 4-orientation C-LGMD gives the LGMD neuron membrane potential K_f, specifically:
K_f = U_LGMD + D_LGMD + L_LGMD + R_LGMD; 15)
K_f is normalized and mapped to the range [0, 255], specifically:
k_f = 255·(1 + e^(-K_f/n_cell))^(-1); 16)
wherein n_cell is the total number of pixels in the image;
The mapped membrane potentials of the 4 orientations are obtained from the proportion of each orientation's C-LGMD membrane potential within the whole image, specifically:
Û_LGMD = k_f·U_LGMD/K_f; 17)
D̂_LGMD = k_f·D_LGMD/K_f; 18)
L̂_LGMD = k_f·L_LGMD/K_f; 19)
R̂_LGMD = k_f·R_LGMD/K_f; 20)
wherein Û_LGMD, D̂_LGMD, L̂_LGMD, R̂_LGMD are the mapped membrane potentials of the upper, lower, left and right orientations of the image, respectively; when K_f exceeds its threshold T_s, an LGMD spike S_f^spike is generated, specifically:
S_f^spike = 1, if k_f ≥ T_s; otherwise S_f^spike = 0; 21)
If within n_ts successive frames the number of spikes is not less than n_sp, it is determined that a collision is about to occur, specifically expressed as:
C_final = TRUE, if Σ_{i=f-n_ts+1..f} S_i^spike ≥ n_sp; otherwise C_final = FALSE; 22)
S26: the feed-forward inhibition FFI neuron acquires the field-of-view image information directly from the P-layer neurons, specifically expressed as:
F_f = ( Σ_{x,y} |P_{f-1}(x,y)| ) / n_cell; 23)
wherein F_f is the feed-forward inhibition FFI neuron membrane potential; T_FFI is a preset threshold; when F_f exceeds the threshold T_FFI, the LGMD neuron membrane potential is immediately suppressed;
S27: direction selection: when the current feed-forward inhibition FFI membrane potential F_f is greater than the threshold T_FFI, all obstacle avoidance instructions are invalid; when C_final is TRUE and the feed-forward inhibition FFI membrane potential F_f is less than the threshold T_FFI, an obstacle is judged to be present, the 4 orientation membrane potentials Û_LGMD, D̂_LGMD, L̂_LGMD, R̂_LGMD are compared, and the orientation with the minimum membrane potential among the 4 is taken as the safest obstacle avoidance direction;
S28: flight scene prediction: when C_final is FALSE and the feed-forward inhibition FFI membrane potential F_f is less than the threshold T_FFI, the unmanned aerial vehicle flies normally, and the unknown flight scene of the unmanned aerial vehicle is predicted by acquiring the FFI membrane potentials of the 4 orientations;
S29: the signals obtained in steps S25, S27 and S28 are taken as obstacle avoidance control signals to form an obstacle avoidance control command.
6. The LGMD-based bionic unmanned aerial vehicle obstacle avoidance control method according to claim 5, characterized in that the flight scene prediction process in step S28 specifically comprises:
A scene prediction threshold T_FFII is set. When the average FFI membrane potential F̄_f over the previous N frames is less than the threshold T_FFII, the unmanned aerial vehicle flies normally; when the average FFI membrane potential over the previous N frames is greater than the threshold T_FFII but less than the threshold T_FFI, the 4 orientation averages F̄_U, F̄_D, F̄_L, F̄_R are compared to find the orientation with the minimum membrane potential, thereby predicting the future flight direction with the fewest obstacles in the scene ahead;
wherein the FFI average membrane potential over the previous N frames, F̄_f, is specifically expressed as:
F̄_f = (1/N)·Σ_{i=f-N+1..f} F_i; 25)
The average membrane potentials of the 4 orientations, F̄_U, F̄_D, F̄_L, F̄_R, are respectively expressed as:
F̄_U = (1/N)·Σ_{i=f-N+1..f} F_i^U; 26)
F̄_D = (1/N)·Σ_{i=f-N+1..f} F_i^D; 27)
F̄_L = (1/N)·Σ_{i=f-N+1..f} F_i^L; 28)
F̄_R = (1/N)·Σ_{i=f-N+1..f} F_i^R; 29)
wherein F̄_U, F̄_D, F̄_L, F̄_R represent the averages over the previous N frames of the FFI membrane potentials in the upper, lower, left and right orientations, respectively.
CN201910281845.5A 2019-04-09 2019-04-09 LGMD-based bionic obstacle avoidance control system and method for unmanned aerial vehicle Active CN109960278B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910281845.5A CN109960278B (en) 2019-04-09 2019-04-09 LGMD-based bionic obstacle avoidance control system and method for unmanned aerial vehicle


Publications (2)

Publication Number Publication Date
CN109960278A CN109960278A (en) 2019-07-02
CN109960278B true CN109960278B (en) 2022-01-28

Family

ID=67025945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910281845.5A Active CN109960278B (en) 2019-04-09 2019-04-09 LGMD-based bionic obstacle avoidance control system and method for unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN109960278B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110908399B (en) * 2019-12-02 2023-05-12 广东工业大学 Unmanned aerial vehicle autonomous obstacle avoidance method and system based on lightweight neural network
CN111831010A (en) * 2020-07-15 2020-10-27 武汉大学 Unmanned aerial vehicle obstacle avoidance flight method based on digital space slice
CN114217621B (en) * 2021-12-15 2023-07-07 中国科学院深圳先进技术研究院 Robot collision sensing method and sensing system based on bionic insect vision

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010129907A2 (en) * 2009-05-08 2010-11-11 Scientific Systems Company Inc. Method and system for visual collision detection and estimation
CN107170289A (en) * 2017-06-01 2017-09-15 岭南师范学院 One kind is based on optics multi-vision visual vehicle rear-end collision early warning system
US9984326B1 (en) * 2015-04-06 2018-05-29 Hrl Laboratories, Llc Spiking neural network simulator for image and video processing
CN108475058A (en) * 2016-02-10 2018-08-31 赫尔实验室有限公司 Time to contact estimation rapidly and reliably is realized so as to the system and method that carry out independent navigation for using vision and range-sensor data
CN208126205U (en) * 2018-04-28 2018-11-20 上海工程技术大学 A kind of unmanned flight's device of automatic obstacle-avoiding


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
A Bio-inspired Collision Detector for Small Quadcopter;Jiannan Zhao;《 2018 International Joint Conference on Neural Networks (IJCNN)》;20180815;第1-7页 *
A bio-inspired embedded vision system for autonomous micro-robots: the LGMD case;Cheng Hu;Cheng Hu;《 IEEE Transactions on Cognitive and Developmental Systems》;20160530;第9卷(第3期);第241-254页 *
Collision detection in complex dynamic scenes using an LGMD-based visual neural network with feature enhancement,;Shigang Yue;《IEEE Transactions on Neural Networks》;20060508;第17卷(第3期);第705-716页 *
Collision selective LGMDs neuron models research benefits from a vision-based autonomous micro robot;Qinbing Fu;《 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)》;20171214;第3996-4002页 *
Obstacle avoidance with LGMD neuron: towards a neuromorphic UAV implementation;Llewyn Salt;《 2017 IEEE International Symposium on Circuits and Systems (ISCAS)》;20170928;第1-4页 *
基于生物视觉的碰撞预警传感器;张国鹏;《传感器与微系统》;20160615;第35卷(第3期);第70-73页 *

Also Published As

Publication number Publication date
CN109960278A (en) 2019-07-02

Similar Documents

Publication Publication Date Title
CN109144095B (en) Embedded stereoscopic vision-based obstacle avoidance system for unmanned aerial vehicle
CN109034018B (en) Low-altitude small unmanned aerial vehicle obstacle sensing method based on binocular vision
CN106681353B (en) The unmanned plane barrier-avoiding method and system merged based on binocular vision with light stream
CN109960278B (en) LGMD-based bionic obstacle avoidance control system and method for unmanned aerial vehicle
CN109460709A (en) The method of RTG dysopia analyte detection based on the fusion of RGB and D information
CN111399505A (en) Mobile robot obstacle avoidance method based on neural network
CN107817820A (en) A kind of unmanned plane autonomous flight control method and system based on deep learning
CN104331901A (en) TLD-based multi-view target tracking device and method
CN111461048B (en) Vision-based parking lot drivable area detection and local map construction method
CN111186379B (en) Automobile blind area dangerous object alarm method based on deep learning
CN114266889A (en) Image recognition method and device, readable medium and electronic equipment
CN114359714A (en) Unmanned body obstacle avoidance method and device based on event camera and intelligent unmanned body
CN106155082B (en) A kind of unmanned plane bionic intelligence barrier-avoiding method based on light stream
Liu et al. A novel trail detection and scene understanding framework for a quadrotor UAV with monocular vision
CN110610130A (en) Multi-sensor information fusion power transmission line robot navigation method and system
CN111611869B (en) End-to-end monocular vision obstacle avoidance method based on serial deep neural network
CN112380933B (en) Unmanned aerial vehicle target recognition method and device and unmanned aerial vehicle
WO2023155903A1 (en) Systems and methods for generating road surface semantic segmentation map from sequence of point clouds
Samal et al. Closed-loop approach to perception in autonomous system
US20220377973A1 (en) Method and apparatus for modeling an environment proximate an autonomous system
Hu et al. Coping with multiple visual motion cues under extremely constrained computation power of micro autonomous robots
Lv et al. Target recognition algorithm based on optical sensor data fusion
Cai et al. LWDNet-A lightweight water-obstacles detection network for unmanned surface vehicles
CN112731918B (en) Ground unmanned platform autonomous following system based on deep learning detection tracking
CN114217621B (en) Robot collision sensing method and sensing system based on bionic insect vision

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20231211

Address after: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee after: Dragon totem Technology (Hefei) Co.,Ltd.

Address before: 524037 No.29 Cunjin Road, Chikan District, Zhanjiang City, Guangdong Province

Patentee before: LINGNAN NORMAL University