CN114217621A - Robot collision sensing method and sensing system based on bionic insect vision - Google Patents
Info
- Publication number
- CN114217621A (application number CN202111539529.7A)
- Authority
- CN
- China
- Prior art keywords
- signal
- path
- summation
- cell
- excitation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
- G05D1/0253—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0221—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/086—Learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming
Abstract
The invention discloses a robot collision sensing method and system based on bionic insect vision. The sensing method comprises the following steps: acquiring real-time video of the surrounding environment while the robot moves; inputting the real-time video into a pre-constructed leaflet giant motion detector and a direction-sensitive neuron model, respectively, to obtain a depth-direction motion signal and a translation-direction motion signal, wherein the leaflet giant motion detector and the direction-sensitive neuron model share the same photoreceptor layer and ON/OFF cells; obtaining a perception pulse signal from the depth-direction motion signal and the translation-direction motion signal; and inputting the perception pulse signal into a motion decision mechanism to generate an obstacle avoidance response. The method integrates two neural network models and can therefore recognize complex scenes; at the same time, because the two models share part of the network structure, parameter complexity and computational load are reduced.
Description
Technical Field
The invention belongs to the technical field of computational neuroscience, and particularly relates to a robot collision sensing method and system based on bionic insect vision.
Background
Fast and reliable collision sensing is crucial for autonomous mobile robots, including ground vehicles, legged robots, and unmanned aerial vehicles. Insects in nature possess excellent motion perception systems, and specific visual neurons or pathways have been found in insects such as fruit flies, locusts, and ants. These neurons give insects a remarkable ability to handle moving objects and to interact with dynamic cluttered scenes, for example in collision avoidance and target tracking, and models of these neurons can serve as ideal modules for designing dynamic vision systems or sensors that provide low-power, fast, and reliable motion perception and obstacle avoidance for intelligent robots.
The currently common visual collision detection methods are traditional computer vision techniques, such as object or scene segmentation, estimation, or classification algorithms. Some research focuses on sensor-based strategies, such as RGB-D cameras or event-driven cameras; still other research targets collision sensing systems built on insect visual neurons to balance stability and power consumption. Depending on their selectivity for different motion directions and target sizes, the insect visual neurons commonly used at present include LGMDs (Lobula Giant Movement Detectors), which are sensitive to approaching motion in depth, and DSNs (Direction-Sensitive Neurons), which are sensitive to translational motion. According to their selectivity for moving objects of different brightness, leaflet giant motion detectors are divided into type 1 (LGMD-1) and type 2 (LGMD-2); type 2 is more sensitive than type 1 to objects darker than the background. The biological structure and computational model of the two types are similar, the difference being the combination coefficients of the summation layer. F. Claire Rind studied the role of lateral inhibition in reducing responses to motion-dependent stimuli, and Mark Blanchard constructed the first general LGMD neural network model and applied it to collision detection for ground mobile robots. Cheng Hu et al. applied an LGMD visual neural network as an embedded vision system on autonomous micro-robots and achieved high obstacle avoidance performance in complex multi-obstacle fields. Shigang Yue and Qinbing Fu proposed LGMD-based robot obstacle avoidance implementations with ON/OFF mechanisms to improve collision detection performance in driving scenes.
There are three main approaches to DSN modeling: 1) models based on optical flow and the EMD (elementary motion detector), which are widely applied to unmanned aerial vehicles and micro air vehicles; 2) models that simulate the visual processing of Drosophila by adding several neural layers on top of an optical-flow implementation, where the added layers decode the translational motion direction of the object; and 3) neural-layer models based on an ON/OFF mechanism. Because a DSN is selective for a single translational motion direction, a single DSN cannot meet a robot's requirements for sensing and avoiding obstacles in all translation directions, and existing DSN modeling work integrates multiple DSNs to sense multiple translation directions.
Although current computer vision techniques are widely applied in most robot application scenarios, these methods usually require multiple sensor data sources and are mostly implemented as deep neural networks, so they suffer from drawbacks such as model complexity and high computational power consumption. Insect-inspired bionic visual collision perception methods have therefore been developed: they use shallow neural network layers to model how insects detect dynamic obstacles and make obstacle avoidance decisions, and they have the advantages of speed and low computational power consumption.
Disclosure of Invention
(I) technical problems to be solved by the invention
The technical problem solved by the invention is as follows: on the basis of recognizing complex scenes during robot motion, reducing the complexity of the model parameters and the computational power consumption.
(II) the technical scheme adopted by the invention
A robot collision sensing method based on bionic insect vision comprises the following steps:
acquiring a real-time video of the surrounding environment when the robot moves;
inputting the real-time video into a pre-constructed leaflet giant motion detector and a direction-sensitive neuron model, respectively, to obtain a depth-direction motion signal and a translation-direction motion signal, wherein the leaflet giant motion detector and the direction-sensitive neuron model share the same photoreceptor layer and ON/OFF cells;
obtaining a perception pulse signal according to the depth direction motion signal and the translation direction motion signal;
and inputting the perception pulse signal to a motion decision mechanism to generate obstacle avoidance response.
Preferably, the pre-constructed leaflet giant motion detector comprises the photoreceptor layer, the ON/OFF cell, a first excitation cell, a first inhibition cell, a first local summation cell, a first summation layer, and a first somatic cell, and the method for inputting the real-time video into the pre-constructed leaflet giant motion detector to obtain the depth-direction motion signal comprises the following steps:
the light sensing layer generates a continuous frame brightness difference signal of each pixel according to the input real-time video;
the ON/OFF cell generates a luminance-increase signal on the ON path and a luminance-decrease signal on the OFF path from the consecutive-frame luminance difference signal of each pixel;
the first excitation cell and the first inhibition cell generate an excitation signal and an inhibition signal on the ON path and an excitation signal and an inhibition signal on the OFF path, respectively, from the luminance-increase signal and the luminance-decrease signal;
the first local summation cell linearly sums the excitation and inhibition signals on the ON path and the excitation and inhibition signals on the OFF path to obtain, for each pixel, a first local summation signal on the ON path and a first local summation signal on the OFF path;
the first summation layer performs a supralinear summation of the first local summation signals on the ON path and the OFF path to obtain a summation signal for each pixel;
the first somatic cell obtains a feedforward excitation signal according to a summation signal of all pixels, and the first somatic cell obtains a feedforward inhibition signal according to an absolute value of brightness change of each pixel, and when the feedforward inhibition signal is smaller than a threshold value, the first somatic cell outputs the feedforward excitation signal as a depth direction motion signal.
Preferably, the first local summation cell obtains the first local summation signal Son(x,y,t) on the ON path and the first local summation signal Soff(x,y,t) on the OFF path according to the following formulas:
Son(x,y,t)=Eon(x,y,t)-ω1Ion(x,y,t)
Soff(x,y,t)=Eoff(x,y,t)-ω2Ioff(x,y,t)
wherein the constants ω1, ω2 denote suppression coefficients obtained by optimization, Eon(x,y,t) represents the excitation signal on the ON path, Ion(x,y,t) the inhibition signal on the ON path, Eoff(x,y,t) the excitation signal on the OFF path, and Ioff(x,y,t) the inhibition signal on the OFF path.
Preferably, the first summation layer obtains a summation signal S (x, y, t) of each pixel according to the following formula:
S(x,y,t)=θ1Son(x,y,t)+θ2Soff(x,y,t)+θ3Son(x,y,t)Soff(x,y,t)
wherein θ1, θ2, θ3 represent the combination coefficients obtained by optimization.
Preferably, the first somatic cell obtains the feedforward excitation signal from the summation signals of all pixels as follows: the summation signals of all pixels are summed and converted through a sigmoid function into a sigmoid membrane potential, which serves as the feedforward excitation signal.
Preferably, the robot collision sensing method further includes:
the first somatic cell sets the feedforward excitation signal to a minimum value when the feedforward suppression signal is greater than or equal to the threshold.
Preferably, the pre-constructed direction-sensitive neuron model comprises the photoreceptor layer, the ON/OFF cell, a second excitation cell, a second inhibition cell, a Reichardt detector, a second local summation cell, a second summation layer, and a second somatic cell, and the method for inputting the real-time video into the pre-constructed direction-sensitive neuron model to obtain the translation-direction motion signal is as follows:
the light sensing layer generates a continuous frame brightness difference signal of each pixel according to the input real-time video;
the ON/OFF cell generates, from the consecutive-frame luminance difference signal of each pixel, a luminance-increase signal and a delayed luminance-increase signal on the ON path, and a luminance-decrease signal and a delayed luminance-decrease signal on the OFF path;
the second excitation cell, the second inhibition cell, and the Reichardt detector jointly obtain an excitation signal and an inhibition signal on the ON path from the luminance-increase signal and the delayed luminance-increase signal on the ON path, and an excitation signal and an inhibition signal on the OFF path from the luminance-decrease signal and the delayed luminance-decrease signal on the OFF path;
the second local summation cell linearly sums the excitation and inhibition signals on the ON path and the excitation and inhibition signals on the OFF path to obtain, for each pixel, a second local summation signal on the ON path and a second local summation signal on the OFF path;
the second summation layer sums the second local summation signals of all pixels on the ON path and on the OFF path, respectively, to obtain a second summation signal on the ON path and a second summation signal on the OFF path;
and the second somatic cell generates the translation-direction motion signal from the second summation signal on the ON path and the second summation signal on the OFF path.
Preferably, the second excitation cell, the second inhibition cell, and the Reichardt detector calculate the excitation and inhibition signals on the ON path and the excitation and inhibition signals on the OFF path according to the following formulas:
Eon(x,y,t)=∑_{i=1}^{n} OND(x,y,t)·ON(x+i·d,y,t)
Ion(x,y,t)=∑_{i=1}^{n} ON(x,y,t)·OND(x+i·d,y,t)
Eoff(x,y,t)=∑_{i=1}^{n} OFFD(x,y,t)·OFF(x+i·d,y,t)
Ioff(x,y,t)=∑_{i=1}^{n} OFF(x,y,t)·OFFD(x+i·d,y,t)
wherein Eon(x,y,t) and Ion(x,y,t) represent the excitation and inhibition signals on the ON path; ON(x,y,t) and OND(x,y,t) represent the luminance-increase signal on the ON path and its delayed copy; Eoff(x,y,t) and Ioff(x,y,t) represent the excitation and inhibition signals on the OFF path; OFF(x,y,t) and OFFD(x,y,t) represent the luminance-decrease signal on the OFF path and its delayed copy; n represents the Reichardt detector sample number; and d represents the Reichardt detector sample distance.
Preferably, the second local summation cell obtains the second local summation signal on the ON path and the second local summation signal on the OFF path for each pixel according to the following formulas:
Son(x,y,t)=Eon(x,y,t)-ω3Ion(x,y,t)
Soff(x,y,t)=Eoff(x,y,t)-ω4Ioff(x,y,t)
wherein Son(x,y,t) and Soff(x,y,t) respectively represent the second local summation signal on the ON path and the second local summation signal on the OFF path, and the constants ω3, ω4 denote suppression coefficients obtained by optimization.
The application also discloses a robot collision perception system based on bionic insect vision, the robot collision perception system comprising:
the shooting unit is used for acquiring a real-time video of the surrounding environment when the robot moves;
the motion signal detection unit comprises a pre-constructed leaflet giant motion detector and a direction sensitive neuron model, wherein the leaflet giant motion detector and the direction sensitive neuron model share the same photoreceptor layer and ON/OFF cells, the leaflet giant motion detector is used for obtaining a depth direction motion signal according to the real-time video, and the direction sensitive neuron model is used for obtaining a translation direction motion signal according to the real-time video;
the signal cooperation unit is used for obtaining a perception pulse signal according to the depth direction motion signal and the translation direction motion signal;
a motion decision unit for generating an obstacle avoidance response according to the perception pulse signal.
(III) advantageous effects
The invention discloses a robot collision sensing method and sensing system based on bionic insect vision, which have the following technical effects compared with the prior art:
the method integrates two neural network models, can identify complex scenes, and simultaneously, the two neural network models share part of a network structure, so that the parameter complexity can be reduced, and the calculated amount can be reduced.
Drawings
Fig. 1 is an overall flowchart of a robot collision sensing method based on bionic insect vision according to a first embodiment of the present invention;
fig. 2 is a detailed flowchart of a robot collision sensing method based on bionic insect vision according to a first embodiment of the present invention;
FIG. 3 is a first embodiment of the present invention;
fig. 4 is a block diagram of a robot collision sensing system based on bionic insect vision according to a second embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Before describing the embodiments of the present application in detail, the inventive concept is first briefly described: in the prior art, common collision detection methods for the robot motion process either adopt shallow neural network models, which have difficulty recognizing complex motion patterns, or adopt deep neural network models, which lead to complex system parameters and increased computational power consumption.
It should be noted that the relevant parameters of the leaflet giant motion detector and the direction-sensitive neuron model can be applied only after being tuned and optimized in advance. The specific description of the robot collision perception method based on bionic insect vision below assumes the tuned and optimized leaflet giant motion detector and direction-sensitive neuron model; the tuning and optimization process is described later.
Specifically, as shown in fig. 1 and fig. 2, the robot collision sensing method based on bionic insect vision of the first embodiment includes the following steps:
step S10: acquiring a real-time video of the surrounding environment when the robot moves;
step S20: inputting the real-time video into the pre-constructed leaflet giant motion detector and direction-sensitive neuron model, respectively, to obtain a depth-direction motion signal and a translation-direction motion signal, wherein the leaflet giant motion detector and the direction-sensitive neuron model share the same photoreceptor layer and ON/OFF cells;
step S30: obtaining a perception pulse signal according to the depth direction motion signal and the translation direction motion signal;
step S40: and inputting the perception pulse signal to a motion decision mechanism to generate obstacle avoidance response.
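Purely as an illustration, steps S10 to S40 can be sketched as one perception cycle. All function names and the spike threshold below are hypothetical placeholders, not the patent's implementation:

```python
# Hypothetical sketch of one perception cycle (steps S10-S40).
# The names lgmd, dsn, decide and the 0.7 threshold are illustrative only.
def collision_perception_step(frame_t, frame_t_prev, lgmd, dsn, decide):
    depth_signal = lgmd(frame_t, frame_t_prev)        # S20: depth-direction motion signal
    translation_signal = dsn(frame_t, frame_t_prev)   # S20: translation-direction motion signal
    spike = 1 if (depth_signal > 0.7 or translation_signal > 0.7) else 0  # S30: perception pulse
    return decide(spike)                              # S40: obstacle avoidance response
```

The two model callbacks share whatever preprocessing they need internally; the point of the sketch is only the data flow from video frames to the decision.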
In step S10, the robot may be an intelligent device such as an unmanned aerial vehicle or an unmanned vehicle that needs to sense a dynamic obstacle of the surrounding environment, and a monocular camera carried by the robot may be used to acquire a real-time video of the surrounding environment.
Further, in step S20, the pre-constructed leaflet giant motion detector comprises a photoreceptor layer, an ON/OFF cell, a first excitation cell, a first inhibition cell, a first local summation cell, a first summation layer, and a first somatic cell, and the pre-constructed direction-sensitive neuron model comprises a photoreceptor layer, an ON/OFF cell, a second excitation cell, a second inhibition cell, a Reichardt detector, a second local summation cell, a second summation layer, and a second somatic cell. It should be noted that both the leaflet giant motion detector and the direction-sensitive neuron model adopt neural network models from the prior art; the difference between this embodiment and the prior art is that the photoreceptor layer and the ON/OFF cells are shared between the leaflet giant motion detector and the direction-sensitive neuron model, which reduces the parameter complexity and computational power consumption caused by multiple channels.
Illustratively, as shown in fig. 2, the method of inputting real-time video to the pre-constructed leaflet jumbo motion detector to obtain the depth direction motion signal includes the following steps:
step S201: and the light sensing layer generates a continuous frame brightness difference signal of each pixel according to the input real-time video.
The photoreceptor layer consists of photoreceptors equal in number to the image pixels, is represented as a two-dimensional matrix, and computes the luminance difference at pixel position (x, y) between time t and time t-1, i.e., the luminance difference between consecutive video frames, denoted P(x, y, t).
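A minimal sketch of this photoreceptor computation, assuming grayscale frames held as NumPy arrays (an implementation choice, not stated in the patent):

```python
import numpy as np

def photoreceptor_layer(frame_t, frame_t_prev):
    """Consecutive-frame luminance difference P(x, y, t) as a 2-D matrix."""
    return frame_t.astype(np.float64) - frame_t_prev.astype(np.float64)
```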
Step S202: the ON/OFF cell generates a luminance increase signal ON an ON path and a luminance decrease signal ON an OFF path from the continuous frame luminance difference signals of the respective pixels.
A half-wave rectifier implements the ON/OFF cell function, splitting the consecutive-frame luminance difference signal generated by the photoreceptor layer into ON-path and OFF-path signals.
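The half-wave rectification can be sketched as follows (NumPy assumed; the split into positive and negative parts follows the description above):

```python
import numpy as np

def on_off_split(p):
    """Half-wave rectify the luminance difference P(x, y, t):
    the ON path keeps luminance increases, the OFF path keeps the
    magnitude of luminance decreases."""
    on = np.maximum(p, 0.0)
    off = np.maximum(-p, 0.0)
    return on, off
```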
Step S203: the first excitation cell and the first inhibition cell generate an excitation signal and an inhibition signal on the ON path and an excitation signal and an inhibition signal on the OFF path, respectively, from the luminance-increase signal and the luminance-decrease signal.
The luminance-increase signal on the ON path is transmitted directly to the excitation cell to give the excitation signal Eon(x,y,t), while the inhibition signal Ion(x,y,t) on the ON path is obtained by convolving the delayed excitation signal. The luminance-decrease signal on the OFF path is transmitted directly to the inhibition cell to give the inhibition signal Ioff(x,y,t), while the excitation signal Eoff(x,y,t) on the OFF path is obtained by convolving the delayed inhibition signal.
Step S204: the first local summation cell linearly sums the excitation and inhibition signals on the ON path and the excitation and inhibition signals on the OFF path to obtain, for each pixel, a first local summation signal on the ON path and a first local summation signal on the OFF path.
Specifically, the first local summation cell obtains the first local summation signal Son(x,y,t) on the ON path and the first local summation signal Soff(x,y,t) on the OFF path according to the following formulas:
Son(x,y,t)=Eon(x,y,t)-ω1Ion(x,y,t)
Soff(x,y,t)=Eoff(x,y,t)-ω2Ioff(x,y,t)
wherein the constants ω1, ω2 denote suppression coefficients obtained by optimization, Eon(x,y,t) represents the excitation signal on the ON path, Ion(x,y,t) the inhibition signal on the ON path, Eoff(x,y,t) the excitation signal on the OFF path, and Ioff(x,y,t) the inhibition signal on the OFF path.
Step S205: the first summation layer performs a supralinear summation of the first local summation signals on the ON path and the OFF path to obtain a summation signal for each pixel.
Wherein the first summation layer obtains a summation signal S (x, y, t) for each pixel according to the following formula:
S(x,y,t)=θ1Son(x,y,t)+θ2Soff(x,y,t)+θ3Son(x,y,t)Soff(x,y,t)
wherein θ1, θ2, θ3 represent the combination coefficients obtained by optimization.
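The local linear summation and the supralinear ON/OFF combination can be sketched together. The subtraction form Son = Eon − ω1·Ion is one reading of "linear summation with suppression coefficients" and should be treated as an assumption; the supralinear step follows the formula given above:

```python
import numpy as np

def first_local_sum(e_on, i_on, e_off, i_off, w1, w2):
    """Per-pathway local summation (assumed form: excitation minus
    optimization-derived suppression coefficient times inhibition)."""
    return e_on - w1 * i_on, e_off - w2 * i_off

def supralinear_sum(s_on, s_off, t1, t2, t3):
    """Supralinear summation S = θ1·Son + θ2·Soff + θ3·Son·Soff."""
    return t1 * s_on + t2 * s_off + t3 * s_on * s_off
```

The multiplicative θ3 term is what makes the summation supralinear: pixels where both ON and OFF local sums are active respond more than the sum of the two parts.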
Step S206: the first somatic cell obtains a feedforward excitation signal according to a summation signal of all pixels, and the first somatic cell obtains a feedforward inhibition signal according to an absolute value of brightness change of each pixel, and when the feedforward inhibition signal is smaller than a threshold value, the first somatic cell outputs the feedforward excitation signal as a depth direction motion signal.
The first somatic cell response is jointly determined by feedforward excitation and feedforward inhibition. The summation-layer results S(x, y, t) of all image pixels are summed and converted through a sigmoid function into a sigmoid membrane potential, which serves as the feedforward excitation signal. The feedforward inhibition signal is defined as the average of the absolute values of the luminance changes of the individual pixels. If the feedforward inhibition is greater than or equal to the threshold, the sigmoid membrane potential is directly set to the minimum value 0.5, indicating that the LGMD neuron is immediately suppressed; otherwise, the feedforward inhibition mechanism does not affect the LGMD cell, and the first somatic cell outputs the feedforward excitation signal as the depth-direction motion signal.
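A sketch of this somatic-cell stage. The 0.5 clamp and the mean-absolute-change inhibition follow the description above, while the sigmoid scaling by pixel count is an illustrative assumption:

```python
import numpy as np

def lgmd_soma(summation, luminance_diff, threshold):
    """First somatic cell: sigmoid membrane potential gated by feedforward inhibition."""
    # Feedforward excitation: sigmoid of the pooled summation-layer output.
    excitation = 1.0 / (1.0 + np.exp(-summation.sum() / summation.size))
    # Feedforward inhibition: mean absolute luminance change over all pixels.
    inhibition = np.abs(luminance_diff).mean()
    if inhibition >= threshold:
        return 0.5  # minimum value: neuron immediately suppressed
    return excitation  # output as the depth-direction motion signal
```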
Further, the method for respectively inputting the real-time video to the pre-constructed direction sensitive neuron model to obtain the translational direction motion signal comprises the following steps:
step S211: the light sensing layer generates continuous frame brightness difference signals of each pixel according to the input real-time video.
The photoreceptor layer consists of photoreceptors equal in number to the image pixels, is represented as a two-dimensional matrix, and computes the luminance difference at pixel position (x, y) between time t and time t-1, i.e., the luminance difference between consecutive video frames, denoted P(x, y, t).
Step S212: the ON/OFF cell generates, from the consecutive-frame luminance difference signal of each pixel, a luminance-increase signal and a delayed luminance-increase signal on the ON path, and a luminance-decrease signal and a delayed luminance-decrease signal on the OFF path.
A half-wave rectifier implements the ON/OFF cell function, splitting the consecutive-frame luminance difference signal generated by the photoreceptor layer into ON-path and OFF-path signals.
Step S213: the second excitation cell, the second inhibition cell, and the Reichardt detector jointly obtain the excitation signal and suppression signal on the ON path from the brightness increase signal on the ON path and its delayed copy, and obtain the excitation signal and suppression signal on the OFF path from the brightness decrease signal on the OFF path and its delayed copy.
The second excitation cell, the second inhibition cell and the Reichardt detector calculate the excitation signal and suppression signal on the ON path and the excitation signal and suppression signal on the OFF path according to the following formulas:
wherein the symbols denote, in order: the excitation signal and the suppression signal on the ON path; the brightness increase signal on the ON path and its delayed copy; the excitation signal and the suppression signal on the OFF path; the brightness decrease signal on the OFF path and its delayed copy. n represents the Reichardt detector sample number and d represents the Reichardt detector sample distance.
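The patent's formulas are published as images, so the sketch below assumes the classic Hassenstein–Reichardt correlator, which matches the text's delayed/undelayed pairing, sample distance d, and paired excitation/suppression outputs:

```python
import numpy as np

def reichardt_on_path(on, on_delayed, d=1):
    """Assumed Hassenstein–Reichardt correlator on the ON path.
    `on` and `on_delayed` are 2-D arrays (the brightness increase signal
    and its delayed copy); `d` is the sample distance in pixels."""
    # Excitation arm: delayed signal at x correlated with the undelayed
    # signal d pixels to the right (preferred direction).
    E = on_delayed[:, :-d] * on[:, d:]
    # Suppression arm: the mirrored pairing (null direction).
    I = on[:, :-d] * on_delayed[:, d:]
    return E, I
```

The OFF path would use the same pairing on the brightness decrease signal and its delayed copy.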
Step S214: the second local summation cell carries out linear summation according to the excitation signal and the inhibition signal ON the ON path and the excitation signal and the inhibition signal ON the OFF path to respectively obtain a second local summation signal of each pixel ON the ON path and a second local summation signal ON the OFF path;
the second partial summation cell obtains the second partial summation signal of each pixel on the ON path and the second partial summation signal on the OFF path according to the following formulas:
wherein the two symbols respectively represent the second partial summation signal on the ON path and the second partial summation signal on the OFF path, and the constants ω3, ω4 denote the suppression coefficients obtained by optimization.
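A sketch of the linear partial summation; the subtractive form S = E − ω·I is an assumption consistent with ω being called a suppression coefficient:

```python
import numpy as np

def partial_sum(E, I, omega):
    """Second local summation cell (sketch): excitation minus weighted
    suppression. The same assumed form serves the ON path (with omega_3)
    and the OFF path (with omega_4)."""
    return E - omega * I
```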
Step S215: the second summation layer respectively sums the second partial summation signals of all the pixels on the ON path and the second partial summation signals on the OFF path to obtain the second summation signal on the ON path and the second summation signal on the OFF path.
Step S216: the second somatic cell generates a translation-direction motion signal according to the second summation signal on the ON path and the second summation signal on the OFF path.
Wherein, at the second somatic cell, the sigmoid function is applied separately to the second summation signal on the ON path and the second summation signal on the OFF path to obtain the corresponding sigmoid membrane potentials, which are then added to obtain the horizontal motion direction response HS(t), i.e. the translation-direction motion signal.
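A sketch of the second somatic cell, assuming the standard logistic sigmoid:

```python
import numpy as np

def dsn_soma(sum_on, sum_off):
    """Sketch of the second somatic cell: each path's second summation
    signal is passed through a sigmoid, and the two membrane potentials
    are added to give the horizontal motion direction response HS(t)."""
    sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))
    return sigmoid(sum_on) + sigmoid(sum_off)
```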
Further, as shown in fig. 3, in step S30, the perception pulse signal is obtained from the depth-direction motion signal and the translation-direction motion signal as follows. In this embodiment there are two lobula giant movement detectors, denoted LGMD-1 and LGMD-2, responsible for perceiving objects approaching in depth; the two are distinguished by different combination coefficients θ1, θ2, θ3 in the first summation layer. The direction-sensitive neuron model is composed of a plurality of direction-sensitive neurons, each denoted DSNs. To realize cooperative perception among the direction-selective neurons, a cooperation/competition mechanism is designed. On the one hand, LGMD-1 and LGMD-2 cooperate to generate a cooperation signal, enabling the perception of objects of different brightness levels; a potential collision threat is finally confirmed only when both LGMD-1 and LGMD-2 are activated. On the other hand, if the DSNs are activated, a translating or obliquely approaching object has been detected, which strongly suppresses LGMD-1 and LGMD-2. Finally, the winning perception pulse signal is transmitted to the subsequent movement decision mechanism to generate the obstacle avoidance response.
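The cooperation/competition rule can be condensed into a small predicate; the boolean interface below is an illustrative simplification of the spiking signals:

```python
def collision_decision(lgmd1_spike, lgmd2_spike, dsn_spike):
    """Sketch of the cooperation/competition mechanism: a collision is
    confirmed only when LGMD-1 AND LGMD-2 both fire, and an active DSN
    (a translating object) strongly suppresses the alarm."""
    if dsn_spike:
        return False  # translational motion inhibits LGMD-1 and LGMD-2
    return lgmd1_spike and lgmd2_spike
```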
The above describes the application of the tuned and optimized lobula giant movement detector and direction-sensitive neuron model; the optimization procedure for both is described below.
First, video data for optimization and tuning must be collected; there are two collection modes: synthesizing simulated video data, and recording an actual obstacle-avoidance scene. In the first mode, a two-dimensional matrix corresponding to the test scene is generated with numpy on the Python platform, with the matrices of consecutive frames produced in a program loop since a video is to be synthesized; the PIL package is then used to convert each two-dimensional matrix into a picture; finally, opencv is used to convert the consecutive pictures into a video. In the second mode, a laboratory obstacle-avoidance scene is set up with obstacles of different shapes and sizes; since the obstacles themselves cannot be made to move, relative motion is used instead: the robot is remotely controlled to approach, move away from, translate left and translate right relative to the obstacles, and the video shot by the robot's onboard monocular camera is collected.
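A sketch of the first collection mode, using only numpy: a dark square growing about the image centre stands in for an approaching obstacle. Saving the frames as pictures with PIL and packing them into a video with opencv, as described, is omitted here:

```python
import numpy as np

def looming_frames(n_frames=30, size=64, start=4, growth=2):
    """Synthesise a looming stimulus as a stack of 2-D luminance matrices:
    a dark square grows about the image centre frame by frame. Frame count,
    image size, and growth rate are illustrative parameters."""
    frames = np.full((n_frames, size, size), 255, dtype=np.uint8)  # white background
    c = size // 2
    for t in range(n_frames):
        half = min(c, start + growth * t)  # square half-width grows each frame
        frames[t, c - half:c + half, c - half:c + half] = 0  # dark obstacle
    return frames
```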
Then, to measure perception accuracy by whether collision and non-collision events are correctly distinguished, a fitness function is defined:
wherein Fcol(i), Fnon(i) represent the number of events in which an obstacle actually moved but was not detected and the number of events in which no obstacle moved but a false detection was made; Ncol(i), Nnon(i) represent the total numbers of events with and without actual obstacle motion; Wcol, Wnon denote the corresponding penalty coefficients, whose values are set according to the severity of the consequences of Fcol(i) and Fnon(i), usually Wcol > Wnon.
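Since the fitness formula itself is given only as an image, the sketch below assumes a weighted error rate built from the quantities defined above (lower is better), with Wcol > Wnon penalising missed collisions more heavily:

```python
def fitness(F_col, F_non, N_col, N_non, W_col=2.0, W_non=1.0):
    """Assumed weighted-error fitness: missed collisions (F_col) and false
    detections (F_non) are normalised by their event totals and weighted
    by penalty coefficients. The weight values are illustrative."""
    return W_col * F_col / N_col + W_non * F_non / N_non
```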
After a single lobula giant movement detector and direction-sensitive neuron model are constructed, m copies of them, hereinafter referred to as individuals, are made, and the model parameters of each individual are randomly initialized, including the first-order low-pass filter delay coefficient used to obtain the delayed signal, the suppression coefficients in the local summation cells, and the excitation thresholds of the LGMDs and DSNs cells.
An evolutionary algorithm is executed on the two kinds of video data respectively, specifically: (1) input a video, run all individuals, and compute their fitness function values; (2) sort the fitness values of all individuals in descending order and select the top n individuals; (3) pair and cross the individuals selected in (2) to generate n/2 next-generation individuals; (4) mutate the individuals generated in (3) according to the mutation probability; (5) compute the fitness values of the newly generated individuals and sort them, together with all individuals, in descending order; (6) select the top m - n/2 individuals to enter the next generation. The above process is repeated until the number of generations reaches a specified value, yielding the winning individual of the final generation of the evolutionary algorithm.
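The loop (1)–(6) can be sketched as follows, assuming larger fitness values are better (matching the descending sort; with an error-style fitness the sort order would be reversed) and using illustrative crossover and mutation operators:

```python
import random

def evolve(population, fitness_fn, n, generations, p_mut=0.1):
    """Sketch of steps (1)-(6); individuals are dicts of model parameters.
    The per-key crossover and multiplicative mutation are illustrative
    stand-ins for the patent's unspecified operators."""
    m = len(population)
    for _ in range(generations):
        ranked = sorted(population, key=fitness_fn, reverse=True)  # (1)-(2)
        parents = ranked[:n]
        offspring = []
        for i in range(0, n - 1, 2):                               # (3): n/2 children
            a, b = parents[i], parents[i + 1]
            child = {k: random.choice((a[k], b[k])) for k in a}
            if random.random() < p_mut:                            # (4): mutation
                k = random.choice(list(child))
                child[k] *= random.uniform(0.9, 1.1)
            offspring.append(child)
        # (5)-(6): re-rank offspring together with all individuals and
        # keep a population of constant size m.
        population = sorted(population + offspring, key=fitness_fn,
                            reverse=True)[:m]
    return population[0]  # winning individual of the final generation
```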
By the above method, the lobula giant movement detector and the direction-sensitive neuron model with optimal parameters are obtained.
Further, as shown in fig. 4, the robot collision sensing system based on bionic insect vision according to the second embodiment includes a shooting unit 10, a motion signal detection unit 20, a signal cooperation unit 30, and a motion decision unit 40. The shooting unit 10 obtains real-time video of the surroundings while the robot is in motion. The motion signal detection unit 20 includes a pre-constructed lobula giant movement detector and a direction-sensitive neuron model, which share the same photoreceptor layer and ON/OFF cells; the lobula giant movement detector obtains a depth-direction motion signal from the real-time video, and the direction-sensitive neuron model obtains a translation-direction motion signal from the real-time video. The signal cooperation unit 30 obtains a perception pulse signal from the depth-direction motion signal and the translation-direction motion signal; the motion decision unit 40 generates an obstacle avoidance response from the perception pulse signal. For the data processing of the lobula giant movement detector, the direction-sensitive neuron model, the signal cooperation unit 30 and the motion decision unit 40, refer to the description of the first embodiment, which is not repeated here.
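The dataflow among the four units can be sketched with placeholder callables; all names here are illustrative stand-ins for the models described above:

```python
def collision_perception_step(frame_t, frame_t1, lgmd, dsn, decide, act):
    """One processing step of the system in fig. 4: shared inputs feed the
    LGMD and DSN models (unit 20), the signal cooperation unit (30) fuses
    their outputs into a perception pulse, and the motion decision unit
    (40) turns the pulse into an avoidance action."""
    depth_signal = lgmd(frame_t, frame_t1)      # depth-direction motion signal
    trans_signal = dsn(frame_t, frame_t1)       # translation-direction motion signal
    pulse = decide(depth_signal, trans_signal)  # perception pulse signal
    return act(pulse)                           # obstacle avoidance response
```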
Although a few embodiments of the present invention have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents, and that such changes and modifications are intended to be within the scope of the invention.
Claims (10)
1. A robot collision sensing method based on bionic insect vision is characterized by comprising the following steps:
acquiring a real-time video of the surrounding environment when the robot moves;
inputting the real-time video into a pre-constructed leaflet giant motion detector and a direction sensitive neuron model respectively to obtain a depth direction motion signal and a translation direction motion signal respectively, wherein the leaflet giant motion detector and the direction sensitive neuron model share the same light sensing layer and ON/OFF cells;
obtaining a perception pulse signal according to the depth direction motion signal and the translation direction motion signal;
and inputting the perception pulse signal to a motion decision mechanism to generate obstacle avoidance response.
2. The method for robot collision sensing based on bionic insect vision of claim 1, wherein the pre-constructed leaflet giant motion detector comprises the light sensing layer, the ON/OFF cell, a first excitation cell, a first inhibition cell, a first local summation cell, a first summation layer and a first somatic cell, and the method of inputting the real-time video into the pre-constructed leaflet giant motion detector to obtain the depth direction motion signal comprises:
the light sensing layer generates a continuous frame brightness difference signal of each pixel according to the input real-time video;
the ON/OFF cell generates a brightness increase signal on an ON path and a brightness decrease signal on an OFF path according to the continuous frame brightness difference signals of the pixels;
the first excitation cell and the first inhibition cell generate an excitation signal and a suppression signal on the ON path and an excitation signal and a suppression signal on the OFF path, respectively, according to the brightness increase signal and the brightness decrease signal;
the first local summation cell linearly sums the excitation signal and the suppression signal on the ON path and the excitation signal and the suppression signal on the OFF path, respectively, to obtain a first local summation signal of each pixel on the ON path and a first local summation signal on the OFF path;
the first summation layer performs a super-linear summation on the first local summation signals on the ON path and the OFF path to obtain a summation signal of each pixel;
the first somatic cell obtains a feedforward excitation signal according to the summation signal of all pixels and a feedforward inhibition signal according to the absolute value of the brightness change of each pixel, and when the feedforward inhibition signal is smaller than a threshold value, the first somatic cell outputs the feedforward excitation signal as the depth direction motion signal.
3. The robot collision sensing method based on bionic insect vision according to claim 2, wherein the first local summation cell obtains the first local summation signal Son(x, y, t) on the ON path and the first local summation signal Soff(x, y, t) on the OFF path according to the following formulas:
wherein the constants ω1, ω2 denote the suppression coefficients obtained by optimization, Eon(x, y, t) represents the excitation signal on the ON path, Ion(x, y, t) represents the suppression signal on the ON path, Eoff(x, y, t) represents the excitation signal on the OFF path, and Ioff(x, y, t) represents the suppression signal on the OFF path.
4. The robot collision perception method based on bionic insect vision according to claim 3, wherein the first summation layer obtains a summation signal S (x, y, t) of each pixel according to the following formula:
S(x,y,t)=θ1Son(x,y,t)+θ2Soff(x,y,t)+θ3Son(x,y,t)Soff(x,y,t)
wherein θ1, θ2, θ3 represent the combination coefficients obtained by optimization.
5. The robot collision sensing method based on bionic insect vision according to claim 3, wherein the method for obtaining the feedforward excitation signal by the first somatic cell according to the summation signal of all pixels is as follows: and converting the summation signal of all pixels into sigmoid membrane potential through a sigmoid function to be used as a feedforward excitation signal.
6. The robot collision perception method based on bionic insect vision according to claim 5, wherein the robot collision perception method further comprises:
the first somatic cell sets the feedforward excitation signal to a minimum value when the feedforward suppression signal is greater than or equal to the threshold.
7. The robot collision sensing method based on bionic insect vision according to claim 1, wherein the pre-constructed direction-sensitive neuron model comprises the light sensing layer, the ON/OFF cells, a second excitation cell, a second inhibition cell, a Reichardt detector, a second local summation cell, a second summation layer and a second somatic cell, and the method of inputting the real-time video into the pre-constructed direction-sensitive neuron model to obtain the translation direction motion signal comprises:
the light sensing layer generates a continuous frame brightness difference signal of each pixel according to the input real-time video;
the ON/OFF cell generates, according to the continuous frame brightness difference signal of each pixel, a brightness increase signal and a delayed brightness increase signal on an ON path, and a brightness decrease signal and a delayed brightness decrease signal on an OFF path;
the second excitation cell, the second inhibition cell and the Reichardt detector jointly obtain an excitation signal and a suppression signal on the ON path according to the brightness increase signal and the delayed brightness increase signal on the ON path, and obtain an excitation signal and a suppression signal on the OFF path according to the brightness decrease signal and the delayed brightness decrease signal on the OFF path;
the second local summation cell performs linear summation according to the excitation signal and the suppression signal on the ON path and the excitation signal and the suppression signal on the OFF path to obtain a second local summation signal of each pixel on the ON path and a second local summation signal on the OFF path, respectively;
the second summation layer respectively sums the second local summation signals of all the pixels on the ON path and the second local summation signals on the OFF path to obtain a second summation signal on the ON path and a second summation signal on the OFF path;
and the second somatic cell generates a translation direction motion signal according to the second summation signal on the ON path and the second summation signal on the OFF path.
8. The robot collision perception method based on bionic insect vision according to claim 7, wherein the second excitation cell, the second inhibition cell and the Reichardt detector calculate the excitation signal and suppression signal on the ON path and the excitation signal and suppression signal on the OFF path according to the following formulas:
wherein the symbols denote, in order: the excitation signal and the suppression signal on the ON path; the brightness increase signal on the ON path and its delayed copy; the excitation signal and the suppression signal on the OFF path; the brightness decrease signal on the OFF path and its delayed copy; n represents the Reichardt detector sample number and d represents the Reichardt detector sample distance.
9. The method for robot collision perception based on bionic insect vision according to claim 8, wherein the second local summation cell obtains the second local summation signal of each pixel on the ON path and the second local summation signal on the OFF path according to the following formulas:
10. A robot collision perception system based on bionic insect vision is characterized by comprising:
the shooting unit is used for acquiring a real-time video of the surrounding environment when the robot moves;
the motion signal detection unit comprises a pre-constructed leaflet giant motion detector and a direction sensitive neuron model, wherein the leaflet giant motion detector and the direction sensitive neuron model share the same photoreceptor layer and ON/OFF cells, the leaflet giant motion detector is used for obtaining a depth direction motion signal according to the real-time video, and the direction sensitive neuron model is used for obtaining a translation direction motion signal according to the real-time video;
the signal cooperation unit is used for obtaining a perception pulse signal according to the depth direction motion signal and the translation direction motion signal;
and the motion decision unit is used for generating obstacle avoidance response according to the perception pulse signal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111539529.7A CN114217621B (en) | 2021-12-15 | 2021-12-15 | Robot collision sensing method and sensing system based on bionic insect vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111539529.7A CN114217621B (en) | 2021-12-15 | 2021-12-15 | Robot collision sensing method and sensing system based on bionic insect vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114217621A true CN114217621A (en) | 2022-03-22 |
CN114217621B CN114217621B (en) | 2023-07-07 |
Family
ID=80702731
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111539529.7A Active CN114217621B (en) | 2021-12-15 | 2021-12-15 | Robot collision sensing method and sensing system based on bionic insect vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114217621B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107704866A (en) * | 2017-06-15 | 2018-02-16 | 清华大学 | Multitask Scene Semantics based on new neural network understand model and its application |
CN109960278A (en) * | 2019-04-09 | 2019-07-02 | 岭南师范学院 | A kind of bionical obstruction-avoiding control system of unmanned plane based on LGMD and method |
CN111816162A (en) * | 2020-07-09 | 2020-10-23 | 腾讯科技(深圳)有限公司 | Voice change information detection method, model training method and related device |
CN112053379A (en) * | 2020-08-21 | 2020-12-08 | 河海大学 | Biovisual nerve sensitivity bionic modeling method |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107704866A (en) * | 2017-06-15 | 2018-02-16 | 清华大学 | Multitask Scene Semantics based on new neural network understand model and its application |
CN109960278A (en) * | 2019-04-09 | 2019-07-02 | 岭南师范学院 | A kind of bionical obstruction-avoiding control system of unmanned plane based on LGMD and method |
CN111816162A (en) * | 2020-07-09 | 2020-10-23 | 腾讯科技(深圳)有限公司 | Voice change information detection method, model training method and related device |
CN112053379A (en) * | 2020-08-21 | 2020-12-08 | 河海大学 | Biovisual nerve sensitivity bionic modeling method |
Non-Patent Citations (2)
Title |
---|
QINBING FU等: "Collision Selective LGMDs Neuron Models Research Benefits from a Vision-based Autonomous Micro Robot" * |
QINBING FU等: "Modeling Direction Selective Visual Neural Network with ON and OFF Pathways for Extracting Motion Cues from Cluttered Background" * |
Also Published As
Publication number | Publication date |
---|---|
CN114217621B (en) | 2023-07-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Chen et al. | Event-based neuromorphic vision for autonomous driving: A paradigm shift for bio-inspired visual sensing and perception | |
CN109344725B (en) | Multi-pedestrian online tracking method based on space-time attention mechanism | |
CN108222749B (en) | Intelligent automatic door control method based on image analysis | |
Gaya et al. | Vision-based obstacle avoidance using deep learning | |
Bagheri et al. | An autonomous robot inspired by insect neurophysiology pursues moving features in natural environments | |
CN111832592B (en) | RGBD significance detection method and related device | |
Milde et al. | Bioinspired event-driven collision avoidance algorithm based on optic flow | |
D'Angelo et al. | Event-based eccentric motion detection exploiting time difference encoding | |
Fu et al. | Bio-inspired collision detector with enhanced selectivity for ground robotic vision system | |
CN114067166A (en) | Apparatus and method for determining physical properties of a physical object | |
CN109960278B (en) | LGMD-based bionic obstacle avoidance control system and method for unmanned aerial vehicle | |
Fu et al. | Performance of a visual fixation model in an autonomous micro robot inspired by drosophila physiology | |
CN110610130A (en) | Multi-sensor information fusion power transmission line robot navigation method and system | |
CN106651921B (en) | Motion detection method and method for avoiding and tracking moving target | |
Moeys et al. | Pred18: Dataset and further experiments with davis event camera in predator-prey robot chasing | |
CN111611869B (en) | End-to-end monocular vision obstacle avoidance method based on serial deep neural network | |
CN114217621B (en) | Robot collision sensing method and sensing system based on bionic insect vision | |
CN115116132B (en) | Human behavior analysis method for depth perception in Internet of things edge service environment | |
CN106815550B (en) | Emergency obstacle avoidance method based on visual fear reaction brain mechanism | |
Maldonado-Ramírez et al. | Ethologically inspired reactive exploration of coral reefs with collision avoidance: Bridging the gap between human and robot spatial understanding of unstructured environments | |
Perez-Cutino et al. | Event-based human intrusion detection in UAS using deep learning | |
Fu et al. | Complementary visual neuronal systems model for collision sensing | |
Rasamuel et al. | Specialized visual sensor coupled to a dynamic neural field for embedded attentional process | |
Zhang et al. | Temperature-based collision detection in extreme low light condition with bio-inspired LGMD neural network | |
Kerr et al. | Biologically inspired intensity and range image feature extraction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||