CN112097769A - Homing pigeon brain-hippocampus-imitated unmanned aerial vehicle simultaneous positioning and mapping navigation system and method - Google Patents

Homing pigeon brain-hippocampus-imitated unmanned aerial vehicle simultaneous positioning and mapping navigation system and method

Info

Publication number
CN112097769A
CN112097769A CN202010776277.9A CN202010776277A
Authority
CN
China
Prior art keywords
experience
image
module
navigation
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010776277.9A
Other languages
Chinese (zh)
Other versions
CN112097769B (en
Inventor
段海滨
辛龙
鲜宁
邓亦敏
魏晨
吴江
周锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202010776277.9A priority Critical patent/CN112097769B/en
Publication of CN112097769A publication Critical patent/CN112097769A/en
Application granted granted Critical
Publication of CN112097769B publication Critical patent/CN112097769B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/32 Structuring or formatting of map data

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a homing pigeon brain-hippocampus-imitated unmanned aerial vehicle (UAV) simultaneous localization and mapping navigation system and method. The system comprises 7 modules: an image acquisition and preprocessing module, a local view cell module, an odometer module, a pose cell module, a head direction cell module, a grid cell module and an experience map module. Imitating the two independent navigation mechanisms of the homing pigeon brain hippocampus, two brain-inspired path-integration algorithms are established on continuous attractor neural network models and combined with the local view cell, pose cell and experience map modules to achieve redundant, robust localization and navigation. This overcomes the lack of redundant navigation in the original RatSLAM algorithm, can assist the UAV in navigation and positioning, and raises the UAV's level of intelligence.

Description

Homing pigeon brain-hippocampus-imitated unmanned aerial vehicle simultaneous positioning and mapping navigation system and method
Technical Field
The invention relates to a homing pigeon brain-hippocampus-imitated unmanned aerial vehicle simultaneous positioning and mapping navigation system and method, and belongs to the field of unmanned aerial vehicle autonomous navigation.
Background
With the advent of the information age, navigation technology plays an increasingly important role in new battlefield environments. A key problem for intelligent robots represented by unmanned aerial vehicles (UAVs) is autonomous navigation, that is, sensing and learning through the vehicle's own sensors in a dynamic, unstructured environment and moving autonomously toward a goal while avoiding obstacles. In December 2007, the United States Department of Defense issued the "Unmanned Aerial Vehicle Roadmap 2007-2032", which defines the autonomous control levels of UAVs and points out that, by combining autonomous navigation technology with artificial intelligence, UAVs should acquire a degree of capability to perceive the environment and make decisions.
At present, UAVs have well-developed navigation modes such as inertial navigation and satellite navigation, but their level of intelligence is still low compared with the navigation abilities of living creatures. Satellite navigation is easily jammed or disrupted, and its signals can be denied or attenuated among urban high-rises, at the foot of mountains, in canyons and in similar settings; inertial navigation errors accumulate rapidly over time, and high-precision inertial sensors are expensive and bulky. Simultaneous localization and mapping (SLAM) is one of the main ways for a UAV to navigate and localize autonomously in unstructured or unknown complex environments, but existing UAV SLAM systems have prominent problems: they depend on expensive, high-precision environment-sensing sensors such as lidar to build accurate models of the world and of the vehicle, they are strongly affected by the environment, and their level of autonomy is low. They therefore cannot fully meet the UAV's navigation requirements, and an autonomous, intelligent navigation mode needs to be developed.
Over long periods of evolution, many animals have developed a remarkable ability to perceive and use environmental information for navigation. Unlike humans, who depend on high-tech, high-precision equipment, organisms in nature achieve accurate navigation using only environmental information and their own physiological structures, supporting behaviours such as foraging, homing and long-distance migration. The nematode, with only 302 neurons, can still find food by olfactory signals, following odour concentration. Animals with slightly more complex nervous systems, such as bees, already possess more sophisticated path-planning strategies. Birds and mammals rely on still more powerful and complex neural networks to survive in complex environments. Studying how higher animals abstract complex environmental information into cognitive information in the brain to form an understanding of the environment, and thereby reproducing the intelligence higher animals display in complex environments, is the key to solving the problem of autonomous intelligent navigation.
Animals can localize and navigate accurately and rapidly without high-precision sense organs or high-resolution maps. This highly intelligent navigation ability depends on a cognitive map, generated by the hippocampal structure of the brain, that represents the spatial relationships of objects in the environment; the formation of the cognitive map is closely related to the spatial cells present in the hippocampal region. The firing activity of hippocampal spatial cells is thought to build an internal, map-like representation of the environment, the so-called cognitive map. With the successive discovery of several key navigation cells related to cognitive navigation, such as place cells, head direction cells, grid cells and boundary cells, in the hippocampal region of mammals such as rats, it has become possible to understand the brain's navigation mechanism in depth. Inspired by these mechanisms, brain-like navigation techniques based on artificial intelligence have developed rapidly in recent years, and navigation methods that imitate the rodent brain's navigation processes, represented by RatSLAM, have been proposed. At present, robot navigation research based on the rodent hippocampal mechanism of environmental cognition has two main strands: one builds neural network models for autonomous navigation of mobile robots based on hippocampal anatomy and the cognitive mechanisms of spatial cells, yielding mobile robots that imitate the rat brain's nervous system; the other is real-time robot simultaneous localization and mapping (SLAM) based on hippocampal neuro-behaviour. The former emphasizes simulating animal navigation behaviour at the behavioural level to realize cognitive navigation of the robot; the latter focuses more on simulating the hippocampal structure, strictly imitating the information transfer among the regions of the hippocampus and reproducing the effects of the various spatial cells. The former is relatively more practical and better suited to existing navigation applications, and projects such as RatSLAM already exist.
Compared with rodents, the navigation system of the homing pigeon has advantages and is better suited to the demands of intelligent UAV navigation scenarios. Research shows that the homing pigeon's hippocampal structure plays a key role in learning landmark navigation in natural environments, and its navigation behaviour exhibits two distinct, independent mechanisms: first, memorized visual landmarks directly provide homing guidance, a mechanism called pilotage, which achieves autonomous localization and navigation relying on vision alone; second, the pigeon can remember the compass direction associated with familiar landmarks and use it to return home, forming what is called a mosaic map. The homing pigeon hippocampal structure therefore combines compass orientation with landmark guidance into a dual control system that operates simultaneously or switches between the two, and homing pigeon navigation in familiar areas is a combined strategy using both compass and landmark information. Directly adopting rodent-inspired RatSLAM or other mammal-based bionic SLAM methods to simulate the homing pigeon's hippocampal navigation mechanism lacks a sufficient physiological basis and cannot accurately reflect the physiological phenomena and cognitive functions of the pigeon hippocampal structure during navigation. At the same time, the pigeon's intelligent navigation and homing mechanism matches the technical requirements of vision-based autonomous intelligent UAV navigation very well; if the intelligence the pigeon displays during navigation and homing can be applied to autonomous UAV navigation, it undoubtedly has broad application prospects.
In view of the shortcomings of existing brain-like neural network navigation systems, and inspired by the navigation mechanism of the pigeon brain hippocampus, the invention provides a homing pigeon brain-hippocampus-imitated UAV SLAM navigation system and method. The proposed SLAM navigation method simulates, from a bionic perspective, the navigation mechanism of the homing pigeon's hippocampal neural network, integrates two independent path-integration techniques based on attractor neural networks, and, together with local view correction, realizes redundant navigation mapping and localization, effectively improving the autonomy of the UAV.
Disclosure of Invention
The invention aims to provide a homing pigeon brain-hippocampus-imitated UAV simultaneous localization and mapping navigation system and method. The aim is to simulate the cognitive navigation mechanism of the homing pigeon brain hippocampus and to build a UAV navigation system that imitates it; specifically, to reproduce the two localization and navigation mechanisms of the pigeon hippocampus by establishing two continuous attractor neural network models that implement two path-integration algorithms, and, in cooperation with local view correction and topological map building, to achieve robust and redundant navigation and to raise the intelligence level of the UAV.
The invention relates to a homing pigeon brain-hippocampus-imitated UAV simultaneous localization and mapping navigation system, hereinafter referred to as the SLAM navigation system. The system framework is shown in Fig. 1 and is as follows:
the SLAM navigation system consists of 7 modules: 1) an image acquisition and preprocessing module, 2) a local view cell (Local view cells) module, 3) an odometer module, 4) a pose cell module, 5) a head direction cell module, 6) a grid cell module and 7) an experience map module. According to function, the system can be divided into two independent brain-like navigation subsystems, which differ in the attractor network model used to fuse odometry information with landmark perception into a consistent representation of the environment. The image acquisition and preprocessing module, the local view cell module, the odometer module, the pose cell module and the experience map module together form the first homing pigeon brain-hippocampus cognitive navigation subsystem; the image acquisition and preprocessing module, the local view cell module, the pose cell module, the head direction cell module, the grid cell module and the experience map module together form the second homing pigeon brain-hippocampus cognitive navigation subsystem.
1) The image acquisition and preprocessing module covers image acquisition and image preprocessing. For image acquisition, a visual sensor acquires a real-time image; image preprocessing then converts the colour image to grey-scale, performs image enhancement and partitions the image, and the partitioned image is used by the visual odometer module and the local view module.
2) The local view cell (LV) module is an expandable array of cells, each LV representing a distinct visual scene in the environment. The LV imitates the snapshot of the current environment seen by the animal: when a new visual scene is seen, a new LV is created and associated with the raw pixel data of the scene and with the pose cells (PC); when a known region (a loop) is encountered, it injects activation. The LV module is shown in Fig. 1, and the scan line intensity profiles extracted as features from different images are shown in Fig. 2a, b, c.
3) The odometer module comprises two separate odometers. One is a visual odometer based on the scan line intensity profile (Scanline intensity profile), which roughly determines the motion of the camera by comparing differences between successive images, as shown in Fig. 3. The other is a feature-point-based visual odometer, which provides motion information for the head direction cell module and the grid cell module and is comparatively accurate; its flow is shown in the lower left corner of Fig. 1.
4) The pose cell (PC) module is implemented as a three-dimensional (3D) competitive attractor neural network whose cells converge to a stable activation pattern through local excitation and global inhibition. Competitive attractor neural networks can take many configurations, but typically each unit excites cells close to itself and inhibits cells further away, so that a single cluster of activity eventually dominates, called an activity packet. Activity near the packet tends to be reinforced, while activity injected far from it is suppressed. If enough activity is injected, a new packet may "win" while the old packet disappears. See the PC module part of Fig. 1 and the PC firing and spatial distribution diagrams of Fig. 4a, b.
5) The head direction cell (HD) module. An HD cell fires maximally when the animal faces a specific direction; the module imitates this preferred-direction mechanism. Each cell has a different preferred direction, the joint response of the HD cells encodes the horizontal heading by dynamic estimation, and integrating the information of all HD cells yields a continuous head orientation signal. The HD cells require an external motion signal; in the traditional RatSLAM the core motion information comes from a coarse visual odometer and is strongly affected by the external environment, giving low reliability and poor environmental adaptability. Here the direction is estimated with a feature-point-based visual odometry motion estimation algorithm; the HD firing characteristics and the 4 preferred orientations are shown in Fig. 5.
6) The grid cell (GC) module. GCs show a clear, regular firing response to the animal's position in two-dimensional space and are considered the neural basis of dead reckoning. To simulate the neural response of the GC, a 2D continuous attractor neural network is used: a velocity input drives the network pattern to flow, the flow represents the displacement of the animal, and dead reckoning can thus be realized from motion information. If the UAV's motion is estimated from the vision system, its velocity vector and heading can be computed and used to drive a periodic network to generate a flowing lattice pattern, so that the GC path-integration mechanism is simulated from a biological angle. Fig. 6 shows two consecutive lattice patterns of the GC, with the active neurons marked by red circles, from which it can be seen that the lattice pattern flows to the left. The GC model represents a periodic grid field by path-integrating a velocity-modulated input signal; in other words, the module simulates GC firing to realize odometry, so it receives the UAV's velocity information. The GC receives the speed estimated by the feature-point-based visual odometry algorithm and, at the same time, the direction input of the HD; together they drive the GC to form a motion flow.
7) The experience map (EP) module imitates the cognitive map in the pigeon brain: it organizes the information streams from the PC, the LV and self-motion estimation into a set of related spatial experiences and builds a topological map. A single experience in the EP is defined by the conjunction of an activity pattern in the PC and an activity pattern in the LV. In the EP, each experience has an associated position and orientation. Creating a new node also creates connections to previously active nodes; these connections encapsulate the pose transitions between nodes based on the odometry information. When the UAV returns to a known area, the LV matches a previous experience, and map correction is performed by aligning the experience positions and orientations stored in the EP with the transition information for the current position, as shown in the EP module of Fig. 1.
An unmanned aerial vehicle simultaneous positioning and mapping navigation method imitating homing pigeon brain-hippocampus comprises the following implementation steps:
the method comprises the following steps: and establishing a 3D (position cells, PC) continuous attractor neural network dynamic model and initializing.
The PC network is a collection of 3D continuous attractor network (CAN) cells connected by excitatory and inhibitory connections, analogous to the navigation neurons found in mammals, i.e., grid cells. The PC is an abstract cell that updates its activity by shifting the activity packet, encoding the location within the large-scale environment of the UAV navigation task. The PC network dynamics make the stable state a single cluster of activated units, called an activity packet. The centroid of the activity packet is the network's best internal estimate of the UAV's current pose. The dynamic behaviour is realized by locally excitatory and globally inhibitory connectivity, and the excitatory weight matrix ε_{a,b,c} is described as follows:
ε_{a,b,c} = exp(-(a² + b²)/k_p) · exp(-c²/k_d)   (1)
where k_p and k_d are the position and direction variance constants, and a, b, c are the distances of the activated unit from the origin in the x', y' and θ' joint coordinate system. The inhibitory weight matrix takes the same form, except that the variance constants and the inhibition range differ.
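To make the construction of the excitatory kernel concrete, the following is a minimal numpy sketch of equation (1); the kernel size, the default variance constants and the normalization are illustrative assumptions, and the wrap-around indexing of the attractor network is omitted.

```python
import numpy as np

def excitatory_weights(dim_xy=7, dim_th=7, k_p=1.0, k_d=1.0):
    """Excitatory weight kernel eps[a, b, c] of equation (1).

    a and b are offsets from the kernel centre in the (x', y') plane and c is
    the offset along theta'; k_p and k_d are the position and direction
    variance constants.
    """
    a = np.arange(dim_xy) - dim_xy // 2
    c = np.arange(dim_th) - dim_th // 2
    A, B, C = np.meshgrid(a, a, c, indexing="ij")
    eps = np.exp(-(A**2 + B**2) / k_p) * np.exp(-(C**2) / k_d)
    return eps / eps.sum()          # normalisation is an added convenience

kernel = excitatory_weights()       # 7 x 7 x 7 kernel
```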
Step two: establishing a 2D Grid Cell (GC) continuous attractor neural network dynamic model and initializing.
The GC shows a pronounced regular discharge response to the position of the animal in 2D space and if coupled with a velocity input, can drive the continuous attractor network model of the GC to propagate its network state along the manifold, performing an accurate path integration, thereby encoding the position of the animal in space. The invention adopts the following continuous attractor model to model the dynamic characteristics of Grid cells
τ·ds_i/dt + s_i = f( Σ_j W_ij·s_j + B_i )   (2)
where f is a nonlinear function, s_i is the synaptic activation of neuron i, W_ij is the synaptic weight from neuron j to neuron i, τ is the time constant of the neural response, and B_i is the feed-forward input to neuron i.
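As an illustration of how equation (2) can be stepped numerically, the sketch below uses simple Euler integration; the rectifying nonlinearity f(x) = max(x, 0) and the step size are assumptions, since the text only states that f is nonlinear.

```python
import numpy as np

def step_grid_cells(s, W, B, tau=0.5, dt=0.1):
    """One Euler step of  tau * ds/dt + s = f(W @ s + B)  (equation (2)).

    s -- synaptic activations of all grid cells (1D array)
    W -- recurrent weights, W[i, j] from neuron j to neuron i
    B -- feed-forward (velocity-modulated) input to each neuron
    """
    f = lambda x: np.maximum(x, 0.0)            # assumed rectifying nonlinearity
    return s + dt * (-s + f(W @ s + B)) / tau
```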
Step three: image acquisition and image pre-processing.
And acquiring a real-time image through a camera, and carrying out gray processing on the image. Then, an image enhancement algorithm based on the Laplace operator is adopted to enhance the image contrast. The laplacian is defined as:
∇²f = ∂²f/∂x² + ∂²f/∂y²   (3)
the laplacian operator can enhance the image, and the obtained sharpening formula is as follows:
g(x, y) = f(x, y) + c·∇²f(x, y)   (4)
where g is the output, f is the original image, and c is the coefficient.
Finally, the image is cropped into 3 different regions A, B, C, shown in FIG. 3, for computing odometer rotation, translation, and image matching, respectively.
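A possible realization of this preprocessing step is sketched below with OpenCV; the sign of the coefficient c and the equal three-way split into regions A, B and C are assumptions for illustration, the patent fixes the regions as in Fig. 3.

```python
import cv2
import numpy as np

def preprocess(frame, c=-1.0):
    """Grey-scale conversion, Laplacian sharpening (eq. (4)) and cropping into
    three regions A, B, C used for odometer rotation, translation and image
    matching; the equal horizontal split is illustrative only."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
    lap = cv2.Laplacian(grey, cv2.CV_64F)                       # Laplacian of eq. (3)
    # c = -1 sharpens with OpenCV's negative-centre Laplacian kernel;
    # the patent leaves c as a generic coefficient.
    sharp = np.clip(grey + c * lap, 0, 255).astype(np.uint8)    # eq. (4)

    h = sharp.shape[0]
    region_a = sharp[: h // 3, :]
    region_b = sharp[h // 3 : 2 * h // 3, :]
    region_c = sharp[2 * h // 3 :, :]
    return region_a, region_b, region_c
```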
Step four: visual odometry motion estimation based on scan line intensity profile (Scanline intensity profile).
The odometer is implemented from the scan line intensity profile, obtained by summing the intensity values of the pixel columns of the image and normalizing the resulting row vector to form a one-dimensional vector, the profile; Fig. 2 shows the profiles obtained for different images. The profile is used by the odometer to estimate the rotation and forward speed between images, and to compare the current image with previously seen images for local view calibration.
The rotational speed is then estimated by determining the relative horizontal offset of the two successive scan line profiles such that the sum of the absolute differences between the two profiles is minimized, and the translational speed is estimated by multiplying the minimum difference by a scaling factor and limited to a maximum value to prevent spurious results due to large variations in illumination. The comparison of the current image with the previously seen image is performed by calculating the average absolute difference between the two scan line intensity profiles, as in equation (5):
f(s, I_j, I_k) = (1/(w - |s|)) · Σ_{n=1}^{w-|s|} | I_j,(n+max(s,0)) - I_k,(n-min(s,0)) |   (5)
where I_j and I_k are the scan line intensity profiles to be compared, s is the profile offset, and w is the image width. The relative motion between consecutive images I_j and I_k corresponds to the offset s_m at which f(s) attains its minimum:
s_m = argmin_{s ∈ (ρ-w, w-ρ)} f(s, I_j, I_k)   (6)
where the offset ρ ensures sufficient overlap between the profiles. The rotation angle and the translational speed are obtained from s_m and f(s_m), respectively, as:
Δθ = σ·s_m   (7)
v = min[ v_cal·f(s_m, I_j, I_{j-1}), v_max ]   (8)
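The following sketch illustrates equations (5)-(8): a column-sum profile, the shifted mean absolute difference, and the rotation/translation estimates; the parameter values σ, ρ, v_cal and v_max are placeholders, not values from the patent.

```python
import numpy as np

def scanline_profile(img):
    """Sum each pixel column and normalise, giving the 1D intensity profile."""
    prof = img.sum(axis=0).astype(np.float64)
    return prof / prof.sum()

def profile_difference(I_j, I_k, s):
    """Mean absolute difference between two profiles at offset s (eq. (5))."""
    w = len(I_j)
    if s >= 0:
        d = np.abs(I_j[s:] - I_k[: w - s])
    else:
        d = np.abs(I_j[: w + s] - I_k[-s:])
    return d.mean()

def estimate_motion(I_j, I_k, rho=10, sigma=0.5, v_cal=100.0, v_max=5.0):
    """Rotation and translation between consecutive images (eqs. (6)-(8))."""
    w = len(I_j)
    offsets = list(range(rho - w + 1, w - rho))       # s in (rho - w, w - rho)
    diffs = [profile_difference(I_j, I_k, s) for s in offsets]
    i_min = int(np.argmin(diffs))
    s_m = offsets[i_min]
    d_theta = sigma * s_m                             # eq. (7)
    v = min(v_cal * diffs[i_min], v_max)              # eq. (8)
    return d_theta, v
```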
step five: the Head is oriented towards the cell (HD) acquisition direction.
This step is synchronized with step four and external self-movement information can be obtained by speed and direction sensors. HD is a neuron associated with the animal's perception of directional heading relative to the environment, with each cell having only one direction of maximum firing, which can be fired anywhere as long as the animal's head is facing the appropriate direction. Unlike a compass, HD does not rely on the earth's magnetic field, but on landmarks and self-movement signals. HD discharges whether the animal is moving or stationary and is a sustained action independent of the animal. HD is considered to represent the neural substrate of the directional heading perceived by the organism in the environment, enabling precise navigation.
HD is used to generate a speed adjustment signal, i.e. the discharge rate is proportional to the current head orientation and the speed of the animal's movement. The preferential direction of the ith HD can be expressed as a relative to the dominant direction theta0Angular offset of thetaiThen, given the adjustment kernel for HD:
[Equation (9), the HD tuning kernel, appears only as an image in the original document.]
and instantaneous angular velocity w (t) of the animal, then the head orientation signal is:
[Equation (10), the combined head orientation signal, appears only as an image in the original document.]
where m is the number of HD cells and d_i(t) is the head orientation signal at time t of the i-th cell, whose preferred direction is θ_i.
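Since the tuning kernel of equation (9) and the combination rule of equation (10) appear only as images here, the sketch below assumes a simple rectified-cosine tuning and a population-vector readout to illustrate how a continuous head orientation signal can be decoded from a small set of preferred directions.

```python
import numpy as np

def hd_population_response(heading_deg, preferred=(0.0, 90.0, 180.0, 270.0)):
    """Illustrative HD cell responses; each cell fires maximally when the
    heading equals its preferred direction (assumed rectified-cosine tuning)."""
    theta = np.deg2rad(heading_deg)
    prefs = np.deg2rad(np.asarray(preferred))
    return np.maximum(np.cos(theta - prefs), 0.0)

def decode_heading(rates, preferred=(0.0, 90.0, 180.0, 270.0)):
    """Population-vector estimate of the continuous head orientation signal."""
    prefs = np.deg2rad(np.asarray(preferred))
    x = float((rates * np.cos(prefs)).sum())
    y = float((rates * np.sin(prefs)).sum())
    return np.rad2deg(np.arctan2(y, x)) % 360.0
```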
Step six: path integration based on Grid Cells (GC) flow.
And receiving the speed and direction information of the step five, updating the activation response of the cells according to the GC neural network dynamic model of the step two, and calculating the relative motion information.
The GC activation pattern is driven by the velocity input, and the motion displacement can be computed from the flow of activated cells, which means dead reckoning can be achieved from the motion information. For the moving UAV, the velocity vector v and the heading θ can be computed from the motion information estimated in step five, such as the translational and angular velocities, and are then used to drive the periodic network to generate a flowing lattice pattern. On the basis of this flow-based estimation of the UAV position, a visual odometry measurement based on GC flow is realized. Fig. 6 shows two typical consecutive lattice patterns, with two groups of typical active neurons marked by red circles, from which the lattice pattern flow and its direction can be seen. The weight matrix W_ij in equation (2) is:
W_ij = W_0( x_i - x_j - l·ê_{θ_j} )   (11)
where
W_0(x) = a·exp(-γ|x|²) - exp(-β|x|²)   (12)
Where l, a, γ, β are the relevant simulation parameters. The feed forward inputs are:
B_i = A(x_i)·( 1 + α·ê_{θ_i}·v )   (13)
where ê_{θ_j} is the unit vector pointing in the direction θ_j, v is the velocity vector, and A is an envelope function that adjusts the strength of the neuron input. If l and α are nonzero, the velocity couples with the network dynamics and drives the formation of the pattern flow.
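A sketch of the grid-cell sheet of equations (11)-(13) is given below; the E/N/W/S preferred-direction assignment per 2x2 block follows the embodiment described later, the velocity gain alpha is an assumed value, and the periodic (toroidal) wrap-around of the sheet is omitted for brevity.

```python
import numpy as np

def grid_cell_network(n=128, l=2.0, a=1.0, lambda_net=13.0, alpha=0.1):
    """Recurrent weights and feed-forward drive of the grid-cell sheet
    (eqs. (11)-(13)); returns a row-of-W builder and the B_i function."""
    beta = 3.0 / lambda_net**2
    gamma = 1.05 * beta

    xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    pos = np.stack([xs, ys], axis=-1).reshape(-1, 2).astype(np.float64)
    dirs = np.deg2rad(np.array([0.0, 90.0, 180.0, 270.0]))    # E, N, W, S
    pref = dirs[(xs % 2 + 2 * (ys % 2)).reshape(-1)]          # one per 2x2 block
    e_pref = np.stack([np.cos(pref), np.sin(pref)], axis=-1)

    def w0(d2):
        return a * np.exp(-gamma * d2) - np.exp(-beta * d2)    # eq. (12), d2 = |x|^2

    def weight_row(i):
        """Row i of W: W[i, j] = W0(x_i - x_j - l * e_{theta_j})  (eq. (11))."""
        diff = pos[i] - pos - l * e_pref
        return w0((diff**2).sum(axis=-1))

    def feedforward(v):
        """B_i = A(x_i) * (1 + alpha * e_{theta_i} . v) with A(x) = 1  (eq. (13))."""
        return 1.0 + alpha * (e_pref @ np.asarray(v, dtype=np.float64))

    return weight_row, feedforward
```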
Step seven: local view cells (LV) generate Local view templates.
The LV extracts image information and generates a view template. This step uses region A in Fig. 3 as a sub-image, computes the scan line intensity profile of the sub-image, and generates a one-dimensional vector, i.e., an image template. At the same time, the activity information is passed to the pose cells, a connection is created, and step eight is executed.
The LV calculation uses the mean absolute intensity difference function in equation (5) to compare the current contour (i.e., the scan line intensity contour of the a region) with the template. The comparison is made over a small range of pixel offsets psi, as in equation (14)
d_j = min_{ψ ∈ [ψ_min, ψ_max]} f(ψ, I, V_j)   (14)
where V_j are the stored profiles and I is the current profile. The resulting difference is compared with a threshold to determine whether the current view matches an existing scene or is a new scene.
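A sketch of the view-template matching in equation (14) is given below; it reuses the profile_difference helper from the step-four sketch, and the offset range ψ and the novelty threshold are illustrative assumptions.

```python
import numpy as np

def match_view_template(current, templates, psi_range=10, threshold=0.04):
    """Compare the current region-A profile with all stored view templates over
    a small range of pixel offsets psi (eq. (14)); returns the index of the best
    match, or None when the smallest difference exceeds the novelty threshold."""
    best_idx, best_diff = None, np.inf
    for j, tmpl in enumerate(templates):
        d = min(profile_difference(current, tmpl, s)     # helper from step four
                for s in range(-psi_range, psi_range + 1))
        if d < best_diff:
            best_idx, best_diff = j, d
    if best_diff > threshold:
        return None, best_diff      # new scene: create a new local view cell
    return best_idx, best_diff
```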
Step eight: pose Cells (PC) activation and update.
The PC represents the three-degree-of-freedom pose of the UAV using the dynamic model established in step one, and is updated and calibrated by the attractor dynamics, the path-integration information and the LV information. The PCs are arranged in three dimensions: two dimensions (x', y') represent the manifold of the absolute position (x, y), and one dimension θ' represents the manifold of the absolute head orientation θ. The motion estimate of step four drives path integration within the pose cells, the local view template of step seven triggers, through associative learning, the LV associated with the PC, and the activation of the PC is updated according to the dynamic model of step one. The change in PC activity due to local excitation is given by:
ΔP_{x',y',θ'} = Σ_{i=0}^{n_x'-1} Σ_{j=0}^{n_y'-1} Σ_{k=0}^{n_θ'-1} P_{i,j,k} · ε_{a,b,c}   (15)
where n_x', n_y', n_θ' are the dimensions of the PC matrix in the (x', y', θ') space.
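The local excitatory update of equation (15) can be sketched as a wrap-around 3D convolution of the pose-cell activity with the kernel of equation (1); the use of np.roll for the wrap-around is an implementation choice.

```python
import numpy as np

def pose_cell_excitation(P, kernel):
    """Local excitatory update of the pose-cell network (eq. (15)).

    P      -- current 3D pose-cell activity, shape (n_x', n_y', n_theta')
    kernel -- excitatory weight kernel eps[a, b, c] from equation (1)
    """
    ka, kb, kc = kernel.shape
    dP = np.zeros_like(P)
    for a in range(ka):
        for b in range(kb):
            for c in range(kc):
                shift = (a - ka // 2, b - kb // 2, c - kc // 2)
                dP += kernel[a, b, c] * np.roll(P, shift, axis=(0, 1, 2))
    return dP
```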
Step nine: compute the relative motion and create the experience map (EP) in preparation for topological connection.
The EP is a fine-grained topological graph composed of many individual experiences e connected by transitions; its function is to organize the information streams from the PC, the LV and self-motion into a set of related spatial experiences. A single experience in the EP is derived from the activity pattern P_i in the PC and the activity pattern V_i in the LV, and is located at position p_i. A triple can therefore be defined:
e_i = {P_i, V_i, p_i}   (16)
the first experience is created at an arbitrary starting point,subsequent experience is established on the basis of the first experience. When P is presentiOr ViWill form a new experience and link to the previous experience by a conversion that encodes the distance found from the self-movement cues. As the drone enters a new area, new experiences and transitions will continue to develop.
The self-motion information of steps four and six and the local view template of step seven are obtained, and the relative motion is computed and accumulated. Whether the current experience is a new scene or an existing one is then judged by comparing the current P_i and V_i with all existing experiences through the scoring criterion S of equation (17):
S = μ_p|P_i - P| + μ_v|V_i - V|   (17)
where μ_p and μ_v are the weights of the pose code and the local view code. If the experience is new, execute step ten; otherwise, execute step eleven.
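A minimal sketch of the scoring and decision of equation (17) follows; the pose-cell and local-view codes are treated as scalar indices, and the weights and threshold are illustrative.

```python
def experience_score(P_i, V_i, P, V, mu_p=0.5, mu_v=0.5):
    """Matching score S of equation (17), with the codes treated as scalars."""
    return mu_p * abs(P_i - P) + mu_v * abs(V_i - V)

def is_new_experience(P_i, V_i, experiences, s_max=1.0):
    """True when no stored experience scores below the (illustrative) threshold."""
    return all(experience_score(P_i, V_i, e["P"], e["V"]) > s_max
               for e in experiences)
```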
Step ten: create new Experience, update Experience map (EP).
When the score of equation (17) exceeds a maximum threshold the scene is deemed new, and if the accumulated relative-motion increment is also greater than its threshold, a new experience is created; the maximum threshold is set empirically. A pose transition is formed from the self-motion information; the pose change measured by the odometer is t_ij:
t_ij = {Δp_ij}   (18)
Δp_ij is the pose change obtained from the odometer, and t_ij establishes the link between the previous experience e_i and the new experience e_j. The new experience is:
e_j = {P_j, V_j, p_i + Δp_ij}   (19)
this links to previous experience through the conversion of the code, forming a new map.
Step eleven: perform local view calibration, judge loop closure, filter false information and perform map correction. If the local view is judged to be a previously encountered scene, the active LV injects activity into the PCs associated with that scene through the learned connections, and the attractor dynamics of the network filter the injected LV activity. If the LV activity is injected into an inactive region of the PC network, a conflict arises, the global inhibitory connections tend to suppress the new packet, and the method returns to step ten; otherwise the PC and LV activity locations agree, and map correction is performed by aligning the active position and direction with the transition information stored in the EP.
Since the odometer motion estimation based on image matching inevitably has accumulated errors, when a loop occurs, the accumulated change in position of the transform results in the experience at the loop being less likely to match to the previous same position. Therefore, to achieve matching, all experienced locations need to be updated, i.e. map revisions:
Δp_i = α_0·[ Σ_{j=1}^{N_f} ( p_j - p_i - Δp_ij ) + Σ_{k=1}^{N_t} ( p_k - p_i + Δp_ki ) ]   (20)
where α_0 is the correction rate constant, N_f is the number of transitions from experience e_i to other experiences, and N_t is the number of transitions from other experiences to experience e_i.
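A sketch of the relaxation-based map correction of equation (20) follows; the link representation and the number of relaxation iterations are illustrative choices, with α_0 = 0.5 as in the embodiment described later.

```python
import numpy as np

def relax_experience_map(positions, links, alpha0=0.5, iterations=1):
    """Relaxation-based map correction of equation (20).

    positions -- list of np.array([x, y, theta]), one per experience
    links     -- list of (i, j, delta_p) transitions from e_i to e_j
    """
    for _ in range(iterations):
        corr = [np.zeros(3) for _ in positions]
        for i, j, delta_p in links:
            err = positions[j] - positions[i] - delta_p
            corr[i] += err          # outgoing link pulls e_i toward consistency
            corr[j] -= err          # incoming link pulls e_j the other way
        for idx in range(len(positions)):
            positions[idx] = positions[idx] + alpha0 * corr[idx]
    return positions
```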
Step twelve: repeat steps three to eleven, outputting in real time two independent visual-odometry pose estimates, the pose cell activation map and the experience map, thereby realizing simultaneous localization and navigation.
The advantages and effects of the homing pigeon brain-hippocampus-imitated UAV simultaneous localization and mapping navigation system and method of the invention are as follows: imitating the two independent navigation mechanisms of the pigeon brain hippocampus, two brain-inspired path-integration algorithms are established on continuous attractor neural network models and combined with the local view cell, pose cell and experience map modules to realize redundant, robust localization and navigation. This overcomes the lack of redundant navigation in the original RatSLAM algorithm, can assist the UAV in navigation and positioning, and raises the UAV's level of intelligence.
Drawings
FIG. 1 shows the structure of the SLAM navigation system of an unmanned aerial vehicle imitating homing pigeon brain-hippocampus
FIG. 2a, b, c schematic diagrams of local view template generation
FIG. 3 image preprocessing and partitioning effect diagram
FIG. 4a, b 3D pose cell activation and spatial distribution effect diagrams
FIG. 5a, b schematic diagrams of head direction cell firing and preferred directions
FIG. 6a, b 2D grid cell flow of the periodic network driven by motion information
FIG. 7 flow chart of the homing pigeon brain-hippocampus-imitated UAV SLAM navigation system
FIG. 8a, b, c, d visual odometry motion estimation, pose cell activation map and experience map
The reference numbers and symbols in the figures are as follows:
LV1, LV2, LV3: local view templates
Visual Odometry 1: first visual odometer module
Visual Odometry 2: second visual odometer module
Bundle Adjustment: bundle adjustment method
x', y', θ': three-dimensional coordinate system, with x', y' representing displacement and θ' representing direction
E, N, W, S: the head is oriented towards the direction of cell preference, corresponding to (0 °,90 °,180 °,270 °)
Detailed Description
Referring to Figs. 1 to 8, the effectiveness of the proposed system and method is verified with a concrete example of the pigeon brain-hippocampus-imitated UAV SLAM navigation method. The data set used is the KITTI data set, created jointly by the Karlsruhe Institute of Technology in Germany and the Toyota Technological Institute at Chicago. The implementation flow of the homing pigeon brain-hippocampus-imitated UAV SLAM navigation system and method is shown in Fig. 7, with the following specific steps:
the method comprises the following steps: and establishing a 3D (position cells, PC) continuous attractor neural network dynamic model and initializing.
The dynamic behavior of PC network dynamics is realized by local excitability and global inhibitory connectivity, and is described by formula (1). The variance constant of the activated position and direction is 1, the size of the activated range is 7, namely the activation parameters are as follows:
[The excitatory connection parameter values (position and direction variance constants 1, excitation range 7) appear only as an image in the original document.]
similarly, the values of the parameters for inhibiting the connection are as follows:
[The inhibitory connection parameter values appear only as an image in the original document.]
step two: establishing a 2D Grid Cell (GC) continuous attractor neural network dynamic model and initializing.
The GC shows a pronounced, regular firing response to the position of the animal in 2D space and, if coupled with a velocity input, the continuous attractor network model of the GC can be driven to propagate its network state along the manifold, performing accurate path integration and thereby encoding the position of the animal in space. The invention adopts the continuous attractor model of equation (2) to model the grid cell dynamics, where f is a nonlinear function.
[The form of the nonlinear function f appears only as an image in the original document.]
τ is the time constant of the neural response and takes the value 0.5 ms in the simulation.
Step three: image acquisition and image pre-processing.
And acquiring a real-time image through a camera, and carrying out gray processing on the image. Then, an image enhancement algorithm based on the Laplace operator is adopted to enhance the image contrast. The image can be enhanced by using the Laplace operator, and the obtained sharpening formula is shown as a formula (4). Finally, the image is cropped into 3 different regions A, B, C, shown in FIG. 3, for computing odometer rotation, translation, and image matching, respectively.
Step four: visual odometry motion estimation based on scan line intensity profile (Scanline intensity profile).
Obtaining a minimum value of the difference and a corresponding displacement by minimizing the average absolute difference between the scan line intensity profiles of successive images according to equations (5) - (6); the angle and velocity variations are then solved according to equations (7) - (8).
Step five: head direction cells (HD) acquire the direction.
Continuous images are acquired, the pose estimate is obtained with the feature-point-based visual odometry estimation algorithm, and the direction and speed information is then solved and passed to the HD and GC, respectively. The invention assumes that the whole neural network has N = 128² neurons, and each neuron i has a preferred direction among (0°, 90°, 180°, 270°), as shown in Fig. 5a, b, which determines the direction in which its output weights are shifted and the velocity input it receives. The preferred direction θ_i is assigned so that each 2×2 block contains one neuron for each preferred direction, as in equation (9). The HD head orientation signal is then solved by substituting the direction information obtained from the odometer into equation (10).
Step six: path integration based on Grid cell (Grid cells) flow.
For the moving UAV, the velocity vector v and the heading θ can be computed from the motion information estimated in step five, such as the translational and angular velocities. The speed and direction information of step five is received, the weight matrix is computed according to step two, and the cell activation responses are updated using equations (11)-(13). These are then used to drive the periodic network to generate the flowing lattice pattern; on the basis of this flow-based estimation of the UAV position, visual odometry based on GC flow is realized. The relevant simulation parameters l, a, γ, β take the values l = 2, a = 1, γ = 1.05β, and
β = 3/λ_net²   (23)
where λ_net = 13 is the grid period and l is the shift of the output weights. The simulation uses a network with periodic boundary conditions, so the envelope function is constant:
A(x)=1 (24)
step seven: local view cells (LV) generate Local view templates.
And extracting image information, calculating scanline intensity profile of the sub-image, and generating a view template. At the same time, the activity information is passed to the pose cells and a connection is created.
Step eight: pose Cells (PC) activation and update.
The three-degree-of-freedom pose of the UAV is represented with the dynamic model established in step one and is updated and calibrated by the attractor dynamics, path integration and the LV process. The motion estimate of step four drives path integration within the pose cells; together with the grid-cell path integration this forms a redundant navigation mechanism, one of the main innovations of imitating the pigeon brain hippocampus, and the fused motion estimate is then used to drive the path integration in the pose cells. Combined with the local view template of step seven, the LV associated with the PC is triggered through associative learning, and the PC activation is updated according to the dynamic model of step one. The change in PC activity due to local excitation is given by equation (15).
Step nine: compute the 3D relative motion and create the experience map (EP) in preparation for topological connection.
This step acquires continuous self-motion information and local view information, and computes and accumulates the relative motion. Whether the current experience is a new scene or an existing one is then judged by comparing the current P_i and V_i with all experiences and solving the score S from equation (17); if the experience is new, step ten is executed, otherwise step eleven.
Step ten: create new Experience, update Experience map (EP).
When the score of equation (17) exceeds the maximum threshold the scene is identified as new, and if the accumulated relative-motion increment is greater than its threshold, a new experience is created. A pose transition is formed from the self-motion information, the pose change measured by the odometer being t_ij, and the new experience is linked to the previous one through the encoded transition according to equation (19), forming the new map.
Step eleven: and executing local view calibration, judging a loop, filtering false information and executing map correction.
If the local view is judged to be a previously encountered scene, the active LV injects activity into the PCs associated with that scene through the learned connections, and the attractor dynamics of the network filter the injected LV activity. If the LV activity is injected into an inactive region of the PC network, a conflict arises, the global inhibitory connections tend to suppress the new packet, and the method returns to step ten; otherwise the PC and LV activity locations agree, and map correction is performed by aligning the active position and direction with the transition information stored in the EP.
Since odometry motion estimation based on image matching inevitably accumulates errors, when a loop occurs the accumulated change in position makes the experience at the loop unlikely to match the previous identical position. To achieve matching, all experience positions therefore need to be updated with the relaxed map correction of equation (20), where α_0 is the correction rate constant, N_f is the number of transitions from experience e_i to other experiences, and N_t is the number of transitions from other experiences to e_i. In the simulation α_0 is set to 0.5 (a larger value may make the map unstable).
Step twelve: repeat steps three to eleven, outputting in real time two independent visual-odometry pose-estimate tracks, as shown in Figs. 8a and 8b, and realizing redundant navigation: when one navigation subsystem fails (e.g., under magnetic field disturbance), the other can take over. The pose cell activation map shown in Fig. 8c and the experience map shown in Fig. 8d are output, realizing simultaneous localization and navigation. Comparing the odometry motion estimates with the experience map shows that both odometry position estimates contain errors; after local view calibration, the return of the UAV to a known area is judged correctly, and map correction is performed by aligning the experience positions and directions in the experience map with the transition information stored between experiences, yielding a consistent cognitive map.

Claims (2)

1. A homing pigeon brain-hippocampus-imitated unmanned aerial vehicle simultaneous localization and mapping navigation system, hereinafter referred to as the SLAM navigation system, characterized in that:
the SLAM navigation system consists of 7 modules: 1) an image acquisition and preprocessing module, 2) a local view cell module, 3) an odometer module, 4) a pose cell module, 5) a head direction cell module, 6) a grid cell module and 7) an experience map module, and can be divided into two independent brain-like navigation subsystems according to function, wherein the image acquisition and preprocessing module, the local view cell module, the odometer module, the pose cell module and the experience map module together form a first homing pigeon brain-hippocampus cognitive navigation subsystem, and the image acquisition and preprocessing module, the local view cell module, the pose cell module, the head direction cell module, the grid cell module and the experience map module together form a second homing pigeon brain-hippocampus cognitive navigation subsystem;
the image acquisition and preprocessing module comprises image acquisition and image preprocessing; the image acquisition needs to adopt a visual sensor to acquire a real-time image; then, image preprocessing is carried out to convert the color image into a gray image, image enhancement and image blocking processing are carried out, and the blocked image is used for calculation of a visual odometer module and a local view module;
the local view field cell module is an expandable cell array, and each local view field cell represents different visual scenes in the environment; the local visual field cell simulates a current environment snapshot seen by an animal, and when a new visual scene is seen, a new local visual field cell is created and is associated with original pixel data and a pose cell in the scene; and provide an activating effect when encountering a known region or loop;
wherein the odometer module comprises two independent odometer modules, one is a visual odometer based on the scan line intensity profile, and the motion of the camera is roughly determined by comparing the difference of successive images; the other is a visual odometer module based on the characteristic points, which provides motion information for the head-facing cell module and the grid cell module;
the pose cell module is implemented as a three-dimensional competitive attractor neural network whose units converge to a stable activation pattern through local excitation and global inhibition; competitive attractor neural networks can take many configurations, but typically each unit excites cells close to itself and inhibits cells further away, so that a single cluster of activity eventually dominates, called an activity packet; activity near the packet tends to be reinforced while activity injected far from it is suppressed; if enough activity is injected, a new packet can "win" while the old packet disappears;
5) the head direction cell module estimates the direction using the feature-point-based visual odometry motion estimation algorithm;
6) the grid cell module simulates grid cell firing to realize odometry, so the module receives the velocity information of the unmanned aerial vehicle; the grid cells receive the speed estimated by the feature-point-based visual odometry motion estimation algorithm and, at the same time, the direction input of the head direction cells, which together drive the grid cells to form a motion flow;
7) the experience map module imitates the cognitive map in the brain of the homing pigeon and organizes the information streams from the pose cells, the local view cells and the motion estimation into a set of related spatial experiences to build a topological map; a single experience in the experience map is defined by the conjunction of an activity pattern in the pose cells and an activity pattern in the local view cells; in the experience map, each experience has an associated position and direction; creating a new node also creates connections to previously active nodes, the connections encapsulating the pose transitions between nodes based on the odometry information; when the unmanned aerial vehicle returns to a known area, the local view matches a previous experience, and map correction is performed by aligning the experience positions and orientations stored in the experience map with the transition information for the current position.
2. An unmanned aerial vehicle simultaneous positioning and mapping navigation method imitating homing pigeon brain-hippocampus is characterized in that: the method comprises the following steps:
the method comprises the following steps: establishing a 3D pose cell continuous attractor neural network dynamic model and initializing;
the pose cell, hereinafter referred to as PC, is a collection of 3D continuous attractor network cells connected by excitatory and inhibitory connections, analogous to the navigation neurons found in mammals, i.e., grid cells; the PC is an abstract cell that updates its activity by shifting the activity packet, encoding the location within the large-scale environment of the unmanned aerial vehicle navigation task; the PC network dynamics make the stable state a single cluster of activated units, called an activity packet; the centroid of the activity packet is the network's best internal estimate of the unmanned aerial vehicle's current pose; the dynamic behaviour is realized by locally excitatory and globally inhibitory connectivity, and the excitatory weight matrix ε_{a,b,c} is described as follows:
ε_{a,b,c} = exp(-(a² + b²)/k_p) · exp(-c²/k_d)   (1)
where k_p and k_d are the position and direction variance constants, and a, b, c are the distances of the activated unit from the origin in the x', y' and θ' joint coordinate system; the inhibitory weight matrix takes the same form, but the variance constants and the inhibition range differ;
step two: establishing a 2D grid cell continuous attractor neural network dynamic model and initializing;
the invention adopts the following continuous attractor model to model the dynamics characteristics of the grid cells
τ·ds_i/dt + s_i = f( Σ_j W_ij·s_j + B_i )   (2)
where f is a nonlinear function, s_i is the synaptic activation of neuron i, W_ij is the synaptic weight from neuron j to neuron i, τ is the time constant of the neural response, and B_i is the feed-forward input to neuron i;
Step three: image acquisition and image preprocessing;
acquiring a real-time image through a camera, and carrying out gray processing on the image; then, enhancing the image contrast by adopting an image enhancement algorithm based on a Laplace operator; the laplacian is defined as:
∇²f = ∂²f/∂x² + ∂²f/∂y²   (3)
the laplacian operator can enhance the image, and the obtained sharpening formula is as follows:
g(x, y) = f(x, y) + c·∇²f(x, y)   (4)
where g is the output, f is the original image, c is the coefficient;
finally, the image is cropped into 3 different regions A, B, C for computing odometer rotation, translation, and image matching, respectively;
step four: visual odometry motion estimation based on scan line intensity profiles;
the odometer is implemented based on a scan line intensity profile that is normalized by summing the intensity values of the columns of pixels of the image, then forming a one-dimensional vector, i.e., profile, that is used to estimate the rotation and advance speed between the images for the odometer and to compare the current image with the previously seen images to perform local view calibration;
then estimating the rotation speed by determining the relative horizontal offset of the two consecutive scan line profiles such that the sum of the absolute differences between the two profiles is minimized, estimating the translation speed by multiplying the minimum difference by a scaling factor and limiting it within a maximum value to prevent spurious results due to large variations in illumination; the comparison of the current image with the previously seen image is performed by calculating the average absolute difference between the two scan line intensity profiles, as in equation (5):
f(s, I_j, I_k) = (1/(w - |s|)) · Σ_{n=1}^{w-|s|} | I_j,(n+max(s,0)) - I_k,(n-min(s,0)) |   (5)
where I_j and I_k are the scan line intensity profiles to be compared, s is the profile offset, and w is the image width; the relative motion between consecutive images I_j and I_k corresponds to the offset s_m at which f(s) attains its minimum:
s_m = argmin_{s ∈ (ρ-w, w-ρ)} f(s, I_j, I_k)   (6)
where the offset ρ ensures sufficient overlap between the profiles; the rotation angle and the translational speed are obtained from s_m and f(s_m), respectively, as:
Δθ = σ·s_m   (7)
v = min[ v_cal·f(s_m, I_j, I_{j-1}), v_max ]   (8)
step five: the head direction cells, hereinafter referred to as HD, acquire the direction;
the step is synchronous with the step four, and the external self-movement information can be obtained by a speed sensor and a direction sensor;
the HD is used to generate a speed-adjusted signal, i.e., the firing rate is proportional to the current head orientation and the speed of the animal's movement; the preferred direction of the i-th HD cell can be expressed as an angular offset θ_i relative to the dominant direction θ_0; the HD tuning kernel is then given by:
[Equation (9), the HD tuning kernel, appears only as an image in the original document.]
and instantaneous angular velocity w (t) of the animal, then the head orientation signal is:
[Equation (10), the combined head orientation signal, appears only as an image in the original document.]
where m is the number of HD cells and d_i(t) is the head orientation signal at time t of the i-th cell, whose preferred direction is θ_i;
step six: path integration based on the grid cell flow;
receiving the speed and direction information of the step five, updating the activation response of the cells according to the grid cell neural network dynamic model of the step two, and calculating the relative motion information;
the activation pattern of the grid cells is driven by the velocity input, and the motion displacement can be computed from the flow of activated cells, which means dead reckoning can be achieved from the motion information; for the moving unmanned aerial vehicle, the velocity vector v and the heading θ can be computed from the motion information estimated in step five, namely the translational and angular velocities, and are then used to drive the periodic network to generate a flowing lattice pattern; on the basis of this flow-based estimation of the unmanned aerial vehicle's position, visual odometry based on grid cell flow is realized; the weight matrix W_ij in equation (2) is:
W_ij = W_0(x_i − x_j − l·ê_{θ_j})   (11)
wherein
W_0(x) = a·exp(−γ|x|²) − exp(−β|x|²)   (12)
wherein l, a, γ and β are the related simulation parameters; the feedforward input is:
B_j(t) = A(x_j)·[1 + α·ê_{θ_j}·v(t)]   (13)
wherein ê_{θ_j} is the unit vector pointing in the preferred direction θ_j, v is the velocity vector, and A is an envelope function that regulates the input intensity of the neurons; if l and α are non-zero, the velocity is coupled to the network dynamics and drives the activity pattern to flow;
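The sketch below builds the recurrent weights and the velocity-coupled feedforward input in the continuous-attractor form of equations (11)–(13) as written above; the sheet size, the Gaussian envelope A, and all parameter values are illustrative assumptions rather than the patent's settings:

import numpy as np

def build_grid_network(n=32, l=2.0, a=1.0, gamma=0.0186, beta=0.0178):
    # neurons on an n x n sheet; each has a position x_i and a preferred direction e_theta_j
    xs, ys = np.meshgrid(np.arange(n), np.arange(n))
    pos = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    dirs = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], dtype=float)
    e_theta = dirs[np.arange(n * n) % 4]

    def w0(x):
        # W_0(x) = a*exp(-gamma*|x|^2) - exp(-beta*|x|^2), equation (12)
        r2 = np.sum(x ** 2, axis=-1)
        return a * np.exp(-gamma * r2) - np.exp(-beta * r2)

    # W_ij = W_0(x_i - x_j - l * e_theta_j), equation (11)
    W = w0(pos[:, None, :] - pos[None, :, :] - l * e_theta[None, :, :])
    return pos, e_theta, W

def feedforward_input(pos, e_theta, v, alpha=0.1, n=32):
    # B_j = A(x_j) * (1 + alpha * e_theta_j . v), equation (13); A is an assumed Gaussian envelope
    r2 = np.sum((pos - (n - 1) / 2.0) ** 2, axis=1)
    A = np.exp(-r2 / (2.0 * (0.4 * n) ** 2))
    return A * (1.0 + alpha * e_theta @ np.asarray(v, dtype=float))

pos, e_theta, W = build_grid_network()
B = feedforward_input(pos, e_theta, v=[0.3, 0.1])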
step seven: local visual field cells are called LV generation local view template for short hereinafter;
the LV extracts image information to generate a view template; this step takes region A as a sub-image, calculates the scan line intensity profile of the sub-image, and generates a one-dimensional vector, i.e., the image template; meanwhile, the activity information is transmitted to the pose cells, a connection is established, and step eight is executed;
when the LV is calculated, the average absolute intensity difference function of formula (5) is used to compare the current profile, i.e., the scan line intensity profile of region A, with the templates; the comparison is made over a small range of pixel offsets ψ, as in equation (14):
d_j = min_{Δψ ∈ [−ψ, ψ]} f(Δψ, I, I_j)   (14)
wherein I_j denotes a stored profile and I is the current profile, which is compared against all stored profiles; the resulting difference is compared with a threshold to determine whether the current view is sufficiently similar to an existing scene or constitutes a new scene;
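A sketch of this template matching is given below; it reuses the profile comparison f of equation (5), and the offset range ψ and the new-scene threshold are illustrative values:

import numpy as np

def profile_difference(I_a, I_b, s):
    # average absolute difference f(s, I_a, I_b) of equation (5)
    w = len(I_a)
    n = np.arange(w - abs(s))
    return np.mean(np.abs(I_a[n + max(s, 0)] - I_b[n - min(s, 0)]))

def match_view_template(current, templates, psi=4, new_scene_threshold=0.03):
    # compare the current region-A profile with every stored template over offsets in [-psi, psi]
    best_idx, best_diff = None, np.inf
    for idx, tmpl in enumerate(templates):
        d = min(profile_difference(current, tmpl, s) for s in range(-psi, psi + 1))
        if d < best_diff:
            best_idx, best_diff = idx, d
    if best_diff > new_scene_threshold:
        return None, best_diff        # new scene: no stored template is similar enough
    return best_idx, best_diff        # known scene: index of the matching template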
step eight: the pose cells are called PC activation and updating for short;
the PCs represent the three-degree-of-freedom pose of the unmanned aerial vehicle using the dynamics model established in step one, and this pose is updated and calibrated by the attractor dynamics, the path integration information, and the LV information; the PCs are arranged in three dimensions, two dimensions (x', y') representing the manifold of the absolute position (x, y) and one dimension θ' representing the manifold of the absolute head orientation θ; the motion estimation of step four drives path integration in the pose cells, the local view template of step seven triggers the LV associated with the PC through associative learning, and the PC activation is updated according to the dynamic model of step one; the change in PC activity due to local excitation is given by:
ΔP_{x',y',θ'} = Σ_{i=0}^{n_x'−1} Σ_{j=0}^{n_y'−1} Σ_{k=0}^{n_θ'−1} P_{i,j,k}·ε_{a,b,c}   (15)
wherein n_x', n_y', n_θ' are the dimensions of the PC matrix in (x', y', θ') space, and ε_{a,b,c} is the local excitation weight connecting cell (i, j, k) to cell (x', y', θ');
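The sketch below implements the local excitation update of equation (15) with a wrapped three-dimensional Gaussian excitation kernel; the kernel size and variances are illustrative assumptions:

import numpy as np

def excitation_kernel(size=7, sigma_xy=1.0, sigma_th=1.0):
    # 3-D Gaussian weight matrix epsilon_{a,b,c} centred in a size^3 cube
    c = size // 2
    a, b, t = np.meshgrid(*(np.arange(size) - c,) * 3, indexing="ij")
    eps = np.exp(-(a ** 2 + b ** 2) / (2 * sigma_xy ** 2)) * np.exp(-t ** 2 / (2 * sigma_th ** 2))
    return eps / eps.sum()

def local_excitation(P, eps):
    # Delta P_{x',y',theta'} = sum_{i,j,k} P_{i,j,k} * eps_{a,b,c}, with wraparound
    nx, ny, nth = P.shape
    c = eps.shape[0] // 2
    dP = np.zeros_like(P)
    for (i, j, k), p in np.ndenumerate(P):
        if p == 0.0:
            continue
        for a in range(eps.shape[0]):
            for b in range(eps.shape[1]):
                for t in range(eps.shape[2]):
                    dP[(i + a - c) % nx, (j + b - c) % ny, (k + t - c) % nth] += p * eps[a, b, t]
    return dP

P = np.zeros((20, 20, 36)); P[10, 10, 18] = 1.0   # a single active pose cell
dP = local_excitation(P, excitation_kernel())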
step nine: calculating relative motion, and creating an experience map to prepare for topological connection;
the experience map, hereinafter EP, is a fine-grained topological graph composed of a plurality of individual experiences e connected by transitions; it organizes the information flows from the PC, the LV, and self-motion into a set of related spatial experiences; a single experience in the EP is derived from the activity pattern P_i in the PC and the activity pattern V_i in the LV, and is located at position p_i; thus, an experience can be defined as a triple:
e_i = {P_i, V_i, p_i}   (16)
the first experience is created at an arbitrary starting point, and subsequent experiences are established on the basis of it; when P_i or V_i, or both, change, a new experience is formed and is linked to the previous experience by a transition that encodes the distance obtained from the self-motion cues; as the drone enters new areas, new experiences and transitions continue to form;
the self-motion information of step four and step six and the local view template of step seven are acquired, and the relative motion is calculated and accumulated; then, to judge whether the current experience corresponds to a new scene or an existing one, the current P_i and V_i are compared with all stored experiences through the scoring criterion S of formula (17):
S = μ_p·|P_i − P| + μ_v·|V_i − V|   (17)
wherein μ_p and μ_v are the weights of the pose code and the local view code; if the experience is new, step ten is executed; otherwise, step eleven is executed;
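The sketch below scores the current pose and view codes against stored experiences with equation (17) and applies the threshold decision described above; the experience record layout, the weights, and the threshold are illustrative assumptions:

import numpy as np

def experience_score(P_i, V_i, P, V, mu_p=1.0, mu_v=1.0):
    # S = mu_p * |P_i - P| + mu_v * |V_i - V|, equation (17)
    return (mu_p * np.abs(np.asarray(P_i) - np.asarray(P)).sum()
            + mu_v * np.abs(np.asarray(V_i) - np.asarray(V)).sum())

def closest_experience(P_i, V_i, experiences, s_max=1.0):
    # return (index, score) of the best match, or (None, score) when even the
    # best score exceeds the maximum threshold, i.e. the scene is new
    scores = [experience_score(P_i, V_i, e["P"], e["V"]) for e in experiences]
    if not scores:
        return None, np.inf
    idx = int(np.argmin(scores))
    return (None, scores[idx]) if scores[idx] > s_max else (idx, scores[idx])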
step ten: creating new experience and updating an experience map;
when the score of formula (17) exceeds a maximum threshold, the scene is judged to be new; if, in addition, the accumulated relative motion increment is larger than its threshold, a new experience is created, the maximum threshold being set empirically; by applying the pose-related transformation to the self-motion information, the pose change measured by the odometry is t_ij:
t_ij = {Δp_ij}   (18)
Δp_ij is the pose change obtained from the odometry; the transition t_ij links the previous experience e_i with the new experience e_j; the new experience is:
e_j = {P_j, V_j, p_i + Δp_ij}   (19)
thus, the map grows by linking each new experience to the previous one through the encoded transition;
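A sketch of this experience creation is given below, following equations (18)–(19); the dictionary layout of experiences and transitions is an assumption made for illustration:

import numpy as np

def create_experience(experiences, transitions, i, P_j, V_j, delta_p_ij):
    # append e_j = {P_j, V_j, p_i + Delta p_ij} (19) and the transition t_ij = {Delta p_ij} (18)
    p_i = np.asarray(experiences[i]["p"], dtype=float)
    e_j = {"P": P_j, "V": V_j, "p": p_i + np.asarray(delta_p_ij, dtype=float)}
    experiences.append(e_j)
    j = len(experiences) - 1
    transitions.append({"from": i, "to": j, "dp": np.asarray(delta_p_ij, dtype=float)})
    return j

experiences = [{"P": 0, "V": 0, "p": np.zeros(3)}]   # first experience at an arbitrary origin
transitions = []
j = create_experience(experiences, transitions, 0, P_j=1, V_j=1,
                      delta_p_ij=np.array([0.5, 0.2, 0.1]))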
step eleven: local view calibration is executed, loop closures are judged, false information is filtered, and map correction is performed; if the local view is judged to be a previously encountered scene, the active LV injects activity into the PCs associated with that scene through the learned connections, and the injected LV activity is filtered by the attractor dynamics of the network; if the LV activity is injected into an inactive region of the PC network, a conflict arises, the global inhibitory connections tend to suppress the new activity packet, and the process returns to step ten; otherwise, the PC and LV activity locations coincide, and the activity location and direction are aligned with the transition information stored in the EP to perform the map correction;
since the image-matching-based odometry motion estimation inevitably accumulates errors, when a loop closure occurs the accumulated position drift means that the experience at the loop is unlikely to coincide with the previously visited position; therefore, to achieve a match, the positions of all experiences need to be updated, i.e., the map is corrected:
Δp_i = α_0·[ Σ_{j=1}^{N_f} (p_j − p_i − Δp_ij) + Σ_{k=1}^{N_t} (p_k − p_i − Δp_ki) ]   (20)
wherein α_0 is the correction rate constant, N_f is the number of links from experience e_i to other experiences, and N_t is the number of links from other experiences to experience e_i;
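The sketch below applies the map correction of equation (20) iteratively over the transition graph; the correction rate α_0 = 0.5 and the number of iterations are illustrative choices, and the experience/transition records follow the hypothetical layout used in the sketch after formula (19):

import numpy as np

def correct_map(experiences, transitions, alpha_0=0.5, iterations=20):
    # Delta p_i = alpha_0 * [ sum over outgoing links (p_j - p_i - dp_ij)
    #                       + sum over incoming links (p_k - p_i - dp_ki) ], equation (20)
    for _ in range(iterations):
        deltas = [np.zeros_like(np.asarray(e["p"], dtype=float)) for e in experiences]
        for t in transitions:
            f, to, dp = t["from"], t["to"], t["dp"]
            deltas[f] += experiences[to]["p"] - experiences[f]["p"] - dp   # outgoing link of e_f
            deltas[to] += experiences[f]["p"] - experiences[to]["p"] - dp  # incoming link of e_to
        for e, d in zip(experiences, deltas):
            e["p"] = np.asarray(e["p"], dtype=float) + alpha_0 * d
    return experiences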
step twelve: steps three to eleven are repeated, two independent visual odometry pose estimates are output in real time, and the pose cell activation map together with the experience map realizes simultaneous localization and navigation.
CN202010776277.9A 2020-08-05 2020-08-05 Homing pigeon brain-hippocampus-imitated unmanned aerial vehicle simultaneous positioning and mapping navigation system and method Active CN112097769B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010776277.9A CN112097769B (en) 2020-08-05 2020-08-05 Homing pigeon brain-hippocampus-imitated unmanned aerial vehicle simultaneous positioning and mapping navigation system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010776277.9A CN112097769B (en) 2020-08-05 2020-08-05 Homing pigeon brain-hippocampus-imitated unmanned aerial vehicle simultaneous positioning and mapping navigation system and method

Publications (2)

Publication Number Publication Date
CN112097769A true CN112097769A (en) 2020-12-18
CN112097769B CN112097769B (en) 2022-06-10

Family

ID=73749604

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010776277.9A Active CN112097769B (en) 2020-08-05 2020-08-05 Homing pigeon brain-hippocampus-imitated unmanned aerial vehicle simultaneous positioning and mapping navigation system and method

Country Status (1)

Country Link
CN (1) CN112097769B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015063119A1 (en) * 2013-09-12 2015-05-07 Partnering 3.0 Autonomous multi-modal neuro-inspired mobile robot for monitoring and restoring an environment
CN106125730A (en) * 2016-07-10 2016-11-16 北京工业大学 A kind of robot navigation's map constructing method based on Mus cerebral hippocampal spatial cell
CN106814737A (en) * 2017-01-20 2017-06-09 安徽工程大学 A kind of SLAM methods based on rodent models and RTAB Map closed loop detection algorithms
US20190384318A1 (en) * 2017-01-31 2019-12-19 Arbe Robotics Ltd. Radar-based system and method for real-time simultaneous localization and mapping
CN110672088A (en) * 2019-09-09 2020-01-10 北京航空航天大学 Unmanned aerial vehicle autonomous navigation method imitating homing mechanism of landform perception of homing pigeons
CN111376273A (en) * 2020-04-23 2020-07-07 大连理工大学 Brain-like inspired robot cognitive map construction method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ANNA GAGLIARDO et al.: "Bilateral participation of the hippocampus in familiar landmark navigation by homing pigeons", BEHAVIOURAL BRAIN RESEARCH *
KRUPIC J et al.: "Local transformations of the hippocampal cognitive map", SCIENCE *
YU Naigong et al.: "A bionic robot cognitive map construction method based on the hippocampal cognitive mechanism", ACTA AUTOMATICA SINICA *
JIANG Xiaojun: "Research on episodic cognitive map construction and navigation methods based on the rat brain hippocampus", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113110492B (en) * 2021-05-07 2022-06-21 苏州大学 Path planning method
CN113110492A (en) * 2021-05-07 2021-07-13 苏州大学 Path planning method
CN113297506A (en) * 2021-06-08 2021-08-24 南京航空航天大学 Brain-like relative navigation method based on social position cells/grid cells
CN113223099A (en) * 2021-06-11 2021-08-06 苏州大学 RatSLAM environmental adaptability improving method and system based on biological vision model
CN113223099B (en) * 2021-06-11 2023-04-18 苏州大学 RatSLAM environmental adaptability improving method and system based on biological vision model
CN113703322A (en) * 2021-08-28 2021-11-26 北京工业大学 Scenario memory model construction method based on rat brain visual pathway and olfactory-hippocampus cognitive mechanism
CN113703322B (en) * 2021-08-28 2024-02-06 北京工业大学 Method for constructing scene memory model imitating mouse brain vision pathway and entorhinal-hippocampal structure
CN114152259B (en) * 2021-12-01 2023-09-05 中北大学 Brain-like visual unmanned aerial vehicle navigation method based on Hippocampus and entorhinal cortex
CN114152259A (en) * 2021-12-01 2022-03-08 中北大学 Brain-like vision unmanned aerial vehicle navigation method based on hippocampus and entorhinal cortex
CN114894191A (en) * 2022-04-14 2022-08-12 河南大学 Unmanned aerial vehicle navigation method suitable for dynamic complex environment
CN114894191B (en) * 2022-04-14 2024-04-26 河南大学 Unmanned aerial vehicle navigation method suitable for dynamic complex environment
CN114812565A (en) * 2022-06-23 2022-07-29 北京航空航天大学 Dynamic navigation method based on artificial intelligence network
CN115391516B (en) * 2022-10-31 2023-04-07 成都飞机工业(集团)有限责任公司 Unstructured document extraction method, device, equipment and medium
CN115391516A (en) * 2022-10-31 2022-11-25 成都飞机工业(集团)有限责任公司 Unstructured document extraction method, device, equipment and medium

Also Published As

Publication number Publication date
CN112097769B (en) 2022-06-10

Similar Documents

Publication Publication Date Title
CN112097769B (en) Homing pigeon brain-hippocampus-imitated unmanned aerial vehicle simultaneous positioning and mapping navigation system and method
CN109668566B (en) Robot scene cognition map construction and navigation method based on mouse brain positioning cells
CN109211241B (en) Unmanned aerial vehicle autonomous positioning method based on visual SLAM
WO2018010458A1 (en) Rat hippocampal space cell-based method for constructing navigation map using robot
KR101121763B1 (en) Apparatus and method for recognizing environment
CN109813319B (en) Open loop optimization method and system based on SLAM (Simultaneous localization and mapping) mapping
CN111474953B (en) Multi-dynamic-view-angle-coordinated aerial target identification method and system
CN108362284A (en) A kind of air navigation aid based on bionical hippocampus cognitive map
Zeng et al. NeuroBayesSLAM: Neurobiologically inspired Bayesian integration of multisensory information for robot navigation
Yokoyama et al. Autonomous mobile robot with simple navigation system based on deep reinforcement learning and a monocular camera
CN110260866A (en) A kind of robot localization and barrier-avoiding method of view-based access control model sensor
CN113703322B (en) Method for constructing scene memory model imitating mouse brain vision pathway and entorhinal-hippocampal structure
CN111258311A (en) Obstacle avoidance method of underground mobile robot based on intelligent vision
Sun et al. Autonomous state estimation and mapping in unknown environments with onboard stereo camera for micro aerial vehicles
Li et al. Learning view and target invariant visual servoing for navigation
CN111739066B (en) Visual positioning method, system and storage medium based on Gaussian process
CN112509051A (en) Bionic-based autonomous mobile platform environment sensing and mapping method
CN117152249A (en) Multi-unmanned aerial vehicle collaborative mapping and perception method and system based on semantic consistency
CN112212867B (en) Robot self-positioning and navigation method and system
Short et al. A bio-inspired algorithm in image-based path planning and localization using visual features and maps
CN111611869B (en) End-to-end monocular vision obstacle avoidance method based on serial deep neural network
Cao et al. Unsupervised visual odometry and action integration for pointgoal navigation in indoor environment
Atsuzawa et al. Robot navigation in outdoor environments using odometry and convolutional neural network
Zhao et al. 3D indoor map building with monte carlo localization in 2D map
Goldschmidt et al. A neural path integration mechanism for adaptive vector navigation in autonomous agents

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant