CN111044031B - Cognitive map construction method based on rat brain hippocampus information transfer mechanism - Google Patents

Cognitive map construction method based on rat brain hippocampus information transfer mechanism

Info

Publication number
CN111044031B
CN111044031B, CN201910958426.0A
Authority
CN
China
Prior art keywords
information
cell
discharge rate
cells
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910958426.0A
Other languages
Chinese (zh)
Other versions
CN111044031A (en)
Inventor
于乃功
王林
魏雅乾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN201910958426.0A
Publication of CN111044031A
Application granted
Publication of CN111044031B
Legal status: Active


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/005: Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/10: Navigation by using measurements of speed or acceleration
    • G01C21/12: Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165: Inertial navigation combined with non-inertial navigation instruments

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to a cognitive map construction method based on the rat brain hippocampus information transfer mechanism. The angular velocity and linear velocity acquired by an inertial sensor, together with the RGB and depth images of the environment acquired by a visual sensor, are input into a visual-inertial fusion module, where the image information is used to correct the drift errors that the angular and linear velocities accumulate over time. The corrected angular and linear velocities are input into a position sensing module constructed according to the spatial cells of the rat brain hippocampus and their information transfer mechanism: the angular velocity is fed to head direction cells and the linear velocity to stripe cells; grid cell and position cell models are built according to a competitive network model of the CA3-entorhinal cortex information circulation loop in the rat hippocampus; the position cell model expresses the environment and yields the position of the robot, from which the cognitive map is finally constructed. Compared with a map built by traditional SLAM, the cognitive map constructed by the invention contains biological information and can be applied to many fields of work and life.

Description

Cognitive map construction method based on rat brain hippocampus information transfer mechanism
Technical field:
The invention belongs to the fields of brain-like computation and intelligent robot navigation, and in particular relates to a cognitive map construction method based on the rat brain hippocampus information transfer mechanism.
Background technology:
Artificial intelligence is an important research field of the twenty-first century. It touches every aspect of human life and has greatly changed and improved it; within it, robot navigation is an important research direction, mainly divided into inertial navigation, landmark navigation, visual navigation and the like. The invention combines robot navigation with brain-like computation and imitates the navigation of humans and rats, so that a robot acquires environment cognition and navigation abilities similar to theirs; specifically, it imitates the discharge mechanisms of the various spatial cells in the rat hippocampus and the information transfer mechanisms between the parts of the hippocampus and the various nerve cells to realize mobile robot navigation. Robot navigation is the process by which a robot, sensing and learning through its sensors in a structured or dynamic unstructured environment, realizes autonomous, goal-directed movement with obstacle avoidance. Traditional mobile robot navigation is based on Bayesian probabilistic algorithms such as graph optimization, Kalman filtering and extended Kalman filtering, and has been widely applied in industry and daily life; compared with the navigation ability of humans and animals, however, a clear gap remains. Mobile robots that can handle complex situations with higher intelligence are therefore required to meet people's growing industrial and everyday needs, and navigation methods that imitate biology are of great significance.
To meet the higher demands placed on robots by the rapid development of artificial intelligence, mobile robots should autonomously solve problems and learn in complex environments, so as to complete more and more complex tasks, free people from heavy work, and replace them in dangerous or high-precision operations; combining brain-like computation and biomimetic computational models with robotics is therefore receiving more and more attention. Researchers in artificial intelligence and robotics have proposed that robots should learn from advanced mammals: by imitating their brains, robots can reach a high level of intelligence and acquire advanced capabilities such as autonomous learning, deductive reasoning, complex operation, inductive summarization and innovative decision-making. That is, an autonomous learning system should automatically complete feature learning of the external environment, obtaining and storing the required knowledge; the memory capability of the system should store this knowledge stably; the system should learn online and autonomously form the robot's internal abstract feature representations; and finally the system should dynamically adjust and produce decisions suited to the current environment, so that the robot adapts rapidly to dynamic environments.
The invention constructs a cognitive map according to the various spatial cells and information transfer mechanisms in the rat brain hippocampus. Scientists have found several kinds of spatial cells related to navigation in the rat hippocampus: in 1971, O'Keefe and Dostrovsky first found that pyramidal neurons in the hippocampus showed position-selective spike activity, the firing frequency of specific pyramidal neurons changing when the animal occupied a specific spatial position; such neurons were called position (place) cells. Head direction cells were found in the postsubiculum by Taube in 1990. In 2005, Hafting et al., by changing the size and shape of the test chamber, found grid cells that discharge strongly at specific locations in space. Several laboratories published in succession in 2008 that boundary cells had been found in the superficial layers of the entorhinal cortex. In 2012, O'Keefe et al. found stripe cells, with spatially periodic stripe-like discharge fields, in the parasubiculum and entorhinal cortex. The hippocampal formation is mainly composed of four parts: the entorhinal cortex (EC), the dentate gyrus (DG), the hippocampus proper, and the subicular complex (SUB). The hippocampus proper is divided into the cornu ammonis area 1 (CA1) and area 3 (CA3). Bidirectional fiber projections exist between the hippocampus and the entorhinal cortex: the entorhinal cortex first projects information into the hippocampus, and after fiber projection and neuronal exchange the hippocampus projects the output information back to the entorhinal cortex, forming an entorhinal cortex-hippocampus loop. The invention models the discharge mechanisms of these spatial cells and the information transfer mechanism among them, constructs a cognitive map of the environment, and navigates on the basis of the cognitive map. It can reproduce the rat's cognition of the spatial environment and self-localization within it, can be applied to many fields of industry, agriculture and daily life, and has good practical and commercial value.
In previous studies, cognitive maps have been constructed using hippocampal spatial cells, but those models input the angular velocity and linear velocity acquired by the inertial sensor directly into the spatial cell model, so the errors accumulated over time are large; and they ignore the information transfer from CA3 back to the entorhinal cortex, so they do not completely imitate the function of biological spatial cells.
Summary of the invention:
Based on the rat brain hippocampus information transfer mechanism and the discharge mechanisms of the spatial cells in the rat hippocampus, the invention constructs a cognitive map, realizes self-localization of the robot, and carries out robot navigation research on the cognitive map; it is applicable to indoor, outdoor and other environments and to many fields of work and life.
Traditional robot navigation maps are mainly constructed by SLAM methods, producing grid maps and topological maps. These methods place high demands on computation and hardware; the division of the grid and the recruitment of topological points are mostly based on a static environment, so performance in dynamic environments is poor. The map constructed by the present method is a cognitive map. Although a cognitive map is a kind of topological map, its advantage over a common topological map is that every node contains spatial cell discharge information, RGB-D information of the environment, and association information with several related nodes.
The specific working flow of the invention is as follows:
Step 1: the robot explores the environment; the inertial sensor acquires the angular velocity and linear velocity information of the robot, and the visual sensor acquires RGB image and depth image information of the environment;
Step 2: the acquired angular velocity and linear velocity information and the RGB and depth image information are input into the visual-inertial fusion module, the image information is used to correct the drift errors accumulated by the angular and linear velocities over time, and corrected angular velocity and linear velocity information is obtained;
Step 3: the corrected angular velocity θ and linear velocity v are input into the position sensing module constructed according to the spatial cells of the rat brain hippocampus and their information transfer mechanism, and the position of the robot expressed by position cell discharge information is obtained;
Step 4: the RGB image and depth image information is input into the feature extraction algorithm to obtain the current environment image features;
Step 5: a matching algorithm judges whether the image features obtained in step 4 match the image features in the view library: if they match, the current position of the robot is corrected with the position information associated with the view library image, the cognitive point positions of the cognitive map are corrected, the cognitive map is updated, and the method returns to step 1; otherwise step 6 is carried out;
Step 6: the current image features obtained in step 4 are stored in the view library and associated with the current position information of the robot, a cognitive map cognitive point is created, the cognitive map is updated, and the next step continues;
Step 7: return to step 1, continue exploring the environment, and continuously update the cognitive map.
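The following minimal Python sketch mirrors this workflow. Every helper object here (sensors, fuse, extract_features, position_module, view_library, cognitive_map) is a hypothetical placeholder for the modules described in steps 1-6, not code from the patent:

```python
# Hypothetical top-level exploration loop mirroring steps 1-7. All helper
# objects are placeholders for the modules described above; none of these
# names come from the patent itself.
def explore(sensors, fuse, extract_features, position_module,
            view_library, cognitive_map, n_steps=1000):
    for _ in range(n_steps):
        omega, v = sensors.read_inertial()           # step 1: angular / linear velocity
        rgb, depth = sensors.read_camera()           # step 1: RGB + depth images
        omega_c, v_c = fuse(omega, v, rgb, depth)    # step 2: visual-inertial correction
        position = position_module(omega_c, v_c)     # step 3: position from place-cell firing
        features = extract_features(rgb, depth)      # step 4: image features
        match = view_library.match(features)         # step 5: view-library matching
        if match is not None:                        # loop closure found:
            cognitive_map.correct(match.position)    #   correct cognitive-point positions
        else:                                        # new scene:
            view_library.add(features, position)     # step 6: store view + position
            cognitive_map.add_point(position, features)
```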
Advantageous effects
First, the invention uses visual-inertial fusion input to construct the cognitive map. Inertial and visual sensors are small and inexpensive, and as technology advances the devices keep getting smaller and cheaper; both are passive devices that need no external input, so truly autonomous navigation can be realized. The two sensors complement each other well: although the errors of the inertial sensor accumulate over time, it tracks fast motion of the carrier well over short periods and guarantees short-term accuracy, while the visual sensor has high estimation accuracy in low-dynamic motion and can effectively correct the drift of the inertial sensor but cannot maintain good pose estimation when motion is too fast; fusing the two yields better navigation parameter estimates. Second, the model of the invention better matches the spatial cognition mechanism of the biological rat, supplements traditional network models, agrees better with the biological facts, is robust, and can construct a more accurate map.
Drawings
FIG. 1 is a flow chart of the cognitive map construction method based on the rat brain hippocampus information transfer mechanism;
FIG. 2 is a diagram of information transfer in the rat brain hippocampus;
FIG. 3 is a diagram of the pairwise-matched feature points in the RGB images at the current and subsequent moments;
FIG. 4 is a diagram of the competitive neural network based on entorhinal cortex-CA3 information transfer, comprising a competitive neural network for CA3-to-entorhinal-cortex information transfer and a competitive neural network for entorhinal-cortex-to-CA3 information transfer;
FIG. 5 is a graph of the discharge rate of stripe cells, in which the first vertical column of stripe cells is oriented at 0 degrees, the second at 60 degrees, and the third at 90 degrees;
FIG. 6 is a graph of the discharge rate of grid cells;
FIG. 7 is a graph of the discharge rate of position cells;
FIG. 8 shows on the left an uncorrected cognitive map and on the right a corrected cognitive map; the robot point is the corrected position of the robot at the current moment.
Detailed Description
Application scenario: the main application scenario of the invention is indoor navigation of a mobile robot. With this method the robot can acquire angular velocity, linear velocity and image information, construct a cognitive map of the environment by exploring the unknown environment, and perform goal-directed navigation tasks according to its position.
The following describes the method in detail with reference to the drawings and examples.
Fig. 1 is a flowchart of the cognitive map construction method based on the rat brain hippocampal spatial cells and information transfer mechanism. As shown in the figure, in the visual-inertial fusion module the RGB image information acquired by the robot's visual sensor passes through a visual odometry algorithm to obtain angular velocity and linear velocity information, which is used to correct the drift error of the angular and linear velocities acquired by the inertial sensor, so that the velocities input to the position sensing module are more accurate. In the position sensing module, the angular velocity is input into the head direction cells; the head direction cell discharge and the linear velocity are input together into the stripe cells; the stripe cells feed the grid cells through a two-dimensional continuous attractor model; a competitive network model of the entorhinal cortex-CA3 information circulation loop is constructed between the grid cells and the position cells; the final position cell discharge builds the cognitive map, and the cognitive map is updated through loop-closure detection.
The specific implementation steps of the invention are as follows:
1. The robot explores the environment; the inertial sensor collects angular velocity and linear velocity information, and the visual sensor collects RGB image and depth image information of the environment;
The inertial sensor, mainly a wheel encoder, acquires the angular velocity and linear velocity information; the visual sensor, mainly a Kinect camera, collects the RGB image and depth image information of the environment.
2. The acquired angular velocity and linear velocity information and the RGB and depth image information are input into the visual-inertial fusion module; the image information is used to correct the drift errors accumulated by the angular and linear velocities over time, and corrected angular velocity and linear velocity information is obtained.
In the visual-inertial fusion module, the inertial sensor tracks the fast motion of the carrier well over short periods, so short-term accuracy is guaranteed, but its errors accumulate over time; the visual sensor has high estimation accuracy in low-dynamic motion and can therefore effectively correct the drift of the inertial sensor.
ORB feature extraction is carried out on the RGB image information at each moment; it consists of two steps: FAST corner extraction, which extracts the feature points in the image, and BRIEF description, which computes the feature descriptors of the extracted points. Through the descriptors, all pairwise-matched feature points in the RGB images at the current and subsequent moments are obtained.
After detecting feature points with the FAST detector, the ORB algorithm uses the Harris corner measure to select, among the FAST corners, the N feature points with the largest Harris response. The response function of the Harris corner is:
R = det M − α (trace M)²
Since FAST corners have no orientation, ORB assigns one with the grey-level (intensity) centroid method: the centroid is determined from the image moments, and the vector from the feature point to the centroid gives the direction of the feature point. The moments of an image patch B are defined as:

m_pq = Σ_{x,y∈B} x^p y^q I(x,y),  p, q ∈ {0, 1}

The centroid obtained is:

C = ( m_10 / m_00 , m_01 / m_00 )

Connecting the geometric centre of the image patch to the centroid gives a direction vector, so the direction of the feature point is:

θ = arctan( m_01 , m_10 ) ∈ [−π, π]
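As an illustration of this step, the extraction and pairwise matching can be done with OpenCV's ORB implementation. This is a generic OpenCV sketch, not code from the patent; the value of `n_features` is an arbitrary example:

```python
import cv2

def match_orb(img_prev, img_curr, n_features=500):
    """Extract ORB features (FAST corners + BRIEF descriptors, with the
    intensity-centroid orientation described above) in two consecutive
    RGB frames and match them pairwise."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    # Hamming distance is the appropriate metric for binary BRIEF descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts_prev = [kp1[m.queryIdx].pt for m in matches]   # points in current frame
    pts_curr = [kp2[m.trainIdx].pt for m in matches]   # matched points in next frame
    return pts_prev, pts_curr
```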
First compute the centroids p_1, p_2 of all matched feature points in the two images at the current and subsequent moments, then remove the centroid from every feature point:

q_i^(j) = p_i^(j) − p_j,  j = 1, 2

where q_i^(j) is the de-centred coordinate of the i-th feature point in the j-th group of images (j = 1 is the current moment, j = 2 the subsequent moment), and p_i^(j) is the coordinate of the i-th feature point in the j-th group of images.
Calculate the optimal rotation matrix R*:

R* = argmin_R (1/2) Σ_{i=1}^{n} || q_i^(1) − R q_i^(2) ||²

where R* is the optimal solution for R, and R is the rotation matrix;
Calculate the optimal translation t*:

t* = p_1 − R p_2

where t* is the optimal solution for t, and t is the translation;
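These two closed-form solutions are the standard SVD (Arun/Umeyama-style) rigid alignment; a numpy sketch under that reading:

```python
import numpy as np

def rigid_transform(points_curr, points_next):
    """Best-fit rotation R and translation t between two matched point
    sets (N x 2 or N x 3 arrays), via the de-centred covariance and its SVD."""
    p1 = points_curr.mean(axis=0)          # centroid of current frame
    p2 = points_next.mean(axis=0)          # centroid of subsequent frame
    q1 = points_curr - p1                  # de-centred coordinates q_i^(1)
    q2 = points_next - p2                  # de-centred coordinates q_i^(2)
    W = q1.T @ q2                          # covariance of the matched pairs
    U, _, Vt = np.linalg.svd(W)
    R = U @ Vt
    if np.linalg.det(R) < 0:               # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    t = p1 - R @ p2                        # t* = p1 - R p2
    return R, t
```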
The angle calculated according to the Rodrigues formula is:

θ″ = arccos( (tr(R*) − 1) / 2 )

and the speed is calculated from the translation over the sampling interval:

v″ = ||t*|| / Δt

where Δt is the interval between the two image moments.
The angular velocity information θ″ and linear velocity information v″ obtained from the images are used to correct the angular velocity information θ′ and linear velocity information v′ acquired from the inertial sensor at the current moment:

θ = θ″ + αθ′,  v = v″ + αv′,  α ∈ (0, 1)

where α is a weighting coefficient, θ is the corrected angular velocity information, and v is the corrected linear velocity information.
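A direct transcription of this weighted correction; the default value of `alpha` is an arbitrary example, not a value given in the patent:

```python
def fuse_velocities(theta_vis, v_vis, theta_imu, v_imu, alpha=0.5):
    """Visual-inertial correction: theta = theta'' + alpha * theta' and
    v = v'' + alpha * v', with the weighting coefficient alpha in (0, 1)."""
    assert 0.0 < alpha < 1.0
    theta = theta_vis + alpha * theta_imu   # corrected angular velocity
    v = v_vis + alpha * v_imu               # corrected linear velocity
    return theta, v
```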
3. Input the corrected angular velocity θ and linear velocity v into the position sensing module constructed according to the spatial cells of the rat brain hippocampus and their information transfer mechanism to obtain the position of the robot expressed by position cell discharge information;
The position sensing module constructed according to the spatial cells of the rat brain hippocampus and their information transfer mechanism works as follows. First, the corrected angular velocity θ is input into the head direction cell model to obtain the head direction cell discharge rate d_i(t); next, the head direction cell discharge rate and the corrected linear velocity v are input together into the stripe cell model to obtain the stripe cell discharge rate F_θ; the stripe cell discharge rate F_θ is then input into the grid cell model to obtain the grid cell discharge rate r; the grid cell discharge rate r is input into the competitive network model of the CA3-entorhinal cortex information circulation loop to obtain the corrected position cell discharge rate u′; finally, the corrected position cell discharge rate u′ is input into the environment expression model of the position cells to obtain the corrected position information p′ of the robot at the current moment;
3.1 Head direction cell model
A head direction cell discharges maximally when the rat's head faces the cell's specific preferred direction, and its discharge weakens as the heading moves away from that angle; the response depends only on the current head direction of the rat, not on the environment or the orientation of the rat's body. Whether the rat is stationary or moving, the head direction cells keep discharging: the population discharge encodes the horizontal direction the rat faces, and the information from all head direction cells generates a continuous head direction signal. While the robot explores the environment, the corrected angular velocity is input to the head direction cells, which produce an angular velocity adjustment signal whose discharge rate is proportional to the direction and speed of movement.
From physiological studies of head direction cells, a head direction tuning kernel is constructed (the kernel equation is given only as an image in the source), in which θ_0 is the principal orientation of the head direction cells and θ_i is the preferred direction of the i-th head direction cell, expressed as an angular offset relative to the principal direction θ_0.
From this kernel the head direction cell discharge rate is obtained (the equation is likewise given only as an image in the source), where r is the number of head direction cells, the current rotation direction is the input, and d_i(t) is the discharge rate at time t of the i-th head direction cell in the population whose preferred direction is θ_i.
3.2 Stripe cell model
The stripe cells are upstream of the grid cells: their output forms part of the feedforward input of the grid cells. Linear velocity input to the stripe cells produces their stripe-shaped discharge fields, and several stripe cell groups jointly shape the hexagonal discharge field of the grid cells. The corrected linear velocity is input into the stripe cells and a one-dimensional ring attractor model of the stripe cells is constructed; stripe cells within one ring attractor share the same orientation and period but differ in positional selectivity. A Gaussian model of the stripe cell discharge rate is constructed and input into the grid cells, determining the movement direction of the grid cell attractor.
The velocity along the preferred direction θ is defined as:

v_θ(t) = v(t) cos( θ − φ(t) )

where θ is the preferred orientation of the stripe cell, α is the phase of the stripe cell, v(t) is the speed of the robot, and φ(t) is the direction of motion.
Path integration of this velocity gives the displacement of the robot along that direction:

d_θ(t) = ∫_0^t v_θ(τ) dτ
the distance to obtain periodic discharge reset of the striped cells is as follows:
s θα (t)=(d θ (t)-α)modl
the discharge rate of the available streak cells was:
Figure BDA0002228138920000084
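A sketch of one stripe cell under these formulas; the Gaussian tuning follows the "Gaussian model" named in the text, while the default width `sigma` and the wrap-around distance on the ring are assumptions:

```python
import numpy as np

def stripe_cell_rate(d_theta, alpha, period, sigma=0.2):
    """Discharge rate of one stripe cell with phase alpha and spatial
    period l, given the path-integrated displacement d_theta along the
    cell's preferred direction."""
    s = (d_theta - alpha) % period       # s_(theta,alpha) = (d_theta - alpha) mod l
    s = np.minimum(s, period - s)        # shortest distance around the ring
    return np.exp(-s**2 / (2.0 * sigma**2))
```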
3.3 Competitive network model of CA3-entorhinal cortex information transfer
The competitive network model of the CA3-entorhinal cortex information circulation loop comprises a competitive network model of CA3-to-entorhinal-cortex information transfer, namely model 1, and a competitive network model of entorhinal-cortex-to-CA3 information transfer, namely model 2.
First, the discharge rate of a single grid cell is input to obtain the discharge rate u_i of a single position cell:

u_i = H(r_j − C_in)    (5)

where H is the learning rate, u_i is the discharge rate of the i-th position cell, r_j is the discharge rate of the j-th grid cell, and C_in is the inhibitory level constant of the grid cells;
Then the discharge rates of several grid cells associated with a certain fixed position in space are input to obtain the discharge rates u_i(d) of the position cells associated with that fixed position:

u_i(d) = A ( Σ_{j=1}^{M} w_ij r_j(d) − C_in )    (6)

where A is the excitatory level constant of the position cells, C_in is the inhibitory level constant of the position cells, w_ij are the connection weights, d is the position of the robot, and M is the number of layers of the grid cell neural sheet;
The single position cell discharge rate u_i and the position cell discharge rates u_i(d) associated with a fixed position in space together form the position cell discharge rate u;
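A compact sketch of this competitive read-out, combining equations (5) and (6); the non-negative rectification is an added assumption to keep discharge rates valid:

```python
import numpy as np

def place_cell_rates(r_grid, w, A=1.0, C_in=0.1, H=1.0):
    """Position (place) cell discharge rates read out competitively from
    grid cell rates r_grid via the weight matrix w (n_place x n_grid),
    following equations (5)-(6). Rectification at zero is an assumption."""
    u = A * (w @ r_grid - C_in)      # weighted grid input minus inhibition
    return H * np.maximum(u, 0.0)    # learning rate H, non-negative rates
```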
3.4 Competitive network model of entorhinal-cortex-to-CA3 information transfer
First, the motion trajectory of the robot is input to obtain the discharge rate α of the intermediate cells:

α_i = exp( − d(p, l_i)² / (2σ²) )

where p is the motion trajectory of the robot, l_i is the position of the i-th intermediate cell, d(·) is the Euclidean distance, and σ is the size of the intermediate cell;
Then the intermediate cell discharge rates are input to obtain the grid cell discharge rate r:

τ_r dr(t)/dt = −r(t) + Σ_{i=1}^{N} w_i α_i

where α_i is the discharge rate of the i-th intermediate cell, w_i are the connection weights, τ_r is the time constant of the neuron, and N is the total number of intermediate cells;
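A sketch of one Euler integration step for these two equations; the Gaussian form of the intermediate-cell response and the explicit Euler step are assumptions consistent with the stated variables:

```python
import numpy as np

def grid_from_intermediate(p, centers, w, r, tau_r=0.1, sigma=0.5, dt=0.01):
    """One Euler step of the read-out toward the entorhinal cortex:
    intermediate cells fire as Gaussians of the distance between the
    trajectory point p and their centres l_i, and the grid cell rates r
    relax toward the weighted sum of intermediate rates with time
    constant tau_r. Array shapes: centers (N, dim), w (N, n_grid)."""
    d = np.linalg.norm(centers - p, axis=1)      # Euclidean distances d(p, l_i)
    alpha = np.exp(-d**2 / (2.0 * sigma**2))     # intermediate cell rates
    drdt = (-r + w.T @ alpha) / tau_r            # tau_r dr/dt = -r + sum w_i alpha_i
    return r + dt * drdt
```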
When the robot explores the environment for the first time, the original grid cell discharge rate is input into model 1 to obtain the uncorrected position cell discharge rate u; u is input into the environment expression model of the position cells to obtain the uncorrected position information p of the robot at the current moment, and step 4 is performed.
When the robot has completely explored the environment once and begins to explore it a second time, the grid cell discharge rate r obtained from model 2 is input into model 1 again, giving the corrected, more accurate position cell discharge rate u′; u′ is input into the environment expression model of the position cells to obtain the corrected position information p′ of the robot at the current moment, the cognitive point positions of the cognitive map are corrected, and the cognitive map is updated.
3.5 Environment expression model of the position cells
A two-dimensional ring attractor model of the position cells is constructed to form a measure of relative position in the environment. The synaptic connections of the continuous attractor model of the position cells are divided into local excitatory connections, local inhibitory connections and global inhibitory connections, which cooperate to form a packet of activity. Because the neural sheet of the position cells has boundaries, the perimeter of the sheet is connected so that a ring (toroidal) model is formed.
The invention uses a two-dimensional Gaussian function to express the local excitation of the position cell neural sheet; the excitatory weight connection matrix is:

ε_{m,n} = exp( −(m² + n²) / k_p )

where k_p is the width constant of the positional distribution.
The change in position cell discharge rate caused by the local excitatory connections is:

Δu_{X,Y} = Σ_{i=1}^{n_X} Σ_{j=1}^{n_Y} u_{i,j} · ε_{m,n}

where n_X and n_Y are the lengths of the two-dimensional neural sheet in the X and Y directions. The relative position coordinates m and n are calculated as:

m = (X − X_i) mod n_X
n = (Y − Y_j) mod n_Y
The change in the discharge rate of the excited position cells on the neural sheet caused by inhibition is:

Δu_{X,Y} = Σ_{i=1}^{n_X} Σ_{j=1}^{n_Y} u_{i,j} ψ_{m,n} − φ

where ψ_{m,n} is the weight of the inhibitory connection and φ is the global inhibition level.
To ensure that the position cells do not enter a negative excitation state, the change of every position cell is required to be greater than or equal to zero, so the discharge rate of each position cell at time t is updated as:

u_{X,Y}(t+1) = max( u_{X,Y}(t) + Δu_{X,Y}, 0 )
the site cell discharge rate was then normalized.
Figure BDA0002228138920000105
The discharge rate of the position cells after path integration is expressed as:

u_{X,Y}(t+1) = Σ_m Σ_n α_{mn} u_{(X+m),(Y+n)}(t)

where δX_0, δY_0 are the rounded-down integer offsets determined by the current speed and direction:

δX_0 = ⌊k_m v cos θ⌋,  δY_0 = ⌊k_n v sin θ⌋

where k_m and k_n are step-size variables, θ is the heading given by the corrected angular velocity, and v is the corrected linear velocity.
To obtain the position cell discharge rate at the next moment, a residue is used to express the diffusion of the discharge rate. The residue offsets are the fractional parts left after rounding:

δX_f = k_m v cos θ − δX_0,  δY_f = k_n v sin θ − δY_0

and the residue weights are obtained as:

α_{mn} = Q(δX_f, m − δX_0) · Q(δY_f, n − δY_0)

where Q(·,·) distributes the fractional offset between the two neighbouring units in each direction (its exact definition appears only as an image in the source).
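A sketch of this path-integration shift of the position-cell activity packet: np.roll implements the integer offset, and bilinear weights stand in for the residue coefficients α_mn, which is an assumption about their exact form:

```python
import numpy as np

def path_integrate(u, v, theta, k_m=1.0, k_n=1.0):
    """Shift the 2-D position-cell activity sheet u by the displacement
    implied by the corrected velocity (v, theta). Integer offsets are
    applied with np.roll on the toroidal sheet; the fractional residue
    is spread bilinearly between neighbouring units (assumed form)."""
    dx, dy = k_m * v * np.cos(theta), k_n * v * np.sin(theta)
    x0, y0 = int(np.floor(dx)), int(np.floor(dy))   # delta X_0, delta Y_0
    fx, fy = dx - x0, dy - y0                       # residues delta X_f, delta Y_f
    u = np.roll(np.roll(u, x0, axis=0), y0, axis=1)
    u = ((1 - fx) * (1 - fy) * u
         + fx * (1 - fy) * np.roll(u, 1, axis=0)
         + (1 - fx) * fy * np.roll(u, 1, axis=1)
         + fx * fy * np.roll(np.roll(u, 1, axis=0), 1, axis=1))
    return u / u.sum()                              # normalise as in the text
```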
4. The RGB image and depth image information is input into the feature extraction algorithm to obtain the current environment image features;
5. A matching algorithm judges whether the image features obtained in step 4 match the image features in the view library: if they match, the current position of the robot is corrected with the position information associated with the view library image, the cognitive point positions of the cognitive map are corrected, the cognitive map is updated, and the method returns to step 1; otherwise step 6 is carried out;
The matching algorithm uses scan-line intensity profiles. The average absolute scan-line intensity difference of the two images is expressed as:

g(c) = (1/b) Σ_{n=1}^{b} | I_j(n + c) − I_k(n) |

where I_j and I_k are the scan-line intensity profiles, c is the offset, and b is the width of the image.
The matching degree measure of the image is obtained as:

G = μ_R | g_iR(c) − g(c) | + μ_D | g_iD(c) − g(c) |

where μ_R is the connection weight of the RGB image and μ_D is the connection weight of the depth map.
The minimum offset is:

c_m = min(G)

A fixed threshold c is taken, and c_m < c is used to judge whether the current image features match the image features in the view library: if they do not match, the method turns to step 6; if they match, the position of the current robot is corrected using the position information associated with the view library image, the cognitive point positions of the cognitive map are corrected, and the cognitive map is updated.
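A sketch of the scan-line comparison; the handling of the non-overlapping border at offset c is an assumption:

```python
import numpy as np

def scanline_difference(I_j, I_k, c):
    """Mean absolute scan-line intensity difference at offset c between
    two 1-D intensity profiles of width b."""
    b = I_j.shape[0]
    if c >= 0:
        return np.mean(np.abs(I_j[c:] - I_k[:b - c]))
    return np.mean(np.abs(I_j[:b + c] - I_k[-c:]))

def best_offset(I_j, I_k, max_offset=20):
    """Minimise the difference over offsets; a match is declared when
    the minimum falls below the fixed threshold c described in the text."""
    offsets = range(-max_offset, max_offset + 1)
    return min(offsets, key=lambda c: scanline_difference(I_j, I_k, c))
```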
6. The current image features obtained in step 4 are stored in the view library and associated with the current position information of the robot; a cognitive map cognitive point is created, the cognitive map is updated, and the next step continues;
The cognitive map consists of cognitive points e; a cognitive point is expressed as:

e_i = {p_i, V_i, d_i}

where p_i is the position cell discharge information, V_i is the image feature in the view library, and d_i is the current position information of the robot.
Comparing the current position information of the robot with the position information of the cognitive points contained in the cognitive map gives the position measure:

S = |p_i − p|

The transition between cognitive points is:

t_ij = {Δd_ij}

where Δd_ij is the amount of position change.
The new cognitive point is obtained as:

e_j = {p_j, V_j, d_i + Δd_ij}
When the image features match image features in the view library, the position of the current robot is corrected using the position information associated with the view library image, and the cognitive point positions of the cognitive map are corrected by an update of the form:

Δd_i = γ [ Σ_{j=1}^{N_f} (d_j − d_i − Δd_ij) + Σ_{k=1}^{N_t} (d_k − d_i − Δd_ki) ]

where Δd_i is the update amount applied to cognitive point i, γ is a correction rate constant, N_f is the number of updates from the original cognitive point to other cognitive points, and N_t is the number of updates from other cognitive points to the original cognitive point (the exact correction equation appears only as an image in the source).
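Since the correction equation survives only as an image, the sketch below uses a generic graph-relaxation step over the stored transitions; this is an assumed reading of the described update, not the patent's exact formula:

```python
import numpy as np

def relax_cognitive_points(d, links, gamma=0.5):
    """One relaxation step of the cognitive-point positions after a loop
    closure. d is an (N, 2) array of point positions; links is a list of
    (i, j, delta_ij) transitions. Each transition pulls the linked points
    toward mutual consistency (assumed form of the correction)."""
    corr = np.zeros_like(d)
    for i, j, delta in links:         # delta = stored position change
        err = d[j] - d[i] - delta     # residual of the stored transition
        corr[i] += gamma * err        # move the source point forward
        corr[j] -= gamma * err        # move the target point back
    return d + corr
```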
7. Return to step 1, continue exploring the environment, and continuously update the cognitive map.

Claims (1)

1. A cognitive map construction method based on the rat brain hippocampus information transfer mechanism, characterized by comprising the following steps:
Step 1: the robot explores the environment; the inertial sensor acquires the angular velocity and linear velocity information of the robot, and the visual sensor acquires RGB image and depth image information of the environment;
Step 2: the acquired angular velocity and linear velocity information and the RGB and depth image information are input into the visual-inertial fusion module, the image information is used to correct the drift errors accumulated by the angular and linear velocities over time, and corrected angular velocity and linear velocity information is obtained;
The specific steps of the visual-inertial fusion module are as follows:
2.1 ORB feature extraction is carried out on the RGB image information at each moment, in two steps: FAST corner extraction, which extracts the feature points in the image, and BRIEF description, which computes the feature descriptors of the extracted points; through the descriptors, all pairwise-matched feature points in the RGB images at the current and subsequent moments are obtained;
2.2 Obtain the Euclidean transformation R, t by the SVD method.
First compute the centroids p_1, p_2 of all matched feature points in the two images at the current and subsequent moments, then remove the centroid from every feature point:

q_i^(j) = p_i^(j) − p_j,  j = 1, 2    (1)

where q_i^(j) is the de-centred coordinate of the i-th feature point in the j-th group of images (j = 1 is the current moment, j = 2 the subsequent moment), and p_i^(j) is the coordinate of the i-th feature point in the j-th group of images, where j = 1, 2;
Calculate the optimal rotation matrix R*:

R* = argmin_R (1/2) Σ_{i=1}^{n} || q_i^(1) − R q_i^(2) ||²    (2)

where R* is the optimal solution for R, and R is the rotation matrix;
Calculate the optimal translation t*:

t* = p_1 − R p_2    (3)

where t* is the optimal solution for t, and t is the translation;
handle R * And t * Obtaining angular velocity information theta 'and linear velocity information v' of the image by taking into a Rodrigas formula;
2.3 The angular velocity information θ″ and linear velocity information v″ obtained in the previous step are used to correct the angular velocity information θ′ and linear velocity information v′ acquired from the inertial sensor at the current moment:

θ = θ″ + αθ′,  v = v″ + αv′    (4)

where α ∈ (0, 1) is a weighting coefficient, θ is the corrected angular velocity information, and v is the corrected linear velocity information;
Step 3: the corrected angular velocity θ and linear velocity v are input into the position sensing module constructed according to the spatial cells of the rat brain hippocampus and their information transfer mechanism, and the position of the robot expressed by position cell discharge information is obtained;
The position sensing module constructed according to the spatial cells of the rat brain hippocampus and their information transfer mechanism is characterized as follows: first, the corrected angular velocity information θ is input into the head direction cell model to obtain the head direction cell discharge rate; next, the head direction cell discharge rate and the corrected linear velocity information v are input into the stripe cell model to obtain the stripe cell discharge rate F_θ; the stripe cell discharge rate F_θ is then input into the grid cell model to obtain the grid cell discharge rate r; the grid cell discharge rate r is input into the competitive network model of the CA3-entorhinal cortex information circulation loop to obtain the corrected position cell discharge rate u′; finally, the corrected position cell discharge rate u′ is input into the environment expression model of the position cells to obtain the corrected position information p′ of the robot at the current moment;
The competitive network model of the CA3-entorhinal cortex information circulation loop comprises a competitive network model of CA3-to-entorhinal-cortex information transfer, namely model 1, and a competitive network model of entorhinal-cortex-to-CA3 information transfer, namely model 2;
The specific steps of the two models are as follows:
3.1 Competitive network model of CA3-to-entorhinal-cortex information transfer
First, the discharge rate of a single grid cell is input to obtain the discharge rate u_v of a single position cell:

u_v = H(r_k − C_ing)    (5)

where H is the learning rate, u_v is the discharge rate of the v-th position cell, r_k is the discharge rate of the k-th grid cell, and C_ing is the inhibitory level constant of the grid cells;
Then the discharge rates of several grid cells associated with a certain fixed position in space are input to obtain the discharge rates u_v(d) of the position cells associated with that fixed position:

u_v(d) = A ( Σ_{k=1}^{M} w_vk r_k(d) − C_inp )    (6)

where A is the excitatory level constant of the position cells, C_inp is the inhibitory level constant of the position cells, w_vk is the connection weight used to calculate the position cell discharge rate, d is the position of the robot, and M is the number of layers of the grid cell neural sheet;
The single position cell discharge rate u_v and the position cell discharge rates u_v(d) associated with a certain fixed position in space together form the position cell discharge rate u;
3.2 Competitive network model of entorhinal-cortex-to-CA3 information transfer
First, the motion trajectory of the robot is input to obtain the discharge rate α of the intermediate cells:

α_m = exp( − d(p, l_m)² / (2σ²) )    (7)

where p is the motion trajectory of the robot, l_m is the position of the m-th intermediate cell, d(·) is the Euclidean distance, and σ is the size of the intermediate cell;
Then the intermediate cell discharge rates are input to obtain the discharge rate r_k of the k-th grid cell:

τ_r dr_k(t)/dt = −r_k(t) + Σ_{m=1}^{S} w_mk α_m    (8)

where α_m is the discharge rate of the m-th intermediate cell, w_mk is the connection weight used to calculate the discharge rate of the k-th grid cell, τ_r is the time constant of the neuron, S is the total number of intermediate cells, and t denotes a given moment;
When the robot explores the environment for the first time, the original grid cell discharge rate is input into model 1 to obtain the uncorrected position cell discharge rate u; u is input into the environment expression model of the position cells to obtain the uncorrected position information p of the robot at the current moment, and step 4 is performed;
When the robot has completely explored the environment once and begins to explore it a second time, the grid cell discharge rate r obtained from model 2 is input into model 1 again, giving the corrected, more accurate position cell discharge rate u′; u′ is input into the environment expression model of the position cells to obtain the corrected position information p′ of the robot at the current moment, the cognitive point positions of the cognitive map are corrected, and the cognitive map is updated;
Step 4: the RGB image and depth image information is input into the feature extraction algorithm to obtain the current environment image features;
Step 5: a matching algorithm judges whether the image features obtained in step 4 match the image features in the view library: if they match, the current position of the robot is corrected with the position information associated with the view library image, the cognitive point positions of the cognitive map are corrected, the cognitive map is updated, and the method returns to step 1; otherwise step 6 is carried out;
Step 6: the current image features obtained in step 4 are stored in the view library and associated with the current position information of the robot, a cognitive map cognitive point is created, the cognitive map is updated, and the next step continues;
Step 7: return to step 1, continue exploring the environment, and continuously update the cognitive map.
CN201910958426.0A 2019-10-10 2019-10-10 Cognitive map construction method based on rat brain hippocampus information transfer mechanism Active CN111044031B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910958426.0A CN111044031B (en) 2019-10-10 2019-10-10 Cognitive map construction method based on rat brain hippocampus information transfer mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910958426.0A CN111044031B (en) 2019-10-10 2019-10-10 Cognitive map construction method based on rat brain hippocampus information transfer mechanism

Publications (2)

Publication Number Publication Date
CN111044031A CN111044031A (en) 2020-04-21
CN111044031B true CN111044031B (en) 2023-06-23

Family

ID=70232239

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910958426.0A Active CN111044031B (en) 2019-10-10 2019-10-10 Cognitive map construction method based on rat brain hippocampus information transfer mechanism

Country Status (1)

Country Link
CN (1) CN111044031B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111813113B (en) * 2020-07-06 2021-07-02 安徽工程大学 Bionic vision self-movement perception map drawing method, storage medium and equipment
CN112525194B (en) * 2020-10-28 2023-11-03 北京工业大学 Cognitive navigation method based on in vivo source information and exogenous information of sea horse-striatum
CN113297506A (en) * 2021-06-08 2021-08-24 南京航空航天大学 Brain-like relative navigation method based on social position cells/grid cells
CN113657574A (en) * 2021-07-28 2021-11-16 哈尔滨工业大学 Construction method and system of bionic space cognitive model
CN113703322B (en) * 2021-08-28 2024-02-06 北京工业大学 Method for constructing scene memory model imitating mouse brain vision pathway and entorhinal-hippocampal structure

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106125730A (en) * 2016-07-10 2016-11-16 北京工业大学 A kind of robot navigation's map constructing method based on Mus cerebral hippocampal spatial cell
CN109000655A (en) * 2018-06-11 2018-12-14 东北师范大学 Robot bionic indoor positioning air navigation aid
CN109668566A (en) * 2018-12-05 2019-04-23 大连理工大学 Robot scene cognition map construction and navigation method based on mouse brain positioning cells
CN110210462A (en) * 2019-07-02 2019-09-06 北京工业大学 A kind of bionical hippocampus cognitive map construction method based on convolutional neural networks

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10677883B2 (en) * 2017-05-03 2020-06-09 Fuji Xerox Co., Ltd. System and method for automating beacon location map generation using sensor fusion and simultaneous localization and mapping

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106125730A (en) * 2016-07-10 2016-11-16 北京工业大学 A kind of robot navigation's map constructing method based on Mus cerebral hippocampal spatial cell
CN109000655A (en) * 2018-06-11 2018-12-14 东北师范大学 Robot bionic indoor positioning air navigation aid
CN109668566A (en) * 2018-12-05 2019-04-23 大连理工大学 Robot scene cognition map construction and navigation method based on mouse brain positioning cells
CN110210462A (en) * 2019-07-02 2019-09-06 北京工业大学 A kind of bionical hippocampus cognitive map construction method based on convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A bionic robot cognitive map construction method based on the hippocampal cognitive mechanism; Yu Naigong et al.; Acta Automatica Sinica; Vol. 44, No. 1; pp. 52-70 *

Also Published As

Publication number Publication date
CN111044031A (en) 2020-04-21

Similar Documents

Publication Publication Date Title
CN111044031B (en) Cognitive map construction method based on rat brain hippocampus information transfer mechanism
Tang et al. Spiking neural network on neuromorphic hardware for energy-efficient unidimensional slam
CN106949896B (en) Scene cognition map construction and navigation method based on mouse brain hippocampus
CN106125730B (en) A kind of robot navigation's map constructing method based on mouse cerebral hippocampal spatial cell
CN112734765B (en) Mobile robot positioning method, system and medium based on fusion of instance segmentation and multiple sensors
Mu et al. End-to-end navigation for autonomous underwater vehicle with hybrid recurrent neural networks
Engel et al. Deeplocalization: Landmark-based self-localization with deep neural networks
Yuan et al. An entorhinal-hippocampal model for simultaneous cognitive map building
Chen et al. Convolutional multi-grasp detection using grasp path for RGBD images
CN108362284A (en) A kind of air navigation aid based on bionical hippocampus cognitive map
Liu et al. Brain-like position measurement method based on improved optical flow algorithm
CN113703322B (en) Method for constructing scene memory model imitating mouse brain vision pathway and entorhinal-hippocampal structure
Zhang et al. A bionic dynamic path planning algorithm of the micro UAV based on the fusion of deep neural network optimization/filtering and hawk-eye vision
CN112509051A (en) Bionic-based autonomous mobile platform environment sensing and mapping method
Wei et al. Design of robot automatic navigation under computer intelligent algorithm and machine vision
Srivastava et al. Least square policy iteration for ibvs based dynamic target tracking
Chen et al. Deep reinforcement learning of map-based obstacle avoidance for mobile robot navigation
CN113689502A (en) Multi-information fusion obstacle measuring method
CN111611869B (en) End-to-end monocular vision obstacle avoidance method based on serial deep neural network
Sleaman et al. Indoor mobile robot navigation using deep convolutional neural network
CN110774283A (en) Robot walking control system and method based on computer vision
CN115950414A (en) Adaptive multi-fusion SLAM method for different sensor data
Zhuang et al. A biologically-inspired simultaneous localization and mapping system based on LiDAR sensor
Taylor et al. Robot-centric human group detection
CN115454096A (en) Robot strategy training system and training method based on curriculum reinforcement learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant