CN111044031B - Cognitive map construction method based on mouse brain hippocampus information transfer mechanism

Cognitive map construction method based on mouse brain hippocampus information transfer mechanism

Info

Publication number
CN111044031B
CN111044031B (application CN201910958426.0A)
Authority
CN
China
Prior art keywords
information
cell
firing rate
robot
input
Prior art date
Legal status
Active
Application number
CN201910958426.0A
Other languages
Chinese (zh)
Other versions
CN111044031A (en)
Inventor
于乃功
王林
魏雅乾
Current Assignee
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201910958426.0A priority Critical patent/CN111044031B/en
Publication of CN111044031A publication Critical patent/CN111044031A/en
Application granted granted Critical
Publication of CN111044031B publication Critical patent/CN111044031B/en

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 — Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/10 — Navigation by using measurements of speed or acceleration
    • G01C21/12 — Navigation by measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 — Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 — Inertial navigation combined with non-inertial navigation instruments

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to a cognitive map construction method based on the information transfer mechanism of the rat brain hippocampus. The angular velocity and linear velocity acquired by an inertial sensor, together with RGB images and depth images of the environment acquired by a visual sensor, are input into a visual-inertial fusion module, where the image information is used to correct the drift errors that the angular velocity and linear velocity accumulate over time. The corrected angular velocity and linear velocity are then input into a position-sensing module built on the spatial cells and information transfer mechanism of the rat hippocampus: the angular velocity drives head direction cells and, together with the linear velocity, stripe cells. Grid cell and place cell models are constructed from a competitive network model of the CA3-entorhinal cortex information circulation loop in the rat hippocampus; the place cell model expresses the environment and yields the position information of the robot, from which the cognitive map is finally constructed. Compared with traditional SLAM approaches, the cognitive map constructed by the invention contains biological information and can be applied to many fields of work and life.

Description

Cognitive map construction method based on mouse brain hippocampus information transfer mechanism
Technical field:
The invention belongs to the fields of brain-like computation and intelligent robot navigation, and particularly relates to a cognitive map construction method based on the information transfer mechanism of the rat brain hippocampus.
Background art:
Artificial intelligence is a major research field of the twenty-first century. It touches every aspect of human life and has greatly changed and improved it. Robot navigation, an important research topic within this field, is mainly divided into inertial navigation, landmark navigation, visual navigation, and so on. The invention combines robot navigation with brain-like computation and imitates the navigation mode of humans and rats, so that a robot acquires environment cognition and navigation abilities similar to theirs; specifically, it imitates the firing mechanisms of the various spatial cells in the rat hippocampus and the information transfer mechanisms between the parts of the hippocampus and the various neurons to realize mobile robot navigation. Robot navigation refers to the process in which a robot, sensing and learning through its sensors in a structured or dynamic unstructured environment, realizes goal-oriented, obstacle-avoiding autonomous movement. Traditional mobile robot navigation is based on Bayesian probabilistic algorithms such as graph optimization, Kalman filtering, and extended Kalman filtering, which have been widely applied in industry and daily life; compared with the navigation ability of humans and animals, however, a gap remains. Mobile robots that can handle complex situations with higher intelligence are therefore needed to meet people's growing industrial and everyday demands, and bio-inspired navigation methods are of great significance.
To meet the higher demands placed on robots by rapidly developing artificial intelligence, mobile robots should solve problems autonomously and learn in complex environments, so that they can complete more and more complex tasks, free people from heavy work, and replace people in dangerous or high-precision operations. Combining brain-like computation and bionic computational models with robot technology is therefore receiving more and more attention. Researchers in artificial intelligence and robotics have proposed that robots should learn from higher mammals: by imitating the mammalian brain, robots can gain higher intelligence and thus advanced capabilities of autonomous learning, deductive reasoning, complex operation, inductive summarization, and innovative decision-making. Concretely, an autonomous learning system should automatically learn the features of the external environment, acquiring and storing the required knowledge; the system's memory capability should store that knowledge stably; the system should learn online and autonomously form the robot's internal abstract feature representations; and finally the system should dynamically adjust and produce decisions suited to the current environment, so that the robot adapts quickly to dynamic environments.
The invention constructs a cognitive map according to the various spatial cells and information transfer mechanisms in the rat brain hippocampus. Scientists have found several kinds of navigation-related spatial cells there. In 1971, O'Keefe and Dostrovsky first found that pyramidal neurons in the hippocampus show position-selective spike activity: when the animal is at a specific spatial position, the firing frequency of particular pyramidal neurons changes; such neurons are called place cells. Head direction cells were found by Taube in 1990 in the postsubiculum. In 2005, Hafting et al. found grid cells, which fire strongly at specific locations in space, by changing the size and shape of the test chamber. Several laboratories reported in succession in 2008 that boundary cells exist in the superficial layers of the entorhinal cortex. In 2012, O'Keefe and colleagues found stripe cells, which have spatially periodic stripe-like firing fields, in the parasubiculum and entorhinal cortex. The hippocampal formation is mainly composed of four parts: the entorhinal cortex (EC), the dentate gyrus (DG), the hippocampus proper, and the subiculum complex (SUB). The hippocampus proper is divided into area CA1 and area CA3. Bidirectional fiber projections exist between the hippocampus and the entorhinal cortex: the entorhinal cortex first projects information into the hippocampus, and after fiber projection and neuronal relay, the hippocampus projects output information back to the entorhinal cortex, forming the entorhinal cortex-hippocampus loop. The invention models the firing mechanisms of the various spatial cells and the information transfer mechanisms among them, constructs a cognitive map of the environment, and navigates on the basis of this map. It reproduces the rat's cognition of the spatial environment and self-localization within it, can be applied to industry, agriculture, and daily life, and has good practical and commercial value.
In previous studies, cognitive maps have been constructed using hippocampal spatial cells, but those models feed the angular velocity and linear velocity acquired by the inertial sensor directly into the spatial cell models, so the errors that accumulate over time are large; they also ignore the information transfer from CA3 back to the entorhinal cortex, and thus do not fully imitate the function of biological spatial cells.
Contents of the invention:
Based on the information transfer mechanism of the rat brain hippocampus and the firing mechanisms of its spatial cells, the invention constructs a cognitive map, realizes self-localization of the robot, and carries out robot navigation research on the cognitive map; it is applicable to indoor, outdoor, and other environments and to many fields of work and life.
Traditional robot navigation map construction methods are mainly SLAM-based, and the maps they build are grid maps and topological maps. These methods demand heavy computation and hardware; the partitioning of the grid and the recruitment of topological nodes are mostly based on a static environment, so they perform poorly in dynamic environments. The map built by the method of the invention is a cognitive map. Although a cognitive map is a kind of topological map, its advantage over an ordinary topological map is that each of its nodes contains spatial cell firing information, environment RGB-D information, and association information with several related nodes.
The specific working flow of the invention is as follows (a pseudocode sketch of the loop is given after the list):
Step 1, the robot explores the environment; an inertial sensor acquires the angular velocity and linear velocity of the robot, and a visual sensor acquires RGB images and depth images of the environment;
Step 2, the acquired angular velocity and linear velocity, RGB images, and depth images are input into the visual-inertial fusion module; the image information is used to correct the drift errors the angular velocity and linear velocity accumulate over time, yielding corrected angular velocity and linear velocity.
Step 3, the corrected angular velocity θ and linear velocity v are input into the position-sensing module built on the spatial cells and information transfer mechanism of the rat brain hippocampus, yielding the position information of the robot expressed by place cell firing;
Step 4, the RGB images and depth images are input into a feature extraction algorithm to obtain the image features of the current environment;
Step 5, a matching algorithm judges whether the image features obtained in step 4 match image features in the view library: if they match, the position information associated with the view library image is used to correct the current position of the robot and the positions of the cognitive points of the cognitive map, the cognitive map is updated, and the method returns to step 1; otherwise step 6 is performed;
Step 6, the current image features obtained in step 4 are stored in the view library and associated with the robot's current position information; a cognitive point of the cognitive map is created, the cognitive map is updated, and the method continues with the next step;
Step 7, return to step 1, continue exploring the environment, and keep updating the cognitive map.
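Read end to end, steps 1-7 form a perception-update loop; the sketch below restates it in Python-style pseudocode. All names (fuse, position_module, extract_features, the robot and map objects) are illustrative placeholders, not identifiers from the patent:

```python
# Illustrative outline of the workflow (steps 1-7); all names are hypothetical.
def build_cognitive_map(robot, view_library, cognitive_map):
    while robot.exploring:
        omega, v = robot.inertial_sensor.read()    # step 1: angular/linear velocity
        rgb, depth = robot.visual_sensor.read()    # step 1: RGB-D images

        omega, v = fuse(omega, v, rgb, depth)      # step 2: visual-inertial correction
        position = position_module(omega, v)       # step 3: place-cell position estimate

        features = extract_features(rgb, depth)    # step 4
        match = view_library.match(features)       # step 5
        if match is not None:                      # loop closure: correct the map
            position = match.position
            cognitive_map.correct(match)
        else:                                      # step 6: create a new cognitive point
            view_library.add(features, position)
            cognitive_map.add_node(position, features)
        # step 7: the loop continues until exploration ends
```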
Advantageous effects
First, the invention uses visual-inertial fusion as input to construct the cognitive map. Inertial and visual sensors are small and inexpensive, and as technology advances the devices keep shrinking and getting cheaper. Both sensors are passive devices that need no external input, so truly autonomous navigation is possible. The two sensors are highly complementary: although the errors of the inertial sensor accumulate over time, it tracks fast motion of the carrier well over short periods and ensures short-term accuracy, while the visual sensor estimates accurately during low-dynamic motion and can effectively correct inertial drift, but cannot maintain good pose estimates when the motion is too fast; fusing the two yields better navigation parameter estimates. In addition, the model of the invention better matches the spatial cognition mechanism of the biological rat, supplements the traditional network models, conforms more closely to biological fact, is robust, and can build a more accurate map.
Drawings
FIG. 1 is the flowchart of the cognitive map construction method based on the mouse brain hippocampus information transfer mechanism;
FIG. 2 is a diagram of information transfer in the rat brain hippocampus;
FIG. 3 shows the pairwise-matched feature points in the RGB images at the current and subsequent moments;
FIG. 4 is a diagram of the competitive neural network based on entorhinal cortex-CA3 information transfer, comprising a competitive network for CA3-to-entorhinal-cortex information transfer and a competitive network for entorhinal-cortex-to-CA3 information transfer;
FIG. 5 shows the firing rates of stripe cells, where the first column of stripe cells is oriented at 0 degrees, the second at 60 degrees, and the third at 90 degrees;
FIG. 6 shows the firing rate of grid cells;
FIG. 7 shows the firing rate of place cells;
FIG. 8: left, the uncorrected cognitive map; right, the corrected cognitive map; the robot marker is the corrected position of the robot at the current time.
Detailed Description
Application scenario: the main application scenario of the invention is indoor navigation of a mobile robot. Through the method, the robot can acquire angular velocity, linear velocity, and image information, construct a cognitive map of the environment by exploring the unknown environment, and perform goal-oriented navigation tasks according to its own position.
The following describes the method in detail with reference to the drawings and examples.
Fig. 1 is the flowchart of the cognitive map construction method based on the spatial cells and information transfer mechanism of the mouse brain hippocampus. As shown in the figure, in the visual-inertial fusion module the RGB image information collected by the robot's visual sensor is passed through a visual odometry algorithm to obtain angular velocity and linear velocity, which are used to correct the drift errors of the angular velocity and linear velocity collected by the inertial sensor, so that the velocities input to the position-sensing module are more accurate. In the position-sensing module, the angular velocity is input to the head direction cells; the head direction cell firing and the linear velocity are input together to the stripe cells; the stripe cells feed the grid cells through a two-dimensional continuous attractor model; a competitive network model of the entorhinal cortex-CA3 information circulation loop is built between the grid cells and the place cells; the final place cell firing is used to build the cognitive map, and the map is updated through loop closure detection.
The specific implementation steps of the invention are as follows:
1. The robot explores the environment; the inertial sensor collects the angular velocity and linear velocity, and the visual sensor collects RGB images and depth images of the environment;
In this implementation, the angular velocity and linear velocity are acquired by a wheel encoder serving as the inertial sensor, and the visual sensor is a Kinect camera that collects the RGB images and depth images of the environment.
2. The collected angular velocity and linear velocity and the RGB and depth images are input into the visual-inertial fusion module; the image information is used to correct the drift errors the angular velocity and linear velocity accumulate over time, yielding corrected angular velocity and linear velocity.
In the visual-inertial fusion module, the inertial sensor tracks fast motion of the carrier well over short periods, so short-term accuracy is ensured, but its errors accumulate over time; the visual sensor estimates accurately during low-dynamic motion and can therefore effectively correct the drift of the inertial sensor.
ORB features are extracted from the RGB image at each moment, in two steps: FAST corner extraction, which extracts the feature points of the image; and BRIEF descriptors, which describe the extracted feature points. The descriptors are used to obtain all pairwise-matched feature points between the RGB images at the current moment and the subsequent moment.
After the ORB algorithm detects feature points with the FAST detector, the N feature points with the largest Harris corner response are selected from them. The Harris corner response function is:
$$R = \det M - \alpha\,(\operatorname{trace} M)^{2}$$
Since FAST feature points have no orientation, the orientation is computed with the intensity centroid method: the centroid is obtained from image moments, and the vector from the feature point to the centroid gives the direction of the feature point. The moments of an image patch B are defined as:

$$m_{pq} = \sum_{x,y \in B} x^{p} y^{q}\, I(x,y), \qquad p, q \in \{0, 1\}$$
The centroid is then:

$$C = \left(\frac{m_{10}}{m_{00}},\ \frac{m_{01}}{m_{00}}\right)$$
Connecting the geometric center of the image patch with the centroid gives a direction vector, so the orientation of the feature point is:

$$\theta = \arctan\!\left(m_{01},\, m_{10}\right) \in [-\pi, \pi]$$
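For orientation, OpenCV's ORB implements this same FAST detection, Harris ranking, intensity-centroid orientation, and BRIEF description pipeline, so the extraction-and-matching step can be sketched as follows; this is a minimal illustration, not the patent's code, and the frame file names are placeholders:

```python
import cv2

# Hypothetical file names for the frames at time t and t+1.
img1 = cv2.imread("frame_t.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)   # FAST + Harris ranking + oriented BRIEF
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching with cross-check yields the pairwise-matched
# feature points between the current and the subsequent RGB image.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
pts1 = [kp1[m.queryIdx].pt for m in matches]
pts2 = [kp2[m.trainIdx].pt for m in matches]
```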
First, the centroid coordinates $p_1, p_2$ of all matched feature points of the two images at the current moment and the subsequent moment are computed, and then the de-centroided coordinates of all feature points of the two images; the specific formulas are:

$$p_j = \frac{1}{n}\sum_{i=1}^{n} p_i^{(j)}, \qquad q_i^{(j)} = p_i^{(j)} - p_j, \qquad j = 1, 2$$

where $q_i^{(j)}$ is the de-centroided coordinate of the i-th feature point in the j-th image (j = 1 denotes the current moment, j = 2 the subsequent moment), and $p_i^{(j)}$ is the coordinate of the i-th feature point in the j-th image.
The optimal rotation matrix $R^*$ is computed:

$$R^* = \arg\min_{R}\ \frac{1}{2}\sum_{i=1}^{n} \left\| q_i^{(1)} - R\, q_i^{(2)} \right\|^{2}$$

where $R^*$ is the optimal solution for the rotation matrix R;
The optimal translation $t^*$ is computed:

$$t^* = p_1 - R^* p_2$$

where $t^*$ is the optimal solution for the translation t;
The rotation angle recovered from $R^*$ via the Rodrigues formula, divided by the frame interval $\Delta t$, gives the angular velocity:

$$\theta'' = \frac{1}{\Delta t}\arccos\!\left(\frac{\operatorname{tr}(R^*) - 1}{2}\right)$$

and the linear velocity is:

$$v'' = \frac{\lVert t^* \rVert}{\Delta t}$$
the obtained angular velocity information θ″ and linear velocity information v″ correct the angular velocity information θ 'and linear velocity information v' acquired at the present time from the inertial sensor:
θ=θ″+αθ′,v=v″+αv′(α∈(0,1))
where α (α∈ (0, 1)) is a weighting coefficient, θ is corrected angular velocity information, and v is corrected linear velocity information.
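The optimal R* and t* above have the classical closed-form solution by SVD (the Kabsch method), so the whole fusion step can be sketched in NumPy as below. This is an illustrative sketch under stated assumptions (2-D feature coordinates, a known frame interval dt); the function names are not from the patent:

```python
import numpy as np

def estimate_motion(pts1, pts2, dt):
    """Recover rotation angle and speed from matched 2-D feature points."""
    p1, p2 = np.asarray(pts1, float), np.asarray(pts2, float)
    c1, c2 = p1.mean(axis=0), p2.mean(axis=0)        # centroid coordinates
    q1, q2 = p1 - c1, p2 - c2                        # de-centroided coordinates

    # Kabsch: R* = argmin sum ||q1_i - R q2_i||^2 via SVD of the covariance.
    H = q2.T @ q1
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = c1 - R @ c2                                  # optimal translation t*

    theta_v = np.arctan2(R[1, 0], R[0, 0]) / dt      # angular velocity (rad/s)
    v_v = np.linalg.norm(t) / dt                     # linear speed
    return theta_v, v_v

def fuse(theta_vis, v_vis, theta_imu, v_imu, alpha=0.5):
    """Weighted correction of inertial readings by visual estimates, alpha in (0,1)."""
    return theta_vis + alpha * theta_imu, v_vis + alpha * v_imu
```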
3. The corrected angular velocity θ and linear velocity v are input into the position-sensing module built on the spatial cells and information transfer mechanism of the rat brain hippocampus, yielding the position information of the robot expressed by place cell firing;
The position-sensing module works as follows. First, the corrected angular velocity θ is input into the head direction cell model, giving the head direction cell firing rate $d_i(t)$. Next, the head direction cell firing rate $d_i(t)$ and the corrected linear velocity v are input into the stripe cell model, giving the stripe cell firing rate $F_\theta$. The stripe cell firing rate $F_\theta$ is then input into the grid cell model, giving the grid cell firing rate r. The grid cell firing rate r is input into the competitive network model of the CA3-entorhinal cortex information circulation loop, giving the corrected place cell firing rate u′. Finally, u′ is input into the environment expression model of the place cells, giving the corrected position information p′ of the robot at the current moment;
3.1 Head direction cell model
Head direction cells fire maximally when the rat faces a specific direction angle; the farther the current heading is from this preferred direction, the weaker the firing. The preferred direction depends only on the current orientation of the rat's head, independent of the environment and of the body orientation. Whether the rat is stationary or moving, the head direction cells keep firing; the joint firing of all head direction cells encodes the horizontal direction the rat faces and generates a continuous heading signal. While the robot explores the environment, the corrected angular velocity is input to the head direction cells, which generate an angular velocity adjustment signal; the head direction cell firing rate is proportional to the direction and speed of the movement.
From physiological studies of head direction cells, the head direction tuning kernel is taken to be a Gaussian in the angular offset:

$$\epsilon(\theta_i) = \exp\!\left(-\frac{\theta_i^{\,2}}{2\sigma_{hd}^{2}}\right)$$

where $\theta_0$ is the principal orientation of the head direction cells, $\theta_i$ is the preferred direction of the i-th head direction cell, expressed as an angular offset relative to the principal direction $\theta_0$, and $\sigma_{hd}$ is the tuning width.

The head direction cell firing rate is then:

$$d_i(t) = \epsilon\!\left(\varphi(t) - \theta_0 - \theta_i\right), \qquad i = 1, \dots, r$$

where r is the number of head direction cells, $\varphi(t)$ is the current heading (the integral of the corrected angular velocity), and $d_i(t)$ is the firing rate at time t of the i-th head direction cell with preferred direction $\theta_i$.
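A small sketch of such a head direction population, assuming the Gaussian tuning above; the population size, tuning width, and time step are illustrative values, not parameters given in the patent:

```python
import numpy as np

R_CELLS = 36                                   # illustrative population size
PREF = np.linspace(0, 2 * np.pi, R_CELLS, endpoint=False)  # preferred directions

def hd_firing(heading, sigma_hd=np.deg2rad(20)):
    """Firing of each head direction cell for the current heading (radians)."""
    # wrap the angular offset to [-pi, pi] so the tuning is circular
    offset = (heading - PREF + np.pi) % (2 * np.pi) - np.pi
    return np.exp(-offset**2 / (2 * sigma_hd**2))

# the heading integrates the corrected angular velocity over time steps dt
heading = 0.0
for omega in [0.1, 0.12, 0.09]:                # corrected angular velocities (rad/s)
    heading = (heading + omega * 0.1) % (2 * np.pi)   # dt = 0.1 s
d = hd_firing(heading)                         # population firing rates d_i(t)
```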
3.2 Stripe cell model
Stripe cells are upstream of the grid cells: their output forms part of the feedforward input to the grid cells. Feeding linear velocity into the stripe cells yields their stripe-shaped firing fields, and several stripe cell groups jointly shape the hexagonal firing fields of the grid cells. The corrected linear velocity is input into the stripe cells and a one-dimensional ring attractor model of stripe cells is built; the stripe cells within one ring attractor share the same orientation and period but differ in positional selectivity (phase). A Gaussian model of the stripe cell firing rate is constructed and input to the grid cells, determining the direction in which the grid cell activity packet moves.
The velocity along direction θ is defined as:

$$v_\theta(t) = v(t)\cos\!\left(\varphi(t) - \theta\right)$$

where θ is the preferred orientation of the stripe cell, α is the phase of the stripe cell, v(t) is the speed of the robot, and $\varphi(t)$ is the direction of motion.

Path-integrating this velocity gives the displacement of the robot along that direction:

$$d_\theta(t) = \int_0^t v_\theta(\tau)\, d\tau$$
the distance to obtain periodic discharge reset of the striped cells is as follows:
s θα (t)=(d θ (t)-α)modl
the discharge rate of the available streak cells was:
Figure BDA0002228138920000084
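The stripe cell computation can be sketched directly from these formulas; the period, phase, and field-width values below are illustrative assumptions, and the orientations follow FIG. 5:

```python
import numpy as np

THETAS = np.deg2rad([0, 60, 90])    # stripe orientations, as in FIG. 5
L = 0.5                             # spatial period l (illustrative, metres)
ALPHA = 0.0                         # phase alpha
SIGMA_S = 0.07                      # firing-field width (assumed)

d_theta = np.zeros_like(THETAS)     # integrated displacement per orientation

def stripe_step(v, phi, dt=0.1):
    """One path-integration step; returns the stripe firing rates F_theta."""
    global d_theta
    d_theta = d_theta + v * np.cos(phi - THETAS) * dt   # integrate v_theta
    s = (d_theta - ALPHA) % L                           # reset distance
    s = np.minimum(s, L - s)                            # wrap to nearest field
    return np.exp(-s**2 / (2 * SIGMA_S**2))

rates = stripe_step(v=0.3, phi=np.deg2rad(30))
```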
3.3 Competitive network model of CA3-entorhinal cortex information transfer
The competitive network model of the CA3-entorhinal cortex information circulation loop comprises a competitive network model of CA3-to-entorhinal-cortex information transfer (model 1) and a competitive network model of entorhinal-cortex-to-CA3 information transfer (model 2).
First, the firing rate of a single grid cell is input to obtain the firing rate $u_i$ of a single place cell:

$$u_i = H\left(r_j - C_{in}\right) \tag{5}$$

where H is the learning rate, $u_i$ is the firing rate of the i-th place cell, $r_j$ is the firing rate of the j-th grid cell, and $C_{in}$ is the inhibitory level constant of the grid cells;

Then the firing rates of the several grid cells associated with a fixed position in space are input to obtain the firing rate $u_i(d)$ of the place cells associated with that position:

$$u_i(d) = A\left(\sum_{j=1}^{M} w_{ij}\, r_j(d) - C_{in}\right) \tag{6}$$

where A is the excitatory level constant of the place cells, $C_{in}$ is the inhibitory level constant of the place cells, $w_{ij}$ are the connection weights, d is the position of the robot, and M is the number of layers of the grid cell neural sheet;

The single place cell firing rate $u_i$ and the firing rates $u_i(d)$ of the place cells associated with a fixed position in space together constitute the place cell firing rate u;
3.4 Competitive network model of entorhinal-cortex-to-CA3 information transfer
First, the motion trajectory of the robot is input to obtain the firing rate α of the intermediate cells:

$$\alpha_i(t) = \exp\!\left(-\frac{d\big(p(t),\ l_i\big)^{2}}{2\sigma^{2}}\right)$$

where p is the motion trajectory of the robot, $l_i$ is the position of the i-th intermediate cell, d(·) is the Euclidean distance, and σ is the size of the intermediate cell;

Then the intermediate cell firing rates are input to obtain the grid cell firing rate r:

$$\tau_r \frac{dr(t)}{dt} = -r(t) + \sum_{i=1}^{N} w_i\, \alpha_i(t)$$

where $\alpha_i$ is the firing rate of the i-th intermediate cell, $w_i$ are the connection weights, $\tau_r$ is the time constant of the neuron, and N is the total number of intermediate cells;
When the robot explores the environment for the first time, the original grid cell firing rate is input into model 1 to obtain the uncorrected place cell firing rate u; u is input into the environment expression model of the place cells to obtain the uncorrected position p of the robot at the current moment, and step 4 is performed;
When the robot has fully explored the environment once and begins the second exploration, the grid cell firing rate r obtained from model 2 is fed into model 1 again, giving the corrected, more accurate place cell firing rate u′; u′ is input into the environment expression model of the place cells to obtain the corrected position p′ of the robot at the current moment, the positions of the cognitive points of the cognitive map are corrected, and the cognitive map is updated.
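Model 2 can be sketched as Gaussian intermediate cells along the trajectory driving a leaky integrator, integrated with an Euler step; the cell centres, σ, τ_r, weights, and step size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
L_CELLS = rng.uniform(0, 5, size=(64, 2))   # intermediate cell centres (assumed)
SIGMA, TAU_R, DT = 0.4, 0.05, 0.01          # cell size, time constant, Euler step (assumed)
W = rng.normal(0, 0.1, size=64)             # intermediate-to-grid connection weights

def intermediate_rates(p):
    """Gaussian firing of intermediate cells around robot position p."""
    dist = np.linalg.norm(L_CELLS - p, axis=1)   # Euclidean distance d(p, l_i)
    return np.exp(-dist**2 / (2 * SIGMA**2))

def grid_step(r, p):
    """One Euler step of tau_r dr/dt = -r + sum_i w_i alpha_i."""
    alpha = intermediate_rates(np.asarray(p, float))
    return r + DT / TAU_R * (-r + W @ alpha)

r = 0.0
for p in [(0.1, 0.1), (0.2, 0.1), (0.3, 0.2)]:   # trajectory samples
    r = grid_step(r, p)
```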
3.5 Environment expression model of the place cells
A two-dimensional continuous attractor model of the place cells is constructed to form a measure of relative position in the environment. The synaptic connections of the continuous attractor model are divided into local excitatory, local inhibitory, and global inhibitory connections, which together form a stable activity packet. Because the neural sheet of place cells has boundaries, opposite edges of the sheet are connected, forming a wrap-around (toroidal) model.
The invention uses a two-dimensional Gaussian to express the local excitation of the place cell neural sheet; the weight connection matrix is:

$$\varepsilon_{m,n} = \exp\!\left(-\frac{m^{2} + n^{2}}{k_p}\right)$$

where $k_p$ is the width constant of the position distribution.

The change in place cell firing caused by the local excitatory connections is:

$$\Delta P_{X,Y} = \sum_{i=0}^{n_X - 1}\ \sum_{j=0}^{n_Y - 1} P_{i,j}\, \varepsilon_{m,n}$$

where $n_X, n_Y$ are the lengths of the two-dimensional neural sheet in the X and Y directions, and the relative position coordinates m and n are computed as:

$$m = (X - X_i) \bmod n_X, \qquad n = (Y - Y_j) \bmod n_Y$$
the discharge rate of cells at the excited position on the nerve panel varies by:
Figure BDA0002228138920000102
wherein, psi is m,n Representing the weight of the inhibitory connection,
Figure BDA0002228138920000103
is the level of inhibition that is global,
To ensure that the place cells are not driven to negative activation, the change of each place cell must be greater than or equal to zero, so the firing rate of each place cell at time t is updated as:

$$P_{X,Y}^{t+1} = \max\!\left(P_{X,Y}^{t} + \Delta P_{X,Y},\ 0\right)$$

The place cell firing rates are then normalized:

$$P_{X,Y}^{t+1} \leftarrow \frac{P_{X,Y}^{t+1}}{\sum_{i=0}^{n_X-1}\sum_{j=0}^{n_Y-1} P_{i,j}^{t+1}}$$
The place cell firing after path integration is expressed as:

$$P_{X,Y}^{t+1} = \sum_{m}\sum_{n} \alpha_{m,n}\, P_{X + \delta X_0 + m,\ Y + \delta Y_0 + n}^{t}$$

where $\delta X_0, \delta Y_0$ are the integer (rounded-down) offsets determined by the current speed and direction:

$$\delta X_0 = \left\lfloor k_m\, v\cos\theta \right\rfloor, \qquad \delta Y_0 = \left\lfloor k_n\, v\sin\theta \right\rfloor$$

where $k_m, k_n$ are step-size constants, $(\cos\theta, \sin\theta)$ is the unit heading vector, θ is the heading obtained from the corrected angular velocity, and v is the corrected linear velocity.

To obtain the place cell firing rate at the next moment, residual amounts express the diffusion of the activity packet; the fractional offsets are:

$$\delta X_f = k_m\, v\cos\theta - \delta X_0, \qquad \delta Y_f = k_n\, v\sin\theta - \delta Y_0$$

and the residual weights are:

$$\alpha_{m,n} = Q(\delta X_f,\ m - \delta X_0)\; Q(\delta Y_f,\ n - \delta Y_0)$$

where, for fractional part a and offset index b,

$$Q(a, b) = \begin{cases} 1 - a, & b = 0 \\ a, & b = 1 \\ 0, & \text{otherwise} \end{cases}$$
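An attractor update of this kind can be sketched compactly; the constants are illustrative, and the integer-plus-fractional shift is realized with scipy.ndimage.shift (bilinear interpolation with wrap-around), which spreads activity equivalently to the α_{m,n} residual weights under these assumptions:

```python
import numpy as np
from scipy.ndimage import shift

NX = NY = 40
P = np.zeros((NX, NY)); P[20, 20] = 1.0      # initial activity packet
K_P, PHI, KM = 8.0, 1e-4, 10.0               # width, global inhibition, step gain (assumed)

# local excitatory kernel on the wrap-around sheet, anchored at the origin
m, n = np.meshgrid(np.arange(NX), np.arange(NY), indexing="ij")
m = np.minimum(m, NX - m); n = np.minimum(n, NY - n)   # toroidal distance
EPS = np.exp(-(m**2 + n**2) / K_P)

def attractor_step(P, v, theta):
    # circular convolution with the excitatory kernel via FFT
    exc = np.real(np.fft.ifft2(np.fft.fft2(P) * np.fft.fft2(EPS)))
    P = np.maximum(P + exc - PHI, 0.0)       # inhibition + nonnegativity
    P /= P.sum()                             # normalization
    # path integration: fractional shift spreads activity like the residuals
    return shift(P, (KM * v * np.cos(theta), KM * v * np.sin(theta)),
                 order=1, mode="wrap")

P = attractor_step(P, v=0.1, theta=np.pi / 4)
```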
4. The RGB images and depth images are input into a feature extraction algorithm to obtain the image features of the current environment;
5. A matching algorithm judges whether the image features obtained in step 4 match image features in the view library: if they match, the position information associated with the view library image is used to correct the current position of the robot and the positions of the cognitive points of the cognitive map, the cognitive map is updated, and the method returns to step 1; otherwise step 6 is performed;
The matching algorithm compares scanline intensity profiles; the average absolute scanline intensity difference of two images is:

$$g(I_j, I_k, c) = \frac{1}{b}\sum_{x=0}^{b-1} \left| I_j(x + c) - I_k(x) \right|$$

where $I_j, I_k$ are the scanline intensity profiles, c is the offset, and b is the width of the image.

The matching degree measure of the images is:

$$G = \mu_R \left| g_{iR}(c) - g(c) \right| + \mu_D \left| g_{iD}(c) - g(c) \right|$$

where $\mu_R$ is the connection weight of the RGB image and $\mu_D$ is the connection weight of the depth map.
The minimum offset score is:

$$c_m = \min_c G$$

A fixed threshold c is chosen; when $c_m < c$, the current image features are judged to match image features in the view library, the position information associated with the view library image is used to correct the current position of the robot and the positions of the cognitive points of the cognitive map, and the cognitive map is updated; otherwise the method goes to step 6.
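A minimal sketch of the scanline comparison; the profile definition (column-wise mean intensity), the weights, and the threshold are illustrative assumptions:

```python
import numpy as np

MU_R, MU_D, C_THRESH = 0.7, 0.3, 0.08        # weights and threshold (assumed)

def scanline_profile(img):
    """Column-wise mean intensity, normalized: the 1-D scanline profile."""
    prof = np.asarray(img, float).mean(axis=0)
    return prof / (prof.sum() + 1e-9)

def mean_abs_diff(Ij, Ik, c):
    """Average absolute scanline difference at offset c (handles c < 0)."""
    if c < 0:
        Ij, Ik, c = Ik, Ij, -c
    b = len(Ik) - c
    return np.abs(Ij[c:c + b] - Ik[:b]).mean()

def match_score(rgb_q, depth_q, rgb_v, depth_v, offsets=range(-20, 21)):
    """Best combined RGB-D profile difference over candidate offsets c."""
    pr_q, pd_q = scanline_profile(rgb_q), scanline_profile(depth_q)
    pr_v, pd_v = scanline_profile(rgb_v), scanline_profile(depth_v)
    G = [MU_R * mean_abs_diff(pr_v, pr_q, c) + MU_D * mean_abs_diff(pd_v, pd_q, c)
         for c in offsets]
    return min(G)                            # c_m; declare a match if c_m < C_THRESH
```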
6. The current image features obtained in step 4 are stored in the view library and associated with the robot's current position information; a cognitive point of the cognitive map is created, the cognitive map is updated, and the method continues with the next step;
The cognitive map consists of cognitive points e; a cognitive point is expressed as:

$$e_i = \{p_i,\ V_i,\ d_i\}$$

where $p_i$ is the place cell firing information, $V_i$ are the image features in the view library, and $d_i$ is the current position information of the robot.

Comparing the current position of the robot with the positions of the cognitive points already contained in the cognitive map gives the position measure:

$$S = \left| p_i - p \right|$$

The transition between cognitive points is:

$$t_{ij} = \{\Delta d_{ij}\}$$

where $\Delta d_{ij}$ is the position change.

A new cognitive point is then obtained as:

$$e_j = \{p_j,\ V_j,\ d_i + \Delta d_{ij}\}$$
When the image features match image features in the view library, the position information associated with the view library image is used to correct the current position of the robot and the positions of the cognitive points of the cognitive map:

$$\Delta d_i = \alpha\left[\sum_{j=1}^{N_f}\left(d_j - d_i - \Delta d_{ij}\right) + \sum_{k=1}^{N_t}\left(d_k - d_i - \Delta d_{ki}\right)\right]$$

where $\Delta d_i$ is the update amount, $N_f$ is the number of transitions from the original cognitive point to other cognitive points, and $N_t$ is the number of transitions from other cognitive points to the original cognitive point.
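This correction can be sketched as one relaxation pass over the cognitive point graph, in the style of experience-map relaxation; the relaxation rate is an assumed constant:

```python
import numpy as np

ALPHA = 0.5   # relaxation rate (assumed)

def relax_cognitive_map(d, links):
    """One relaxation pass over cognitive point positions.

    d     : (N, 2) array of cognitive point positions d_i
    links : list of (i, j, delta_ij) transitions from point i to point j
    """
    correction = np.zeros_like(d)
    for i, j, delta in links:
        err = d[j] - d[i] - delta        # disagreement along the transition
        correction[i] += err             # pull the source point forward
        correction[j] -= err             # and the target point back
    return d + ALPHA * correction

# usage: after a loop closure re-anchors one point, iterate a few passes
# d = relax_cognitive_map(d, links)
```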
7. Return to step 1, continue exploring the environment, and keep updating the cognitive map.

Claims (1)

1. A cognitive map construction method based on the information transfer mechanism of the rat brain hippocampus, characterized in that the method comprises the following steps:

Step 1, the robot explores the environment; an inertial sensor collects the angular velocity and linear velocity of the robot, and a visual sensor collects RGB images and depth images of the environment;

Step 2, the collected angular velocity and linear velocity, RGB images, and depth images are input into the visual-inertial fusion module; the image information is used to correct the drift errors the angular velocity and linear velocity accumulate over time, yielding corrected angular velocity and linear velocity;

The specific steps of the visual-inertial fusion module are as follows:

2.1 ORB features are extracted from the RGB image at each moment, in two steps: FAST corner extraction, which extracts the feature points of the image; and BRIEF descriptors, which describe the extracted feature points; the descriptors are used to obtain all pairwise-matched feature points between the RGB images at the current moment and the subsequent moment;

2.2 The Euclidean transformation R, t is obtained by the SVD method:

First, the centroid coordinates $p_1, p_2$ of all matched feature points of the two images at the current moment and the subsequent moment are computed, and then the de-centroided coordinates of all feature points of the two images; the specific formulas are:

$$p_j = \frac{1}{n}\sum_{i=1}^{n} p_i^{(j)}, \qquad q_i^{(j)} = p_i^{(j)} - p_j \tag{1}$$

where $q_i^{(j)}$ is the de-centroided coordinate of the i-th feature point in the j-th image (j = 1 denotes the current moment, j = 2 the subsequent moment), and $p_i^{(j)}$ is the coordinate of the i-th feature point in the j-th image, j = 1, 2;

The optimal rotation matrix $R^*$ is computed:

$$R^* = \arg\min_{R}\ \frac{1}{2}\sum_{i=1}^{n} \left\| q_i^{(1)} - R\, q_i^{(2)} \right\|^{2} \tag{2}$$

where $R^*$ is the optimal solution for the rotation matrix R;

The optimal translation $t^*$ is computed:

$$t^* = p_1 - R^* p_2 \tag{3}$$

where $t^*$ is the optimal solution for the translation t;

$R^*$ and $t^*$ are substituted into the Rodrigues formula to obtain the angular velocity information θ″ and linear velocity information v″ of the image;

2.3 The angular velocity θ″ and linear velocity v″ obtained in the previous step are used to correct the angular velocity θ′ and linear velocity v′ collected from the inertial sensor at the current moment:

$$\theta = \theta'' + \alpha\theta', \qquad v = v'' + \alpha v' \tag{4}$$

where α ∈ (0, 1) is a weighting coefficient, θ is the corrected angular velocity, and v is the corrected linear velocity;

Step 3, the corrected angular velocity θ and linear velocity v are input into the position-sensing module built on the spatial cells and information transfer mechanism of the rat brain hippocampus, yielding the position information of the robot expressed by place cell firing;

The position-sensing module built on the spatial cells and information transfer mechanism of the rat brain hippocampus is characterized in that: first, the corrected angular velocity θ is input into the head direction cell model to obtain the head direction cell firing rate $d_i(t)$; next, the head direction cell firing rate $d_i(t)$ and the corrected linear velocity v are input into the stripe cell model to obtain the stripe cell firing rate $F_\theta$; the stripe cell firing rate $F_\theta$ is then input into the grid cell model to obtain the grid cell firing rate r; the grid cell firing rate r is input into the competitive network model of the CA3-entorhinal cortex information circulation loop to obtain the corrected place cell firing rate u′; finally, u′ is input into the environment expression model of the place cells to obtain the corrected position information p′ of the robot at the current moment;

The competitive network model of the CA3-entorhinal cortex information circulation loop comprises a competitive network model of CA3-to-entorhinal-cortex information transfer (model 1) and a competitive network model of entorhinal-cortex-to-CA3 information transfer (model 2);

The specific steps of the two models are as follows:

3.1 Competitive network model of CA3-to-entorhinal-cortex information transfer

First, the firing rate of a single grid cell is input to obtain the firing rate $u_v$ of a single place cell:

$$u_v = H\left(r_k - C_{ing}\right) \tag{5}$$

where H is the learning rate, $u_v$ is the firing rate of the v-th place cell, $r_k$ is the firing rate of the k-th grid cell, and $C_{ing}$ is the inhibitory level constant of the grid cells;

Then the firing rates of the several grid cells associated with a fixed position in space are input to obtain the firing rate $u_b(d)$ of the place cells associated with that position:

$$u_b(d) = A\left(\sum_{k=1}^{M} w_{vk}\, r_k(d) - C_{inp}\right) \tag{6}$$

where A is the excitatory level constant of the place cells, $C_{inp}$ is the inhibitory level constant of the place cells, $w_{vk}$ is the connection weight for computing the place cell firing rate, d is the position of the robot, and M is the number of layers of the grid cell neural sheet;

The single place cell firing rate $u_v$ and the firing rates $u_b(d)$ of the place cells associated with a fixed position in space together constitute the place cell firing rate u;

3.2 Competitive network model of entorhinal-cortex-to-CA3 information transfer

First, the motion trajectory of the robot is input to obtain the firing rate α of the intermediate cells:

$$\alpha_m(t) = \exp\!\left(-\frac{d\big(p(t),\ l_m\big)^{2}}{2\sigma^{2}}\right) \tag{7}$$

where p is the motion trajectory of the robot, $l_m$ is the position of the m-th intermediate cell, d(·) is the Euclidean distance, and σ is the size of the intermediate cell;

Then the intermediate cell firing rates are input to obtain the firing rate $r_k$ of the k-th grid cell:

$$\tau_r \frac{dr_k(t)}{dt} = -r_k(t) + \sum_{m=1}^{S} w_{mk}\, \alpha_m(t) \tag{8}$$

where $\alpha_m$ is the firing rate of the m-th intermediate cell, $w_{mk}$ is the connection weight for computing the k-th grid cell's firing rate, $\tau_r$ is the time constant of the neuron, S is the total number of intermediate cells, and t denotes a given moment;

When the robot explores the environment for the first time, the original grid cell firing rate is input into model 1 to obtain the uncorrected place cell firing rate u; u is input into the environment expression model of the place cells to obtain the uncorrected position information p of the robot at the current moment, and Step 4 is performed;

When the robot has fully explored the environment once and begins the second exploration, the grid cell firing rate r obtained from model 2 is fed into model 1 again, giving the corrected, more accurate place cell firing rate u′; u′ is input into the environment expression model of the place cells to obtain the corrected position information p′ of the robot at the current moment, the positions of the cognitive points of the cognitive map are corrected, and the cognitive map is updated;

Step 4, the RGB images and depth images are input into a feature extraction algorithm to obtain the image features of the current environment;

Step 5, a matching algorithm judges whether the image features obtained in Step 4 match image features in the view library: if they match, the position information associated with the view library image is used to correct the current position of the robot and the positions of the cognitive points of the cognitive map, the cognitive map is updated, and the method returns to Step 1; otherwise Step 6 is performed;

Step 6, the current image features obtained in Step 4 are stored in the view library and associated with the robot's current position information; a cognitive point of the cognitive map is created, the cognitive map is updated, and the method continues with the next step;

Step 7, return to Step 1, continue exploring the environment, and keep updating the cognitive map.
CN201910958426.0A 2019-10-10 2019-10-10 Cognitive map construction method based on mouse brain hippocampus information transfer mechanism Active CN111044031B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910958426.0A CN111044031B (en) 2019-10-10 2019-10-10 Cognitive map construction method based on mouse brain hippocampus information transfer mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910958426.0A CN111044031B (en) 2019-10-10 2019-10-10 Cognitive map construction method based on mouse brain hippocampus information transfer mechanism

Publications (2)

Publication Number Publication Date
CN111044031A CN111044031A (en) 2020-04-21
CN111044031B true CN111044031B (en) 2023-06-23

Family

ID=70232239

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910958426.0A Active CN111044031B (en) 2019-10-10 2019-10-10 Cognitive map construction method based on mouse brain hippocampus information transfer mechanism

Country Status (1)

Country Link
CN (1) CN111044031B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111813113B (en) * 2020-07-06 2021-07-02 安徽工程大学 Bionic visual self-motion perception map drawing method, storage medium and device
CN112525194B (en) * 2020-10-28 2023-11-03 北京工业大学 Cognitive navigation method based on in vivo source information and exogenous information of sea horse-striatum
CN113297506B (en) * 2021-06-08 2024-10-29 南京航空航天大学 Brain-like relative navigation method based on social position cells/grid cells
CN113657574A (en) * 2021-07-28 2021-11-16 哈尔滨工业大学 Construction method and system of bionic space cognitive model
CN113703322B (en) * 2021-08-28 2024-02-06 北京工业大学 Method for constructing scene memory model imitating mouse brain vision pathway and entorhinal-hippocampal structure
CN114952847B (en) * 2022-05-31 2025-01-03 中国电信股份有限公司 A method and device for constructing a cognitive map

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106125730A (en) * 2016-07-10 2016-11-16 北京工业大学 A kind of robot navigation's map constructing method based on Mus cerebral hippocampal spatial cell
CN109000655A (en) * 2018-06-11 2018-12-14 东北师范大学 Robot bionic indoor positioning air navigation aid
CN109668566A (en) * 2018-12-05 2019-04-23 大连理工大学 Robot scene cognition map construction and navigation method based on mouse brain positioning cells
CN110210462A (en) * 2019-07-02 2019-09-06 北京工业大学 A kind of bionical hippocampus cognitive map construction method based on convolutional neural networks

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10677883B2 (en) * 2017-05-03 2020-06-09 Fuji Xerox Co., Ltd. System and method for automating beacon location map generation using sensor fusion and simultaneous localization and mapping

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106125730A (en) * 2016-07-10 2016-11-16 北京工业大学 A kind of robot navigation's map constructing method based on Mus cerebral hippocampal spatial cell
CN109000655A (en) * 2018-06-11 2018-12-14 东北师范大学 Robot bionic indoor positioning air navigation aid
CN109668566A (en) * 2018-12-05 2019-04-23 大连理工大学 Robot scene cognition map construction and navigation method based on mouse brain positioning cells
CN110210462A (en) * 2019-07-02 2019-09-06 北京工业大学 A kind of bionical hippocampus cognitive map construction method based on convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A cognitive map construction method for bionic robots based on the hippocampal cognitive mechanism; Yu Naigong (于乃功) et al.; Acta Automatica Sinica (《自动化学报》); vol. 44, no. 1; pp. 52-70 *

Also Published As

Publication number Publication date
CN111044031A (en) 2020-04-21

Similar Documents

Publication Publication Date Title
CN111044031B (en) Cognitive map construction method based on mouse brain hippocampus information transfer mechanism
CN106949896B (en) A situational cognitive map construction and navigation method based on rat hippocampus
CN112097769B (en) Homing pigeon brain-hippocampus-imitated unmanned aerial vehicle simultaneous positioning and mapping navigation system and method
Liu et al. Brain-like position measurement method based on improved optical flow algorithm
CN109668566A (en) Robot scene cognition map construction and navigation method based on mouse brain positioning cells
CN106125730A (en) A kind of robot navigation's map constructing method based on Mus cerebral hippocampal spatial cell
CN108362284A (en) A kind of air navigation aid based on bionical hippocampus cognitive map
CN113703322B (en) Method for constructing scene memory model imitating mouse brain vision pathway and entorhinal-hippocampal structure
Yuan et al. An entorhinal-hippocampal model for simultaneous cognitive map building
Chen et al. Convolutional multi-grasp detection using grasp path for RGBD images
CN103926930A (en) Multi-robot cooperation map building method based on Hilbert curve detection
CN109000655A (en) Robot bionic indoor positioning air navigation aid
CN114689038B (en) Fruit detection positioning and orchard map construction method based on machine vision
CN110210462A (en) A kind of bionical hippocampus cognitive map construction method based on convolutional neural networks
CN112509051A (en) Bionic-based autonomous mobile platform environment sensing and mapping method
Srivastava et al. Least square policy iteration for ibvs based dynamic target tracking
Yu et al. A deep-learning-based strategy for kidnapped robot problem in similar indoor environment
CN102401656A (en) A Position Cell Navigation Algorithm for Biomimetic Robot
CN107363834A (en) A kind of mechanical arm grasping means based on cognitive map
CN116528171A (en) Mobile sensor network target tracking method based on force guiding positioning
Yue et al. Semantic-driven autonomous visual navigation for unmanned aerial vehicles
CN111611869B (en) End-to-end monocular vision obstacle avoidance method based on serial deep neural network
Yu et al. Nidaloc: Neurobiologically inspired deep lidar localization
CN110774283A (en) A computer vision-based robot walking control system and method
CN108459614B (en) A real-time collision avoidance planning method for UUV based on CW-RNN network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant