CN110019582B - Cognitive map construction method based on space and motion joint coding - Google Patents

Cognitive map construction method based on space and motion joint coding Download PDF

Info

Publication number
CN110019582B
CN110019582B CN201710748763.8A CN201710748763A CN110019582B CN 110019582 B CN110019582 B CN 110019582B CN 201710748763 A CN201710748763 A CN 201710748763A CN 110019582 B CN110019582 B CN 110019582B
Authority
CN
China
Prior art keywords
speed
space
cells
velocity
grid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710748763.8A
Other languages
Chinese (zh)
Other versions
CN110019582A (en
Inventor
斯白露
曾太平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Institute of Automation of CAS
Original Assignee
Shenyang Institute of Automation of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Institute of Automation of CAS filed Critical Shenyang Institute of Automation of CAS
Priority to CN201710748763.8A priority Critical patent/CN110019582B/en
Publication of CN110019582A publication Critical patent/CN110019582A/en
Application granted granted Critical
Publication of CN110019582B publication Critical patent/CN110019582B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Remote Sensing (AREA)
  • Manipulator (AREA)
  • Feedback Control In General (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a cognitive map construction method based on space and motion joint coding. Belongs to the technical field of robot navigation. Taking the inner olfactory cortex-hippocampal nerve loop of the internal positioning system of the brain of the mammal into consideration, a continuous attractor neural network model-space memory network is formed by utilizing the space navigation coding characteristics of position cells, head orientation cells, grid cells and velocity cells, and simultaneously the space and motion characteristics of the animal are coded, and path integration is carried out on angular velocity and linear velocity. In the network, asymmetric connection weights among various neurons generate a system inherent activity peak capable of spontaneously moving, and once a speed input exists, the attractor network can stably integrate the linear speed and the angular speed to form stable robot head direction and position codes. The single camera visual input forms a local view cell, which can correct the accumulated error of the path integral. The invention can utilize the single camera to robustly construct the consistent half-metric topological map.

Description

Cognitive map construction method based on space and motion joint coding
Technical Field
The invention belongs to the field of robot navigation, and particularly relates to a map construction method based on space and motion joint coding of mammal space cognition and adopting a single camera as information input.
Background
One of the most important challenges to integrate robots into our daily lives is how to give robots spatial awareness to explore the surrounding environment like animals. Over the past twenty years, many researchers have spent a great deal of effort in imparting autonomous navigation capabilities to robots, capable of simultaneous localization, mapping and navigation without human intervention. This has constituted an important area of robotics research-synchronized localization and mapping (Simultaneous Localization and Mapping, SLAM). Methods are mainly divided into two categories, classical methods are called probabilistic SLAM, and extended kalman filtering, particle filtering, etc. are generally used. Various filtering algorithms are used to estimate the pose of the robot. However, these methods are limited in that they require expensive sensors, a large amount of computational resources, and assume a static or structured environment. Although probabilistic methods provide perfect mathematical interpretation, they seriously affect the application in real physical environments due to lack of robustness.
Another approach derives from neuroscience heuristics, aimed at building neural network models that mimic mammalian spatial navigation mechanisms. With the sequential discovery of location cells, head-oriented cells, grid cells, velocity cells, a number of neural network models have been proposed to explain the navigation mechanisms within the brain, largely into three categories: an concussion interference model, a continuous attractor network model and a self-organizing model. The oscillation interference model is easy to generate drift errors and lacks robustness. The continuous attractor network model can still realize stable path integration under the condition that the connection weight is interfered. While a number of computational models have been proposed to explain the navigational mechanisms of mammals, few robotic navigational systems employ spatial navigational mechanisms. In the RatSLAM model, the biological fidelity is sacrificed, and the pose cells code different positions by moving energy peaks instead of adopting the dynamic characteristics of a network. In Jauffret, the release model of the grid cells is implemented by means of modulo mapping, but the grid cells neither help the mobile robot to position nor implement the construction of the environment. Yuan et al construct a cognitive map within the office environment using a grid cell model of pure location as proposed by Yoram Burak and Ila R Fiete in 2009 in combination with a location cell network. The joint coding characteristics of speed and position, head orientation are not considered, and tests are only performed on a small scale, with robust performance over a large scale anyway requiring further testing.
In short, the current mobile robots are far from practical, and cannot be integrated into the life of people. With further findings of experimental neuroscience, the spatial cognitive neural network model of mammals will be further improved. Further exploration of the spatial navigation capabilities of the brain will help to build autonomous mobile robots with humanoid navigation capabilities.
Disclosure of Invention
The invention aims to develop a cognitive map construction system for a mobile robot, which is based on local view cells of visual cortex of mammals, head orientation-speed joint coding cells of an inner olfactory cortex and grid-speed joint coding cells of a deep inner olfactory cortex, and the rotation and translation of the robot, the head orientation and the position are coded by adopting a continuous attraction network. The urban map construction of the outdoor large-scale environment with the length of 66km can be realized by means of low-cost single-camera information input, and a system block diagram is shown in fig. 1.
The technical scheme adopted for solving the technical problems is as follows: the cognitive map construction method based on the space and motion joint coding comprises the following steps:
the vision measuring unit receives the camera image, obtains the angular speed and the linear speed of the robot according to the change of the vision scene, and inputs the angular speed and the linear speed into the space memory network and the map construction node;
the local view cell extracts a local view template from the camera image to encode different scenes; when the current scene is the same as the previous scene, the position which has been visited before is considered to be reached, the corresponding local view cell is activated, and the corresponding grid cell and head orientation cell in the spatial memory network are activated;
the space memory network receives the input of the local view cells and the vision measurement unit, performs path integration and vision calibration, and sends instructions to the map construction node;
the map construction node reads out the space codes of the environment by using the space memory network, and forms an environment map according to the angular speed and the linear speed.
And if the current scene is different from the previous scene, establishing a new local view cell.
The steps of path integration and visual calibration comprise the following steps:
head orientation-speed joint coding attractor network model: the jointly encoded heads form circular attractors in neural space towards the cellular network, with a single peak of activity presented in the network; the real head orientation angle of the robot in the physical environment activates part of head orientation nerve cells through the modulated speed input, and the movement of the activity peak is proportional to the angular speed of the robot in the physical environment;
grid-velocity joint coding attractor network model: due to periodic boundary conditions, the jointly encoded grid cells form a circular ring-shaped attractor network in neural space; the position of the phase encoding robot in the physical space in the grid pattern, the grid cells are activated by head-facing cells and velocity cells located in the inner olfactory cortex; the movement of the grid pattern is proportional to the speed of the robot in the physical environment; head orientation cells and grid cells together form head orientation and position cluster codes of a robot physical environment in neural space.
The head orientation-speed joint encoding attractor network model:
Figure BDA0001390643700000031
wherein I is v Representing modulated speed input, I view A calibration input representing a local view cell; m represents the release rate of the cells;
f (x) =x when x >0
f (x) =0 others;
Figure BDA0001390643700000032
the weights of the pre-protrusion nerve cells (θ ', v') and the post-protrusion nerve cells (θ, v) are obtained by the following formula,
J(θ,v|θ′,v′)=J 0 +J 1 cos(θ-θ′-v′)cos(λ(v-v′))
wherein J is 0 <0 represents uniform inhibition, J 1 >Weight intensity of 0; θ and v represent postsynaptic neural cells encoding the lattice and neural cells encoding the velocity, respectively; θ 'and v' represent presynaptic lattice-encoding neural cells and velocity-encoding neural cells, respectively; λ represents the magnitude of the modulation speed peak; l (L) r An extremum representing the encoded speed;
the modulated speed input is obtained by:
the angular velocity input V is mapped to the ideal active peak position u (V) in neural space,
u(V)=arctan(τV)
where τ represents the time constant in the head-orientation-velocity joint encoding attractor network model;
the ideal active peak position is modulated by a Gaussian function and then is input into a head orientation-speed joint coding attractor network model, and the method is realized by adopting the following formula:
Figure BDA0001390643700000041
wherein I is r Representing the input amplitude of the angular velocity, ε represents the intensity of the velocity modulation, σ r Indicating how sharp the speed is adjusted;
the head orientation is obtained by:
estimating head orientation from fourier transforms:
ψ=∠(∫∫m(θ,v)exp(iθ)DθDv)
wherein + (Z) represents the angle of the complex number Z;
the angular velocity is obtained by:
estimating the phase of the angular velocity from the fourier transform:
Figure BDA0001390643700000042
then map it into the actual physical space, the formula is as follows:
Figure BDA0001390643700000043
the grid-velocity joint coding attractor network model is as follows:
Figure BDA0001390643700000044
Figure BDA0001390643700000045
Figure BDA0001390643700000046
I v representing modulated speed input, I view A calibration current input representing a local view cell; m represents the release rate of the cells; l (L) t An extremum representing a speed;
pre-herniated nerve cells
Figure BDA0001390643700000051
And post-herniating nerve cells->
Figure BDA0001390643700000052
The weight of (2) is obtained by the following formula:
Figure BDA0001390643700000053
wherein J is 0 <0, represents uniform inhibition, J k >0, which represents the weight intensity; k represents the number of active peaks;two location dimensions in the coding environment, +.>
Figure BDA0001390643700000059
Dimensions in two speed directions in the encoding environment; />
Figure BDA0001390643700000058
Neural cells expressing the pre-protrusion coding position and neural cells expressing the coding rate, respectively; lambda indicates the magnitude of the modulation speed peak.
The modulated speed input is obtained by:
in neural space, the velocities of the X-axis and Y-axis form a vector
Figure BDA00013906437000000510
Then mapped to the desired speed axis position +.>
Figure BDA00013906437000000511
The formula is as follows:
Figure BDA0001390643700000054
wherein S represents the speed transformation coefficients of the physical space and the nerve coding space, and k represents the number of active peaks; τ represents the time constant in the grid-velocity joint encoding attractor network model;
the velocity through Gaussian modulation is then input into the grid-velocity joint coding attractor network model as follows:
Figure BDA0001390643700000055
I t representing the input amplitude of the angular velocity, ε represents the intensity of the velocity modulation, σ r Indicating how sharp the speed is adjusted.
The linear velocities of the dimensions in the two velocity directions are obtained by:
first, the positions of the active peaks on the two velocity axes are estimated as follows:
Figure BDA0001390643700000056
wherein, the angle (Z) represents the angle of the complex number Z; phi (phi) j J is x, y, where φ j Representing the active peak position on the x, y axis in the velocity dimension;
the neural space is then mapped into physical space according to the phase of the active peak, as follows:
Figure BDA0001390643700000061
V j representing the speed of the robot in the physical environment, where j is x, y; s represents the speed transformation coefficients of the physical space and the nerve coding space, and k represents the number of active peaks; τ represents the time constant in the grid-velocity joint encoding attractor network model.
The position of the robot in the physical space is obtained through a grid-speed joint coding attractor network model by the following steps:
8.1 Phase of the grid cells):
the grid pattern is mapped onto three equally angled vectors, which are as follows:
Figure BDA0001390643700000062
the phase on the projection axis is then found, as follows:
Figure BDA0001390643700000063
wherein l 1 =1,l 2 =l 3 =sinα, α=arctan (2); j represents a sequence number, here 1, 2, 3;
8.2 Then find the grid pattern phase, the formula is as follows:
Figure BDA0001390643700000064
Figure BDA0001390643700000065
8.3 Finally, estimating the position of the robot in the physical space:
Figure BDA0001390643700000066
Figure BDA0001390643700000067
wherein the method comprises the steps of
Figure BDA0001390643700000068
Is a proportional primer for the physical space and the neural space of a robot.
The respective local view cells activating and activating respective grid cells and head-facing cells in the spatial memory network comprises:
1) The local view cell correction head orientation-speed joint coding attractor network model is realized by using a Gaussian adjustment mode, and the formula is as follows:
Figure BDA0001390643700000071
wherein I is d Represents the amplitude of the injection energy, ψ represents the associated phase of the local view cells, σ d Representing the sharpness of the gaussian adjustment;
2) The local view cell correction grid-speed joint coding attractor network model is used for generating a grid cell issuing model by using the concussion interference model, and the formula is as follows:
Figure BDA0001390643700000072
wherein I is p Representing the magnitude of the implant energy, C represents a constant,
Figure BDA0001390643700000073
representing the phase of local view cell association; k represents the number of encoded velocity peaks in neural space; />
Figure BDA0001390643700000074
Figure BDA0001390643700000075
j represents a sequence number, here 1, 2, 3; l (L) 1 =1,l 2 =l 3 =sinα,α=arctan(2)。
The map construction node reads out the space codes of the environment by using a space memory network, and forms an environment map according to the speed, and the map construction node comprises the following steps:
the map construction node utilizes angular velocity, linear velocity and position and head orientation of the spatial memory network code to establish nodes of the topological map and connection relations between the nodes and other nodes, and then the map is obtained through optimization in a general map optimization mode.
The invention has the following beneficial effects and advantages:
1. the invention can integrate the motion information and the perception information and realize the stable coding of the space environment. The neural network architecture adopted by the invention is consistent with the protruding connection mechanism of the mammalian entorhinal cortex, has the capacity of coding in combination with the speed based on a deep entorhinal cortex model, is similar to the experimental result of a single neuron acquired by neurobiology, and has high biological fidelity.
2. Meanwhile, the result obtained by the experiment proves that the cognitive map construction system can successfully construct a consistent topological map in suburbs (shown in fig. 4A) with large scale by adopting image information acquired by a single cheap camera, as shown in fig. 4B.
Drawings
FIG. 1 is a block diagram of a cognitive map construction system of the present invention;
FIG. 2 is a schematic diagram of a grid-mode phase estimation according to the present invention;
FIG. 3A shows neural activity of head orientation-velocity cells;
FIG. 3B shows neural activity of grid-velocity cells;
FIG. 3C shows visual information;
FIG. 3D shows a topological map;
FIG. 4A shows a map of a test environment;
fig. 4B shows a map generated by the cognitive environment construction system.
Detailed Description
The present invention will be described in further detail with reference to examples.
As shown in fig. 1. The invention discloses a cognitive map construction method based on space and motion joint coding. Belongs to the technical field of robot navigation. The space navigation coding characteristics of position cells, head orientation cells, grid cells and velocity cells are utilized to form a complex continuous attractor neural network model-space memory network by considering the inner olfactory cortex-hippocampal nerve loop of the internal positioning system of the mammal brain, and the space and motion characteristics of animals can be coded simultaneously to carry out path integration on angular velocity and linear velocity. In the network, the asymmetric connection weight among the neurons can generate a system inherent activity peak which can spontaneously move, and once the speed is input, the attractor network can stably integrate the linear speed and the angular speed to form stable robot head direction and position codes. The single camera visual input forms a local view cell, which can correct the accumulated error of the path integral. The provided cognitive map construction method can be used for robustly constructing a consistent half-metric topological map by using a single camera.
The invention is composed of five nodes, the vision measuring unit receives sensor data and measures angular velocity and linear velocity according to the change of vision scene; the local view cell judges whether the accessed scene is entered or not, so as to judge whether visual calibration is carried out or not; the space memory network comprises a head orientation-speed joint coding attractor network model and a grid-speed joint coding attractor network model, receives the input of local view cells and vision measurement, performs path integration and vision calibration, and sends instructions to map construction nodes for constructing a topological map;
head orientation-velocity joint coding attractor network model, joint coding head orientation cell network can form circular attractors in neural space, single activity peaks appear in the network. The motion peak encodes the real head orientation angle of the robot in the physical environment, the modulated speed input activates a small part of head orientation neurons, and the motion of the motion peak is proportional to the angular speed of the robot in the physical environment; neural activity of neurons in the head-orientation-velocity encoding attractor network model is shown in fig. 3A;
grid-velocity joint coding attractor network model due to periodic boundary conditions (0-2 pi), the joint coding grid cell model can form a circular ring-shaped attractor network in neural space. The phase of the grid pattern can encode the position of the robot in physical space, and the grid cells can be activated by head-facing cells and velocity cells located in the inner olfactory cortex. The movement of the grid model is proportional to the speed of the robot in the physical environment. Head orientation cells and grid cells together form a head orientation and position cluster code of the robot physical environment in neural space; neural activity of neurons in the grid-velocity encoded attractor network model is shown in fig. 3B;
and (3) visual measurement, wherein the robot estimates the linear speed and the angular speed by matching two continuous frames of pictures acquired by the camera. The estimated angular velocity forms head orientation information from a head orientation-velocity joint encoded attractor network model. The estimated linear velocity and the combined head orientation information are combined to complete path integration by a grid-velocity joint coding attractor network model;
the local view cells are used for extracting the local view templates from the camera images to encode different scenes, and if the scenes are different from all the previous observed scenes, a new local view cell is established; if the local view template is sufficiently identical to the previous local view template, then the location that has been previously visited is deemed to be reached, the corresponding local view cell is activated, and input into the spatial memory network is provided; FIG. 3C shows the input visual scene (top), partial view cell visual template (middle), current partial view cell template visual (bottom);
and (3) constructing a map, wherein the spatial memory network is utilized to read out the spatial codes of the environment to form an environment map. The movement speed of the robot in the topological map is measured by vision measurement, when the robot returns to the position visited before, a closed loop is formed, the map is further corrected by an optimization method, and the topological map in the map construction process is shown in fig. 3D.
The weights of the pre-protrusion neurons (θ ', v') and post-protrusion neurons (θ, v) of the head orientation-velocity joint encoding attractor network model are obtained by the following formula,
J(θ,v|θ′,v′)=J 0 +J 1 cos(θ-θ′-v′)cos(λ(v-v′))
wherein J0<0 represents uniform suppression, J1>0 weight intensity.
The firing rate of neurons (θ, v) for the attractor network dynamics model of the head orientation-velocity joint coding model is obtained by the following differential equation:
Figure BDA0001390643700000101
wherein I is v Representing a speed modulation input, I view The calibration current input representing the local view cell,
f (x) =x when x >0; f (x) =0 others.
The abbreviations are as follows
Figure BDA0001390643700000102
Angular velocity modulation input:
the external angular velocity input V is mapped to the ideal active peak position u (V) in neural space,
u(V)=arctan(τV)
where τ represents the time constant in the attractor network dynamics model.
The ideal active peak position is modulated by a Gaussian function and then is input into a head orientation-speed joint coding model, and the method is realized by adopting the following formula:
Figure BDA0001390643700000103
wherein I is r Representing the input amplitude of the rotational speed, ε represents the intensity of the speed adjustment, σ r Indicating how sharp the speed is adjusted.
Head orientation estimation:
estimating head orientation from fourier transforms:
ψ=∠(∫∫m(θ,v)exp(iθ)DθDv)
wherein + (Z) is the angle of the complex number Z.
Angular velocity estimation:
estimating the phase of the angular velocity from the fourier transform:
Figure BDA0001390643700000111
then map it into the actual physical space, the formula is as follows:
Figure BDA0001390643700000112
grid-velocity joint coding attractor network model highlights pre-neurons
Figure BDA0001390643700000116
And post-herniated neurons->
Figure BDA0001390643700000117
The weight of (2) is obtained by the following formula:
Figure BDA0001390643700000113
wherein, the liquid crystal display device comprises a liquid crystal display device,
Figure BDA0001390643700000118
two location dimensions in the coding environment, +.>
Figure BDA0001390643700000119
Dimensions in two speed directions in the encoding environment. At k=2, two active peaks occur in both position dimensions, while only one active peak occurs in both velocity dimensions. k represents the number of position dimension activity peaks. θ j 、θ j ' represents the position dimension pre-and post-protrusion neural cells, respectively; v j 、v j ' represents the velocity dimension pre-and post-herniated nerve cells, respectively.
The grid-speed joint coding model attractor network dynamics model is as follows:
Figure BDA0001390643700000114
wherein the abbreviation formula is
Figure BDA00013906437000001110
And->
Figure BDA00013906437000001111
Linear velocity modulation input formula:
in the space of the nerve,
Figure BDA00013906437000001112
mapping to the desired speed axis position +.>
Figure BDA00013906437000001113
The formula is as follows:
Figure BDA0001390643700000115
k represents the number of encoded velocity peaks in neural space.
The velocity through gaussian modulation is then input into the grid-velocity model as follows:
Figure BDA0001390643700000121
linear velocity estimation in two dimensions:
first, the positions of the active peaks on the two velocity axes are estimated as follows:
Figure BDA0001390643700000122
the neural space is then mapped into physical space according to the phase of the active peak, as follows:
Figure BDA0001390643700000123
phase estimation of grid cells:
the grid pattern is mapped onto three equally angled vectors, which are as follows:
Figure BDA0001390643700000124
the phase on the projection axis is then found, as follows:
Figure BDA0001390643700000125
wherein l 1 =1,l 2 =l 3 =sinα,α=arctan(2)
Then, the grid pattern phase can be found from the mapping relation shown in fig. 2, and the formula is as follows:
Figure BDA0001390643700000126
Figure BDA0001390643700000127
finally, the mapping relationship between the neural space and the physical space can estimate the position in the physical space:
Figure BDA0001390643700000128
Figure BDA0001390643700000129
wherein the method comprises the steps of
Figure BDA00013906437000001210
Is a proportional primer for the physical space and the neural space of a robot.
The stored position and head orientation information of the local view cells are fed back into the head orientation-speed joint coding model and the grid-speed joint coding model through modulation, and the local view cell correction head orientation-speed joint coding model is realized by using a Gaussian adjustment mode, wherein the formula is as follows:
Figure BDA0001390643700000131
wherein I is d Represents the amplitude of the injected energy, ψ represents the phase of the local view cells, σ d Representing the sharpness of the gaussian adjustment.
The local view cell correction head orientation-speed joint coding model utilizes the concussion interference model to generate a grid cell distribution model, and the formula is as follows:
Figure BDA0001390643700000132
wherein I is p Representing the magnitude of the implant energy, C represents a constant,
Figure BDA0001390643700000133
representing the phase of the local view cell association.
The cognitive map construction method comprises five nodes, wherein a vision measurement unit receives sensor data to measure angular speed and linear speed according to the change of a vision scene; the local view cell judges whether the accessed scene is entered or not, so as to judge whether visual calibration is carried out or not; the spatial memory network comprises a head orientation-speed joint coding attractor network model and a grid-speed joint coding attractor network model, receives input of local view cells and vision measurement, performs path integration and vision calibration, and sends instructions to a map construction node for constructing a topological map.
Step 1, data are collected from a web camera, read-in image data information is analyzed, information such as the size of an image, the frame rate and the like is obtained, the information is collected through a ROS node of a robot operating system, the information is sent out in a format of a sensor_msgs/image_msg message at the rate of 100ms per frame, and C in the figure represents an input visual scene (up). 3A-3D are operational screen shots of the cognitive map construction system, wherein 3A represents neural activity of head orientation-velocity cells; FIG. 3B shows neural activity of grid-velocity cells; FIG. 3C shows the input visual scene (up), partial view visual template (in), current partial view template (down); in the figure 3D represents a topological map.
And 2, receiving the image information by the vision measurement node, and comparing the continuous image matrixes to obtain the speed information. The difference of absolute average intensities is calculated by comparing the intensity profiles of two continuous image scan lines, the relative offset is calculated, and the absolute average intensity can be converted into the real rotation and translation speed in the physical space through a speed gain constant. And then issued in the format geometry_msgs/twist.
At the same time as step 3, the transmitted image information is also received by the local view cell node, if the observed current scene is sufficiently similar to the scene template observed before, we will activate the local view cell related to it. If the current scene does not have a proper match with the previous scene template, a new visual template is created and associated with a new corresponding local view cell. The current local view cell id is then sent out, where C represents the local view visual template (middle), the current local view template (bottom).
Step 4, the spatial memory network node receives the ROS message geometry_msgs/twist.msg of angular velocity and linear velocity, and the local view cell id activated by the current ROS (Robot Operating System) message. In the spatial memory network node, messages from both the vision measurement and the local view cells are processed by two different interrupts, angular velocity information is received by a head orientation-velocity joint coding attractor network model, and a robot head orientation response is generated; and the linear velocity and head orientation information are received by the grid-velocity joint encoded attractor network model together, producing a grid pattern position response. When the received local view cell id does not exist, the current position information and the head orientation information are related to the current local view cell by estimating the phase of the current grid cell and the phase of the head orientation cell, and meanwhile, the messages of the topological nodes and the edges for establishing a new empirical drawing are sent to a map construction node; when the received local view cell id exists, the position information and the head orientation information which are related before are found according to the id, and the head orientation and the dispensing mode of the raster cells are corrected according to a dispensing mode formula of the local view cell correction raster cells and a dispensing mode formula of the local view cell correction head orientation cells. Estimating current position information and head orientation information while correcting the dispensing modes of the head orientation cells and the grid cells, and transmitting a message to the map-building node to create an edge where a certain topological map node is connected to a certain topological map node if the current position information and the head orientation information are sufficiently identical to information of a certain empirical drawing node. Neural activity of neurons in the grid-velocity encoded attractor network model is shown in fig. 3A; fig. 3B shows neural activity of neurons in a grid-velocity encoded attractor network model.
And 5, the map construction node receives the information about how to create the edges and the nodes of the topological map transmitted by the spatial memory network node, and also receives the speed information transmitted by the vision measurement node. The received speed information is mainly used to accumulate the relative distance moved and the relative angle rotated before receiving a new command how to create a topological map. When a command to create a node is received, then a new node is created based on the previous topology map node based on the accumulated distance and angle information, and an edge is created that connects the current topology map node with the previous topology map node. When a command for creating an edge is received, the current node is connected with another node according to the current accumulated distance and angle, and meanwhile, the current map is further optimized through a topological map universal map optimization algorithm due to path integration accumulated error caused by vision measurement solving speed. Fig. 4A to 4B show a map construction, using the test environment shown in fig. 4A, located in suburbs of british, san ruxoma, australia, 3km×1.6km, and having a total length of 66km for the operating route, resulting in the cognitive topology map generated as shown in fig. 4B.
And 6, visualizing the extracted visual information, the distribution rate of the head orientation-speed joint coding cells and the grid-speed joint coding cells and the topological map, and displaying the information by using Python and C++.
And 7, storing and post-processing the data. The required information is recorded into the rosbag file by utilizing the recording function of the ROS bag, and is further analyzed, processed and displayed by utilizing the MATLAB script.
The corresponding values of the parameters used in the model of this embodiment are shown in table one:
list one
Figure BDA0001390643700000151
/>

Claims (7)

1. The cognitive map construction method based on the space and motion joint coding is characterized by comprising the following steps of:
the vision measuring unit receives the camera image, obtains the angular speed and the linear speed of the robot according to the change of the vision scene, and inputs the angular speed and the linear speed into the space memory network and the map construction node;
the local view cell extracts a local view template from the camera image to encode different scenes; when the current scene is the same as the previous scene, the position which has been visited before is considered to be reached, the corresponding local view cell is activated, and the corresponding grid cell and head orientation cell in the spatial memory network are activated;
the space memory network receives the input of the local view cells and the vision measurement unit, performs path integration and vision calibration, and sends instructions to the map construction node;
the steps of path integration and visual calibration comprise the following steps:
head orientation-speed joint coding attractor network model: the jointly encoded heads form circular attractors in neural space towards the cellular network, with a single peak of activity presented in the network; the real head orientation angle of the robot in the physical environment activates part of head orientation nerve cells through the modulated speed input, and the movement of the activity peak is proportional to the angular speed of the robot in the physical environment;
grid-velocity joint coding attractor network model: due to periodic boundary conditions, the jointly encoded grid cells form a circular ring-shaped attractor network in neural space; the position of the phase encoding robot in the physical space in the grid pattern, the grid cells are activated by head-facing cells and velocity cells located in the inner olfactory cortex; the movement of the grid pattern is proportional to the speed of the robot in the physical environment; head orientation cells and grid cells together form a head orientation and position cluster code of a robot physical environment in neural space;
the head orientation-speed joint encoding attractor network model:
Figure FDA0004211760400000012
wherein I is v Representing modulated speed input, I view A calibration input representing a local view cell; m represents the release rate of the cells;
f (x) =x when x >0
f (x) =0 others;
Figure FDA0004211760400000011
the weights of the pre-protrusion nerve cells (θ ', v') and the post-protrusion nerve cells (θ, v) are obtained by the following formula,
J(θ,ν|θ′,v′)=J 0 +J 1 cos(θ-θ′-ν′)cos(λ(v-v′))
wherein J is 0 <0 represents uniform inhibition, J 1 >Weight intensity of 0; θ and v represent postsynaptic neural cells encoding the lattice and neural cells encoding the velocity, respectively; θ 'and v' represent presynaptic lattice-encoding neural cells and velocity-encoding neural cells, respectively; λ represents the magnitude of the modulation speed peak; l (L) r An extremum representing the encoded speed;
the modulated speed input is obtained by:
the angular velocity input V is mapped to the ideal active peak position u (V) in neural space,
u(V)=arctan(τV)
where τ represents the time constant in the head-orientation-velocity joint encoding attractor network model;
the ideal active peak position is modulated by a Gaussian function and then is input into a head orientation-speed joint coding attractor network model, and the method is realized by adopting the following formula:
Figure FDA0004211760400000021
wherein I is r Representing the input amplitude of the angular velocity, ε represents the intensity of the velocity modulation, σ r Indicating how sharp the speed is adjusted;
the head orientation is obtained by:
estimating head orientation from fourier transforms:
ψ=∠(∫∫m(θ,v)exp(iθ)DθDv)
wherein + (Z) represents the angle of the complex number Z;
the angular velocity is obtained by:
estimating the phase of the angular velocity from the fourier transform:
Figure FDA0004211760400000022
then map it into the actual physical space, the formula is as follows:
Figure FDA0004211760400000023
the grid-velocity joint coding attractor network model is as follows:
Figure FDA0004211760400000031
I v representing modulated speed input, I view A calibration current input representing a local view cell; m represents the release rate of the cells; l (L) t An extremum representing a speed;
pre-herniated nerve cells
Figure FDA0004211760400000032
And post-herniating nerve cells->
Figure FDA0004211760400000033
The weight of (2) is obtained by the following formula:
Figure FDA0004211760400000034
wherein J is 0 <0, represents uniform inhibition, J k >0, which represents the weight intensity; k represents the number of active peaks;
Figure FDA0004211760400000035
two location dimensions in the coding environment, +.>
Figure FDA0004211760400000036
Dimensions in two speed directions in the encoding environment; />
Figure FDA0004211760400000037
Neural cells expressing the pre-protrusion coding position and neural cells expressing the coding rate, respectively; λ represents the magnitude of the modulation speed peak;
the map construction node reads out the space codes of the environment by using the space memory network, and forms an environment map according to the angular speed and the linear speed.
2. The cognitive map construction method based on spatial and motion joint coding according to claim 1, wherein the method comprises the following steps: and if the current scene is different from the previous scene, establishing a new local view cell.
3. The cognitive map construction method based on spatial and motion joint coding according to claim 1, characterized in that the modulated speed input is obtained by:
in neural space, the velocities of the X-axis and Y-axis form a vector
Figure FDA0004211760400000038
And then mapped to a desired velocity axis position
Figure FDA0004211760400000039
The formula is as follows:
Figure FDA00042117604000000310
wherein S represents the speed transformation coefficients of the physical space and the nerve coding space, and k represents the number of active peaks; τ represents the time constant in the grid-velocity joint encoding attractor network model;
the velocity through Gaussian modulation is then input into the grid-velocity joint coding attractor network model as follows:
Figure FDA0004211760400000041
I t representing the input amplitude of the angular velocity, ε represents the intensity of the velocity modulation, σ r Indicating how sharp the speed is adjusted.
4. The cognitive map construction method based on space and motion joint coding according to claim 1, characterized in that the linear velocities of the dimensions in two velocity directions are obtained by:
first, the positions of the active peaks on the two velocity axes are estimated as follows:
Figure FDA0004211760400000042
wherein, the angle (Z) represents the angle of the complex number Z; phi (phi) j J is x, y, where φ j Representing the active peak position on the x, y axis in the velocity dimension;
the neural space is then mapped into physical space according to the phase of the active peak, as follows:
Figure FDA0004211760400000043
V j representing the speed of the robot in the physical environment, where j is x, y; s represents the speed transformation coefficients of the physical space and the nerve coding space, and k represents the number of active peaks; τ represents the time constant in the grid-velocity joint encoding attractor network model.
5. The cognitive map construction method based on space and motion joint coding according to claim 1, wherein the position of the robot in the physical space is obtained by a grid-speed joint coding attractor network model, and the method comprises the following steps:
8.1 Phase of the grid cells):
the grid pattern is mapped onto three equally angled vectors, which are as follows:
Figure FDA0004211760400000044
the phase on the projection axis is then found, as follows:
Figure FDA0004211760400000051
wherein l 1 =1,l 2 =l 3 =sinα, α=arctan (2); j represents a sequence number, here 1, 2, 3;
8.2 Then find the grid pattern phase, the formula is as follows:
Figure FDA0004211760400000052
Figure FDA0004211760400000053
8.3 Finally, estimating the position of the robot in the physical space:
Figure FDA0004211760400000054
Figure FDA0004211760400000055
wherein the method comprises the steps of
Figure FDA0004211760400000056
Is a proportional primer for the physical space and the neural space of a robot.
6. The cognitive map construction method based on spatial and motion joint coding according to claim 1, characterized in that the respective local view cells are activated and the respective grid cells and head-oriented cells in the spatial memory network are activated comprising:
1) The local view cell correction head orientation-speed joint coding attractor network model is realized by using a Gaussian adjustment mode, and the formula is as follows:
Figure FDA0004211760400000057
wherein I is d Represents the amplitude of the injection energy, ψ represents the associated phase of the local view cells, σ d Representing the sharpness of the gaussian adjustment;
2) The local view cell correction grid-speed joint coding attractor network model is used for generating a grid cell issuing model by using the concussion interference model, and the formula is as follows:
Figure FDA0004211760400000058
wherein I is p Representing the magnitude of the implant energy, C represents a constant,
Figure FDA0004211760400000059
representing the phase of local view cell association; k represents the number of encoded velocity peaks in neural space; />
Figure FDA00042117604000000510
Figure FDA0004211760400000061
j represents a sequence number, here 1, 2, 3; l (L) 1 =1,l 2 =l 3 =sinα,α=arctan(2)。
7. The cognitive map construction method based on the space and motion joint coding according to claim 1, wherein the map construction node reads out the space coding of the environment by using a space memory network, and forms an environment map according to the speed, comprising the steps of:
the map construction node utilizes angular velocity, linear velocity and position and head orientation of the spatial memory network code to establish nodes of the topological map and connection relations between the nodes and other nodes, and then the map is obtained through optimization in a general map optimization mode.
CN201710748763.8A 2017-08-28 2017-08-28 Cognitive map construction method based on space and motion joint coding Active CN110019582B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710748763.8A CN110019582B (en) 2017-08-28 2017-08-28 Cognitive map construction method based on space and motion joint coding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710748763.8A CN110019582B (en) 2017-08-28 2017-08-28 Cognitive map construction method based on space and motion joint coding

Publications (2)

Publication Number Publication Date
CN110019582A CN110019582A (en) 2019-07-16
CN110019582B true CN110019582B (en) 2023-07-14

Family

ID=67186149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710748763.8A Active CN110019582B (en) 2017-08-28 2017-08-28 Cognitive map construction method based on space and motion joint coding

Country Status (1)

Country Link
CN (1) CN110019582B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112527929B (en) * 2020-10-20 2023-12-08 深圳银星智能集团股份有限公司 Grid map coding method and device and electronic equipment
CN112906884B (en) * 2021-02-05 2023-04-18 鹏城实验室 Brain-like prediction tracking method based on pulse continuous attractor network
CN113009917B (en) * 2021-03-08 2022-02-15 安徽工程大学 Mobile robot map construction method based on closed loop detection and correction, storage medium and equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103699125A (en) * 2013-12-09 2014-04-02 北京工业大学 Robot simulated navigation method based on rat brain-hippocampal navigation
CN106814737A (en) * 2017-01-20 2017-06-09 安徽工程大学 A kind of SLAM methods based on rodent models and RTAB Map closed loop detection algorithms

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7467115B2 (en) * 2004-07-15 2008-12-16 Neurosciences Research Foundation, Inc. Mobile brain-based device having a simulated nervous system based on the hippocampus
US9840003B2 (en) * 2015-06-24 2017-12-12 Brain Corporation Apparatus and methods for safe navigation of robotic devices

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103699125A (en) * 2013-12-09 2014-04-02 北京工业大学 Robot simulated navigation method based on rat brain-hippocampal navigation
CN106814737A (en) * 2017-01-20 2017-06-09 安徽工程大学 A kind of SLAM methods based on rodent models and RTAB Map closed loop detection algorithms

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
OpenRatSLAM: an open source brain-based SLAM system;David Ball 等;Autonomous Robots;第34卷;第149–176页 *
一种基于海马认知机理的仿生机器人认知地图构建方法;于乃功;苑云鹤;李倜;蒋晓军;罗子维;;自动化学报(第01期);全文 *
一种改进的RatSLAM仿生导航算法;张潇;胡小平;张礼廉;马涛;王玉杰;;导航与控制(第05期);全文 *

Also Published As

Publication number Publication date
CN110019582A (en) 2019-07-16

Similar Documents

Publication Publication Date Title
CN109579843B (en) Multi-robot cooperative positioning and fusion image building method under air-ground multi-view angles
CN110019582B (en) Cognitive map construction method based on space and motion joint coding
Alismail et al. Continuous trajectory estimation for 3D SLAM from actuated lidar
CN110480634B (en) Arm guide motion control method for mechanical arm motion control
US20190240840A1 (en) Methods and Apparatus for Early Sensory Integration and Robust Acquisition of Real World Knowledge
Weingarten et al. EKF-based 3D SLAM for structured environment reconstruction
Sweeney et al. Solving for relative pose with a partially known rotation is a quadratic eigenvalue problem
CN110146099B (en) Synchronous positioning and map construction method based on deep learning
JP5012615B2 (en) Information processing apparatus, image processing method, and computer program
CN111145251B (en) Robot and synchronous positioning and mapping method thereof and computer storage device
Tahri et al. Robust image-based visual servoing using invariant visual information
JP4016180B2 (en) Planar extraction method, apparatus thereof, program thereof, recording medium thereof, and imaging apparatus
CN112097769A (en) Homing pigeon brain-hippocampus-imitated unmanned aerial vehicle simultaneous positioning and mapping navigation system and method
Zeng et al. Cognitive mapping based on conjunctive representations of space and movement
CN109282810A (en) A kind of snake-shaped robot Attitude estimation method of inertial navigation and angular transducer fusion
JP4535096B2 (en) Planar extraction method, apparatus thereof, program thereof, recording medium thereof, and imaging apparatus
Truong et al. Learning navigation skills for legged robots with learned robot embeddings
Choi et al. Efficient simultaneous localization and mapping based on ceiling-view: ceiling boundary feature map approach
WO2023283186A1 (en) Two-wheeled, self-balancing robot
Ranft et al. 3d perception for autonomous navigation of a low-cost mav using minimal landmarks
CN205537632U (en) Impact system is prevented to mobile concrete pump cantilever crane
Ahmadzadeh et al. Generalized Cylinders for Learning, Reproduction, Generalization, and Refinement of Robot Skills.
Herau et al. MOISST: Multimodal Optimization of Implicit Scene for SpatioTemporal Calibration
Du et al. A human–robot collaborative system for robust three-dimensional mapping
Sergeant-Perthuis et al. Action of the Euclidean versus Projective group on an agent's internal space in curiosity driven exploration: a formal analysis

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant