CN110764498A - Intelligent mobile robot motion state and position cognition method based on rat brain hippocampus cognition mechanism - Google Patents
- Publication number: CN110764498A (application CN201910872030.4A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
Abstract
The invention provides a method for recognizing the motion state and position of an intelligent mobile robot based on the cognitive mechanism of the rat hippocampus. It belongs to the technical field of robot environment cognition and navigation, and is mainly applied to environment cognition, map construction, and navigation tasks of intelligent mobile robots. The process is as follows: image information is collected by a camera, the angle and heading of the robot are collected by a gyroscope and an encoder, and this information is transmitted to a CPU. A perception-speed solving method based on speed cells and visual information yields the robot's perceived speed. The discharge mechanism of head-direction cells is simulated with a one-dimensional ring-connected cell model; feeding the angle information into this model lets the robot acquire its perceived angle in a biomimetic way. The perceived speed and angle are then input into a forward-coding neural network model of the position cells, driving the excitation activity packet across the position cell plate and yielding the robot's position in the environment.
Description
Technical field:
The invention belongs to the technical field of environment cognition and navigation for intelligent mobile robots, and specifically relates to a method by which an intelligent mobile robot recognizes its own motion state and environment information based on the cognitive mechanism of the rat hippocampus.
Background
By combining a biomimetic cognitive mechanism with the environmental and motion information gathered by its sensors, an intelligent mobile robot can perceive its own state and the environment. Such a system can imitate the powerful goal-directed navigation ability of organisms in complex spatial environments, move autonomously, and then complete specific tasks. Position recognition is a basic capability of humans and animals, and it is likewise a basic task and key problem for intelligent mobile robots.
Physiological studies have shown that the hippocampus is an important structure by which higher animals perform environmental cognitive tasks. In 1971, O'Keefe et al. found cells in the rat hippocampus that are selective for spatial location, discharging only when the rat is at a specific location in space. Such cells are called position cells (place cells), and the spatial region corresponding to a cell's discharge is called its place field. In 1984, Ranck et al. found neurons in the rat postsubiculum with a strong discharge response to head orientation, which were named head-direction cells: when the rat's head faces a cell's specific direction the cell discharges maximally, and when the head deviates from that direction the discharge decreases. In 1998, in research on computational models of position cells, O'Keefe et al. first conjectured that a velocity signal might take part in the path-integration process, holding that cell discharge rate and movement velocity are linearly and positively correlated; subsequent researchers have likewise often described the velocity-frequency relationship with a direct proportional function. An article published in Nature in 2015 by Emilio Kropff et al. confirmed the existence of speed cells experimentally and prompted the academic community to consider the coding and action mechanisms of velocity signals in spatial cognition.
Although artificial intelligence has developed rapidly in recent years, research on how an intelligent mobile robot recognizes its position and motion state in the environment has lagged behind, and this insufficient cognitive ability restricts the research and popularization of intelligent mobile robots to a certain extent. In perceiving and understanding the environment, intelligent mobile robots remain far inferior to humans and other higher animals, mainly because of the limited performance of current sensors and the insufficient theoretical research on robot environmental cognition. A robot facing a complex and changing external environment is therefore required to have high adaptability and robustness, and imitating the environmental cognition mechanism of higher mammals is a current research hotspot for intelligent mobile robots. Aimed at the problem of autonomous exploration of unknown environments, the invention provides a method by which an intelligent mobile robot recognizes its own motion state and position information based on the environmental cognition mechanism of the rat hippocampus.
Disclosure of Invention
The main aim of the invention is to provide a method for recognizing the motion state and position of an intelligent mobile robot based on the cognitive mechanism of the rat hippocampus, which simulates the discharge mechanisms of several kinds of spatial cells in the rat hippocampus and accomplishes environment recognition and localization for an intelligent mobile robot in a complex environment. The main problems faced are:
1. Intelligent mobile robots are still far inferior to humans and other higher animals, limited by the development of hardware (sensor precision, CPU speed, and so on).
2. Previous related work consists of isolated studies at the perceptual level, and it is difficult for them to provide sufficient information for mobile robot navigation in unknown dynamic environments.
3. Traditional behavior-learning methods cannot meet the demands placed on robot intelligence and have difficulty providing sufficient information for robot navigation in complex unknown environments.
4. Most conventional cognitive models are expressed with symbolic knowledge; robot systems built on them suffer from poor transferability, adaptability, and extensibility, excessive hand-designed components, and poor biological plausibility.
To solve these problems, the invention provides a motion state and position cognition method for intelligent mobile robots based on the cognitive mechanism of the rat hippocampus. The method uses a unified computational mechanism to simulate models of the speed cells, head-direction cells, and position cells in the rat hippocampus. A camera acquires image information of the environment, and a gyroscope and an encoder acquire the angle and heading of the robot; this information is transmitted to a CPU. Combining the visual information, a perception-speed solving method based on speed cells and visual information yields the robot's perceived speed. The discharge mechanism of head-direction cells is then simulated with a one-dimensional ring-connected cell model; the angle information from the gyroscope is input to this model so that the robot acquires its current perceived angle in a biomimetic way. The perceived speed and perceived angle are then input into a predictive-coding neural network model of the position cells, driving the excitation activity packet to move on the position cell plate. By analyzing the activity of the plate, the coordinates of the excitation activity packet are obtained, locating the robot in the environment and completing the environment cognition function of the intelligent mobile robot. The specific working process of the method is as follows:
S1 Acquisition of the perceived speed
Two main factors affect the perceived speed: visual flow and proprioception. Visual flow refers to the continuous change of the retinal image from which an organism estimates its own velocity; proprioception is the intrinsic ability of speed cells to perceive the organism's own velocity. On this basis, after studying the velocity mechanisms of organisms, the invention proposes a perception-speed solving method based on speed cells and visual information to obtain the robot's perceived speed.
S1.1 An image in the robot's current direction of travel is acquired by the camera.
S1.2 The acquired image is converted to grayscale, and the optical flow field between two consecutive frames is computed with the Lucas-Kanade optical flow method.
S1.3 The magnitudes of the components of all optical flow vectors in the vertical direction of the image are computed.
S1.4 These vertical-component magnitudes are used as the input of a BP neural network, the robot's forward speed from the encoder is used as its output, and the BP neural network is trained.
S1.5 The forward speed from the encoder is used as the input of the speed cell model; the visual perception speed output by the BP neural network and the speed output by the speed cell model are selected and weighted to obtain the robot's perceived speed.
S2 Acquisition of angle information
In conventional methods, the angle measured by the gyroscope is used directly as input, which lacks biological plausibility. The invention instead acquires the perception angle through a head-direction cell model, as follows.
S2.1 The angle of the robot's current direction of travel is obtained from the gyroscope, and the difference between two successive measurements is computed as the increment of the direction angle.
S2.2 This increment is used as the input of the one-dimensional ring head-direction cell model, and the excitation activity packet on the ring moves with the incremental input.
S2.3 To keep the position of the excitation activity packet on the ring consistent with the currently measured angle, a proportional-differential controller is designed to perform closed-loop control of the packet position.
S2.4 The position of the excitation activity packet on the ring model is obtained algorithmically, giving the robot's perceived angle.
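The closed loop of step S2.3 can be sketched as follows: a minimal proportional-differential correction that keeps a hypothetical packet position locked to the gyroscope angle. The gains kp and kd, the `wrap` helper, and all names are illustrative assumptions, not values from the patent.

```python
import math

def pd_gain(error, prev_error, kp=0.6, kd=0.1):
    """Proportional-differential correction for the packet-position error
    (gains kp, kd are illustrative, not from the patent)."""
    return kp * error + kd * (error - prev_error)

def wrap(angle):
    """Wrap an angle to (-pi, pi]."""
    return (angle + math.pi) % (2 * math.pi) - math.pi

# Simulate the closed loop: the packet position chases the gyroscope angle.
packet_pos = 0.0           # angle encoded by the excitation packet (rad)
target = math.radians(40)  # angle reported by the gyroscope (rad)
prev_err = 0.0
for _ in range(50):
    err = wrap(target - packet_pos)
    packet_pos += pd_gain(err, prev_err)
    prev_err = err

final_error = abs(wrap(target - packet_pos))
```

With these gains the error contracts geometrically, so the packet position settles on the measured angle within a few dozen steps.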
S3 Movement of the excitation activity packet on the position cell plate
Position cells are the rat's main information source for recognizing its own position in the environment; each position cell has a single discharge field, and the excitation activity packet on the position cell plate has the shape of a Gaussian cap. The packet moves on the plate as the rat explores the environment. Based on the forward-coding position cell neural network model, the intelligent mobile robot takes the perceived speed and perceived angle as input, drives the excitation activity packet to move on the position cell plate, and obtains the packet's coordinates on the plate, thereby locating itself in the environment.
S3.1 The excitation activity of all position cells on the plate is initialized, and a Gaussian-cap excitation activity packet is placed at the center of the plate.
S3.2 According to the input perceived angle, the excitation activity on the current position cell plate is transmitted to the predictive-coding position cell plate.
S3.3 According to the input perceived speed, the connection weights of the predictive-coding neural network model are adjusted, and the excitation on the predictive-coding plate is transmitted back to the original position cell plate through these weights.
S4 resolving the position of the robot
S4.1 The coordinates of the excitation activity packet on the original position cell plate are obtained and converted into the robot's position in the environment.
S4.2 From the change in coordinates between two successive readings, it is determined whether the robot's true position in the environment needs to be periodically encoded, realizing position cognition in large-scale space.
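As a rough illustration of steps S3 and S4, the sketch below initializes a Gaussian-cap packet at the center of a hypothetical N x N position cell plate, displaces it according to a perceived speed and angle, and reads the coordinates back from the peak activity. The plate size N, the width sigma, and the direct shift are illustrative assumptions; the patent's predictive-coding transmission is not reproduced here.

```python
import numpy as np

N = 64                       # plate modelled as an N x N sheet of position cells (assumed size)
xs = np.arange(N)
X, Y = np.meshgrid(xs, xs, indexing="xy")

def gaussian_packet(cx, cy, sigma=3.0):
    """Gaussian-cap excitation packet centred at (cx, cy) on the plate."""
    return np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / (2 * sigma ** 2))

def decode(plate):
    """Read out the packet coordinate as the location of peak activity (S4.1)."""
    iy, ix = np.unravel_index(np.argmax(plate), plate.shape)
    return ix, iy

# Start in the centre (S3.1), then drive the packet with a perceived
# speed v and heading theta (S3.2/S3.3, shown here as a plain shift).
cx, cy = N // 2, N // 2
plate = gaussian_packet(cx, cy)
v, theta = 5.0, np.pi / 2    # move 5 cells in the +y direction
cx += v * np.cos(theta)
cy += v * np.sin(theta)
plate = gaussian_packet(cx, cy)
pos = decode(plate)
```

Reading the peak rather than, say, a population-vector average is a simplification; it suffices to show how a coordinate is recovered from plate activity.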
The invention has the following advantages:
The invention provides a motion state and environment cognition method for intelligent mobile robots based on the cognitive mechanism of the rat hippocampus, imitating the environmental cognition mechanism of higher mammals. The method uses a unified computational mechanism to simulate models of the speed cells, head-direction cells, and position cells of the rat hippocampus and realizes information transfer among the different types of spatial cells in the hippocampal structure. It realizes the accurate path-integration function of position cells while placing low demands on hardware and sensors, so the model as a whole has good extensibility and adaptability. The invention offers experimental results of reference value for related research and can be widely applied to various autonomous mobile robots.
Drawings
FIG. 1: flow chart of intelligent mobile robot motion state and position cognition method of rat brain hippocampus cognition mechanism
FIG. 2: execution flow chart of perception speed acquisition
FIG. 3: graph comparing visual estimation speed with actual speed
FIG. 4: schematic representation of cyclic head-to-cell excitatory events
FIG. 5: proportional differential control block diagram of excitation activity packet position on ring model
FIG. 6: schematic diagram of position cell plate structure
FIG. 7: schematic diagram of effect of exciting activity on position cell plate
FIG. 8: schematic diagram of operation mechanism of predictive position-coding cell plate
Detailed Description
The method is described in detail below with reference to the accompanying drawings and examples.
Fig. 1 is the execution flow chart of the proposed method. Image information of the environment is collected by a camera, the angle and heading of the robot are collected by a gyroscope and an encoder, and this information is transmitted to a CPU. A perception-speed solving method based on speed cells and visual information yields the robot's perceived speed. The discharge mechanism of head-direction cells is simulated with the one-dimensional ring-connected cell model, and the angle information from the gyroscope is input to this model so that the robot acquires its current angle in a biomimetic way. The perceived speed and angle are then input into the forward-coding neural network model of the position cells, driving the excitation activity packet to move on the position cell plate and yielding the robot's position in the environment, thus realizing the environment cognition function of the bionic robot. The specific steps are as follows:
1. acquisition of perceived speed
Fig. 2 is the execution flow chart of perceived-speed acquisition. For higher mammals, the speed information that directly takes part in environmental cognition is the result of cooperative coding of multiple signals, and an unavoidable error exists between the perceived and actual speed. The main factors affecting perceived speed are visual flow and proprioception. Visual flow refers to the continuous change of the retinal image from which an organism estimates its own velocity. Proprioception is an accurate velocity-coding system formed by speed cells, yielding the actual movement rate, without direction information, as the proprioceptive speed. On this basis, the invention proposes a perception-speed solving method based on speed cells and visual information.
1.1 visual velocity perception based on optical flow method
The Lucas-Kanade optical flow algorithm is used to calculate the optical flow field between consecutive images. Its principle is as follows: let the image be I(x, y, z, t), with the previous frame at time t and the next frame at time t + Δt. The pixel at (x, y, z) in the previous frame moves to (x + Δx, y + Δy, z + Δz) in the next frame.
According to the brightness constancy assumption:
I(x, y, z, t) = I(x+Δx, y+Δy, z+Δz, t+Δt)   (1)
Assuming the motion between the two frames is small, the right-hand side is expanded as a Taylor series:
I(x+Δx, y+Δy, z+Δz, t+Δt) = I(x, y, z, t) + (∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂z)Δz + (∂I/∂t)Δt + H.O.T.   (2)
where H.O.T. denotes the higher-order terms of the Taylor expansion, negligible when the motion distance is very small. From equations (1) and (2):
I_xV_x + I_yV_y + I_zV_z = -I_t   (3)
For a two-dimensional image only x, y, t need be considered, where I_x, I_y, I_t are the partial derivatives of the image in the x, y, t directions and V_x, V_y are the flow components, so that:
I_xV_x + I_yV_y = -I_t   (4)
By the spatial consistency assumption, the same flow holds for all n pixels q_1, ..., q_n in a local window:
I_x(q_i)V_x + I_y(q_i)V_y = -I_t(q_i),  i = 1, ..., n   (5)
Writing this in matrix form Av = b:
A = [I_x(q_1) I_y(q_1); ...; I_x(q_n) I_y(q_n)],  v = [V_x, V_y]^T,  b = -[I_t(q_1), ..., I_t(q_n)]^T   (6)
The optical flow vector is obtained as the least-squares solution:
A^T A v = A^T b   (7)
v = (A^T A)^{-1} A^T b   (8)
The magnitude |V_yi| of the component of the ith optical flow vector in the vertical direction of the image is taken as:
|V_yi| = |V_i| sin(arctan(|V_yi|/|V_xi|))   (9)
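Equations (5) through (8) amount to an ordinary least-squares solve per window. The sketch below checks this on synthetic gradient data where the true flow is known; the window size and all data are fabricated for the check, so this is a demonstration of the math, not the patent's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Spatial gradients I_x, I_y sampled over a 5x5 window (eq. 5); the
# true flow v is known, and I_t is generated to satisfy I_x*Vx + I_y*Vy = -I_t.
true_v = np.array([1.5, -0.5])            # (Vx, Vy), in pixels per frame
Ix = rng.normal(size=25)
Iy = rng.normal(size=25)
It = -(Ix * true_v[0] + Iy * true_v[1])

# Matrix form (eq. 6): A v = b with A = [Ix Iy], b = -It.
A = np.column_stack([Ix, Iy])
b = -It

# Least-squares solution (eqs. 7-8): v = (A^T A)^{-1} A^T b.
v = np.linalg.solve(A.T @ A, A.T @ b)

# The vertical component used by the BP network (eq. 9) is just |Vy|.
vy_mag = abs(v[1])
```

Because the synthetic data satisfy the constraint exactly, the solve recovers the true flow to machine precision; with real gradients the residual is nonzero and the least-squares estimate is the best fit over the window.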
A multilayer feedforward back-propagation neural network (BP neural network) is designed as the algorithm for estimating linear velocity from vision. The absolute values |V_yi| of the vertical components of the optical flow vectors are taken as the input samples, and the linear velocity obtained from the encoder at the moment each frame is captured is taken as the output sample for training the BP network. The resolution of the optical flow field computed with the Lucas-Kanade algorithm is set to 10 x 13, so the corresponding BP network has 130 inputs and 1 output (the visually perceived linear velocity). The number of hidden-layer neurons is set to 9, the number of iterations to 100, the learning rate to 0.02, and the mean square error cut-off threshold (MSE) to 0.004.
Training the BP neural network can be regarded as parameter optimization: finding, in parameter space, the set of parameters that minimizes the mean square error. During training the network may become trapped in a local minimum. To guard against this, the BP neural network is initialized with several different sets of parameter values, and the solution with the smallest error after training is taken as the final parameters, improving the accuracy and generalization of the model. After training, the estimated speed from the BP neural network is compared with the actual speed from the encoder as shown in Fig. 3.
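The multi-start initialization strategy can be sketched with a tiny 130-9-1 network trained on synthetic data. Only the layer sizes, learning rate, and iteration count come from the text; the data, weight scales, and training code are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the real data: 130 optical-flow magnitudes per
# sample, encoder linear velocity as the target (dimensions from the text).
X = rng.uniform(0, 1, size=(200, 130))
y = X.mean(axis=1, keepdims=True) * 2.0    # toy "speed" target

def train_once(seed, hidden=9, lr=0.02, epochs=100):
    """One gradient-descent run of a 130-9-1 network (sizes from the text)."""
    r = np.random.default_rng(seed)
    W1 = r.normal(0, 0.1, (130, hidden)); b1 = np.zeros(hidden)
    W2 = r.normal(0, 0.1, (hidden, 1));   b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                 # forward pass
        out = h @ W2 + b2
        err = out - y
        gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
        gh = (err @ W2.T) * (1 - h ** 2)         # back-propagated error
        gW1 = X.T @ gh / len(X); gb1 = gh.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
    return float((err ** 2).mean()), (W1, b1, W2, b2)

# Several random initialisations; keep the run with the lowest MSE
# to reduce the risk of settling in a poor local minimum.
runs = [train_once(s) for s in range(5)]
best_mse = min(m for m, _ in runs)
```

Keeping the lowest-error run is exactly the multi-start safeguard the text describes; with more restarts the chance of all runs landing in poor minima shrinks further.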
1.2 Proprioceptive velocity perception based on the speed cell model
Speed cells constitute an accurate velocity-coding system reflecting the actual movement rate, without direction information. Biological research shows that the discharge rate of speed cells is positively correlated with the rat's current linear velocity, and that speed cells still discharge when the rat is stationary. A mathematical model of how speed cells encode velocity information is obtained as follows:
the first order linear equation for the velocity cell model is:
Fi=Ai1(V+ViT)+Ai2(11)
in formula (11) FiRepresents the discharge rate of the ith velocity cell, Ai1Representing the rate of cell perception, Ai2Offset, V, representing the perceived velocity of the velocity celliTRepresenting a perturbation in the perceived velocity of the velocity cell. To achieve the authenticity of the model while ensuring the accuracy of the model, Ai1、Ai2、ViTIs set to be normally distributed, wherein A is seti1The value range of (a) is normal distribution with a position parameter of 1.8 and a variance of 0.1; a. thei2The value range of (a) is normal distribution with the position parameter of 0.5 and the variance of 0.05; viTThe value range of (1) is a normal distribution with a position parameter of 0 and a variance of 0.1, and the number of velocity cells is set to 20.
The discharge rates of all speed cells are normalized:
F̂_i = (F_i - F_min)/(F_max - F_min)   (12)
In formula (12), F_max is the maximum and F_min the minimum of the discharge rates over all speed cells, and F̂_i is the normalized discharge rate of the ith speed cell.
The mathematical expectation E of the speed cell discharge rates is then solved: first the average F̄ of all normalized discharge rates is found, then the absolute deviation |F̂_i - F̄| of each cell's rate from the average is calculated, and E is obtained as the weighted average according to equation (13):
E = Σ_i w_i F̂_i,  w_i = |F̂_i - F̄| / Σ_j |F̂_j - F̄|   (13)
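Equations (11) through (13) can be sketched numerically as below. The exact weighting used in equation (13) is an assumption here (absolute deviations normalized to sum to one), and the example velocity is arbitrary; note that NumPy's normal generator takes a standard deviation, so the stated variances are passed through a square root.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells = 20                                   # number of speed cells (from the text)
A1 = rng.normal(1.8, np.sqrt(0.1), n_cells)    # gain A_i1 (variance 0.1)
A2 = rng.normal(0.5, np.sqrt(0.05), n_cells)   # offset A_i2 (variance 0.05)
VT = rng.normal(0.0, np.sqrt(0.1), n_cells)    # perturbation V_iT (variance 0.1)

V = 1.2                                        # example linear velocity
F = A1 * (V + VT) + A2                         # eq. (11): discharge rates
Fn = (F - F.min()) / (F.max() - F.min())       # eq. (12): min-max normalisation
w = np.abs(Fn - Fn.mean())                     # deviation-based weights
w = w / w.sum()                                # (assumed normalisation)
E = float(np.sum(w * Fn))                      # eq. (13): weighted average
```

Because E is a convex combination of the normalized rates, it always lies between 0 and 1, which is what makes the later dimension-matching step of section 1.3 necessary.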
1.3 Solving for the perceived velocity
After the mathematical expectation E of the speed cell discharge rates and the output E_out of the BP neural network are determined, E and E_out are first transformed to the same dimension, giving Ê and Ê_out. The two must then be fused to obtain the final perceived velocity. Because the degree of continuous change of the visual stream is easily disturbed by extraneous motion, visual contrast can cause mis-estimation of self-velocity. The perceived velocity V_g is therefore solved as:
V_g = Ê,                  if |Ê - Ê_out| ≥ TH
V_g = αÊ + (1-α)Ê_out,    if |Ê - Ê_out| < TH   (14)
where α is a weight coefficient and TH an adjustment threshold. Considering the differences in illumination intensity and robot speed between indoor and outdoor exploration, α is usually taken as 0.8-0.9 indoors and 0.75-0.85 outdoors, while TH is taken as 0.1-0.2 indoors and 0.9-1.5 outdoors. When the difference between Ê and Ê_out exceeds TH, the current visual estimate is judged to be a mis-estimation caused by visual contrast, and the perceived velocity is taken as Ê. When the difference is below TH, the visual estimate is judged correct, and the weighted average of Ê and Ê_out with weight coefficient α is taken as the current perceived velocity V_g.
2. Acquisition of the perception angle
Fig. 4 illustrates perception-angle acquisition. The head-direction cell model is built as a one-dimensional ring-connected cell model, and the perception angle is obtained by reading the position of the excitation activity packet that moves smoothly on the ring. So that the packet position strictly follows the current angle, it is placed under closed-loop control, with the controller designed as a proportional-differential (PD) controller.
2.1 Modeling of head-direction cells
Head-direction cells play an important role in guiding an animal's movements: when the rat's head faces a cell's specific direction, the cell discharges maximally, and as the head deviates from this direction the discharge gradually decreases. The discharge intensity increases as the head orientation approaches the cell's preferred direction, peaks when the head orientation reaches that direction, and then gradually decreases as the head turns away. This process can be approximated by a Gaussian function, with the discharge rate given by:
R_h = exp(-(θ - θ_0)²/(2σ²))   (15)
In formula (15), R_h is the discharge rate of the head-direction cell, θ the current head orientation angle, and θ_0 the preferred direction of the cell, both in radians. σ is the discharge adjustment factor of the head-direction cell, with values in the range 1-3.
2.2 modeling of one-dimensional circular head-oriented cells
In the one-dimensional annular model, head-direction cells arranged head to tail are connected in sequence into a closed ring, and each cell has its own preferred direction θ0i, computed as

θ0i = (2π/n)·i + φ0

where i is the index of the head-direction cell; n is the number of head-direction cells in the one-dimensional annular model, usually taken as a multiple of 36 for convenience of calculation; and φ0 is the angular offset.
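The assignment of preferred directions around the ring can be sketched as follows. The uniform spacing 2π/n is an assumption (the patent's equation image is not reproduced); `offset` stands in for the angular offset named in the text.

```python
import numpy as np

def preferred_directions(n=36, offset=0.0):
    """Preferred direction theta_0i of each of n head-direction cells
    arranged head-to-tail on a closed ring (n a multiple of 36 per the
    text). Evenly spaced directions with an additive offset are an
    assumed reconstruction of the formula."""
    i = np.arange(n)
    return (2.0 * np.pi * i / n + offset) % (2.0 * np.pi)
```

With n = 36 the cells are spaced 10 degrees apart around the ring.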
2.3 Obtaining angle information from the one-dimensional annular head-direction cell model
From the discharge characteristics of head-direction cells, a Gaussian cap-shaped excitation packet forms on the one-dimensional annular cell model, and the packet moves along the ring as the head orientation angle changes. The current head orientation can therefore be obtained by reading out the position of the excitation activity packet on the annular cell model. The excitatory activity of the one-dimensional annular head-direction cell model is shown in Fig. 4.
The initial excitatory activity of the one-dimensional annular head-direction cell model is determined by equation (16); the subsequent movement of the activity packet is produced by excitation transfer between cells, driven by the following formula:

In formula (17), the drive signal for moving the activity packet combines a scale factor ξ with the angular velocity ω of the head orientation change (the rate of change of the perception angle); the magnitude of ξ is determined by the output of the closed-loop controller described below. The discharge rates of the i-th head-direction cell at times t and t+1 appear in the formula, along with the discharge rates of the (i-1)-th and (i+1)-th cells at time t+1.
When the excitation activity packet moves on the annular cell model for a long time, it gradually spreads out, introducing an error between the angle read from the ring and the actual angle. The invention therefore proposes a neural network model of excitatory connections between the head-direction cells; the change ΔHEθ in the head-direction cells due to these local excitatory connections is given by the following equation:

In formula (18), n is the number of head-direction cells in the one-dimensional annular model; εn is the matrix of connection weights between head-direction cells, whose element values follow a Gaussian distribution of the row and column indices; the discharge rate of the i-th head-direction cell at time t also enters the formula. The excitatory connections between head-direction cells prevent the activity packet from diffusing as it moves and so preserve the accuracy of the angle information. After each excitation transfer, the discharge rates of all head-direction cells must be adjusted to be greater than 0 and then normalized, as follows:
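The Gaussian-by-ring-distance weight matrix and the clip-then-normalize step described around formula (18) can be sketched as below. The width `sigma` and the exact update form (additive recurrent input) are assumptions; the patent's equations are not reproduced.

```python
import numpy as np

def ring_weights(n, sigma=2.0):
    """Connection weight matrix eps between head-direction cells: the
    weight between cells i and j is a Gaussian of their wrapped ring
    distance, as the text describes for formula (18). sigma is assumed."""
    i = np.arange(n)
    gap = np.abs(i[:, None] - i[None, :])
    d = np.minimum(gap, n - gap)                 # distance on the closed ring
    return np.exp(-d.astype(float)**2 / (2.0 * sigma**2))

def sharpen_and_normalize(H, eps):
    """Apply the local excitatory input (delta-H = eps @ H), clip negative
    rates to zero, and normalize so the rates sum to 1 (the
    clip-then-normalize step named after formula (18))."""
    H = H + eps @ H
    H = np.clip(H, 0.0, None)
    return H / H.sum()
```

Because the weights are symmetric in ring distance, the sharpening step does not move the packet's peak, it only resists diffusion.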
2.4 Proportional-derivative control of the position of the excitation activity packet
To ensure that the position of the excitation activity packet on the annular cell model strictly follows the change of the real head orientation angle, a proportional-derivative controller is designed to close the loop on the packet's position. First, the difference between the current real head orientation angle θ and the perception angle θg read from the annular cell model gives the angle error Δθ; through proportional-derivative control, the controller's output value is used as the magnitude of the scale factor ξ. The block diagram of the proportional-derivative control of the packet position is shown in Fig. 5. The controller's proportional coefficient kp is set in the range 1.27 ± 0.3 and its derivative coefficient kd in the range 5.16 ± 0.2.
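A minimal sketch of such a PD controller follows, using the gain ranges stated in the text (kp about 1.27, kd about 5.16). The sampling interval `dt` and the discrete backward-difference form of the derivative term are assumptions.

```python
class PDController:
    """Proportional-derivative controller producing the scale factor xi so
    that the packet's decoded angle theta_g tracks the true heading theta.
    kp ~ 1.27 and kd ~ 5.16 follow the text; dt and the discrete
    derivative are assumed details."""

    def __init__(self, kp=1.27, kd=5.16, dt=1.0):
        self.kp, self.kd, self.dt = kp, kd, dt
        self.prev_err = 0.0

    def step(self, theta, theta_g):
        # angle error between true heading and decoded perception angle
        err = theta - theta_g
        xi = self.kp * err + self.kd * (err - self.prev_err) / self.dt
        self.prev_err = err
        return xi
```

The returned xi scales the drive signal of formula (17), speeding the packet up when it lags the true heading and slowing it when it overshoots.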
3. Predictively encoded positional cell model
3.1 cell plate model of positional cell population
Position cells are the basic units by which a rat perceives its position in the environment: they encode spatial relative position, and their discharge activity provides a continuous, dynamic representation of spatial position. The discharge rate of a position cell is given by formula (20):

Rpc(r) = exp(-||r - r0||^2 / σ^2)   (20)

The discharge activity of the position cells is the output of the path-integration system. To quantify the position cell discharge rates in the actual physical environment, the position cell population is modeled as a two-dimensional cell plate, with the cells arranged in a square matrix (equal numbers of rows and columns) and the lower-left corner of the plate defined as the origin; the structure of the position cell plate is shown in Fig. 6.
In formula (20), Rpc(r) is the discharge rate of position cell i at location r, where r = [x, y] are the rat's current position coordinates in the environment; r0 = [x0, y0] are the coordinates of the center of cell i's discharge field; and σ^2 is the discharge-field adjustment coefficient of the position cell. The excitatory activity on the position cell plate thus takes the form of a two-dimensional Gaussian cap, as shown in Fig. 7.
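Initializing the plate's two-dimensional Gaussian cap per formula (20) can be sketched as follows. The plate size `L`, the default field width, and the row-as-x convention are illustrative assumptions.

```python
import numpy as np

def place_plate(L=50, center=None, sigma2=4.0):
    """L x L position cell plate carrying a two-dimensional Gaussian 'cap'
    of excitation (formula (20) sketch):
    R(i, j) = exp(-((i - x0)^2 + (j - y0)^2) / sigma2).
    The lower-left corner is the origin, as in the text; sigma2 is the
    discharge-field adjustment coefficient."""
    if center is None:
        center = (L / 2.0, L / 2.0)
    x0, y0 = center
    x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    return np.exp(-((x - x0)**2 + (y - y0)**2) / sigma2)
```

With the default center the activity packet sits in the middle of the plate, matching the initial state used for periodic coding later in the text.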
3.2 Predictive-coding position cell plates
The initial excitatory activity of the position cell plate is determined by equation (20); thereafter, the signal driving the movement of the excitation activity packet on the plate depends on the perception speed and the perception angle: the perception angle encodes the direction in which the packet moves on the plate, and the perception speed encodes how fast it moves. On this basis, the invention proposes a position cell plate model with predictive coding: two predictive-coding position cell plates are added to the original position cell plate model, realizing lateral and longitudinal predictive coding, respectively, of the information on the original plate. The operating mechanism of the predictive-coding position cell plates is shown in Fig. 8; the main steps are as follows:
step 1: first, the information of the original position cell is transmitted to the predictive position-coding cell plate
Step 2: the predictive-coding position cell plates determine, from the magnitude of the perception angle, the directions in which excitation information is transmitted laterally and longitudinally across the plates. The specific operation rule is: after the lateral predictive-coding position cell plate receives the angle information, it judges the direction of lateral movement. If the movement is leftward, every position cell on the lateral plate transmits its own discharge information to the cell on its left, and cells on the left boundary transmit theirs to the cells on the right boundary; if the movement is rightward, every cell transmits its discharge information to the cell on its right, and cells on the right boundary transmit theirs to the cells on the left boundary. The longitudinal predictive-coding position cell plate operates analogously. The mathematical expression of this mechanism is as follows:
In formulas (21) and (22), i and j are the row and column indices on the cell plate; P(i,j) is the discharge rate of the position cell at coordinates (i, j) on the original plate; the corresponding quantities on the lateral and longitudinal predictive-coding plates are the discharge rates of the cells at (i, j) on those plates; Lneuro is the number of position cells in each row or column of the position cell plate; and θg is the perception angle obtained from the one-dimensional annular head-direction cell model.
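The one-cell shift with wraparound that Step 2 describes can be sketched with `np.roll`, which hands boundary activity to the opposite boundary exactly as the operation rule requires. The sign convention mapping the perception angle to left/right and up/down is an assumption.

```python
import numpy as np

def predictive_plates(P, theta_g):
    """Build the lateral and longitudinal predictive-coding plates by
    shifting the original plate one cell with wraparound (formulas
    (21)-(22) sketch). Which half of the angle range maps to which shift
    direction is an assumed convention."""
    dx = -1 if np.cos(theta_g) < 0 else 1   # lateral direction from angle
    dy = -1 if np.sin(theta_g) < 0 else 1   # longitudinal direction
    PH = np.roll(P, dx, axis=1)             # lateral predictive plate
    PV = np.roll(P, dy, axis=0)             # longitudinal predictive plate
    return PH, PV
```

Activity leaving one boundary reappears at the opposite boundary, which is what later makes the periodic coding of position coordinates necessary.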
3.3 Excitation update of the original position cells
After the predictive coding is executed, the position information on the predictive-coding position cell plates must be transmitted back to the original position cell plate so that the intelligent mobile robot knows its position in the environment. The invention realizes this excitation transfer from the predictive-coding plates to the original plate through neural-network connections whose weights are jointly determined by the perception speed Vg and the perception angle θg. Because the predictive-coding plates have the same dimensions as the original plate, the network connects position cells at the same coordinates, as follows: the weight of the excitation transferred from the lateral predictive-coding position cell to the original position cell is Vg·cosθg; the weight from the longitudinal predictive-coding position cell is Vg·sinθg; and the original position cell also transfers excitation to itself with weight 1 - Vg·sinθg - Vg·cosθg. The excitation transfer is expressed as:

Pt+1(i,j) = (1 - Vg·sinθg - Vg·cosθg)·Pt(i,j) + Vg·cosθg·PHt(i,j) + Vg·sinθg·PVt(i,j)   (23)

In formula (23), Pt(i,j) is the discharge rate of the original position cell at time t and Pt+1(i,j) its discharge rate at the next time step, with PHt and PVt the rates on the lateral and longitudinal predictive-coding plates. After the excitation of all original position cells has been transferred, their rates must again be adjusted to be greater than 0 and then normalized.
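The weighted recombination of formula (23), followed by the required clip-and-normalize step, can be sketched as below. Taking absolute values of the trigonometric weights (so they stay non-negative for arbitrary headings) is an assumption added here; the text states the weights without it.

```python
import numpy as np

def recombine(P, PH, PV, Vg, theta_g):
    """Excitation transfer back to the original plate (formula (23)
    sketch): combine the original plate with the lateral (PH) and
    longitudinal (PV) predictive plates using weights Vg*cos(theta_g),
    Vg*sin(theta_g) and the remainder, then clip to positive and
    normalize as the text requires. abs() on the weights is an assumed
    safeguard, not stated in the source."""
    wx = Vg * abs(np.cos(theta_g))
    wy = Vg * abs(np.sin(theta_g))
    P_next = (1.0 - wx - wy) * P + wx * PH + wy * PV
    P_next = np.clip(P_next, 0.0, None)
    return P_next / P_next.sum()
```

Each update nudges the activity packet by a fraction of a cell in the heading direction, so the packet's speed across the plate is proportional to the perception speed.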
4. Resolving the position of the robot
4.1 Obtaining the coordinates of the excitation activity packet from the position cell plate
Each position cell has a single discharge field, and the rat's sense of its own position in the environment is provided by the discharge activity of the position cells. Based on this physiology, the intelligent mobile robot of the invention takes the coordinates of the excitation activity packet on the original position cell plate as its current position information; the packet's coordinates (PX, PY) on the position cell plate are computed as follows:

In formula (24), Lneuro is the number of position cells in each row or column of the position cell plate; i and j are the row and column indices on the plate; and P(i,j) is the discharge rate of the position cell at coordinates (i, j) on the original plate.
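One natural readout consistent with the symbols named for formula (24) is the discharge-rate-weighted centroid of the plate; since the equation image is not reproduced, this centroid form, and the convention that rows index x and columns index y, are assumptions.

```python
import numpy as np

def packet_coords(P):
    """Coordinates (P_X, P_Y) of the excitation activity packet, read as
    the discharge-rate-weighted centroid of the plate (an assumed
    reconstruction of formula (24))."""
    L = P.shape[0]
    i, j = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    s = P.sum()
    return (i * P).sum() / s, (j * P).sum() / s
```

For a packet concentrated on a single cell, the readout returns that cell's indices exactly.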
4.2 periodic encoding of position coordinates
Physiological studies have shown that when a rat enters a new environment it very quickly establishes place fields for that environment; that is, the rat can encode a large spatial area with the limited number of position cells in the hippocampus. On this basis, the invention uses periodic coding of the position cell plate to achieve position perception over a wide spatial range. At the bionic robot's initial position, the excitation activity packet sits at the center of the plate, i.e. (PX, PY) = (Lneuro/2, Lneuro/2), and the robot's initial position coordinates in the environment are (X, Y) = (β(PX - Lneuro/2), β(PY - Lneuro/2)), where β is the scale factor converting plate coordinates to real position coordinates; β is usually set to 5.5 ± 0.8 in an indoor environment and 14 ± 2.0 in an outdoor environment. According to the activity characteristics of the predictive-coding position cell plate, the excitation activity packet moves continuously and periodically on the original plate: when it leaves through one boundary of the plate, it re-enters through the opposite boundary. The invention therefore provides a periodic coding method for position coordinates that realizes position perception of the intelligent mobile robot over a large spatial area with a limited number of position cells; the mathematical expression is as follows:

In formula (25), Xt and Yt are the robot's position coordinates in the environment at time t, and Xt+1 and Yt+1 those at time t+1; the abscissas and ordinates of the excitation activity packet on the plate at times t and t+1 also enter the formula; Lneuro is the number of position cells in each row or column of the position cell plate. Through this periodic coding calculation of the position coordinates, the robot's position in a large space is obtained.
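The boundary-crossing logic of the periodic coding can be sketched for one coordinate as follows: when the packet jumps across the plate between t and t+1, one plate width is added or subtracted so the real-world coordinate stays continuous. The half-plate jump threshold is an assumption; the source's formula (25) is not reproduced.

```python
def unwrap_position(X_t, p_t, p_t1, L, beta):
    """Periodic coding of one coordinate (formula (25) sketch).

    X_t: real-world coordinate at time t; p_t, p_t1: packet coordinate on
    the plate at t and t+1; L: cells per row/column (L_neuro); beta: the
    plate-to-world scale (5.5 indoors, 14 outdoors per the text)."""
    d = p_t1 - p_t
    if d > L / 2:        # packet wrapped from the high to the low boundary
        d -= L
    elif d < -L / 2:     # packet wrapped from the low to the high boundary
        d += L
    return X_t + beta * d
```

Applying this to both axes lets a finite plate of position cells cover an arbitrarily large workspace.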
Claims (5)
1. An intelligent mobile robot motion state and position cognition method based on the rat brain hippocampus cognition mechanism, characterized in that: firstly, image information of the environment is acquired through a camera, the angle and speed information of the robot is acquired through a gyroscope and an encoder, and the information is transmitted to a CPU; combining the visual information, a perception-speed solving method based on velocity cells and visual information is provided to obtain the robot's perception speed; secondly, the discharge mechanism of head-direction cells is simulated with a one-dimensional annularly connected cell model, and the angle information acquired by the gyroscope is input into the head-direction cell model, so that the robot acquires the current perception angle information in a bionic manner; then, the perception speed and perception angle information are input into the predictive-coding neural network model of the position cells, driving the excitation activity packet on the position cell plate to move, and the coordinates of the packet on the plate are obtained by analyzing the activity of the plate, so that the robot is localized in the environment and the environment cognition function of the intelligent mobile robot is completed;
the specific working process is as follows:
s1 acquisition of perception velocity
S1.1, acquiring an image in the robot's current advancing direction through a camera;
S1.2, converting the acquired image into a grayscale image, and computing the optical flow field between two consecutive frames with the Lucas-Kanade optical flow method;
S1.3, calculating the magnitudes of the components of all optical flow vectors in the vertical direction of the image;
S1.4, using these vertical-component magnitudes as the input of a BP neural network, using the robot advancing speed currently read from the encoder as the output of the BP neural network, and training the network;
S1.5, feeding the robot advancing speed currently read from the encoder into the velocity cell model, then selecting between and weighting the visual perception speed output by the BP neural network and the speed output by the velocity cell model to obtain the robot's perception speed information;
s2 acquisition of Angle information
S2.1, acquiring the angle information of the robot's current advancing direction through a gyroscope, and taking the difference between two successive measurements as the increment of the direction angle;
S2.2, using the direction-angle increment as the input of the one-dimensional annular head-direction cell model, so that the model's excitation activity packet moves with each increment input;
S2.3, designing a proportional-derivative controller that closed-loop controls the position of the excitation activity packet, so that its position on the annular head-direction cell model stays consistent with the angle information currently measured by the gyroscope;
S2.4, reading out the position of the excitation activity packet on the annular head-direction cell model to obtain the robot's perception angle information;
s3 movement of the excitation activity packet on the position cell plate
S3.1, initializing the excitation activity of all position cells on the position cell plate, arranged as a Gaussian cap at the center of the plate;
S3.2, transmitting the excitation activity on the current position cell plate to the predictive-coding position cell plates according to the input perception angle information;
S3.3, adjusting the connection weights of the predictive-coding neural network model according to the input perception speed information, and transmitting the excitation information on the predictive-coding position cell plates back to the original position cell plate through these connection weights;
s4 resolving the position of the robot
S4.1, acquiring the coordinates of the excitation activity packet on the original position cell plate and converting them into the robot's position in the environment;
S4.2, determining from the change of the coordinates between two successive readings whether the robot's real position in the environment needs periodic coding, thereby realizing position cognition of the bionic robot over a large spatial range.
2. The intelligent mobile robot motion state and position cognition method based on the rat brain hippocampus cognitive mechanism according to claim 1, characterized in that: a perception-speed solving method based on velocity cells and visual information is provided, with visual speed perception based on an optical flow method; the specific flow is as follows:
calculating an optical flow field between continuous images by using a Lucas-Kanade optical flow algorithm, and solving an optical flow vector of the images as follows:
taking the absolute value magnitude of the optical flow vector with respect to the component in the vertical direction of the image, the expression thereof is as follows:
|Vyi|=|Vi|sin(arctan(|Vyi|/|Vxi|)) (3)
A multilayer feed-forward back-propagation error neural network (BP neural network) is designed as the algorithm for estimating linear velocity from vision. The absolute values |Vy| of the optical flow vectors' vertical components are taken as the input samples of the BP network, and the linear velocity obtained from the encoder at the moment each frame is captured is taken as the output sample for training. The resolution of the optical flow field computed with the Lucas-Kanade algorithm is set to 10 × 13, so the BP network has 130 inputs and 1 output; the number of hidden-layer neurons is set to 9, the number of iterations to 100, the learning rate to 0.02, and the mean-square-error cutoff threshold to 0.004;
Body velocity perception based on a velocity cell model
The velocity cells obtain velocity information through the following mathematical model.
Let the first-order linear equation of the velocity cell model be:

Fi = Ai1(V + ViT) + Ai2   (5)

In formula (5), Fi is the discharge rate of the i-th velocity cell, Ai1 the perception gain of the velocity cell, Ai2 the offset of the cell's perceived velocity, and ViT a perturbation of the cell's perceived velocity. To keep the model realistic while ensuring its accuracy, Ai1, Ai2 and ViT are drawn from normal distributions: Ai1 from a normal distribution with location parameter 1.8 and variance 0.1; Ai2 with location parameter 0.5 and variance 0.05; ViT with location parameter 0 and variance 0.1; the number of velocity cells is set to 20;
Then, the discharge rates of all velocity cells are normalized:

In formula (6), Fmax and Fmin are the maximum and minimum of all velocity cell discharge rates, and the normalized discharge rate of the i-th velocity cell results from the normalization. The mathematical expectation E of the velocity cell discharge rates is then solved: first the average of all normalized rates is found, then the absolute deviation of each rate from the average is computed, and the expectation E, a weighted average, is obtained from formula (7);
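The velocity-cell ensemble of formulas (5) through (7) can be sketched as follows. The distribution parameters and cell count follow the text; the specific deviation-based weighting in the final average is an assumption, since the source does not reproduce formula (7).

```python
import numpy as np

def velocity_cell_expectation(V, n=20, seed=0):
    """Velocity-cell ensemble estimate (formulas (5)-(7) sketch): each of
    n cells fires F_i = A_i1*(V + V_iT) + A_i2 with normally distributed
    gain, offset and perturbation (means/variances from the text), rates
    are min-max normalized, and a weighted average is returned as the
    expectation E. The deviation-based weights are an assumption."""
    rng = np.random.default_rng(seed)
    A1 = rng.normal(1.8, np.sqrt(0.1), n)   # perception gain
    A2 = rng.normal(0.5, np.sqrt(0.05), n)  # offset
    VT = rng.normal(0.0, np.sqrt(0.1), n)   # perturbation
    F = A1 * (V + VT) + A2                  # formula (5)
    Fn = (F - F.min()) / (F.max() - F.min())  # formula (6), min-max normalize
    dev = np.abs(Fn - Fn.mean())
    w = 1.0 / (dev + 1e-9)                  # weight rates near the mean more
    return float((w * Fn).sum() / w.sum())
```

Averaging over the ensemble damps the per-cell perturbations, which is the point of using a population rather than a single noisy speed reading.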
Solution of perceptual speed
After obtaining the mathematical expectation E of the velocity cell discharge rates and the output Eout of the BP neural network, E and Eout are first converted to the same dimension, giving the velocity-cell and visual speed estimates; the two are then fused into the final perception speed, and the perception speed Vg is solved with the following formula:

In formula (8), α is a weight coefficient and TH an adjustment threshold: for robot exploration in an indoor environment α is 0.8-0.9 and TH is 0.1-0.2; in an outdoor environment α is 0.75-0.85 and TH is 0.9-1.5. When the difference between the two estimates exceeds TH, it is judged that the current visual contrast has caused a misestimate of the robot's speed, and the velocity-cell estimate alone is taken as the current perception speed; when the difference is below TH, the current visual perception speed is judged correctly estimated, and the weighted average of the two estimates obtained with the weight coefficient α is taken as the current perception speed Vg.
3. The intelligent mobile robot motion state and position cognition method based on the rat brain hippocampus cognitive mechanism according to claim 1, characterized in that: the head-direction cell model is built from a one-dimensional annularly connected cell model, and the perception angle is obtained by acquiring the position of the excitation activity packet that moves smoothly on the one-dimensional annular model; the controller is designed in the form of proportional-derivative control;
the exciting bag moves on the one-dimensional annular head towards the cell model
The movement of the activity packet is produced by excitation transfer between cells, driven by the following formula:

In formula (9), the drive signal for moving the activity packet combines a scale factor ξ with the angular velocity ω of the head orientation change; the magnitude of ξ is determined by the output of the closed-loop controller described below. The discharge rates of the i-th head-direction cell at times t and t+1 appear in the formula, along with the discharge rates of the (i-1)-th and (i+1)-th cells at time t+1;
the change ΔHEθ in the head-direction cells due to local excitatory connections is given by the following equation:
In formula (10), n is the number of head-direction cells in the one-dimensional annular model; εn is the matrix of connection weights between head-direction cells, whose element values follow a Gaussian distribution of the row and column indices; the discharge rate of the i-th head-direction cell at time t also enters the formula. After each excitation transfer, the discharge rates of all head-direction cells must be adjusted to be greater than 0 and then normalized, as follows:
A proportional-derivative controller is designed to close the loop on the position of the excitation activity packet on the annular cell model. First, the difference between the current real head orientation angle θ and the perception angle θg read from the annular cell model gives the angle error Δθ; through proportional-derivative control, the controller's output value is used as the magnitude of the scale factor ξ. The controller's proportional coefficient kp is set in the range 1.27 ± 0.3 and its derivative coefficient kd in the range 5.16 ± 0.2.
4. The intelligent mobile robot motion state and position cognition method based on the rat brain hippocampus cognitive mechanism according to claim 1, characterized in that: a position cell model with predictive coding is proposed;
predictive coded positional cell plate
On the basis of the original position cell plate model, two predictive-coding position cell plates are added, realizing lateral and longitudinal predictive coding, respectively, of the information on the original plate; the main steps are as follows:
step 1: first, the information of the original position cell is transmitted to the predictive position-coding cell plate
Step 2: the predictive-coding position cell plates determine, from the magnitude of the perception angle, the directions in which excitation information is transmitted laterally and longitudinally across the plates; the mathematical expression of the mechanism is as follows:
In formulas (12) and (13), i and j are the row and column indices on the cell plate; P(i,j) is the discharge rate of the position cell at coordinates (i, j) on the original plate; the corresponding quantities on the lateral and longitudinal predictive-coding plates are the discharge rates of the cells at (i, j) on those plates; Lneuro is the number of position cells in each row or column of the position cell plate; and θg is the perception angle obtained from the one-dimensional annular head-direction cell model;
Excitation update of the original position cells
The excitation transfer from the predictive-coding position cell plates to the original position cell plate is realized through neural-network connections whose weights are jointly determined by the perception speed Vg and the perception angle θg. Because the predictive-coding plates have the same dimensions as the original plate, the network connects position cells at the same coordinates, as follows: the weight of the excitation transferred from the lateral predictive-coding position cell to the original position cell is Vg·cosθg; the weight from the longitudinal predictive-coding position cell is Vg·sinθg; and the original position cell also transfers excitation to itself with weight 1 - Vg·sinθg - Vg·cosθg. The mathematical expression of the excitation transfer is as follows:
In formula (14), the discharge rate of the original position cell at time t and its discharge rate at the next time step appear; after the excitation of all original position cells has been transferred, their rates must again be adjusted to be greater than 0 and then normalized.
5. The intelligent mobile robot motion state and position cognition method based on the rat brain hippocampus cognitive mechanism according to claim 1, characterized in that: the position of the robot in the environment is calculated by acquiring the coordinates of the excitation activity packet on the position cell plate;
coordinates of excitatory activity packages from a position cell plate
The intelligent mobile robot takes the coordinates of the excitation activity packet on the original position cell plate as its current position information; the packet's coordinates (PX, PY) on the position cell plate are computed as follows:

In formula (15), Lneuro is the number of position cells in each row or column of the position cell plate, i and j are the row and column indices on the plate, and P(i,j) is the discharge rate of the position cell at coordinates (i, j) on the original plate;
periodic encoding of position coordinates
Periodic coding of the position cell plate is used to realize position cognition over a large spatial range. At the start of the robot's movement, the excitation activity packet sits at the center of the position cell plate, i.e. (PX, PY) = (Lneuro/2, Lneuro/2), and the robot's initial position coordinates in the environment are (X, Y) = (β(PX - Lneuro/2), β(PY - Lneuro/2)), where β is the scale factor converting plate coordinates to real position coordinates; β is set to 5.5 ± 0.8 in an indoor environment and 14 ± 2.0 in an outdoor environment. According to the activity characteristics of the predictive-coding position cell plate, the excitation activity packet moves continuously and periodically on the original plate: when it leaves through one boundary of the plate, it re-enters through the opposite boundary. A periodic coding method for the position coordinates is given accordingly, so that the intelligent mobile robot realizes position perception over a large spatial area with a limited number of position cells; the mathematical expression is as follows:
In formula (16), Xt and Yt are the robot's position coordinates in the environment at time t, and Xt+1 and Yt+1 those at time t+1; the abscissas and ordinates of the excitation activity packet on the plate at times t and t+1 also enter the formula; Lneuro is the number of position cells in each row or column of the position cell plate; position cognition of the robot over a large spatial range is realized through this periodic coding calculation of the position coordinates.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910872030.4A CN110764498B (en) | 2019-09-16 | 2019-09-16 | Intelligent mobile robot motion state and position cognition method based on rat brain hippocampus cognition mechanism |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110764498A true CN110764498A (en) | 2020-02-07 |
CN110764498B CN110764498B (en) | 2022-09-09 |
Family
ID=69329867
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910872030.4A Active CN110764498B (en) | 2019-09-16 | 2019-09-16 | Intelligent mobile robot motion state and position cognition method based on rat brain hippocampus cognition mechanism |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110764498B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140364721A1 (en) * | 2011-09-09 | 2014-12-11 | The Regents Of The University Of California | In vivo visualization and control of pathological changes in neural circuits |
CN106125730A (en) * | 2016-07-10 | 2016-11-16 | 北京工业大学 | A kind of robot navigation's map constructing method based on Mus cerebral hippocampal spatial cell |
CN106949896A (en) * | 2017-05-14 | 2017-07-14 | 北京工业大学 | A kind of situation awareness map structuring and air navigation aid based on mouse cerebral hippocampal |
CN109668566A (en) * | 2018-12-05 | 2019-04-23 | 大连理工大学 | Robot scene cognition map construction and navigation method based on mouse brain positioning cells |
Non-Patent Citations (2)
Title |
---|
Yu Naigong et al., "Cognitive mechanism of the rat brain hippocampal formation and its application in robot navigation", Journal of Beijing University of Technology *
Zou Qiang et al., "Rat-brain-hippocampus-inspired robot map building and path planning method", Journal of Huazhong University of Science and Technology (Natural Science Edition) *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111552298A (en) * | 2020-05-26 | 2020-08-18 | 北京工业大学 | Bionic positioning method based on rat brain hippocampus spatial cells |
CN111552298B (en) * | 2020-05-26 | 2023-04-25 | 北京工业大学 | Bionic positioning method based on mouse brain hippocampus space cells |
CN112525194A (en) * | 2020-10-28 | 2021-03-19 | 北京工业大学 | Cognitive navigation method based on endogenous and exogenous information of hippocampus-striatum |
CN112525194B (en) * | 2020-10-28 | 2023-11-03 | 北京工业大学 | Cognitive navigation method based on endogenous and exogenous information of the hippocampus-striatum |
CN113657574A (en) * | 2021-07-28 | 2021-11-16 | 哈尔滨工业大学 | Construction method and system of bionic space cognitive model |
WO2023184223A1 (en) * | 2022-03-30 | 2023-10-05 | 中国电子科技集团公司信息科学研究院 | Robot autonomous positioning method based on brain-inspired space coding mechanism and apparatus |
Also Published As
Publication number | Publication date |
---|---|
CN110764498B (en) | 2022-09-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110764498B (en) | Intelligent mobile robot motion state and position cognition method based on rat brain hippocampus cognition mechanism | |
CN109668566B (en) | Robot scene cognition map construction and navigation method based on mouse brain positioning cells | |
CN106949896B (en) | Scene cognition map construction and navigation method based on mouse brain hippocampus | |
US9844873B2 (en) | Apparatus and methods for haptic training of robots | |
Kaiser et al. | Towards a framework for end-to-end control of a simulated vehicle with spiking neural networks | |
KR101126774B1 (en) | Mobile brain-based device having a simulated nervous system based on the hippocampus | |
US20150032258A1 (en) | Apparatus and methods for controlling of robotic devices | |
Low et al. | A hybrid mobile robot architecture with integrated planning and control | |
Zhao et al. | Brain–machine interfacing-based teleoperation of multiple coordinated mobile robots | |
CN111552298B (en) | Bionic positioning method based on mouse brain hippocampus space cells | |
Zhao et al. | Closed-loop spiking control on a neuromorphic processor implemented on the iCub | |
Téllez et al. | Evolving the walking behaviour of a 12 dof quadruped using a distributed neural architecture | |
Zhang et al. | A neural network based framework for variable impedance skills learning from demonstrations | |
Ijspeert et al. | Nonlinear dynamical systems for imitation with humanoid robots | |
Tan et al. | A hierarchical framework for quadruped locomotion based on reinforcement learning | |
Zahra et al. | Differential mapping spiking neural network for sensor-based robot control | |
CN114037050B (en) | Robot degradation environment obstacle avoidance method based on internal plasticity of pulse neural network | |
Naya et al. | Spiking neural network discovers energy-efficient hexapod motion in deep reinforcement learning | |
Hong et al. | Vision-locomotion coordination control for a powered lower-limb prosthesis using fuzzy-based dynamic movement primitives | |
CN113031002A (en) | SLAM running car based on Kinect3 and laser radar | |
Antonelo et al. | Learning slow features with reservoir computing for biologically-inspired robot localization | |
Xing et al. | A brain-inspired approach for collision-free movement planning in the small operational space | |
Gaudiano et al. | Adaptive vector integration to endpoint: Self-organizing neural circuits for control of planned movement trajectories | |
Antonelli et al. | Learning the visual–oculomotor transformation: Effects on saccade control and space representation | |
Stoelen et al. | Adaptive collision-limitation behavior for an assistive manipulator |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |