CN110764498B - Intelligent mobile robot motion state and position cognition method based on rat brain hippocampus cognition mechanism - Google Patents


Info

Publication number
CN110764498B
Authority
CN
China
Prior art keywords
cell
robot
cell plate
perception
cells
Prior art date
Legal status
Active
Application number
CN201910872030.4A
Other languages
Chinese (zh)
Other versions
CN110764498A (en)
Inventor
于乃功
廖诣深
冯慧
王宗侠
黄静
Current Assignee
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN201910872030.4A
Publication of CN110764498A
Application granted
Publication of CN110764498B

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions

Abstract

The invention provides a method for recognizing the motion state and position of an intelligent mobile robot based on the cognitive mechanism of the rat brain hippocampus. It belongs to the technical field of robot environment cognition and navigation and is mainly applied to tasks such as environment cognition, map construction and navigation for intelligent mobile robots. The specific process is as follows: image information is collected by a camera, the angle and direction information of the robot is collected by a gyroscope and an encoder, and the information is transmitted to a CPU. A perceived-velocity solving method based on velocity cells and visual information is proposed to obtain the perceived velocity of the robot. The discharge mechanism of head direction cells is simulated with a one-dimensional, ring-connected cell model, and the angle information is input into the head direction cell model so that the robot acquires its perceived angle in a bionic manner. The perceived velocity and perceived angle are then input into the predictively coded neural network model of the position cells, driving the excitatory activity packet to move on the position cell plate and yielding the position of the robot in the environment.

Description

Intelligent mobile robot motion state and position cognition method based on rat brain hippocampus cognition mechanism
Technical Field
The invention belongs to the technical field of environment cognition and navigation for intelligent mobile robots, and in particular relates to a method by which an intelligent mobile robot recognizes its own motion state and environmental information based on the cognitive mechanism of the rat brain hippocampus.
Background
By combining a bionic cognitive mechanism with environmental and motion information acquired by sensors, an intelligent mobile robot can perceive its own state and the surrounding environment. Such a robot system can imitate the powerful goal-directed navigation ability that organisms exhibit in complex spatial environments, move autonomously, and complete specific tasks and functions. Position cognition is a basic capability of humans and animals, and it is likewise a basic task and key problem for intelligent mobile robots.
Physiological studies have shown that the hippocampus is an important structure by which higher animals perform environmental cognitive tasks. In 1971, O'Keefe et al. found cells in the rat hippocampus that are selective for spatial location and discharge only when the rat is at a specific place in space. Such cells are called position cells (place cells), and the spatial region corresponding to their discharge is called the place field. In 1984, Ranck et al. found neurons in the rat presubiculum that discharge strongly for particular head orientations; these were named head direction cells. When the rat's head faces a specific direction, the head direction cell discharges maximally, and when the head deviates from this direction, the discharge decreases. In 1998, in research on computational models of position cells, O'Keefe et al. first conjectured that a velocity signal may take part in path integration, proposing that the cell discharge rate is linearly and positively correlated with movement velocity; later researchers also often described the velocity-discharge relationship with a direct proportional function. In 2015, an article by Emilio Kropff et al. published in Nature confirmed the existence of velocity cells experimentally, prompting the academic community to consider how velocity signals are encoded and act in the process of spatial cognition.
Although artificial-intelligence technology has developed rapidly in recent years, research on how an intelligent mobile robot recognizes its position and motion state in the environment still lags behind, and this insufficient cognitive ability restricts the research and popularization of intelligent mobile robots to a certain extent. In perceiving and understanding the environment, intelligent mobile robots are far inferior to humans and other higher animals, which is mainly caused by the limited performance of present-day sensors and by insufficient theoretical research on robot environmental cognition. A robot facing a complex and changeable external environment is therefore required to have high adaptability and robustness, and imitating the environmental cognition mechanism of higher mammals has become a research hotspot for intelligent mobile robots. Aimed at the problem of autonomous exploration of unknown environments by intelligent mobile robots, the invention provides a method for recognizing the robot's own motion state and position information based on the environmental cognition mechanism of the rat brain hippocampus.
Disclosure of Invention
The main purpose of the invention is to provide a method for recognizing the motion state and position of an intelligent mobile robot based on the cognitive mechanism of the rat brain hippocampus, which simulates the discharge mechanisms of several kinds of spatial cells in the rat hippocampus and completes the environment-cognition and localization functions of the intelligent mobile robot in complex environments. The following problems are mainly addressed:
1. At present, intelligent mobile robots are far inferior to humans and other higher animals and are limited by the development of hardware systems (sensor precision, CPU computing speed, and so on).
2. Previous related work consisted of isolated studies at the perception level, and it is difficult for such work to provide sufficient information for mobile-robot navigation in unknown dynamic environments.
3. Traditional behavior-learning methods cannot meet human expectations for robot intelligence and have difficulty providing sufficient information for robot navigation in complex, unknown environments.
4. Most conventional cognitive models are expressed with symbolic knowledge; robot systems built on such models suffer from poor portability, poor adaptability and expandability, excessive hand-designed components, and weak biological plausibility.
In order to solve the above problems, the invention provides a method for recognizing the motion state and position of an intelligent mobile robot based on the cognitive mechanism of the rat brain hippocampus. The method uses a unified computational mechanism to simulate computational models of the velocity cells, head direction cells and position cells in the rat hippocampus. A camera collects image information of the environment, a gyroscope and an encoder collect the angle and direction information of the robot, and this information is transmitted to a CPU. Combining the visual information, a perceived-velocity solving method based on velocity cells and visual information is proposed to obtain the perceived velocity of the robot. The discharge mechanism of head direction cells is then simulated with a one-dimensional, ring-connected cell model, and the angle information acquired by the gyroscope is input into the head direction cell model, so that the intelligent mobile robot acquires its current perceived angle in a bionic manner. The perceived velocity and perceived angle are then input into the predictively coded neural network model of the position cells, driving the excitatory activity packet to move on the position cell plate; by analyzing the activity of the position cell plate, the coordinates of the excitatory activity packet on the plate are acquired, so that the robot is localized in the environment and the environment-cognition function of the intelligent mobile robot is completed. The specific working process of the method is as follows:
s1 acquisition of perception velocity
Two main factors affect the perceived velocity: visual flow and proprioception. Visual flow refers to the continuous change of the retinal image, from which the organism estimates its own velocity; proprioception is the intrinsic ability of the velocity cells to perceive the organism's own velocity. On this basis, after fully studying the velocity mechanism of organisms, the invention proposes a perceived-velocity solving method based on velocity cells and visual information to obtain the perceived velocity of the robot.
S1.1, acquiring an image in the advancing direction of the current robot through a camera.
S1.2, converting the acquired image into a gray image, and acquiring an optical flow field between two frames of images by using a Lucas-Kanade optical flow field calculation method.
S1.3, the magnitudes of the components of all optical flow vectors in the optical flow field along the vertical direction of the image are calculated.
S1.4, the component magnitudes of all optical flow vectors along the vertical direction of the image are used as the input of a BP neural network, the advancing speed of the robot acquired by the current encoder is used as the output of the BP neural network, and the BP neural network is trained.
S1.5, the advancing speed of the robot obtained from the current encoder is used as the input of a velocity cell model, and the visually perceived speed output by the BP neural network and the accurate speed output by the velocity cell model are combined by a selection-and-weighting procedure to obtain the perceived velocity information of the robot.
S2 acquisition of Angle information
If the angle measured by the gyroscope were used directly as the information input, the method would lack biological plausibility; the invention therefore processes the angle through a head direction cell model, as follows.
S2.1, the angle information of the robot's current advancing direction is obtained by the gyroscope, and the difference between two consecutive measurements is calculated as the increment of the direction angle.
S2.2, the increment of the direction angle is used as the input of the one-dimensional ring-shaped head direction cell model, so that the excitatory activity packet of the model moves as the increments are input.
S2.3, in order to keep the position of the excitatory activity packet on the ring-shaped head direction cell model consistent with the angle information obtained by the current encoder, a proportional-differential controller is designed to perform closed-loop control of the position of the activity packet.
S2.4, the position of the excitatory activity packet on the ring-shaped head direction cell model is obtained algorithmically, giving the perceived angle information of the robot.
S3 Movement of the excitatory activity packet on the position cell plate
The position cells are the main information source by which the rat recognizes its own position in the environment; each position cell has a single discharge field, and the excitatory activity packet on the position cell plate is distributed in the shape of a Gaussian cap. The position of the excitatory activity packet on the cell plate moves as the rat explores the environment. Based on the predictively coded position cell neural network model, the intelligent mobile robot takes the perceived velocity and perceived angle as input, drives the excitatory activity packet to move on the position cell plate, and obtains the coordinates of the activity packet on the plate, so that the robot is localized in the environment.
S3.1, the excitatory activity of all position cells on the position cell plate is initialized, with a Gaussian-cap-shaped activity packet placed at the center of the plate.
S3.2, according to the input perceived angle information, the excitatory activity on the current position cell plate is transmitted to the predictively coded position cell plates.
S3.3, the connection weights of the predictive coding neural network model are adjusted according to the input perceived velocity information, and the excitation information on the predictively coded position cell plates is transmitted back to the original position cell plate through these connection weights.
S4 resolving the position of the robot
S4.1, the coordinates of the excitatory activity packet on the original position cell plate are acquired and converted into the position of the robot in the environment.
S4.2, according to the change of the coordinates between two successive times, it is determined whether the real position of the robot in the environment needs periodic coding, so as to realize position cognition of the robot in a large-scale space.
The invention has the following advantages:
The invention provides a method for recognizing the motion state and environment of an intelligent mobile robot based on the cognitive mechanism of the rat brain hippocampus, imitating the environmental cognition mechanism of higher mammals. The method uses a unified computational mechanism to simulate computational models of the velocity cells, head direction cells and position cells in the rat hippocampus and realizes information transfer among the different types of spatial cells in the hippocampal structure. It achieves an accurate path-integration function for the position cells, while the bionic mechanism places low requirements on hardware and sensors, so the whole model has good expandability and adaptability. The invention provides experimental and research results of reference value for related work in the field and can be widely applied to various autonomous mobile robots.
Drawings
FIG. 1: flow chart of intelligent mobile robot motion state and position cognition method of rat brain hippocampus cognition mechanism
FIG. 2: execution flow chart of perception speed acquisition
FIG. 3: graph comparing visual estimation speed with actual speed
FIG. 4: Schematic of the excitatory activity packet on the one-dimensional ring of head direction cells
FIG. 5: proportional differential control block diagram of excitation activity packet position on ring model
FIG. 6: schematic diagram of position cell plate structure
FIG. 7: schematic diagram of effect of exciting activity on position cell plate
FIG. 8: operation mechanism diagram of predictive coding position cell plate
Detailed Description
The method is described in detail below with reference to the accompanying drawings and examples.
Fig. 1 is the execution flow chart of the method provided by the invention for recognizing the motion state and position of an intelligent mobile robot based on the cognitive mechanism of the rat brain hippocampus. Image information of the environment is collected by a camera, the angle and direction information of the robot is collected by a gyroscope and an encoder, and the information is transmitted to a CPU. The invention proposes a perceived-velocity solving method based on velocity cells and visual information to obtain the perceived velocity of the robot. The discharge mechanism of head direction cells is simulated with a one-dimensional, ring-connected cell model, and the angle information acquired by the gyroscope is input into the head direction cell model, so that the robot acquires its current angle information in a bionic manner. The perceived velocity and angle information are then input into the predictively coded neural network model of the position cells, driving the excitatory activity packet to move on the position cell plate and yielding the position of the robot in the environment, thereby realizing the environment-cognition function of the bionic robot. The specific steps are as follows:
1. acquisition of perceived speed
Fig. 2 is the execution flow chart of perceived-velocity acquisition. For higher mammals, the velocity information that directly participates in the environmental cognitive process is the result of cooperative coding of multiple signals, and there is an unavoidable error between the perceived velocity and the actual velocity. The main factors affecting the perceived velocity are visual flow and proprioception. Visual flow refers to the continuous change of the retinal image from which the organism estimates its own velocity. Proprioception is the accurate velocity-coding system formed by the velocity cells, which yields the actual movement speed, without direction information, as the proprioceptive velocity. On this basis, the invention proposes a perceived-velocity solving method based on velocity cells and visual information.
1.1 Visual velocity perception based on the optical flow method
The Lucas-Kanade optical flow algorithm is used to calculate the optical flow field between consecutive images. Its principle is as follows: let the original image be I(x, y, z, t), let the time of the previous frame be t and the time of the next frame be t + Δt; a pixel of the previous frame I is then located at I(x + Δx, y + Δy, z + Δz, t + Δt) in the next frame.
According to the assumption of constant brightness:
I(x, y, z, t) = I(x + Δx, y + Δy, z + Δz, t + Δt) (1)
Assuming that the motion between the two successive images is small, the right-hand side of the above equation is expanded as a Taylor series:
I(x+Δx, y+Δy, z+Δz, t+Δt) = I(x, y, z, t) + (∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂z)Δz + (∂I/∂t)Δt + H.O.T. (2)
wherein, H.O.T is a high-order term of Taylor series expansion, and can be ignored when the motion distance is very small.
From the equations (1) and (2) it follows:
(∂I/∂x)(Δx/Δt) + (∂I/∂y)(Δy/Δt) + (∂I/∂z)(Δz/Δt) + ∂I/∂t = 0 (3)
For two-dimensional images only x, y and t need to be considered. Denoting by I_x, I_y and I_t the differences (partial derivatives) of the image in the x, y and t directions and by V_x, V_y the optical flow components, the constraint is written as follows:
I_x V_x + I_y V_y = -I_t (4)
From the spatial consistency assumption (all pixels q_1, q_2, ..., q_n inside a local window share the same flow), the following system is obtained:
I_x(q_1)V_x + I_y(q_1)V_y = -I_t(q_1)
I_x(q_2)V_x + I_y(q_2)V_y = -I_t(q_2)
...
I_x(q_n)V_x + I_y(q_n)V_y = -I_t(q_n) (5)
Writing this in matrix form:
A v = b,  A = [I_x(q_1) I_y(q_1); I_x(q_2) I_y(q_2); ...; I_x(q_n) I_y(q_n)],  v = [V_x, V_y]^T,  b = -[I_t(q_1), I_t(q_2), ..., I_t(q_n)]^T (6)
the optical flow vector is obtained as:
v = (A^T A)^{-1} A^T b (7)
[V_x, V_y]^T = [ Σ_i I_x(q_i)^2  Σ_i I_x(q_i)I_y(q_i) ; Σ_i I_x(q_i)I_y(q_i)  Σ_i I_y(q_i)^2 ]^{-1} · [ -Σ_i I_x(q_i)I_t(q_i) , -Σ_i I_y(q_i)I_t(q_i) ]^T (8)
The absolute magnitude |V_yi| of each optical flow vector's component in the vertical direction of the image is then taken; its expression is as follows:
|V_yi| = |V_i| · sin(arctan(|V_yi| / |V_xi|)) (9)
|V_i| = sqrt(V_xi^2 + V_yi^2) (10)
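For illustration, the optical-flow step can be sketched as follows. This is a minimal sketch assuming OpenCV's pyramidal Lucas-Kanade tracker evaluated on a 10 × 13 grid of points; the grid layout, tracking window size and image file names are illustrative and not taken from the patent.

```python
# Sketch (assumption): a coarse 10 x 13 optical-flow field between two consecutive
# grey-scale frames, from which the vertical components |V_y| are extracted.
import cv2
import numpy as np

def vertical_flow_components(prev_gray, next_gray, rows=10, cols=13):
    h, w = prev_gray.shape
    # One tracking point at the centre of each grid cell.
    ys = (np.arange(rows) + 0.5) * h / rows
    xs = (np.arange(cols) + 0.5) * w / cols
    pts = np.array([[x, y] for y in ys for x in xs], dtype=np.float32).reshape(-1, 1, 2)

    # Pyramidal Lucas-Kanade: solves I_x*V_x + I_y*V_y = -I_t in a local window.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts, None,
        winSize=(21, 21), maxLevel=2,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

    flow = (nxt - pts).reshape(-1, 2)          # (V_x, V_y) per grid point
    flow[status.ravel() == 0] = 0.0            # drop points that failed to track
    return np.abs(flow[:, 1])                  # |V_y|, a 130-dimensional vector

# Usage: |V_y| vectors from successive camera frames become the BP-network input.
prev_gray = cv2.cvtColor(cv2.imread("frame_000.png"), cv2.COLOR_BGR2GRAY)
next_gray = cv2.cvtColor(cv2.imread("frame_001.png"), cv2.COLOR_BGR2GRAY)
v_y = vertical_flow_components(prev_gray, next_gray)   # shape (130,)
```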
A multilayer feed-forward back-propagation neural network (BP neural network) is designed as the algorithm for estimating the linear velocity from vision. The absolute magnitudes |V_y| of the optical flow vectors' components in the vertical direction of the image are taken as the input samples of the BP neural network, the linear velocity obtained from the encoder when each frame is captured is taken as the output sample, and the BP neural network is trained. The resolution of the optical flow field calculated with the Lucas-Kanade algorithm is set to 10 × 13, so the corresponding BP neural network has 130 inputs and 1 output (the visually perceived linear velocity). The number of hidden-layer neurons is set to 9, the number of iterations to 100, the learning rate of the neural network to 0.02, and the mean-square-error cut-off threshold (MSE) to 0.004.
Training the BP neural network can be regarded as a parameter-optimization process, that is, finding a set of parameters in parameter space that minimizes the mean square error. During a particular training run the neural network may become trapped in a local minimum. To prevent this, the BP neural network is initialized with several groups of different parameter values, and the solution with the smallest error after training is taken as the final parameters, improving the accuracy and generalization of the model. After training, the estimated speed obtained by the BP neural network is compared with the actual speed obtained from the encoder as shown in Fig. 3.
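The training just described can be sketched as follows, assuming scikit-learn's MLPRegressor. The hidden-layer size, learning rate, iteration count and MSE cut-off follow the text; the data file names and the ten-restart loop used to avoid local minima are illustrative assumptions.

```python
# Sketch (assumption): the 130-input / 1-output BP network for visual speed estimation.
import numpy as np
from sklearn.neural_network import MLPRegressor

X = np.load("optical_flow_vy.npy")      # shape (n_samples, 130): |V_y| vectors
y = np.load("encoder_speed.npy")        # shape (n_samples,): encoder linear speed

best_net, best_mse = None, np.inf
for seed in range(10):                  # several differently initialised runs
    net = MLPRegressor(hidden_layer_sizes=(9,), learning_rate_init=0.02,
                       max_iter=100, tol=1e-6, random_state=seed)
    net.fit(X, y)
    mse = np.mean((net.predict(X) - y) ** 2)
    if mse < best_mse:
        best_net, best_mse = net, mse
    if best_mse <= 0.004:               # mean-square-error cut-off threshold
        break

E_out = best_net.predict(X[:1])[0]      # visually perceived speed for one frame pair
```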
1.2 Proprioceptive velocity perception based on the velocity cell model
The velocity cells constitute an accurate velocity-coding system that reflects the actual movement rate without direction information. Biological research shows that the discharge rate of a velocity cell is positively correlated with the rat's current linear velocity, and that the velocity cells still discharge when the rat is stationary. The mathematical model by which the velocity cells encode velocity information is obtained as follows.
The first-order linear equation of the velocity cell model is:
F_i = A_i1 (V + V_iT) + A_i2 (11)
In formula (11), F_i is the discharge rate of the i-th velocity cell, A_i1 is the perception gain of the velocity cell, A_i2 is the offset of the perceived velocity of the velocity cell, and V_iT is a perturbation of the perceived velocity of the velocity cell. To make the model realistic while keeping it accurate, A_i1, A_i2 and V_iT are set to be normally distributed: A_i1 follows a normal distribution with location parameter 1.8 and variance 0.1, A_i2 a normal distribution with location parameter 0.5 and variance 0.05, and V_iT a normal distribution with location parameter 0 and variance 0.1; the number of velocity cells is set to 20.
The discharge rates of all velocity cells are then normalized:
F'_i = (F_i − F_min) / (F_max − F_min) (12)
In formula (12), F_max is the maximum and F_min the minimum of the discharge rates of all velocity cells, and F'_i is the discharge rate of the i-th velocity cell after normalization.
To solve for the mathematical expectation E of the velocity-cell discharge rates, the average F'_avg of the discharge rates of all velocity cells is found first. The absolute value of the difference between each velocity cell's discharge rate and this average is then calculated, and the mathematical expectation E of the velocity-cell discharge rate, i.e. the weighted average, is solved according to equation (13).
Figure RE-GDA0002317645010000074
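A minimal sketch of the velocity-cell model of equations (11)-(12) follows. Because equation (13) is only described verbally, the inverse-deviation weighting used for the weighted average below is an assumption rather than the patent's exact formula.

```python
# Sketch: 20 velocity cells with gain, offset and perturbation drawn from the
# normal distributions given in the text.
import numpy as np

rng = np.random.default_rng(0)
N_CELLS = 20
A1 = rng.normal(1.8, np.sqrt(0.1), N_CELLS)    # perception gain A_i1
A2 = rng.normal(0.5, np.sqrt(0.05), N_CELLS)   # offset A_i2
VT = rng.normal(0.0, np.sqrt(0.1), N_CELLS)    # perturbation V_iT

def velocity_cell_expectation(v_encoder):
    F = A1 * (v_encoder + VT) + A2                         # equation (11)
    F_norm = (F - F.min()) / (F.max() - F.min() + 1e-12)   # equation (12)
    dev = np.abs(F_norm - F_norm.mean())
    w = 1.0 / (dev + 1e-6)                                 # assumed weighting for (13)
    return float(np.sum(w * F_norm) / np.sum(w))           # expectation E

E = velocity_cell_expectation(v_encoder=0.35)              # encoder speed in m/s
```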
1.3 Solving for the perceived velocity
After the mathematical expectation E of the velocity-cell discharge rate and the output E_out of the BP neural network have been determined, E and E_out are first transformed to the same dimension, giving E' and E'_out, and the two are then integrated to obtain the final perceived velocity. Because the degree of continuous change of the visual stream is easily affected by additional motion, visual contrast can cause the self-velocity to be misestimated. The perceived velocity V_g is therefore solved with the following formula:
V_g = E',  if |E' − E'_out| > TH
V_g = α·E' + (1 − α)·E'_out,  if |E' − E'_out| ≤ TH (14)
In equation (14), α is a weight coefficient and TH an adjustment threshold. Because the external illumination intensity and the robot's own movement speed differ between indoor and outdoor exploration, α is usually taken as 0.8-0.9 when the robot explores indoors and 0.75-0.85 when it explores outdoors, while TH is taken as 0.1-0.2 in indoor environments and 0.9-1.5 in outdoor environments. When the difference between E' and E'_out is greater than TH, it is judged that visual contrast has caused the current self-velocity to be misestimated, and the current perceived velocity is taken as E'. When the difference between E' and E'_out is less than TH, it is judged that the current visually perceived velocity is estimated correctly, and the weighted average of E' and E'_out obtained with the weight coefficient α is taken as the current perceived velocity V_g.
2. Acquisition of the perceived angle
Fig. 4 illustrates the acquisition of the perceived angle. The head direction cell model is built as a one-dimensional, ring-connected cell model, and the perceived angle is obtained by reading the position of the excitatory activity packet that moves smoothly on this one-dimensional ring model. So that the position of the activity packet on the ring model strictly follows the current angle, the packet position is placed under closed-loop control, the controller being designed in the form of proportional-differential control (a PD controller).
2.1 Modeling of head direction cells
Head direction cells play an important role in guiding the animal's movements. When the rat's head faces a specific direction, the head direction cell discharges maximally, and when the head deviates from this direction, the discharge gradually decreases: starting from the horizontal angle, the discharge intensity increases as the head orientation approaches the cell's preferred direction, reaches its peak when the rat's head orientation coincides with the preferred direction, and then gradually decreases as the head orientation departs from the preferred direction. This process can be approximated by a Gaussian function, and the discharge-rate formula is as follows:
R_h = exp(−(θ − θ_0)^2 / (2σ^2)) (15)
In formula (15), R_h is the discharge rate of the head direction cell, θ is the current head direction angle, and θ_0 is the preferred direction of the head direction cell; θ and θ_0 are in radians. σ is the discharge adjustment factor of the head direction cell, with a value in the range 1-3.
2.2 Modeling of the one-dimensional ring of head direction cells
To build the one-dimensional ring model, head direction cells arranged head-to-tail are connected in sequence into a closed ring, and each head direction cell has a corresponding preferred direction θ_0i, calculated as follows:
θ_0i = (2π / n) · i + Δθ_0 (16)
where i is the index of the head direction cell, n is the number of head direction cells in the one-dimensional ring model (usually taken as a multiple of 36 to simplify calculation), and Δθ_0 is the angular offset.
2.3 Obtaining the angle information from the one-dimensional ring of head direction cells
From the discharge characteristics of head direction cells it follows that a Gaussian-cap-shaped excitatory activity packet appears on the one-dimensional ring model, and this packet moves along the ring as the head direction angle changes. The current head direction information can therefore be obtained by reading the position of the activity packet on the ring of cells. The excitatory activity of the one-dimensional ring of head direction cells is shown in Fig. 4.
The initial excitatory activity of the one-dimensional ring model is determined by equations (15) and (16); the movement of the activity packet is accomplished by excitation transfer between the cells, and the driving formula of the excitation transfer is as follows:
Figure RE-GDA0002317645010000094
in the formula (17), the compound represented by the formula (I),
Figure RE-GDA0002317645010000099
the drive signal for exciting the movement of the activity package is subjected to a combination of a scale factor ξ whose magnitude is determined by the output of the closed-loop controller mentioned hereinafter and an angular velocity ω (rate of change of perceived angle) at which the head orientation changes.
Figure RE-GDA0002317645010000095
And
Figure RE-GDA0002317645010000096
respectively representing the discharge rate of the ith head towards the cells at the t moment and the t +1 moment;
Figure RE-GDA0002317645010000097
and
Figure RE-GDA0002317645010000098
the discharge rates of the i-1 th and i +1 th heads toward the cells at time t +1, respectively.
Because the excitatory activity packet moves on the ring of cells for a long time, it gradually spreads, which causes an error between the angle information obtained from the ring model and the actual angle information. The invention therefore proposes a neural network model of excitatory connections between the head direction cells; the change ΔH of the head direction cells caused by the local excitatory connections is given by the following formula:
ΔH_i = Σ_{j=1}^{n} ε_{ij} · H_j^t (18)
In formula (18), n is the number of head direction cells in the one-dimensional ring model, ε is the matrix of connection weights between the head direction cells, and H_j^t is the discharge rate of the j-th head direction cell at time t; the size of each element of ε as a function of its row and column indices obeys a Gaussian distribution. The excitatory connections between the head direction cells prevent the activity packet from spreading while it moves and ensure the accuracy of the angle information. After the excitation has been transferred, the discharge rates of all head direction cells must be adjusted to be greater than 0 and then normalized, the mathematical expression of which is as follows:
Figure RE-GDA0002317645010000103
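A sketch of the one-dimensional ring of head direction cells follows. The tuning curve (15), the preferred directions (16) and the local excitatory connection (18) follow the text; the concrete shift drive for (17) and the rectify-and-normalize step for (19) are assumed implementations, as are the cell count and the time step.

```python
# Sketch (assumption) of the ring of head direction cells and its update.
import numpy as np

N = 72                                   # number of head direction cells (multiple of 36)
theta_0 = 2 * np.pi * np.arange(N) / N   # preferred directions, equation (16)
sigma = 2.0                              # discharge adjustment factor, range 1-3

def wrap(d):                             # signed angular difference on the ring
    return (d + np.pi) % (2 * np.pi) - np.pi

# Local excitatory connection weights: Gaussian in the distance between cells.
eps = np.exp(-wrap(theta_0[:, None] - theta_0[None, :]) ** 2 / (2 * sigma ** 2))
eps /= eps.sum(axis=1, keepdims=True)

H = np.exp(-wrap(theta_0 - 0.0) ** 2 / (2 * sigma ** 2))   # initial packet, equation (15)
H /= H.sum()

def step(H, omega, xi, dt=0.05):
    """Move the activity packet by the drive signal xi * omega (cf. equation (17))."""
    shift = xi * omega * dt * N / (2 * np.pi)       # shift expressed in cell indices
    k = int(np.floor(shift))
    frac = shift - k
    H = (1 - frac) * np.roll(H, k) + frac * np.roll(H, k + 1)
    H = H + eps @ H                                 # local excitatory connection, (18)
    H = np.maximum(H, 0.0)                          # adjust to be greater than 0
    return H / H.sum()                              # normalise (cf. equation (19))

def perceived_angle(H):
    """Population-vector decoding of the packet position on the ring."""
    return float(np.arctan2((H * np.sin(theta_0)).sum(),
                            (H * np.cos(theta_0)).sum()) % (2 * np.pi))

H = step(H, omega=0.4, xi=1.0)           # one update with angular velocity 0.4 rad/s
theta_g = perceived_angle(H)
```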
2.4 Proportional-differential control of the position of the excitatory activity packet
So that the position of the excitatory activity packet on the ring-shaped cell model strictly follows the change of the real head direction angle, a proportional-differential controller is designed to perform closed-loop control of the packet position on the ring. First, the difference between the current real head direction angle θ and the perceived angle θ_g obtained from the ring model is taken as the angle error Δθ, and the output value of the controller is used as the magnitude of the scale factor ξ. The proportional-differential control block diagram of the activity packet position is shown in Fig. 5; the proportional coefficient k_p of the controller is set in the range 1.27 ± 0.3 and the differential coefficient k_d in the range 5.16 ± 0.2.
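The closed-loop control can be sketched as a discrete-time PD controller; k_p and k_d use the centre values of the ranges given above, and the time step and the finite-difference derivative are assumptions.

```python
# Sketch of the PD controller that keeps the packet position locked to the measured angle.
import numpy as np

class PDController:
    def __init__(self, kp=1.27, kd=5.16, dt=0.05):
        self.kp, self.kd, self.dt = kp, kd, dt
        self.prev_err = 0.0

    def update(self, theta, theta_g):
        err = (theta - theta_g + np.pi) % (2 * np.pi) - np.pi   # wrapped angle error
        xi = self.kp * err + self.kd * (err - self.prev_err) / self.dt
        self.prev_err = err
        return xi                                               # scale factor fed to the ring update

pd = PDController()
xi = pd.update(theta=0.45, theta_g=0.41)
```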
3. Predictively coded position cell model
3.1 Cell plate model of the position cell population
The position cells are the basic units by which the rat perceives its own position in the environment: they encode relative spatial position, and their discharge activity provides a continuous, dynamic representation of spatial position. The discharge-rate formula of a position cell is shown in equation (20). The discharge activity of the position cells is the output of the path-integration system. To quantify the discharge rate of the position cells in the actual physical environment, the position cell population is modeled as a two-dimensional cell plate structure: the position cells are arranged on the plate in matrix form, the numbers of rows and columns of the plate are equal, and the lower-left corner of the plate is defined as the origin. The structure of the position cell plate is shown in Fig. 6.
R_pc(r) = exp(−‖r − r_0‖^2 / (2σ^2)) (20)
In formula (20), R_pc(r) is the discharge rate of position cell i at location r, where r = [x, y] are the current position coordinates of the rat in the environment, r_0 = [x_0, y_0] are the coordinates of the center of the discharge field of position cell i in the environment, and σ^2 is the discharge-field adjustment coefficient of the position cell. The excitatory activity on the position cell plate therefore takes the shape of a two-dimensional Gaussian cap; its effect on the position cell plate is shown in Fig. 7.
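A sketch of the position cell plate of equation (20) follows; the plate size L_neuro and the field width σ are illustrative values.

```python
# Sketch: an L x L plate of position cells with a two-dimensional Gaussian-cap packet.
import numpy as np

L_NEURO = 64                 # position cells per row / column
SIGMA_PC = 2.0               # discharge-field adjustment coefficient (illustrative)

def gaussian_plate(center, L=L_NEURO, sigma=SIGMA_PC):
    """Activity packet centred at (x0, y0); origin at the lower-left corner."""
    ii, jj = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    x0, y0 = center
    P = np.exp(-((jj - x0) ** 2 + (ii - y0) ** 2) / (2 * sigma ** 2))
    return P / P.sum()

P = gaussian_plate((L_NEURO / 2, L_NEURO / 2))   # packet initialised at the plate centre
```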
3.2 Predictively coded position cell plates
The excitatory activity of the initial position cell plate is determined by equation (20). The signal that drives the excitatory activity packet to move on the plate at later times depends on the perceived velocity and the perceived angle: the perceived angle encodes the direction in which the activity packet moves on the cell plate, and the perceived velocity encodes how fast it moves. On this basis, the invention proposes a predictively coded position cell plate model: two predictive-coding position cell plates are added to the original position cell plate model, realizing transverse and longitudinal predictive coding of the information on the original plate, respectively. The operating mechanism of the predictively coded position cell plates is shown in Fig. 8, and the main steps are as follows:
Step 1: The information of the original position cells is first transmitted to the predictive-coding position cell plates.
Step 2: The predictive-coding position cell plates determine the transverse and longitudinal directions in which the excitation information is transmitted on the plates according to the perceived angle. The specific operating rule is as follows: after the transverse predictive-coding position cell plate receives the angle information, it judges the direction of lateral movement. When leftward movement is judged, every position cell on the transverse predictive-coding plate transmits its own discharge information to the position cell on its left, and the position cells on the left boundary transmit their discharge information to the position cells on the right boundary; when rightward movement is judged, every position cell on the transverse predictive-coding plate transmits its own discharge information to the position cell on its right, and the position cells on the right boundary transmit their discharge information to the position cells on the left boundary. The longitudinal predictive-coding position cell plate operates according to the analogous rule. The mathematical expression of this operating mechanism is as follows:
Figure RE-GDA0002317645010000111
Figure RE-GDA0002317645010000112
In formulas (21) and (22), i and j are the row and column indices on the cell plate, and P_(i,j) is the discharge rate of the position cell with coordinates (i, j) on the original position cell plate;
P^T_(i,j) and P^L_(i,j) are the discharge rates of the position cells with coordinates (i, j) on the transverse and longitudinal predictive-coding position cell plates, respectively; L_neuro is the number of position cells in each row or column of the position cell plate, and θ_g is the perceived angle obtained from the one-dimensional ring model of head direction cells.
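The transverse and longitudinal predictive coding of equations (21)-(22) can be sketched as wrap-around shifts of the original plate; the mapping of cos θ_g ≥ 0 to a rightward shift and sin θ_g ≥ 0 to an upward shift is one plausible reading of the verbal rule and is an assumption.

```python
# Sketch (assumption): predictive-coding plates as wrap-around shifts of the original plate.
import numpy as np

def predictive_plates(P, theta_g):
    """P: original plate, rows = y (axis 0), columns = x (axis 1)."""
    dx = 1 if np.cos(theta_g) >= 0 else -1      # lateral transmission direction
    dy = 1 if np.sin(theta_g) >= 0 else -1      # longitudinal transmission direction
    P_T = np.roll(P, dx, axis=1)                # transverse predictive-coding plate
    P_L = np.roll(P, dy, axis=0)                # longitudinal predictive-coding plate
    return P_T, P_L
```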
3.3 Excitation replacement of the original position cells
After the predictive coding has been executed, the position information on the predictive-coding position cell plates must be transmitted back to the original position cell plate so that the intelligent mobile robot can know its position in the environment. This excitation transmission from the predictive-coding plates to the original plate is realized through neural-network connections whose weights are jointly determined by the perceived velocity V_g and the perceived angle θ_g. Because the predictive-coding position cell plates have the same dimensions as the original position cell plate, the network connects position cells at the same coordinates. The specific connections are as follows: the connection weight for excitation transmitted from a transverse predictive-coding position cell to the original position cell is V_g·cos θ_g, the connection weight for excitation transmitted from a longitudinal predictive-coding position cell to the original position cell is V_g·sin θ_g, and each original position cell also transmits excitation to itself with connection weight 1 − V_g·sin θ_g − V_g·cos θ_g. The mathematical expression of the excitation transmission is as follows:
P_(i,j)^{t+1} = (1 − V_g·sin θ_g − V_g·cos θ_g)·P_(i,j)^t + V_g·cos θ_g·P^T_(i,j) + V_g·sin θ_g·P^L_(i,j) (23)
In formula (23), P_(i,j)^t is the discharge rate of the original position cell at time t and P_(i,j)^{t+1} is its discharge rate at the next time step. After the excitation of all original position cells has been transmitted, the discharge rates of all original position cells must again be adjusted to be greater than 0 and then normalized.
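The excitation replacement of equation (23) can be sketched as follows; the rectification and normalization follow the text, and it is assumed that V_g is scaled so that the self-connection weight stays non-negative for typical speeds.

```python
# Sketch of the excitation replacement: weighted sum of the original and shifted plates.
import numpy as np

def excitation_replacement(P, P_T, P_L, V_g, theta_g):
    w_t = V_g * np.cos(theta_g)                 # weight of the transverse plate
    w_l = V_g * np.sin(theta_g)                 # weight of the longitudinal plate
    P_next = (1.0 - w_t - w_l) * P + w_t * P_T + w_l * P_L
    P_next = np.maximum(P_next, 0.0)            # adjust to be greater than 0
    return P_next / P_next.sum()                # normalise
```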
4. Resolving the position of the robot
4.1 Obtaining the coordinates of the excitatory activity packet from the position cell plate
Each position cell has a single discharge field, and the discharge activity of the position cells provides the rat with the information by which it perceives its own position in the environment. Based on this physiological finding, the intelligent mobile robot of the invention takes the coordinates of the excitatory activity packet on the original position cell plate as its current position information. The coordinates (P_X, P_Y) of the excitatory activity packet on the position cell plate are calculated as follows:
Figure RE-GDA0002317645010000126
In formula (24), L_neuro is the number of position cells in each row or column of the position cell plate, i and j are the row and column indices on the cell plate, and P_(i,j) is the discharge rate of the position cell with coordinates (i, j) on the original position cell plate.
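Because equation (24) is a read-out of the packet position, the activity-weighted centroid below is an assumed decoder; near the plate boundary a circular (population-vector) centroid would be needed, since the packet wraps around.

```python
# Sketch (assumption): decoding the packet coordinates (P_X, P_Y) as an activity-weighted centroid.
import numpy as np

def packet_coordinates(P):
    ii, jj = np.meshgrid(np.arange(P.shape[0]), np.arange(P.shape[1]), indexing="ij")
    total = P.sum()
    P_X = float((jj * P).sum() / total)      # column index -> abscissa
    P_Y = float((ii * P).sum() / total)      # row index -> ordinate
    return P_X, P_Y
```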
4.2 periodic encoding of position coordinates
Physiological studies have shown that when a rat enters a new environment it can quickly establish place fields corresponding to that environment; that is, the rat can encode a large spatial region with a limited number of position cells in the hippocampus. On this basis, the invention uses periodic coding of the position cell plate to achieve position cognition over a large-scale space. The robot starts from its initial position, and in the initial state the excitatory activity packet is at the center of the position cell plate, i.e. (P_X, P_Y) = (L_neuro/2, L_neuro/2); the initial position coordinate of the robot in the environment is then (X, Y) = (β(P_X − L_neuro/2), β(P_Y − L_neuro/2)), where β is the proportionality coefficient that converts coordinates on the cell plate into real position coordinates. β is usually set to 5.5 ± 0.8 in indoor environments and to 14 ± 2.0 in outdoor environments. According to the activity characteristics of the predictively coded position cell plates, the excitatory activity packet moves periodically and continuously on the original position cell plate: when it leaves one boundary of the plate it re-enters from the opposite boundary. On this basis, the invention provides a periodic coding method for the position coordinates, which realizes position perception of the intelligent mobile robot over a large spatial region with a limited number of position cells; its mathematical expression is as follows:
Figure RE-GDA0002317645010000131
In formula (25), X_t and Y_t are the position coordinates of the robot in the environment at time t, and X_{t+1} and Y_{t+1} are its position coordinates at time t+1;
P_X^t and P_X^{t+1} are the abscissas, and P_Y^t and P_Y^{t+1} the ordinates, of the excitatory activity packet on the cell plate at times t and t+1, and L_neuro is the number of position cells in each row or column of the position cell plate. Through this periodic coding calculation of the position coordinates, position cognition of the robot in a large-scale space can be realized.
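The periodic coding of equation (25) can be sketched as an unwrapping of the packet displacement; the half-plate threshold used to detect a wrap and the indoor value of β are assumptions.

```python
# Sketch (assumption): accumulating world coordinates while the packet wraps around the plate.
L_NEURO = 64
BETA = 5.5                    # cell-plate-to-world scale (indoor setting)

def unwrap(delta, L=L_NEURO):
    if delta > L / 2:
        return delta - L      # packet wrapped across one boundary
    if delta < -L / 2:
        return delta + L      # packet wrapped across the opposite boundary
    return delta

def update_position(X_t, Y_t, p_prev, p_next):
    (px0, py0), (px1, py1) = p_prev, p_next
    X_t1 = X_t + BETA * unwrap(px1 - px0)
    Y_t1 = Y_t + BETA * unwrap(py1 - py0)
    return X_t1, Y_t1

X, Y = update_position(0.0, 0.0, (32.0, 32.0), (33.2, 31.5))
```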

Claims (5)

1. A method for recognizing the motion state and position of an intelligent mobile robot based on the cognitive mechanism of the rat brain hippocampus, characterized in that: first, image information of the environment is acquired by a camera, the angle and direction information of the robot is acquired by a gyroscope and an encoder, and the information is transmitted to a CPU (central processing unit); combining the visual information, a perceived-velocity solving method based on velocity cells and visual information is proposed to obtain the perceived velocity of the robot; then, the discharge mechanism of head direction cells is simulated with a one-dimensional, ring-connected cell model, and the angle information acquired by the gyroscope is input into the head direction cell model, so that the robot acquires its current perceived angle information in a bionic manner; then, the perceived velocity information and the perceived angle information are input into the predictively coded neural network model of the position cells, driving the excitatory activity packet to move on the position cell plate, and the coordinates of the excitatory activity packet on the position cell plate are acquired by analyzing the activity of the plate, so that the robot is localized in the environment and the environment-cognition function of the intelligent mobile robot is completed;
the specific working process is as follows:
s1 acquisition of perception velocity
S1.1, acquiring an image in the advancing direction of the current robot through a camera;
s1.2, converting the acquired image into a gray image, and acquiring an optical flow field between two frames of images by using a Lucas-Kanade optical flow field calculation method;
s1.3, resolving the component size of all optical flow vectors in the optical flow field in the vertical direction of the image
S1.4, training the BP neural network by using the component size of all the optical flow vectors in the vertical direction of the image as the input of the BP neural network and the advancing speed of the robot acquired by a current encoder as the output of the BP neural network;
s1.5, taking the advancing speed of the robot obtained by a current encoder as the input of a speed cell model, and carrying out selection and weighting processing on the visual perception speed output by the BP neural network and the accurate speed output by the speed cell model to obtain the perception speed information of the robot;
s2 acquisition of Angle information
S2.1, acquiring angle information of the current advancing direction of the robot through a gyroscope, and calculating the difference value of the angle information measured in two times before and after as the increment of the direction angle;
s2.2, using the increment of the direction angle as the input of the one-dimensional ring-shaped head direction cell model, so that the excitatory activity packet of the model moves as the increments of the direction angle are input;
s2.3, in order to keep the position of the excitatory activity packet on the ring-shaped head direction cell model consistent with the angle information obtained by the current encoder, designing a proportional-differential controller and performing closed-loop control of the position of the activity packet;
s2.4, acquiring the position of the excitatory activity packet on the ring-shaped head direction cell model through an algorithm to obtain the perceived angle information of the robot;
S3 movement of the excitatory activity packet on the position cell plate
S3.1, initializing the exciting activities of cells at all positions on the position cell plate, and arranging the exciting activities in a Gaussian cap shape at the center of the position cell plate;
s3.2, transmitting the excitation activity on the cell plate at the current position to the position cell plate with the predictive coding according to the input perception angle information;
s3.3, adjusting the connection weight of the predictive coding neural network model according to the input perception speed information; transmitting excitation information on the predictive coding position cell plate to the original position cell plate again through the connection weight;
s4 resolving the position of the robot
S4.1, acquiring coordinates of the exciting activity bag on the cell plate at the original position, and converting the coordinates into the position of the robot in the environment;
and S4.2, determining whether the real position of the robot in the environment needs to be periodically coded according to the variation of the coordinates of the two times, so as to realize the position cognition of the bionic robot in a large-scale space.
2. The intelligent mobile robot motion state and position cognition method based on the rat brain hippocampus cognitive mechanism according to claim 1, characterized in that: a perceived-velocity solving method based on velocity cells and visual information is provided, in which the visual velocity perception is based on the optical flow method; the specific flow is as follows;
calculating an optical flow field between continuous images by using a Lucas-Kanade optical flow algorithm, and solving an optical flow vector of the images as follows:
v = (A^T A)^{-1} A^T b (1)
[V_x, V_y]^T = [ Σ_i I_x(q_i)^2  Σ_i I_x(q_i)I_y(q_i) ; Σ_i I_x(q_i)I_y(q_i)  Σ_i I_y(q_i)^2 ]^{-1} · [ -Σ_i I_x(q_i)I_t(q_i) , -Σ_i I_y(q_i)I_t(q_i) ]^T (2)
where I_x(q_i), I_y(q_i) and I_t(q_i) are the image differences in the x, y and t directions at the pixels q_i of the local window, A is the matrix of the spatial differences, and b is the vector of the negated temporal differences;
taking the absolute value magnitude of the optical flow vector with respect to the component in the vertical direction of the image, the expression thereof is as follows:
|V_yi| = |V_i| · sin(arctan(|V_yi| / |V_xi|)) (3)
|V_i| = sqrt(V_xi^2 + V_yi^2) (4)
designing a multilayer feed-forward back-propagation neural network (BP neural network) as the algorithm for estimating the linear velocity from vision; taking the absolute magnitudes |V_y| of the optical flow vectors' components in the vertical direction of the image as the input samples of the BP neural network and the linear velocity obtained by the encoder when each frame is captured as the output samples, and training the BP neural network; setting the resolution of the optical flow field calculated with the Lucas-Kanade optical flow algorithm to 10 × 13, so that the corresponding BP neural network has 130 inputs and 1 output; setting the number of hidden-layer neurons to 9, the number of iterations to 100, the learning rate of the neural network to 0.02, and the mean-square-error cut-off threshold to 0.004;
proprioceptive velocity perception based on the velocity cell model:
the mathematical model by which the velocity cells encode velocity information is as follows:
let the first-order linear equation of the velocity cell model be:
F_i = A_i1 (V + V_iT) + A_i2 (5)
in formula (5), F_i is the discharge rate of the i-th velocity cell, A_i1 is the perception gain of the velocity cell, A_i2 is the offset of the perceived velocity of the velocity cell, and V_iT is a perturbation of the perceived velocity of the velocity cell; to make the model realistic while keeping it accurate, A_i1, A_i2 and V_iT are set to be normally distributed, where A_i1 follows a normal distribution with location parameter 1.8 and variance 0.1, A_i2 a normal distribution with location parameter 0.5 and variance 0.05, and V_iT a normal distribution with location parameter 0 and variance 0.1, and the number of velocity cells is set to 20;
then, the discharge rate of all the velocity cells is normalized:
F'_i = (F_i − F_min) / (F_max − F_min) (6)
in formula (6), F_max is the maximum and F_min the minimum of the discharge rates of all velocity cells, and F'_i is the discharge rate of the i-th velocity cell after normalization; to solve for the mathematical expectation E of the velocity-cell discharge rates, the average F'_avg of the discharge rates of all velocity cells is found first, the absolute value of the difference between each velocity cell's discharge rate and the average is then calculated, and the mathematical expectation E of the velocity-cell discharge rate, i.e. the weighted average, is solved according to formula (7);
Figure FDA0002203129300000034
solving the perceived velocity:
after the mathematical expectation E of the velocity-cell discharge rate and the output E_out of the BP neural network have been determined, E and E_out are first transformed to the same dimension, giving
E' and E'_out; the two are then integrated to obtain the final perceived velocity, and the perceived velocity V_g is solved with the following formula:
V_g = E',  if |E' − E'_out| > TH
V_g = α·E' + (1 − α)·E'_out,  if |E' − E'_out| ≤ TH (8)
in formula (8), α is a weight coefficient and TH an adjustment threshold; α is taken as 0.8-0.9 when the robot explores an indoor environment and 0.75-0.85 when it explores an outdoor environment, and TH is taken as 0.1-0.2 when the robot explores an indoor environment and 0.9-1.5 when it explores an outdoor environment; when the difference between E' and E'_out is greater than TH, it is judged that visual contrast has caused the self-velocity to be misestimated, and the current perceived velocity is taken as E'; when the difference between E' and E'_out is less than TH, it is judged that the current visually perceived velocity is estimated correctly, and the weighted average of E' and E'_out obtained with the weight coefficient α is taken as the current perceived velocity V_g.
3. The intelligent robot motion state and position cognition method based on the rat brain hippocampus cognitive mechanism according to claim 1, characterized in that: the head direction cell model is modeled as a one-dimensional, ring-connected cell model, and the perceived angle is acquired by obtaining the position of the excitatory activity packet that moves smoothly on the one-dimensional ring model; the controller is designed in the form of proportional-differential control;
movement of the excitatory activity packet on the one-dimensional ring of head direction cells:
the movement of the activity packet is accomplished by excitation transfer between the cells, and the driving formula of the excitation transfer is as follows:
Figure FDA0002203129300000049
in formula (9), the drive signal that moves the excitatory activity packet is the combined action of a scale factor ξ, whose magnitude is determined by the output of the closed-loop controller mentioned below, and the angular velocity ω of the head direction change; the remaining terms are the discharge rates of the i-th head direction cell at times t and t+1 and the discharge rates of the (i−1)-th and (i+1)-th head direction cells at time t+1;
the change ΔH of the head direction cells caused by the local excitatory connections is given by the following formula:
ΔH_i = Σ_{j=1}^{n} ε_{ij} · H_j^t (10)
in formula (10), n is the number of head direction cells in the one-dimensional ring model, ε is the matrix of connection weights between the head direction cells, and H_j^t is the discharge rate of the j-th head direction cell at time t; the size of each element of ε as a function of its row and column indices obeys a Gaussian distribution; after the excitation has been transferred, the discharge rates of all head direction cells must be adjusted to be greater than 0 and then normalized, the mathematical expression of which is as follows:
Figure FDA0002203129300000051
designing a proportional-differential controller to perform closed-loop control of the position of the excitatory activity packet on the ring-shaped cell model; first, the difference between the current real head direction angle θ and the perceived angle θ_g obtained from the ring-shaped cell model is taken as the angle error Δθ, and through the proportional-differential control the output value of the controller is used as the magnitude of the scale factor ξ; the proportional coefficient k_p of the controller is set in the range 1.27 ± 0.3, and the differential coefficient k_d is set in the range 5.16 ± 0.2.
4. The intelligent mobile robot motion state and position cognition method based on the rat brain hippocampus cognitive mechanism according to claim 1, characterized in that: a predictive-coding position cell model is proposed;
Predictive-coding position cell plates:
On the basis of the original position cell plate model, two predictive-coding position cell plates are added to realize, respectively, the transverse and the longitudinal predictive coding of the information on the original position cell plate; the method comprises the following steps:
Step 1: first, the information of the original position cells is transmitted to the predictive-coding position cell plates;
Step 2: the predictive-coding position cell plates determine the transverse and longitudinal directions in which excitation information is transmitted on the cell plate according to the magnitude of the perception angle; the working mechanism is expressed by formulas (12) and (13) [given only as images in the source]. In formulas (12) and (13), i and j are the row and column indices on the cell plate, P_(i,j) is the discharge rate of the cell at coordinate (i, j) on the original position cell plate, and P_(i,j)^x and P_(i,j)^y are the discharge rates of the cells at coordinate (i, j) on the transverse and longitudinal predictive-coding position cell plates, respectively; L_neuro is the number of position cells in each row or column of the position cell plate, and θ_g is the perception angle obtained from the one-dimensional ring head-direction cell model.
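Formulas (12) and (13) appear only as images in the source, so the sketch below is an assumed illustration rather than the patented expressions: it realizes direction-dependent excitation transfer on the two predictive-coding plates as a one-cell circular shift whose direction follows the signs of cosθ_g and sinθ_g.

```python
import numpy as np

def predictive_coding_plates(p, theta_g):
    """Assumed illustration of the transverse (x) and longitudinal (y)
    predictive-coding position cell plates.

    p       : 2-D array of discharge rates on the original position cell plate
    theta_g : perception angle from the ring head-direction cell model
    """
    dx = 1 if np.cos(theta_g) >= 0 else -1   # transverse transfer direction
    dy = 1 if np.sin(theta_g) >= 0 else -1   # longitudinal transfer direction
    p_x = np.roll(p, dx, axis=1)             # transverse predictive-coding plate
    p_y = np.roll(p, dy, axis=0)             # longitudinal predictive-coding plate
    return p_x, p_y
```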
Excitation reset of the original position cell plate:
The excitation transmission from the predictive-coding position cell plates back to the original position cell plate is realized through a neural-network connection whose weights are jointly determined by the perception speed V_g and the perception angle θ_g. Because the predictive-coding position cell plates have the same dimensions as the original position cell plate, the network connects position cells located at the same coordinates. Specifically, the connection weight for excitation transmitted from a transverse predictive-coding position cell to the original position cell is V_g cosθ_g, the connection weight for excitation transmitted from a longitudinal predictive-coding position cell to the original position cell is V_g sinθ_g, and each original position cell also transmits excitation to itself with connection weight 1 - V_g sinθ_g - V_g cosθ_g. The excitation transmission is therefore expressed as formula (14):
P_(i,j)^(t+1) = V_g cosθ_g · P_(i,j)^x + V_g sinθ_g · P_(i,j)^y + (1 - V_g sinθ_g - V_g cosθ_g) · P_(i,j)^t    (14)
In formula (14), P_(i,j)^t is the discharge rate of the original position cell at time t and P_(i,j)^(t+1) is its discharge rate at the next time step. After the excitation of all original position cells has been transmitted, the discharge rates of all original position cells must again be adjusted to be greater than 0 and then normalized.
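The excitation reset of the original plate, by contrast, is fully specified by the connection weights stated in the claim; the sketch below applies them at each coordinate and then performs the rectify-and-normalize step (the small positive floor is an assumption).

```python
import numpy as np

def reset_original_plate(p, p_x, p_y, v_g, theta_g):
    """Excitation transfer from the predictive-coding plates back to the
    original position cell plate using the same-coordinate weights of
    formula (14), followed by rectification and normalization."""
    w_x = v_g * np.cos(theta_g)          # transverse  -> original weight
    w_y = v_g * np.sin(theta_g)          # longitudinal -> original weight
    w_self = 1.0 - w_x - w_y             # original    -> original weight
    p_next = w_x * p_x + w_y * p_y + w_self * p
    # Adjust all original position cells to be greater than 0, then normalize.
    p_next = np.maximum(p_next, 1e-12)
    return p_next / p_next.sum()
```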
5. The intelligent mobile robot motion state and position cognition method based on the rat brain hippocampus cognitive mechanism according to claim 1, characterized in that: the position of the robot in the environment is calculated by acquiring the coordinates of the excitatory activity packet on the position cell plate;
Obtaining the coordinates of the excitatory activity packet from the position cell plate:
The intelligent mobile robot takes the coordinates of the excitatory activity packet on the original position cell plate as its current position information; the coordinates (P_X, P_Y) of the excitatory activity packet on the position cell plate are calculated by formula (15) [given only as an image in the source]. In formula (15), L_neuro is the number of position cells in each row or column of the position cell plate, i and j are the row and column indices on the cell plate, and P_(i,j) is the discharge rate of the cell at coordinate (i, j) on the original position cell plate;
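Formula (15) is given only as an image, so the decoding below is an assumed reading: the packet coordinates (P_X, P_Y) are taken as the discharge-rate-weighted centroid of the plate. The patented expression may differ (for example, it may use a circular mean to respect the periodic boundary).

```python
import numpy as np

def packet_coordinates(p):
    """Assumed decoding of the activity-packet coordinates (P_X, P_Y) from
    the original position cell plate as a rate-weighted centroid.

    p : 2-D array of shape (L_neuro, L_neuro) of position-cell discharge rates
    """
    rows, cols = np.indices(p.shape)
    total = p.sum()
    p_x = (cols * p).sum() / total   # abscissa of the packet, P_X
    p_y = (rows * p).sum() / total   # ordinate of the packet, P_Y
    return p_x, p_y
```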
periodic encoding of position coordinates
Position cognition in a large-scale space is realized through the periodic coding of the position cell plate. At the start of the robot's movement, the excitatory activity packet is located at the center of the position cell plate, i.e. (P_X, P_Y) = (L_neuro/2, L_neuro/2), at which point the initial position coordinates of the robot in the environment are (X, Y) = (β(P_X - L_neuro/2), β(P_Y - L_neuro/2)), where β is the proportionality coefficient converting coordinates on the cell plate into real position coordinates; β is set to 5.5 +/- 0.8 in indoor environments and to 14 +/- 2.0 in outdoor environments. According to the activity characteristics of the predictive-coding position cell plates, the excitatory activity packet moves periodically and continuously on the original position cell plate, i.e. when the packet leaves across one boundary of the cell plate it re-enters from the opposite boundary. The periodic coding method for the position coordinates enables the intelligent mobile robot to realize position cognition over a large-scale spatial region with a limited number of position cells; its mathematical expression is formula (16):
[Formula (16) is given only as an image in the source.]
In formula (16), X_t and Y_t are the position coordinates of the robot in the environment at time t, and X_(t+1) and Y_(t+1) are the position coordinates of the robot in the environment at time t+1; P_X^t and P_X^(t+1) are the abscissas of the excitatory activity packet on the cell plate at times t and t+1, and P_Y^t and P_Y^(t+1) are the corresponding ordinates; L_neuro is the number of position cells in each row or column of the position cell plate. Through the periodic coding calculation of the position coordinates, position cognition of the robot in a large-scale space is realized.
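Formula (16) is likewise given only as an image; the sketch below shows one assumed reading of the periodic coordinate coding, in which the packet's wrap-around displacement on the plate is accumulated into the real-world coordinates through the scale coefficient β.

```python
def wrapped_delta(c_next, c_prev, l_neuro):
    """Displacement of the packet along one axis of a plate with l_neuro
    cells per row/column, taking the periodic boundary into account
    (leaving one edge means re-entering from the opposite edge)."""
    d = c_next - c_prev
    if d > l_neuro / 2:
        d -= l_neuro
    elif d < -l_neuro / 2:
        d += l_neuro
    return d

def update_position(x_t, y_t, packet_t, packet_t1, beta, l_neuro):
    """Accumulate the robot's environment coordinates (X, Y) from the motion
    of the activity packet between times t and t+1 (assumed form of the
    periodic coding; beta is the plate-to-world scale coefficient)."""
    dx = wrapped_delta(packet_t1[0], packet_t[0], l_neuro)
    dy = wrapped_delta(packet_t1[1], packet_t[1], l_neuro)
    return x_t + beta * dx, y_t + beta * dy
```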
CN201910872030.4A 2019-09-16 2019-09-16 Intelligent mobile robot motion state and position cognition method based on rat brain hippocampus cognition mechanism Active CN110764498B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910872030.4A CN110764498B (en) 2019-09-16 2019-09-16 Intelligent mobile robot motion state and position cognition method based on rat brain hippocampus cognition mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910872030.4A CN110764498B (en) 2019-09-16 2019-09-16 Intelligent mobile robot motion state and position cognition method based on rat brain hippocampus cognition mechanism

Publications (2)

Publication Number Publication Date
CN110764498A CN110764498A (en) 2020-02-07
CN110764498B true CN110764498B (en) 2022-09-09

Family

ID=69329867

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910872030.4A Active CN110764498B (en) 2019-09-16 2019-09-16 Intelligent mobile robot motion state and position cognition method based on rat brain hippocampus cognition mechanism

Country Status (1)

Country Link
CN (1) CN110764498B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111552298B (en) * 2020-05-26 2023-04-25 北京工业大学 Bionic positioning method based on mouse brain hippocampus space cells
CN112525194B (en) * 2020-10-28 2023-11-03 北京工业大学 Cognitive navigation method based on in vivo source information and exogenous information of sea horse-striatum
CN113657574A (en) * 2021-07-28 2021-11-16 哈尔滨工业大学 Construction method and system of bionic space cognitive model
WO2023184223A1 (en) * 2022-03-30 2023-10-05 中国电子科技集团公司信息科学研究院 Robot autonomous positioning method based on brain-inspired space coding mechanism and apparatus

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013036965A2 * 2011-09-09 2013-03-14 The Regents Of The University Of California In vivo visualization and control of pathological changes in neural circuits

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106125730A (en) * 2016-07-10 2016-11-16 北京工业大学 A kind of robot navigation's map constructing method based on Mus cerebral hippocampal spatial cell
CN106949896A (en) * 2017-05-14 2017-07-14 北京工业大学 A kind of situation awareness map structuring and air navigation aid based on mouse cerebral hippocampal
CN109668566A (en) * 2018-12-05 2019-04-23 大连理工大学 Robot scene cognition map construction and navigation method based on mouse brain positioning cells

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Robot map building and path planning method imitating the rat-brain hippocampus; Zou Qiang et al.; Journal of Huazhong University of Science and Technology (Natural Science Edition); 2018-12-20 (No. 12); full text *
Cognitive mechanism of the rat hippocampal formation and its application in robot navigation; Yu Naigong et al.; Journal of Beijing University of Technology; 2017-03-30; Vol. 43 (No. 03); pp. 434-442 *

Also Published As

Publication number Publication date
CN110764498A (en) 2020-02-07

Similar Documents

Publication Publication Date Title
CN110764498B (en) Intelligent mobile robot motion state and position cognition method based on rat brain hippocampus cognition mechanism
CN109668566B (en) Robot scene cognition map construction and navigation method based on mouse brain positioning cells
CN106949896B (en) Scene cognition map construction and navigation method based on mouse brain hippocampus
Kaiser et al. Towards a framework for end-to-end control of a simulated vehicle with spiking neural networks
CN112666939B (en) Robot path planning algorithm based on deep reinforcement learning
KR101126774B1 (en) Mobile brain-based device having a simulated nervous system based on the hippocampus
CN112097769B (en) Homing pigeon brain-hippocampus-imitated unmanned aerial vehicle simultaneous positioning and mapping navigation system and method
Huber et al. Using stereo vision to pursue moving agents with a mobile robot
CN109240279B (en) Robot navigation method based on visual perception and spatial cognitive neural mechanism
CN111044031B (en) Cognitive map construction method based on mouse brain hippocampus information transfer mechanism
CN111552298B (en) Bionic positioning method based on mouse brain hippocampus space cells
Zhao et al. Closed-loop spiking control on a neuromorphic processor implemented on the iCub
Guan et al. Robot formation control based on internet of things technology platform
CN112405542A (en) Musculoskeletal robot control method and system based on brain inspiring multitask learning
CN114037050B (en) Robot degradation environment obstacle avoidance method based on internal plasticity of pulse neural network
Naya et al. Spiking neural network discovers energy-efficient hexapod motion in deep reinforcement learning
Hu Research on robot fuzzy neural network motion system based on artificial intelligence
Hong et al. Vision-locomotion coordination control for a powered lower-limb prosthesis using fuzzy-based dynamic movement primitives
Antonelo et al. Learning slow features with reservoir computing for biologically-inspired robot localization
Xing et al. A brain-inspired approach for collision-free movement planning in the small operational space
Schmudderich et al. Estimating object proper motion using optical flow, kinematics, and depth information
Gaudiano et al. Adaptive vector integration to endpoint: Self-organizing neural circuits for control of planned movement trajectories
Stoelen et al. Adaptive collision-limitation behavior for an assistive manipulator
Ball et al. A navigating rat animat
Han Action Planning and Design of Humanoid Robot Based on Sports Analysis in Digital Economy Era

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant