CN114812565B - Dynamic navigation method based on artificial intelligence network - Google Patents

Dynamic navigation method based on artificial intelligence network

Info

Publication number
CN114812565B
CN114812565B
Authority
CN
China
Prior art keywords
unit
dimensional
network
head orientation
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210718568.1A
Other languages
Chinese (zh)
Other versions
CN114812565A (en)
Inventor
刘杨 (Liu Yang)
姜荣坤 (Jiang Rongkun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN202210718568.1A
Publication of CN114812565A
Application granted
Publication of CN114812565B
Legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a dynamic navigation method based on an artificial intelligence network. A navigation framework is constructed with a continuous attractor network, and the dynamic position and attitude are estimated through a grid unit, a head orientation unit and a visual perception model. Three-dimensional position and attitude information is output in combination with an empirical environment model, which is corrected and updated according to the observation information of the grid unit network, the head orientation unit network and the visual unit network. A long short-term memory network is constructed to predict the position and attitude at the next moment from the historically obtained observations of the grid units, head orientation units and visual units; the comparison error is corrected by adjusting the network parameters, and three-dimensional navigation output is realized with the empirical environment model. The method can be applied to complex unknown environments, makes full use of multi-source navigation observation information, realizes dynamic and accurate navigation when part of the prior information is missing, and provides effective technical support for intelligent information acquisition and perception.

Description

Dynamic navigation method based on artificial intelligence network
Technical Field
The invention belongs to the field of artificial intelligence and navigation, and particularly relates to a dynamic navigation method based on an artificial intelligence network.
Background
The discovery of "grid cells" in 2005, later recognized by the Nobel Prize in Physiology or Medicine, revealed that grid cells in the biological brain provide a multi-scale periodic spatial representation, are key to spatial coding in the brain, and help organisms perform path planning and integration, explaining much of the powerful navigation capability of humans and most animals. With the rapid development of deep learning in recent years, simulating the grid cells, head direction cells and visual cells of the biological brain to realize dynamic position and attitude estimation in experienced and unexperienced environments has become a research hotspot in the related fields. Biomimetic navigation can realize path planning and accurate navigation in complex environments, can further be embedded in various unmanned intelligent systems, and is one of the future development directions of artificial intelligence. Therefore, artificial intelligence modules and systems that use multilayer deep neural networks to construct biomimetic navigation units, so as to achieve navigation capability similar to that of organisms, have important scientific research and engineering value.
Disclosure of Invention
The technical problem to be solved by the invention is as follows. The method uses a continuous attractor network to construct a navigation framework, estimates the dynamic position and attitude through grid unit, head orientation unit and visual perception modeling, and realizes three-dimensional navigation output with an empirical environment model. Compared with traditional methods, the method can be applied to complex unknown environments, makes full use of multi-source navigation observations such as visual perception, inertial sensing, odometry, satellite navigation and wireless networks, realizes dynamic and accurate navigation when part of the prior information is missing, and provides effective technical support for intelligent information acquisition and perception.
The technical scheme of the invention is as follows: a dynamic navigation method based on an artificial intelligence network, implemented in the following steps.
Step (1): determine the model and parameters of the continuous attractor network; construct the head orientation unit model and the local view unit model with a two-dimensional continuous attractor network, and construct the grid unit model with a three-dimensional continuous attractor network. The dynamical model of the continuous attractor network is described as:
[equation image]
where s_p(t) is the firing activation rate of neuron p; τ_E is the neuron firing response time constant; γ_E is a constant representing the inhibition offset; ω_pq is the connection weight between neuron p and neuron q, inversely proportional to the distance between the two neurons; ω_EE is a constant representing the recurrent excitatory stimulus; u(t) is the global dynamic inhibition function; ω_EI is the regulation factor of global inhibitory excitation; and X_p(t) is the external input to the neuron.
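For illustration only, a minimal Python/NumPy sketch of one update step of such a rate-based continuous attractor network is given below. Because the patent's equation is reproduced only as an image, the exact way the recurrent excitation, the global inhibition u(t) and the offset γ_E are combined here is an assumption, and all function names and parameter values are illustrative.

import numpy as np

def can_step(s, W, x_ext, dt=0.01, tau_e=0.05, gamma_e=0.1, w_ee=1.0, w_ei=0.5):
    # s: activation rates s_p(t); W: distance-dependent weights omega_pq;
    # x_ext: external inputs X_p(t). The global inhibition u(t) is taken
    # here as the mean population activity (an assumption of this sketch).
    u = s.mean()
    drive = W @ s + w_ee * s - w_ei * u + x_ext - gamma_e
    ds = (-s + np.maximum(drive, 0.0)) / tau_e   # rectified rate dynamics
    return s + dt * ds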
Step (2): the grid unit model represents a three-dimensional spatial coordinate. The construction process is as follows: first, the target activity is updated using an attractor dynamics model with local excitation and global inhibition; second, the movement of the local excitation is realized by three-dimensional path integration combining the translation speed and the rotation speed; finally, when a similar path picture is input, the movement of the target is updated by the local view unit. The local excitation model of the three-dimensional grid unit is described by a weight matrix, implemented by creating an excitatory weight matrix [equation image] through a three-dimensional Gaussian function, where u, v, w represent the distances between cells:
[equation image]
where δx, δy, δz are the variances of the three-dimensional spatial distribution, all constants. The activity change in the three-dimensional grid cells is represented by the matrix [equation image] and is calculated as:
[equation image]
where n_x, n_y, n_z are the three dimensions of this matrix; their relationship with u, v, w is:
[equation image]
Each three-dimensional grid cell suppresses its adjacent cells through a local inhibition function. During inhibition, the inhibitory weight matrix [equation image] is used to update the activity. Local and global inhibition are then computed from the non-negative values of [equation image], specifically:
[equation image]
where φ is the global inhibition function.
The overall activity of the three-dimensional grid cells is eventually normalized to bring all cells back to the same state, expressed as:
[equation image]
where [equation image] denotes a particular grid cell.
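As an illustration of the steps described above (local excitation by a three-dimensional Gaussian weight matrix, local and global inhibition, and normalization), a hedged Python/NumPy sketch follows. The kernel shapes, the global-inhibition constant phi and the use of wrap-around convolution are assumptions of this sketch, since the patent's matrices are given only as images.

import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel_3d(shape, dx, dy, dz):
    # Separable 3D Gaussian over centred cell offsets u, v, w.
    u, v, w = (np.arange(n) - n // 2 for n in shape)
    U, V, W = np.meshgrid(u, v, w, indexing="ij")
    return np.exp(-(U**2 / dx + V**2 / dy + W**2 / dz))

def grid_cell_step(P, k_exc, k_inh, phi=1e-3):
    # Local excitation, local + global inhibition, then normalisation.
    P = P + convolve(P, k_exc, mode="wrap")        # excitation on the 3D torus
    P = P - convolve(P, k_inh, mode="wrap") - phi  # local inhibition + global term
    P = np.maximum(P, 0.0)                         # keep non-negative activity only
    return P / P.sum()                             # normalise total activity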
Three-dimensional path integration maps the activity of a three-dimensional grid cell into the neighbouring cells: at the current head orientation angle θ, the cell activity is mapped onto the x, y plane by the translation speed v and onto the z axis by the height-change speed v_h. The change in unit excitatory activity is calculated as:
[equation image]
where δ_x0, δ_y0, δ_z0 represent the initial variances of the three-dimensional spatial distribution, γ represents the residual, and (l, m, n) indexes the (l, m, n)-th grid cell.
The unit activity is determined by two inputs: one comes from the transmitting unit T_gc, and the other from the residual γ, which is calculated from the minimum fractional compensation values δ_xf, δ_yf, δ_zf, specifically:
[equation image]
[equation image]
where k_x, k_y, k_z are constants in the three-dimensional path integration. γ is calculated as:
[equation image]
[equation image]
where a, b are coefficients.
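A simplified Python/NumPy sketch of this three-dimensional path integration is given below. The patent handles the fractional residual γ by adjusting the Gaussian used for the shifted activity; in this sketch only the integer part of the displacement is applied with np.roll and the residual is returned to the caller, so the constants and the split are illustrative assumptions.

import numpy as np

def path_integrate_grid(P, theta, v, v_h, dt, k_xy=10.0, k_z=10.0):
    # Displacements along x, y (heading theta, speed v) and z (height speed v_h).
    dx = k_xy * v * np.cos(theta) * dt
    dy = k_xy * v * np.sin(theta) * dt
    dz = k_z * v_h * dt
    whole = [int(np.floor(d)) for d in (dx, dy, dz)]
    frac = [d - w for d, w in zip((dx, dy, dz), whole)]   # gamma-like residuals
    P = np.roll(P, shift=whole, axis=(0, 1, 2))           # wrap-around shift on the torus
    return P, frac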
The local view unit is connected with the three-dimensional grid unit and the head orientation unit, and a connection matrix C is used to store the learned relations among the three-dimensional grid unit matrix, the local view unit vector and the head orientation unit matrix. This connection is described using an adjusted Hebbian rule, expressed as:
[equation image]
where τ represents the learning efficiency, V_i denotes the activity of the i-th local view unit, and [equation image] represents the i-th connection at time t in the four-degree-of-freedom pose x, y, z, θ. The resulting activity change in the three-dimensional grid cells and head orientation cells is:
[equation image]
where the constant δ represents the strength of the local visual calibration and n_act is the number of active local view units.
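The adjusted Hebbian coupling between the local view cells and the pose (grid plus head orientation) cells can be sketched in Python/NumPy as below. The layout of the connection matrix C, the weight cap and the injection formula are assumptions made for illustration; only the general co-activity rule and the δ/n_act scaling follow the text above.

import numpy as np

def hebbian_link(C, V, pose_activity, tau=0.01):
    # C: (n_views, n_pose) connection weights; V: local-view activities;
    # pose_activity: flattened joint grid-cell / head-direction activity.
    C += tau * np.outer(V, pose_activity)   # co-active cells strengthen their link
    return np.minimum(C, 1.0)               # saturate the weights (assumed cap)

def view_calibration(C, V, delta=0.05):
    # Activity injected back into the pose cells by the active views.
    active = V > 0
    n_act = max(int(active.sum()), 1)
    return (delta / n_act) * (C[active].T @ V[active])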
Step (3): the head orientation unit model represents the direction information of a specific area. The azimuth information is represented by a multi-layered head orientation unit model in a three-dimensional vertical space, and the head orientation unit is connected to the local view unit for orientation calibration. The activation procedure of the head orientation unit is as follows: first, the target activity is updated using the dynamical model of the multi-dimensional continuous attractor network; second, the head orientation unit formed by the multi-dimensional continuous attractor network performs path integration on the three-dimensional grid unit according to the rotation speed, the height-change speed and the translation speed provided by the visual odometer, obtains the direction-change and height-change outputs, and updates the head orientation accordingly. Finally, as with the grid cells, the movement of the target is updated by the local view when a similar path picture is input.
The activation model of the head orientation unit is implemented by creating an excitatory weight matrix [equation image] through a two-dimensional Gaussian function; the distances between cells in the (h, θ) matrix are denoted by u, v. The weight matrix is calculated as:
[equation image]
where δ_θ and δ_h are two variance constants. The activity change in the head orientation unit is described as:
[equation image]
where n_θ, n_h are the two dimensions of the head orientation unit matrix; their relationship with u, v is described as:
[equation image]
where h represents the height and θ represents the rotation angle.
Each head orientation cell suppresses its adjacent cells through a local inhibition function. Local and global inhibition are computed from the non-negative values of [equation image], described as:
[equation image]
where [equation image] is the weight matrix for local inhibition; the calculation uses the non-negative values of A_hdc. The overall activity of the head orientation cells is eventually normalized to bring all cells back to the same state, described as:
[equation image]
the head orientation unit updates the head orientation by transferring excitatory activations to other adjacent units, through orientation changes and height changes. Speed of change according to direction of rotationω θ And speed of altitude changev h Of a unitThe excitatory activity is mapped into a yaw matrix and an altitude matrix, respectively. The mapped unit excitation activity is determined by two inputs. Two input quantities-one from the transmitting unitT hdc . The other from the residualηBased on a minimum fraction of the compensation valueδ θf δ hf Calculated, amount of change in the unit's excitatory activity
Figure 284876DEST_PATH_IMAGE028
The calculation method and the related calculation method are as follows:
Figure 960708DEST_PATH_IMAGE029
wherein (A) and (B)l,m) Pointing to a particular head-facing unit,δ θ0 ,δ h0 representing the initial variance of the three-dimensional spatial distribution,ηrepresenting a residual matrix.
Figure 74157DEST_PATH_IMAGE030
k θ ,k h Which represents the path integration constant, is,ω θ ,v h indicating angular velocity and altitude change velocity, respectively.
Figure 264967DEST_PATH_IMAGE031
Whereink x ,k y ,k z Is a constant in the three-dimensional path integral,γthe calculation method of (A) is as follows:
Figure 489275DEST_PATH_IMAGE032
Figure 550772DEST_PATH_IMAGE033
in whicha,bAre coefficients.
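A corresponding Python/NumPy sketch of the head orientation unit (a two-dimensional Gaussian kernel over heading and height, plus path integration by yaw rate and climb rate) is shown below; as before, only the integer part of the shift is applied and all constants are illustrative assumptions.

import numpy as np

def hd_kernel(n_theta, n_h, d_theta, d_h):
    # 2D Gaussian excitatory kernel over (heading, height) offsets.
    u = np.arange(n_theta) - n_theta // 2
    v = np.arange(n_h) - n_h // 2
    U, V = np.meshgrid(u, v, indexing="ij")
    return np.exp(-(U**2 / d_theta + V**2 / d_h))

def hd_path_integrate(A, omega_theta, v_h, dt, k_theta=20.0, k_h=10.0):
    # Shift the (heading, height) activity sheet by yaw rate and climb rate;
    # the fractional residual eta handled by the patent is omitted here.
    d_theta = int(round(k_theta * omega_theta * dt))
    d_h = int(round(k_h * v_h * dt))
    return np.roll(A, shift=(d_theta, d_h), axis=(0, 1))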
Step (4): visual perception modeling is driven by the excitatory activity in the local view module, the three-dimensional grid units and the head orientation units. The local view cell information, the three-dimensional grid cell information and the head orientation cell information are associated with one another. The empirical position and attitude information is perceived and estimated from the translational and rotational velocities of the visual odometer. The output of the visual perception module includes self-motion information and visual information, where the visual information is the integral of the inputs of the three-dimensional grid cell network and the head orientation cell network along the path.
The visual perception information can be obtained by an optical camera, where the pixel intensity and the translation speed are calculated as follows:
[equation image]
where μ_xy represents the mean value and δ_xy the standard deviation.
[equation image]
where the constant μ scales the measurement to a physical velocity, S_h represents the amount of translation in the image column dimension, and v_max is the maximum threshold used to filter out measurement errors.
The rotation speed is calculated as follows:
[equation image]
by shifting the two data sets I_i and I_{i+1} by s_h in the column dimension and then calculating the difference in their average intensities, where ω is the width of the picture. The rotation speed θ is obtained by multiplying [equation image] by the constant σ_h, whose value is usually determined empirically.
[equation image]
[equation image]
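The intensity-profile visual odometry described above can be sketched in Python/NumPy as follows: the translation speed is taken from the residual difference between consecutive column-intensity profiles (clipped at v_max), and the rotation is the profile shift that minimises that difference, scaled by an empirical constant σ_h. The scaling constants and window sizes below are assumptions made for illustration.

import numpy as np

def scanline_profile(gray_frame):
    # Column-wise mean intensity profile of a grey-scale frame.
    return gray_frame.mean(axis=0)

def rotation_estimate(prev, curr, sigma_theta=0.05, max_shift=None):
    # Column shift that best aligns the two profiles, scaled by sigma_theta.
    width = prev.size
    max_shift = max_shift or width // 4
    shifts = np.arange(-max_shift, max_shift + 1)
    diffs = [np.abs(np.roll(curr, int(s)) - prev)[max_shift:-max_shift].mean()
             for s in shifts]
    return sigma_theta * float(shifts[int(np.argmin(diffs))])

def translation_estimate(prev, curr, mu=0.1, v_max=1.0):
    # Residual profile difference as a speed proxy, clipped at v_max.
    return float(min(mu * np.abs(curr - prev).mean(), v_max))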
Step (5): according to the output information of the visual perception module in step (4), the three-dimensional position and attitude information is output in combination with the empirical environment model, and the empirical environment model is corrected and updated according to the observation information of the grid units, the head orientation units and the visual units.
Step (6): construct a long short-term memory (LSTM) network, predict the position and attitude at the next moment from the historically obtained observation information of the grid units, head orientation units and visual units, and compare the prediction with the position and attitude estimated at the next moment to obtain a comparison error. The comparison error is calculated as:
[equation image]
that is, the parameters of the grid cells are trained by minimizing the cross entropy [equation image] between the network's place unit prediction [equation image] and the synthesized place unit target [equation image], together with the cross entropy [equation image] between the head direction prediction [equation image] and its target [equation image]. When the value of this error function reaches a minimum, the training is considered complete.
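A hedged PyTorch sketch of this prediction network and its comparison error is given below: an LSTM maps the observation sequence to place-unit and head-direction distributions, and the error is the sum of the two cross-entropies described above. Layer sizes, the observation encoding and the soft-target form are assumptions, not specifications taken from the patent.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseLSTM(nn.Module):
    # LSTM predicting place-unit and head-direction distributions.
    def __init__(self, obs_dim, hidden=128, n_place=256, n_hd=12):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.place_head = nn.Linear(hidden, n_place)   # place-unit logits
        self.hd_head = nn.Linear(hidden, n_hd)         # head-direction logits

    def forward(self, obs_seq):                        # obs_seq: (batch, time, obs_dim)
        h, _ = self.lstm(obs_seq)
        return self.place_head(h), self.hd_head(h)

def comparison_error(place_logits, hd_logits, place_target, hd_target):
    # Sum of the two cross-entropies (soft targets assumed).
    ce_place = -(place_target * F.log_softmax(place_logits, dim=-1)).sum(-1).mean()
    ce_hd = -(hd_target * F.log_softmax(hd_logits, dim=-1)).sum(-1).mean()
    return ce_place + ce_hd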
Step (7): correct the comparison error by adjusting the LSTM network parameters until the comparison error converges to a range that satisfies the output position and attitude accuracy, completing the construction of the dynamic navigation network in the empirical environment.
Compared with the prior art, the invention has the advantages that:
(1) Compared with traditional computation methods, the method (shown in FIG. 1) realizes dynamic estimation of position and attitude in an experienced environment scene by constructing navigation models of the grid units, head orientation units and visual units; it can actively learn the characteristics of the experienced environment scene and construct the three-dimensional scene output based on multi-source sensing in real time, effectively meeting various intelligent perception and navigation application requirements.
(2) Compared with traditional methods, the method fully utilizes the sequence learning capability of the long short-term memory network, realizes effective prediction of position and attitude when navigation observations are missing or incomplete, and can be applied to various navigation scenarios with limited observations.
Drawings
FIG. 1 is a flow chart of an implementation of a dynamic navigation method based on an artificial intelligence network according to the present invention.
Detailed Description
The invention will be described in detail below with reference to the drawings and the detailed description, wherein the described embodiments are only intended to facilitate the understanding of the invention and do not limit the invention in any way.
As shown in FIG. 1, the invention provides a dynamic navigation method based on an artificial intelligence network, which uses a continuous attractor network to construct a navigation framework, estimates the dynamic position and attitude through grid unit, head orientation unit and visual perception modeling, and realizes three-dimensional navigation output with an empirical environment model. Compared with traditional methods, the method can be applied to complex unknown environments (such as strong electromagnetic countermeasures, or GNSS and communication signal denial), makes full use of multi-source navigation observations (visual images, inertial sensing systems and the like), realizes dynamic and accurate navigation when part of the prior information is missing, and provides effective technical support for intelligent information acquisition and perception.
According to an embodiment of the present invention, a dynamic navigation method based on an artificial intelligence network is provided, as shown in fig. 1, including the following steps:
step A, determining a model and parameters of a continuous attractor network, constructing a head orientation unit model by adopting a two-dimensional continuous attractor network, and constructing a three-dimensional grid unit model by adopting a three-dimensional continuous attractor network;
step B, the three-dimensional grid unit model in the step A represents a spatial three-dimensional coordinate, and the construction process comprises: firstly, updating the target activity by using a continuous attractor network dynamical model with local excitation and global inhibition; secondly, realizing the movement of the local excitation through three-dimensional path integration by combining the translation speed and the rotation speed; finally, when a similar path picture is input, updating the movement of the target by a local view unit;
step C, the head orientation unit model in the step A represents the direction information of a preset area; the multi-layered head orientation unit model is used to represent the azimuth information in a three-dimensional vertical space, and the head orientation unit is connected with the local view unit to calibrate the orientation; the activation procedure of the head orientation unit is as follows: firstly, updating the target activity by using the dynamical model of the multi-dimensional continuous attractor network; secondly, the head orientation unit formed by the multi-dimensional continuous attractor network performs path integration on the three-dimensional grid unit network according to the rotation speed, the height-change speed and the translation speed provided by the visual odometer, obtains the direction-change and height-change outputs, and updates the head orientation accordingly; finally, as with the three-dimensional grid cell network, when a similar path picture is input, the movement of the target is updated by the local view;
step D, modeling a visual perception module, which is driven by the excitatory activities in the local view unit, the three-dimensional grid unit and the head orientation unit; the local view unit is associated with the three-dimensional grid unit network and the head orientation unit network; empirical position and attitude information is perceived and estimated through the translation speed and the rotation speed of a visual odometer; and the output information of the visual perception module comprises self-motion information and visual information, wherein the visual information is the integral of the inputs of the three-dimensional grid unit network and the head orientation unit network along the path;
step E, according to the output information of the visual perception module in the step D, outputting three-dimensional position and attitude information in combination with the empirical environment model, and correcting and updating the empirical environment model according to the observation information of the visual odometer and the output information of the local view unit;
step F, constructing a long short-term memory network, predicting the position and attitude at the next moment according to the historically obtained observation information of the three-dimensional grid unit, the head orientation unit and the visual odometer, and comparing the prediction result with the position and attitude estimated at the next moment to obtain a comparison error;
step G, correcting the comparison error by adjusting the long short-term memory network parameters until the comparison error converges to a range meeting the output position and attitude accuracy, completing the construction of the dynamic navigation network under the empirical environment model (a sketch of this overall loop is given below).
As shown in fig. 1, the specific implementation steps are as follows:
step 1, determining models and parameters of a continuous attractor network, constructing a head orientation unit model and a local view unit model by adopting a two-dimensional continuous attractor network, and constructing a grid unit model by adopting a three-dimensional continuous attractor network. Wherein the dynamical model of the continuous attractor network is described as:
[equation image]
where s_p(t) is the firing activation rate of neuron p; τ_E is the neuron firing response time constant; γ_E is a constant representing the inhibition offset; ω_pq is the connection weight between neuron p and neuron q, inversely proportional to the distance between the two neurons; ω_EE is a constant representing the recurrent excitatory stimulus; u(t) is the global dynamic inhibition function; ω_EI is the regulation factor of global inhibitory excitation; and X_p(t) is the external input to the neuron.
Step 2: the grid unit model represents a spatial three-dimensional coordinate. The construction process is as follows: first, the target activity is updated using an attractor dynamics model with local excitation and global inhibition; second, the movement of the local excitation is realized by three-dimensional path integration combining the translation speed and the rotation speed; finally, when a similar path picture is input, the movement of the target is updated by the local view unit. The local excitation model of the three-dimensional grid unit is described by a weight matrix, implemented by creating an excitatory weight matrix [equation image] through a three-dimensional Gaussian function, where u, v, w represent the distances between cells:
[equation image]
where δx, δy, δz are the variances of the three-dimensional spatial distribution, all constants. The activity change in the three-dimensional grid cells is represented by the matrix [equation image] and is calculated as:
[equation image]
where n_x, n_y, n_z are the three dimensions of this matrix; their relationship with u, v, w is:
[equation image]
Each three-dimensional grid cell suppresses its adjacent cells through a local inhibition function. During inhibition, the inhibitory weight matrix [equation image] is used to update the activity. Local and global inhibition are then computed from the non-negative values of [equation image], specifically:
[equation image]
the overall activity of the three-dimensional grid cell is eventually normalized to bring all cells back to the same state, denoted as:
[equation image]
three-dimensional path integration maps the activity of a three-dimensional grid cell into other adjacent cells, the activity of the cell being at the current head orientation angleθBy the speed v of the translation, and the speed of the altitude changev h Mapping onto the x, y plane and z axis, respectively. The units varying in excitatory activityThe calculation method comprises the following steps:
Figure 328924DEST_PATH_IMAGE012
the amount of unit activity is determined by two inputs. Two input quantities, one from the transmitting unitT gc . The other from the residue gamma, which is based on the minimum fraction of the compensation valueδ xf ,δ yf ,δ zf Calculated, specifically described as:
Figure 841945DEST_PATH_IMAGE013
Figure 912669DEST_PATH_IMAGE014
whereink x ,k y ,k z Is a constant in the three-dimensional path integral. γ is calculated as:
Figure 966076DEST_PATH_IMAGE050
Figure 71435DEST_PATH_IMAGE051
the local view unit is connected with the three-dimensional grid unit and the head orientation unit, and the connection matrix C is used for storing the learned relation among the three-dimensional grid unit matrix, the local view unit vector and the head orientation unit matrix. This connection is described using the adjusted hebry law, which is expressed in particular as:
Figure 552095DEST_PATH_IMAGE052
whereinτRepresenting the learning efficiency. The activity changes in the three-dimensional grid cells and the head orientation cells are:
Figure 172432DEST_PATH_IMAGE053
wherein constant isδRepresenting the intensity of the local visual alignment.n act Is the number of active local view elements.
Step 3: the head orientation unit model represents the direction information of a specific area. The azimuth information is represented by a multi-layered head orientation unit model in a three-dimensional vertical space, and the head orientation unit is connected to the local view unit for orientation calibration. The activation procedure of the head orientation unit is as follows: first, the target activity is updated using the dynamical model of the multi-dimensional continuous attractor network; second, the head orientation unit formed by the multi-dimensional continuous attractor network performs path integration on the three-dimensional grid unit according to the rotation speed, the height-change speed and the translation speed provided by the visual odometer, obtains the direction-change and height-change outputs, and updates the head orientation accordingly. Finally, as with the three-dimensional grid cells, the movement of the target is updated by the local view when a similar path picture is input.
The activation model of the head orientation unit is implemented by creating an excitatory weight matrix [equation image] through a two-dimensional Gaussian function; the distances between cells in the (h, θ) matrix are denoted by u, v. The weight matrix is calculated as:
[equation image]
where δ_θ and δ_h are two variance constants. The activity change in the head orientation unit is described as:
[equation image]
where n_θ, n_h are the two dimensions of the head orientation unit matrix; their relationship with u, v is described as:
[equation image]
each head-facing cell can have adjacent cells suppressed by a local suppression function. The local inhibition and the global inhibition are carried out through the processes
Figure 284745DEST_PATH_IMAGE024
The non-negative value of (a) is calculated and can be described as:
Figure 833538DEST_PATH_IMAGE025
wherein
Figure 187158DEST_PATH_IMAGE026
Is a weight matrix for local suppression. By passingA hdc And calculating the non-negative value of the point. The overall head-towards-cell activity will eventually be normalized to bring all cells back to the same state, described as:
Figure 454192DEST_PATH_IMAGE027
the head orientation unit updates the head orientation by transferring excitatory activations to other adjacent units, through orientation changes and height changes. Speed of change according to direction of rotationω θ And speed of altitude changev h The excitation activity of the cell is mapped into a yaw matrix and a height matrix, respectively. The mapped unit excitation activity is composed of twoAnd inputting the decision. Two input quantities-one from the transmitting unitT hdc . The other from the residualηBased on a minimum fraction of the compensation valueδ θf δ hf Calculated, amount of change in the unit's excitatory activity
Figure 980988DEST_PATH_IMAGE028
The calculation method and the related calculation method are as follows:
Figure 384288DEST_PATH_IMAGE054
Figure 908810DEST_PATH_IMAGE055
Figure 663139DEST_PATH_IMAGE056
whereink x ,k y ,k z Is a constant in the three-dimensional path integral.γThe calculation method of (A) is as follows:
Figure 400151DEST_PATH_IMAGE032
Figure 126799DEST_PATH_IMAGE057
and 4, driving the visual perception modeling by exciting activities in the local visual module, the three-dimensional grid unit and the head orientation unit. Each of the local view cell information, the three-dimensional mesh cell information, and the head orientation cell information is associated. The empirical position and attitude information is sensed and estimated by the translation speed and rotation speed of the visual odometer. The output of the visual perception module includes self-motion information and visual information, wherein the visual information is an integration of the three-dimensional mesh cell network with the input of the head-facing cell along the path.
In the visual perception information, the calculation method of the pixel intensity and the translation speed comprises the following steps:
Figure 556643DEST_PATH_IMAGE034
Figure 798268DEST_PATH_IMAGE035
wherein constant isμIs to measure the physical speed.v max In order to filter out measurement errors.
The rotation speed is calculated as follows:
[equation image]
by shifting the two data sets I_i and I_{i+1} by s_h in the column dimension and then calculating the difference in their average intensities, where ω is the width of the picture. The rotation speed θ is obtained by multiplying [equation image] by the constant σ_h, whose value is usually determined empirically.
[equation image]
[equation image]
Step 5: according to the output information of the visual perception module, the three-dimensional position and attitude information is output in combination with the empirical environment model, and the empirical environment model is corrected and updated according to the observation information of the three-dimensional grid units, the head orientation units and the visual units.
Step 6: construct a long short-term memory (LSTM) network, predict the position and attitude at the next moment from the historically obtained observation information of the grid units, head orientation units and visual units, and compare the prediction with the position and attitude estimated at the next moment to obtain a comparison error. The comparison error is calculated as:
[equation image]
that is, the parameters of the three-dimensional grid cells are trained by minimizing the cross entropy [equation image] between the network's place unit prediction [equation image] and the synthesized place unit target [equation image], together with the cross entropy [equation image] between the head direction prediction [equation image] and its target [equation image]. When the value of this error function reaches a minimum, the training is considered complete.
Step 7: correct the comparison error by adjusting the LSTM network parameters until the comparison error converges to a range that satisfies the output position and attitude accuracy, completing the construction of the dynamic navigation network in the empirical environment.
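A minimal PyTorch training loop for this step is sketched below; it reuses the PoseLSTM and comparison_error sketches given with step (6) and assumes a data loader yielding (observation sequence, place target, head-direction target) batches. The optimiser choice, learning rate and convergence tolerance are illustrative assumptions.

import torch

def train_until_converged(model, loader, tol=1e-3, max_epochs=100, lr=1e-3):
    # Adjust the LSTM parameters until the comparison error stops improving.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    prev = float("inf")
    for _ in range(max_epochs):
        total = 0.0
        for obs, place_t, hd_t in loader:
            place_p, hd_p = model(obs)
            loss = comparison_error(place_p, hd_p, place_t, hd_t)
            opt.zero_grad()
            loss.backward()
            opt.step()
            total += float(loss)
        if abs(prev - total) < tol:          # comparison error has converged
            break
        prev = total
    return model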
In summary, the invention provides a dynamic navigation method based on an artificial intelligence network, which uses a continuous attractor network to construct a navigation framework, realizes the estimation of dynamic position and attitude through three-dimensional grid unit, head orientation unit and visual perception modeling, and realizes three-dimensional navigation output using an empirical environment model. Compared with traditional methods, the method can be applied to complex unknown environments, makes full use of multi-source navigation observation information, realizes dynamic and accurate navigation when part of the prior information is missing, and provides effective technical support for intelligent information acquisition and perception.
The above description is only exemplary of the present invention and should not be taken as limiting the scope of the present invention, and any modifications, equivalents, improvements and the like that are within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (5)

1. A dynamic navigation method based on an artificial intelligence network is characterized by comprising the following steps:
step A, determining models and parameters of a continuous attractor network, constructing a head orientation unit model by adopting a two-dimensional continuous attractor network, and constructing a three-dimensional grid unit model by adopting a three-dimensional continuous attractor network;
step B, the three-dimensional grid unit model in the step A represents a spatial three-dimensional coordinate, and the construction process comprises: firstly, updating the target activity by using a continuous attractor network dynamical model with local excitation and global inhibition; secondly, realizing the movement of the local excitation through three-dimensional path integration by combining the translation speed and the rotation speed; finally, when a similar path picture is input, updating the movement of the target by a local view unit;
in the step B, the local excitation model of the three-dimensional grid unit is described by a weight matrix, implemented by creating an excitatory weight matrix [equation image] through a three-dimensional Gaussian function, where u, v, w represent the distances between the three-dimensional grid cells:
[equation image]
where δx, δy, δz are the variances of the three-dimensional spatial distribution, all constants, and the activity change in the three-dimensional grid cells is represented by the matrix [equation image], calculated as follows:
[equation image]
where n_x, n_y, n_z are the three dimensions of this matrix, whose relationship with u, v, w is:
[equation image]
x, y and z represent the three-dimensional coordinate space, i, j, k index the corresponding i, j, k-th [equation image], and mod represents the remainder;
in the step B, each three-dimensional grid unit suppresses its adjacent units through a local inhibition function; during inhibition, the inhibitory weight matrix [equation image] is used to update the activity; local and global inhibition are computed from the non-negative values of [equation image], specifically:
[equation image]
where φ is the global inhibition function;
the overall activity of the three-dimensional grid cells is eventually normalized to bring all cells back to the same state, denoted as:
[equation image]
where x, y, z represent the three-dimensional coordinate space, i, j, k denote the i, j, k-th [equation image], and n_x, n_y, n_z represent the three dimensions of the matrix;
in the step B, the three-dimensional path integration maps the activity of a three-dimensional grid cell into the adjacent cells: at the current head orientation angle θ, the cell activity is mapped onto the x, y plane by the translation speed v and onto the z axis by the height-change speed v_h; the change in unit excitatory activity is calculated as:
[equation image]
where δ_x0, δ_y0, δ_z0 represent the initial variances of the three-dimensional spatial distribution, γ represents the residual, and (l, m, n) indexes the (l, m, n)-th grid cell;
the unit activity is determined by two inputs, one from the transmitting unit T_gc and the other from the residual γ, which is calculated from the minimum fractional compensation values δ_xf, δ_yf, δ_zf, specifically:
[equation image]
[equation image]
where k_x, k_y, k_z are constants in the three-dimensional path integration, and γ is calculated as:
[equation image]
[equation image]
where a, b are coefficients;
in the step B, the local view unit is connected to the three-dimensional grid unit and the head orientation unit; a connection matrix C is used to store the learned relations between the three-dimensional grid unit matrix, the local view unit vector and the head orientation unit matrix, and the connection is described by an adjusted Hebbian rule, specifically expressed as:
[equation image]
where τ represents the learning efficiency, V_i denotes the activity of the i-th local view unit, and [equation image] represents the i-th connection matrix of the three-dimensional grid cells and head orientation cells at time t in the four-degree-of-freedom pose x, y, z, θ; the activity changes in the three-dimensional grid cells and head orientation cells are:
[equation image]
where the constant δ represents the strength of the local visual calibration and n_act is the number of active local view units;
step C, the head orientation unit model in the step A represents the direction information of a preset area; the multi-layered head orientation unit model is used to represent the azimuth angle information in a three-dimensional vertical space, and the head orientation unit is connected with the local view unit to calibrate the azimuth; the activation procedure of the head orientation unit is as follows: firstly, updating the target activity by using the dynamical model of the multi-dimensional continuous attractor network; secondly, the head orientation unit formed by the multi-dimensional continuous attractor network performs path integration on the three-dimensional grid unit network according to the rotation speed, the height-change speed and the translation speed provided by the visual odometer to obtain the direction-change and height-change outputs, and updates the head orientation according to these outputs; finally, as with the three-dimensional grid cell network, when a similar path picture is input, the movement of the target is updated by the local view;
in the step C, the activation model of the head orientation unit is implemented by creating an excitatory weight matrix [equation image] through a two-dimensional Gaussian function; the distances between cells in the (h, θ) matrix are denoted by u, v, and the weight matrix is calculated as:
[equation image]
where δ_θ and δ_h are two variance constants; the activity change in the head orientation unit is described as:
[equation image]
where n_θ, n_h are the two dimensions of the head orientation unit matrix; their relationship with u, v is described as:
[equation image]
where h represents the height and θ represents the rotation angle;
step D, modeling a visual perception module, which is driven by the excitatory activities in the local view unit, the three-dimensional grid unit and the head orientation unit; the local view unit is associated with the three-dimensional grid unit network and the head orientation unit network; empirical position and attitude information is perceived and estimated through the translation speed and the rotation speed of a visual odometer; and the output information of the visual perception module comprises self-motion information and visual information, wherein the visual information is the integral of the inputs of the three-dimensional grid unit network and the head orientation unit network along the path;
in the step D, in the visual perception information, the pixel intensity and the translation speed are calculated as follows:
[equation image]
where μ_xy represents the mean value and δ_xy represents the standard deviation;
[equation image]
where the constant μ scales the measurement to a physical speed, S_h represents the amount of translation in the image column dimension, and v_max is the maximum threshold used to filter out measurement errors;
the rotation speed is calculated as follows:
[equation image]
by shifting the two data sets I_i and I_{i+1} by s_h in the column dimension and then calculating the difference in their average intensities, where ω is the width of the picture; the rotation speed θ is obtained by multiplying [equation image] by the constant σ_h, whose value is determined empirically;
[equation image]
[equation image]
step E, according to the output information of the visual perception module in the step D, outputting three-dimensional position and attitude information in combination with the empirical environment model, and correcting and updating the empirical environment model according to the observation information of the visual odometer and the output information of the local view unit;
step F, constructing a long short-term memory network, predicting the position and attitude at the next moment according to the historically obtained observation information of the three-dimensional grid unit, the head orientation unit and the visual odometer, and comparing the prediction result with the position and attitude estimated at the next moment to obtain a comparison error;
and step G, correcting the comparison error by adjusting the long short-term memory network parameters until the comparison error converges to a range meeting the output position and attitude accuracy, completing the construction of the dynamic navigation network under the empirical environment model.
2. The dynamic navigation method based on the artificial intelligence network as claimed in claim 1, wherein the dynamical model of the continuous attractor network in step A is described as follows:
[equation image]
where s_p(t) is the firing activation rate of neuron p; τ_E is the neuron firing response time constant; γ_E is a constant representing the inhibition offset; ω_pq is the connection weight between neuron p and neuron q, inversely proportional to the distance between the two neurons; ω_EE is a constant representing the recurrent excitatory stimulus; u(t) is the global dynamic inhibition function; ω_EI is the regulation factor of global inhibitory excitation; and X_p(t) is the external input to the neuron.
3. The dynamic navigation method based on the artificial intelligence network as claimed in claim 1, wherein in the step C, each head orientation unit suppresses its adjacent units through a local inhibition function; local and global inhibition are computed from the non-negative values of [equation image], described as:
[equation image]
where [equation image] is the weight matrix for local inhibition and φ represents the global inhibition function, computed from the non-negative values of A_hdc; the overall activity of the head orientation cells is eventually normalized to bring all cells back to the same state, described as:
[equation image]
4. The dynamic navigation method based on the artificial intelligence network as claimed in claim 1, wherein in the step C, the head orientation unit updates the head orientation through orientation changes and height changes by transferring excitatory activation to adjacent units; according to the rotation speed ω_θ and the height-change speed v_h, the excitatory activities of the units are mapped into a yaw matrix and a height matrix, respectively; the mapped unit excitatory activity is determined by two inputs, one from the transmitting unit T_hdc and the other from the residual η, which is calculated from the minimum fractional compensation values δ_θf, δ_hf; the change in the unit's excitatory activity [equation image] and the related quantities are calculated as follows:
[equation image]
where (l, m) points to a particular head orientation unit, δ_θ0, δ_h0 represent the initial variances of the three-dimensional spatial distribution, and η represents the residual matrix;
[equation image]
k_θ, k_h represent the path integration constants, and ω_θ, v_h represent the angular velocity and the height-change velocity, respectively;
[equation image]
where k_x, k_y, k_z are constants in the three-dimensional path integration, and γ is calculated as follows:
[equation image]
[equation image]
where a, b are coefficients.
5. The dynamic navigation method based on the artificial intelligence network as claimed in claim 1, wherein in the step F, the comparison error is calculated as follows:
[equation image]
that is, the parameters of the grid cell network are trained by minimizing the cross entropy [equation image] between the network's place unit prediction [equation image] and the synthesized place unit target [equation image], together with the cross entropy [equation image] between the head direction prediction [equation image] and its target [equation image]; when the value of this error function reaches the minimum, the training is considered complete.
CN202210718568.1A 2022-06-23 2022-06-23 Dynamic navigation method based on artificial intelligence network Active CN114812565B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210718568.1A CN114812565B (en) 2022-06-23 2022-06-23 Dynamic navigation method based on artificial intelligence network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210718568.1A CN114812565B (en) 2022-06-23 2022-06-23 Dynamic navigation method based on artificial intelligence network

Publications (2)

Publication Number Publication Date
CN114812565A CN114812565A (en) 2022-07-29
CN114812565B (en) 2022-10-18

Family

ID=82521357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210718568.1A Active CN114812565B (en) 2022-06-23 2022-06-23 Dynamic navigation method based on artificial intelligence network

Country Status (1)

Country Link
CN (1) CN114812565B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11138503B2 (en) * 2017-03-22 2021-10-05 Larsx Continuously learning and optimizing artificial intelligence (AI) adaptive neural network (ANN) computer modeling methods and systems
CN107806876A (en) * 2017-09-29 2018-03-16 爱极智(苏州)机器人科技有限公司 A kind of cognitive map construction method based on ORB algorithms
CN112097769B (en) * 2020-08-05 2022-06-10 北京航空航天大学 Homing pigeon brain-hippocampus-imitated unmanned aerial vehicle simultaneous positioning and mapping navigation system and method
CN112509051A (en) * 2020-12-21 2021-03-16 华南理工大学 Bionic-based autonomous mobile platform environment sensing and mapping method

Also Published As

Publication number Publication date
CN114812565A (en) 2022-07-29

Similar Documents

Publication Publication Date Title
US11161241B2 (en) Apparatus and methods for online training of robots
CN107450593B (en) Unmanned aerial vehicle autonomous navigation method and system
CN112256056B (en) Unmanned aerial vehicle control method and system based on multi-agent deep reinforcement learning
US9193075B1 (en) Apparatus and methods for object detection via optical flow cancellation
CN112097769B (en) Homing pigeon brain-hippocampus-imitated unmanned aerial vehicle simultaneous positioning and mapping navigation system and method
CN112106073A (en) Performing navigation tasks using grid code
Kreiser et al. An on-chip spiking neural network for estimation of the head pose of the iCub robot
CN110799983A (en) Map generation method, map generation equipment, aircraft and storage medium
Zhao et al. Vision-based tracking control of quadrotor with backstepping sliding mode control
WO2019191288A1 (en) Direct sparse visual-inertial odometry using dynamic marginalization
Roberts et al. Saliency detection and model-based tracking: a two part vision system for small robot navigation in forested environment
CN112506210B (en) Unmanned aerial vehicle control method for autonomous target tracking
CN111812978B (en) Cooperative SLAM method and system for multiple unmanned aerial vehicles
CN113076615A (en) High-robustness mechanical arm operation method and system based on antagonistic deep reinforcement learning
CN116679711A (en) Robot obstacle avoidance method based on model-based reinforcement learning and model-free reinforcement learning
CN115494879A (en) Rotor unmanned aerial vehicle obstacle avoidance method, device and equipment based on reinforcement learning SAC
Zhang et al. A bionic dynamic path planning algorithm of the micro UAV based on the fusion of deep neural network optimization/filtering and hawk-eye vision
CN114812565B (en) Dynamic navigation method based on artificial intelligence network
CN111611869B (en) End-to-end monocular vision obstacle avoidance method based on serial deep neural network
CN113031002A (en) SLAM running car based on Kinect3 and laser radar
CN115618749A (en) Error compensation method for real-time positioning of large unmanned aerial vehicle
Duhamel et al. Hardware in the loop for optical flow sensing in a robotic bee
Atsuzawa et al. Robot navigation in outdoor environments using odometry and convolutional neural network
Prophet et al. A synergetic approach to indoor navigation and mapping for aerial reconnaissance and surveillance
CN111221340B (en) Design method of migratable visual navigation based on coarse-grained features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant