CN115935787B - Memristor neural network state estimation method under coding and decoding mechanism


Publication number
CN115935787B
CN115935787B (application CN202211386982.3A)
Authority
CN
China
Prior art keywords
matrix
time
row
neural network
kth
Prior art date
Legal status
Active
Application number
CN202211386982.3A
Other languages
Chinese (zh)
Other versions
CN115935787A (en)
Inventor
胡军
高岩
于浍
贾朝清
班立群
孙若姿
雷冰欣
郑凯文
Current Assignee
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Harbin University of Science and Technology
Priority to CN202211386982.3A
Publication of CN115935787A
Application granted
Publication of CN115935787B

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Error Detection And Correction (AREA)
  • Complex Calculations (AREA)

Abstract

The invention discloses a memristive neural network state estimation method under a coding and decoding mechanism, which comprises the following steps: step one, establishing a memristive neural network dynamic model with an H∞ performance constraint and sensor energy harvesting; step two, performing state estimation on the memristive neural network dynamic model under the coding and decoding mechanism; step three, calculating the upper bound of the error covariance matrix of the memristive neural network and the H∞ performance constraint; step four, solving the estimator gain matrix K_k from a series of linear matrix inequalities by means of a stochastic analysis method, thereby realizing state estimation of the memristive neural network; judging whether k+1 reaches the total duration N: if k+1 < N, return to step two, otherwise end. The invention solves the problem that existing state estimation methods cannot simultaneously handle, under a coding and decoding mechanism, the H∞ performance constraint and the variance-constrained state estimation of memristive neural networks, which leads to low estimation accuracy, and therefore the accuracy of the estimation performance is improved.

Description

Memristor neural network state estimation method under coding and decoding mechanism
Technical Field
The invention relates to a memristive neural network state estimation method, and in particular to a state estimation method, under a coding and decoding mechanism, for memristive neural networks with an H∞ performance constraint and a variance constraint.
Background
A neural network is a dynamic network composed of a large number of interconnected neurons. Neural networks are widely applied to the modeling and analysis of practical systems, for example in pattern recognition, optimization problems and associative memory.
The memristor is the fourth basic passive nano-scale information device after the three basic circuit elements of resistance, capacitance and inductance. Compared with conventional elements, memristors have the advantages of low energy consumption, non-volatility and small size. In fact, memristors and biological synapses are very similar in structure and function. Accordingly, more and more researchers choose to replace the synapses in artificial neural networks with memristors.
In many engineering practices, and particularly in networked environments, delays, bandwidth limitations and the like are unavoidable during information transmission owing to machine faults, communication channel congestion and so on. It is therefore necessary to design a state estimation method for memristive neural networks with an H∞ performance constraint and a variance constraint under a coding and decoding mechanism, especially when the H∞ performance constraint and the variance constraint are considered simultaneously.
Existing state estimation methods cannot simultaneously handle, under a coding and decoding mechanism, the state estimation problem of memristive neural networks with an H∞ performance constraint and a variance constraint, which results in low estimation accuracy.
Disclosure of Invention
The invention aims to provide a memristive neural network state estimation method under a coding and decoding mechanism, which solves the problem that existing state estimation methods cannot simultaneously handle, under a coding and decoding mechanism, the H∞ performance constraint and the variance-constrained state estimation of memristive neural networks, leading to low estimation accuracy. The method can be used for state estimation of memristive neural networks under a coding and decoding mechanism in situations where information cannot be received at certain moments.
The invention aims at realizing the following technical scheme:
a memristor neural network state estimation method under a coding and decoding mechanism comprises the following steps:
step one, establishing, under the coding and decoding mechanism, a memristive neural network dynamic model with an H∞ performance constraint and sensor energy harvesting;
step two, under the coding and decoding mechanism, performing state estimation on the memristive neural network dynamic model established in step one;
step three, given the H∞ performance index γ, the first semi-positive definite matrix, the second semi-positive definite matrix and the initial conditions x_0 and x̂_0, calculating the upper bound of the error covariance matrix of the memristive neural network and the H∞ performance constraint;
step four, solving the estimator gain matrix K_k from a series of linear matrix inequalities by means of a stochastic analysis method, so as to perform state estimation, under the coding and decoding mechanism, on the memristive neural network with the H∞ performance constraint and sensor energy harvesting; judging whether k+1 reaches the total duration N: if k+1 < N, return to step two, otherwise end.
In the present invention, the neural network may be a mass-spring network, a vehicle suspension network, a nonlinear truck-trailer model, a spacecraft network or a radar network.
Compared with the prior art, the invention has the following advantages:
1. The invention provides a state estimation method for memristive neural networks with an H∞ performance constraint and a variance constraint under a coding and decoding mechanism. The H∞ performance constraint under the coding and decoding mechanism is taken into account, and the effective information of the estimation error covariance matrix is considered comprehensively by means of a stochastic analysis method and inequality processing techniques. Compared with existing state estimation methods for time-delay neural networks, the proposed method simultaneously addresses the state estimation problem of memristive neural networks with an H∞ performance constraint and a variance constraint under a coding and decoding mechanism, so that the error system simultaneously admits an upper bound on the estimation error covariance and satisfies the given H∞ performance requirement, thereby suppressing disturbances and improving estimation accuracy at the same time.
2. The invention uses a stochastic analysis method: first, sufficient conditions are established separately under which the estimation error system satisfies the H∞ performance constraint and under which the error covariance has an upper bound; then, a criterion is derived under which the estimation error system simultaneously satisfies the H∞ performance constraint and the boundedness of the error covariance; finally, the value of the estimator gain matrix is obtained by solving a series of linear matrix inequalities, so that the estimation performance is preserved when the H∞ performance constraint and the variance constraint are imposed simultaneously under the coding and decoding mechanism, thereby improving estimation accuracy.
3. The invention solves the problem that existing state estimation methods cannot simultaneously handle, under a coding and decoding mechanism, the H∞ performance constraint and the variance-constrained state estimation of memristive neural networks, which leads to low estimation accuracy, and therefore the accuracy of the estimation performance is improved. As can be seen from the simulation figures, as λ increases the state estimation performance of the memristive neural network gradually degrades and the estimation error becomes relatively large, which further verifies the feasibility and effectiveness of the proposed state estimation method.
Drawings
FIG. 1 is a flow chart of the memristive neural network state estimation method under the coding and decoding mechanism of the present invention;
FIG. 2 shows the actual state trajectory z_k of the memristive neural network and the state estimation trajectories in two different cases, where z_k is the state variable of the memristive neural network at time k; the system state trajectory, the state estimation trajectory in case one and the state estimation trajectory in case two are plotted;
FIG. 3 is an error comparison diagram of the controlled output estimation error of the memristive neural network in the two cases, showing the estimation error trajectory in case one and the estimation error trajectory in case two;
FIG. 4 shows the trajectories of the actual error covariance of the memristive neural network and of the upper bound on the error covariance, namely the trace of the upper bound of the error covariance and the trace of the actual error covariance;
FIG. 5 shows the influence of different choices of the energy harvesting rate λ on the upper bound of the controlled output of the memristive neural network, showing the controlled output trajectory in case one and the controlled output trajectory in case two.
Detailed Description
The present invention is described below with reference to the accompanying drawings, but is not limited to the following description; any modification or equivalent substitution that does not depart from the spirit and scope of the present invention shall be included in the protection scope of the present invention.
The invention provides a memristive neural network state estimation method under a coding and decoding mechanism. As shown in FIG. 1, the method comprises the following steps:
Step one, establishing, under the coding and decoding mechanism, a memristive neural network dynamic model with an H∞ performance constraint and sensor energy harvesting. The specific steps are as follows:
In this step, the state-space form of the memristive neural network dynamic model with an H∞ performance constraint and sensor energy harvesting under the coding and decoding mechanism is:
x_{k+1} = A(x_k)x_k + A_d(x_k)x_{k-d} + B(x_k)f(x_k) + C_k v_{1k}    (1)
z_k = H_k x_k    (2)
where x_k, x_{k+1} and x_{k-d} ∈ R^n are the neuron state variables of the memristive neural network at times k, k+1 and k-d, respectively, R^n being the n-dimensional Euclidean space of the memristive neural network state; z_k ∈ R^r is the controlled output at time k, R^r being the r-dimensional Euclidean space of the controlled output; x_k for k = -d, -d+1, …, 0 is the initial condition, and d is the discrete constant network time delay; A(x_k) = diag_n{a_i(x_{i,k})} is the state-dependent self-feedback diagonal matrix of the memristive neural network at time k, where diag{·} denotes a diagonal matrix, a_i(x_{i,k}) is the i-th diagonal entry of A(x_k) and n is the dimension; A_d(x_k) = {a_{ij,d}(x_{i,k})}_{n×n} is the delay-related system matrix of known dimension at time k, with entries a_{ij,d}(x_{i,k}); B(x_k) = {b_{ij}(x_{i,k})}_{n×n} is the known connection weight matrix of the activation function at time k, with entries b_{ij}(x_{i,k}); f(x_k) is the nonlinear activation function at time k; C_k is the known noise distribution matrix of the system at time k; H_k is the known adjustment matrix at time k; and v_{1k} is a Gaussian white noise sequence with zero mean and covariance V_1 > 0.
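For illustration, a minimal numerical sketch of simulating model (1)-(2) is given below in Python. Every matrix, threshold and dimension in the sketch is an assumed placeholder rather than a parameter from the patent, and the interval-type switching of the state-dependent matrices (described next) is emulated by the `switched` helper.

```python
import numpy as np

# Minimal sketch of simulating model (1)-(2); all numerical values are assumed.
n, d, N = 2, 2, 50
Gamma = np.array([0.5, 0.5])                       # assumed switching thresholds Gamma_i
A_low, A_high = np.diag([-0.3, -0.2]), np.diag([-0.5, -0.4])
Ad_low, Ad_high = 0.05 * np.ones((n, n)), 0.10 * np.ones((n, n))
B_low, B_high = 0.20 * np.eye(n), 0.30 * np.eye(n)
C_k = 0.1 * np.ones((n, 1))                        # noise distribution matrix C_k
H_k = np.array([[1.0, 0.5]])                       # controlled-output matrix H_k

def f(x):                                          # nonlinear activation f(x_k)
    return np.tanh(x)

def switched(low, high, x):
    # row-wise choice between the two stored values, driven by |x_i| vs Gamma_i
    mask = (np.abs(x) <= Gamma)[:, None]
    return np.where(mask, low, high)

rng = np.random.default_rng(0)
x_hist = [np.array([-2.4, 2.0])] * (d + 1)         # initial condition for k = -d..0
z_hist = []
for k in range(N):
    x_k, x_kd = x_hist[-1], x_hist[-1 - d]
    A_k = switched(A_low, A_high, x_k)
    Ad_k = switched(Ad_low, Ad_high, x_k)
    B_k = switched(B_low, B_high, x_k)
    v1k = rng.standard_normal(1)                   # Gaussian white noise v_{1k}
    x_next = A_k @ x_k + Ad_k @ x_kd + B_k @ f(x_k) + (C_k @ v1k)
    z_hist.append(H_k @ x_k)                       # controlled output z_k = H_k x_k
    x_hist.append(x_next)
```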
The state-dependent matrix parameters a_i(x_{i,k}), a_{ij,d}(x_{i,k}) and b_{ij}(x_{i,k}) satisfy the following switching conditions:

a_i(x_{i,k}) = â_i if |x_{i,k}| ≤ Γ_i and ǎ_i otherwise; a_{ij,d}(x_{i,k}) = â_{ij,d} if |x_{i,k}| ≤ Γ_i and ǎ_{ij,d} otherwise; b_{ij}(x_{i,k}) = b̂_{ij} if |x_{i,k}| ≤ Γ_i and b̌_{ij} otherwise,

where a_i(x_{i,k}), a_{ij,d}(x_{i,k}) and b_{ij}(x_{i,k}) are the entries of A(x_k), A_d(x_k) and B(x_k) at time k, Γ_i > 0 is the known switching threshold, â_i and ǎ_i are the i-th known upper and lower storage values, â_{ij,d} and ǎ_{ij,d} are the (i,j)-th known storage values of the delay-related matrix, and b̂_{ij} and b̌_{ij} are the (i,j)-th known storage values of the connection weight matrix.
Define:

a_i^- = min{â_i, ǎ_i}, a_i^+ = max{â_i, ǎ_i}, a_{ij,d}^- = min{â_{ij,d}, ǎ_{ij,d}}, a_{ij,d}^+ = max{â_{ij,d}, ǎ_{ij,d}}, b_{ij}^- = min{b̂_{ij}, b̌_{ij}}, b_{ij}^+ = max{b̂_{ij}, b̌_{ij}},

A^- = diag_n{a_i^-}, A^+ = diag_n{a_i^+}, A_d^- = {a_{ij,d}^-}_{n×n}, A_d^+ = {a_{ij,d}^+}_{n×n}, B^- = {b_{ij}^-}_{n×n}, B^+ = {b_{ij}^+}_{n×n},

where min{·,·} and max{·,·} take the minimum and maximum of the two stored values, A^-, A^+, A_d^-, A_d^+, B^- and B^+ are the first to sixth matrices defined from these bounds, and n is the dimension.
It follows readily that A(x_k) ∈ [A^-, A^+], A_d(x_k) ∈ [A_d^-, A_d^+] and B(x_k) ∈ [B^-, B^+]. Let Ā = (A^- + A^+)/2, Ā_d = (A_d^- + A_d^+)/2 and B̄ = (B^- + B^+)/2 be the defined interval midpoint matrices, and write

A(x_k) = Ā + ΔA_k, A_d(x_k) = Ā_d + ΔA_{dk}, B(x_k) = B̄ + ΔB_k,

where ΔA_k, ΔA_{dk} and ΔB_k are the first, second and third matrices satisfying a norm-bounded uncertainty condition with known real-valued weight matrices and an unknown matrix F_k at time k satisfying F_k^T F_k ≤ I, F_k^T being the transpose of F_k.
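The split of each interval matrix into a nominal midpoint plus a norm-bounded deviation can be sketched as below; this is the standard midpoint/half-range construction, offered as an assumed illustration of the decomposition rather than the patent's exact factorization.

```python
import numpy as np

# Midpoint/half-range decomposition for an interval matrix M(x_k) in [M_low, M_high]:
# M_bar is the nominal part and the half-range bounds the deviation Delta_M_k.
def interval_split(M_low, M_high):
    M_bar = 0.5 * (M_low + M_high)     # nominal matrix (e.g. A_bar, Ad_bar, B_bar)
    M_rad = 0.5 * (M_high - M_low)     # entrywise bound on the uncertain part
    return M_bar, M_rad

A_bar, A_rad = interval_split(np.diag([-0.5, -0.4]), np.diag([-0.3, -0.2]))
# any realised A(x_k) then satisfies |A(x_k) - A_bar| <= A_rad entrywise
```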
Step two, performing state estimation, under the coding and decoding mechanism, on the memristive neural network dynamic model established in step one. The specific steps are as follows:
Step 2-1, the measurement output of the time-delay memristive neural network takes the form:

y_k = D_k x_k + E_k v_{2k}

where y_k ∈ R^m is the measurement output of the memristive neural network at time k, R^m being the m-dimensional Euclidean space of the measurement output; x_k ∈ R^n is the neuron state variable of the memristive neural network at time k, R^n being the n-dimensional Euclidean space of the state; D_k and E_k are known measurement matrices at time k; and v_{2k} is a Gaussian white noise sequence with zero mean and covariance V_{2k} > 0.
Step 2-2, at time k the energy level of the sensor is q_k ∈ {0, 1, 2, …, S}, where S is the maximum number of energy units the sensor can store. The energy harvested at time k is denoted h_k; it is an independent and identically distributed random process with probability distribution

Prob(h_k = i) = p_i, (i = 0, 1, 2, …)

where q_k is the energy level of the sensor at time k, S is the maximum number of energy units the sensor can store, h_k is the energy harvested at time k, p_i is the probability of harvesting i units of energy, i is the amount of harvested energy, 0 ≤ p_i ≤ 1, and the probabilities p_i sum to 1.
Step 2-3, at time k the sensor can transmit its measurement to the state estimator only when it stores a non-zero number of energy units, and the sensor consumes 1 unit of energy if and only if such a transmission occurs. The energy dynamics of the sensor can therefore be expressed as

q_{k+1} = min{q_k + h_k - 1_{{q_k > 0}}, S}

where q_0, q_k and q_{k+1} are the energy levels of the sensor at times 0, k and k+1, min{·,·} takes the smaller of the two quantities, h_k is the energy harvested at time k, 1_{{q_k > 0}} accounts for the 1 unit of energy consumed by the sensor when q_k > 0, and S is the maximum number of energy units the sensor can store.

The measurement received by the state estimator can be expressed as

ȳ_k = μ_k y_k

where ȳ_k is the measurement actually received by the state estimator at time k, y_k is the measurement the state estimator would receive in the ideal case at time k, and μ_k is the indicator function defined by μ_k = 1 if q_k > 0 and μ_k = 0 otherwise.
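A small sketch of steps 2-2 and 2-3 follows: the stored energy evolves as q_{k+1} = min{q_k + h_k - 1_{q_k>0}, S} and the estimator receives y_k only when q_k > 0. The Poisson harvesting profile is an assumed stand-in for the distribution p_i, and the measurement itself is a placeholder scalar.

```python
import numpy as np

# Sketch of the harvesting-powered sensor: transmission happens only when the
# stored energy is non-zero, and each transmission costs one unit of energy.
rng = np.random.default_rng(1)
S = 10                                   # maximum number of storable energy units
q = 0                                    # initial energy level q_0
received = []
for k in range(50):
    h_k = rng.poisson(0.8)               # assumed i.i.d. harvesting process h_k
    mu_k = 1 if q > 0 else 0             # indicator: can the sensor transmit?
    y_k = rng.standard_normal()          # stand-in for y_k = D_k x_k + E_k v_{2k}
    received.append(mu_k * y_k)          # estimator receives mu_k * y_k
    q = min(q + h_k - mu_k, S)           # q_{k+1} = min{q_k + h_k - mu_k, S}
```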
Step 2-4, the encoding rule is defined with the following quantities: the internal operating state of the encoder at time 0; the internal operating state of the encoder at time k; the known scaling parameter δ_k at time k; the output of the encoder at time k+1, which lies in the n-dimensional Euclidean space of the memristive neural network state; a shift matrix of known appropriate dimension at time k; the selected uniform quantizer; and the measurement actually received by the estimator at time k+1.
Here the selected uniform quantizer acts componentwise on an augmented signal vector, where T denotes the transpose: for a signal vector ζ with h-th component ζ_h, each component is quantized onto a uniform grid, l is the interval length of the quantization levels, the quantizer index takes positive integer values, and M is the number of quantization levels.
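Since the quantizer formula itself is recited in the original figures, the sketch below implements a generic componentwise uniform quantizer with interval length l and 2M+1 levels as an assumed stand-in for the recited quantizer.

```python
import numpy as np

# Generic componentwise uniform quantizer: each entry is rounded to the nearest
# multiple of the interval length l and clipped to M levels on either side.
def uniform_quantizer(zeta, l=0.05, M=100):
    idx = np.clip(np.round(np.asarray(zeta, dtype=float) / l), -M, M)
    return idx * l

print(uniform_quantizer([0.123, -0.049]))   # -> [ 0.1  -0.05]
```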
Step two, defining a decoding rule as follows:
in the method, in the process of the invention,is the measured output of the decoder at time 0, < >>Is the measurement output of the decoder at time k, < >>Is the measured output of the decoder at time k+1, delta k Is the known shrink at time kParameter of putting on->The measurement output of the encoder at the k+1th moment is European space of the memristor neural network state, and the space dimension is n; />Is a shift matrix of known appropriate dimension at time k.
Step two, define the decoding error asWe can obtain:
wherein eta is k Is the measured decoding error at the kth instant,is the measurement output of the decoder at time k, < >>Is the measurement actually received by the state estimator at the kth time, y k Is the measurement value, delta, that the state estimator receives in an ideal case at the kth moment k Is a scaling parameter known at time k, +.>Is the measurement output of the encoder at time k+1,>the Europe type space is the memristive neural network state, and the space dimension is n; />Is a shift matrix of known appropriate dimension at time k,>in the form of a selected uniform quantizer.
The decoding error satisfies the condition that its infinity norm ||η_k||_∞ is bounded by a quantity proportional to the interval length l of the quantization levels multiplied by the known scaling parameter δ_k at time k.
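The encoder/decoder pair of steps 2-4 to 2-6 can be sketched as follows. Because the exact recursions are recited in the original formulas, the innovation-type update below is an assumed simplified form (using the uniform quantizer sketched above) that still exhibits the key property that the decoding error is governed by the quantization error scaled by δ_k.

```python
import numpy as np

# Assumed innovation-type codec sketch: the encoder quantizes the scaled difference
# between the received measurement and its internal state; the decoder mirrors the
# same update so that its output tracks the transmitted measurement.
def encode(y_bar, xi, delta, quantize):
    s = quantize((y_bar - xi) / delta)     # transmitted codeword
    return s, xi + delta * s               # codeword and updated encoder state

def decode(s, y_tilde, delta):
    return y_tilde + delta * s             # decoder output at the next instant

# with a uniform quantizer of interval length l, the per-component decoding error
# |y_tilde_k - y_bar_k| stays of the order of l * delta_k
```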
The nonlinear function f(·) satisfies the following sector-bounded condition: for all s_1, s_2,

[f(s_1) - f(s_2) - R_1(s_1 - s_2)]^T [f(s_1) - f(s_2) - R_2(s_1 - s_2)] ≤ 0,

where R_1 is the first known real matrix of appropriate dimension and R_2 is the second known real matrix of appropriate dimension at time k.
Step 2-7, in order to estimate the state of the time-delay memristive neural network, a time-varying state estimator, denoted (5), is constructed from the available measurement information, in which x̂_k, x̂_{k+1} and x̂_{k-d} ∈ R^n are the state estimates of the memristive neural network at times k, k+1 and k-d, R^n being the n-dimensional Euclidean space of the state; d is the constant network time delay; ẑ_k ∈ R^r is the estimate of the controlled output at time k, R^r being the r-dimensional Euclidean space of the controlled output; Ā, Ā_d and B̄ are the defined interval midpoint matrices; f(x̂_k) is the nonlinear activation function evaluated at the state estimate at time k; H_k is the known adjustment matrix at time k; D_k is the known measurement matrix at time k; ỹ_k is the measurement output of the decoder at time k; μ_k is the indicator function μ_k = 1_{{q_k > 0}}; and K_k is the estimator gain matrix to be solved.
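A sketch of one recursion of a time-varying estimator of the kind used in step 2-7 is given below. The gain K_k is a placeholder here (in the method it comes from the recursive linear matrix inequalities of step four), and the innovation form is an assumption consistent with the quantities listed above.

```python
import numpy as np

# One recursion of an innovation-type estimator driven by the decoder output.
def estimator_step(x_hat_hist, y_tilde_k, K_k, A_bar, Ad_bar, B_bar, D_k, H_k, d,
                   f=np.tanh):
    x_hat_k, x_hat_kd = x_hat_hist[-1], x_hat_hist[-1 - d]
    innovation = y_tilde_k - D_k @ x_hat_k                 # decoded measurement residual
    x_hat_next = (A_bar @ x_hat_k + Ad_bar @ x_hat_kd
                  + B_bar @ f(x_hat_k) + K_k @ innovation)
    z_hat_k = H_k @ x_hat_k                                # controlled-output estimate
    return x_hat_next, z_hat_k
```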
The main purpose of this step is to design the time-varying state estimator (5) on the basis of the coding and decoding mechanism so that the estimation error system simultaneously satisfies the following two performance requirements:
(1) For a given disturbance attenuation level γ > 0, the given first and second semi-positive definite weight matrices and the initial state e_0, the controlled output estimation error z̃_k satisfies an H∞ performance constraint in which N is the finite horizon, E{·} denotes mathematical expectation, the two given weight matrices weight the output error and the initial error respectively, e_0 is the estimation error at time 0, γ > 0 is the given disturbance attenuation level, v̄_k is the augmented vector of the noises v_{1k} and v_{2k}, ||·|| denotes the vector norm and ||·||² its square.
(2) The estimation error covariance satisfies the upper-bound constraint

E{e_k e_k^T} ≤ Ψ_k

where e_k^T is the transpose of e_k and {Ψ_k} is a sequence of pre-specified matrices describing the acceptable estimation accuracy at each time k.
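The two requirements can be checked a posteriori on simulated trajectories, as sketched below. The weighting matrices and the inequality form used for requirement (1) are assumptions in the spirit of the recited constraint, not its exact statement.

```python
import numpy as np

# A posteriori checks on simulated data: a weighted H-infinity-type energy ratio
# for requirement (1) and an empirical covariance test against Psi_k for (2).
def check_hinf(z_err, v_aug, e0, gamma, S_bar=None):
    S_bar = np.eye(e0.shape[0]) if S_bar is None else S_bar   # assumed initial-error weight
    lhs = sum(float(z @ z) for z in z_err)                    # output-error energy
    rhs = gamma ** 2 * (sum(float(v @ v) for v in v_aug) + float(e0 @ S_bar @ e0))
    return lhs <= rhs

def check_covariance_bound(error_samples, Psi_k):
    # compares the empirical second moment of e_k (over Monte Carlo runs) with Psi_k
    X_emp = np.mean([np.outer(e, e) for e in error_samples], axis=0)
    return bool(np.all(np.linalg.eigvalsh(Psi_k - X_emp) >= -1e-9))
```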
Step three, given the H∞ performance index γ, the first semi-positive definite matrix, the second semi-positive definite matrix and the initial conditions x_0 and x̂_0, calculate the upper bound of the error covariance matrix of the memristive neural network and the H∞ performance constraint.
The specific steps are as follows:
Step 3-1, the H∞ analysis problem is considered and a corresponding easily solvable criterion is given, in which the given first semi-positive definite matrix and a semi-positive definite matrix at time k appear, γ is the given positive scalar, the starred entries denote the transposes of the corresponding blocks (including that of R_{3k}), μ_k is the adjusting positive constant, Σ_{11}, Σ_{12}, Σ_{22}, Σ_{33}, Σ_{44}, Σ_{55}, Σ_{66} and Σ_{77} denote the blocks of Σ in the indicated row and column, and 0 denotes a block whose elements are all 0.
Step three, two, discuss covariance matrix X k Is set, and gives sufficient conditions as follows:
in the method, in the process of the invention,
wherein G is k Is the upper bound of the error covariance matrix at the kth moment; respectively is Is a transpose of (2); />The upper bound matrix solved at the kth moment is obtained; g k-d An upper bound matrix of an error covariance matrix at the k-d time; tr (G) k ) Is the trace of the upper bound matrix of the error covariance matrix at the kth time; x is X k =e k e k T For the upper error bound at the kth time, e k Is an error matrix at the kth time; />For state estimation at the kth time, ρ∈ (0, 1) is a known adjusting positive constant; />Is the first real matrix of known appropriate dimension of the 1 st component at time k,/->Is the second real matrix of known appropriate dimension of the 2 nd component at time k, tr () is the trace of the matrix, μ k Is a known tuning positive constant.
By combining these two results, a sufficient condition is obtained which guarantees that the estimation error system satisfies the given H∞ performance requirement and that the error covariance is bounded.
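The role of the adjusting constant ρ in step 3-2 can be illustrated with the elementary inequality (a+b)(a+b)^T ≤ (1+ρ) a a^T + (1+ρ^{-1}) b b^T, the usual device for absorbing cross terms when an upper bound G_k on the error covariance is propagated. The recursion below is a generic illustration under that assumption, not the patent's recursive inequality.

```python
import numpy as np

# Generic propagation of a covariance upper bound: the nominal and delayed error
# terms are separated with the (1+rho)/(1+1/rho) inequality and the noise enters
# additively through Q_noise.
def propagate_bound(G_k, G_kd, A_cl, Ad_cl, Q_noise, rho=0.5):
    return ((1.0 + rho) * A_cl @ G_k @ A_cl.T
            + (1.0 + 1.0 / rho) * Ad_cl @ G_kd @ Ad_cl.T
            + Q_noise)
```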
Step four, solving the estimator gain matrix K_k from a series of linear matrix inequalities by means of a stochastic analysis method, so as to perform state estimation, under the coding and decoding mechanism, on the memristive neural network with the H∞ performance constraint and sensor energy harvesting; judging whether k+1 reaches the total duration N: if k+1 < N, return to step two, otherwise end.
In this step, by solving the series of recursive linear matrix inequalities (9) to (11), which provide a sufficient condition for the estimation error system to satisfy the H∞ performance requirement and the boundedness of the error covariance simultaneously, the values of the estimator gain matrix can be computed:
The update matrices are partially given by

H_22 = diag{-ε_{1,k} I, -ε_{2,k} I, -ε_{2,k} I, -ε_{3,k} I, -ε_{3,k} I},
Ξ_22 = diag{-G_k, -G_k, -G_k, -I},  Ξ_33 = diag{-G_{k-d}, -G_{k-d}, -I, -tr(G_k) I},

where ε_{1,k}, ε_{2,k}, ε_{3,k}, ε_{4,k}, ε_{5,k} and ε_{6,k} are the first to sixth adjusting positive constants at time k; I is the identity matrix; the first, second and third weight matrices at time k are given; R_1 and R_2 are the known real matrices of appropriate dimensions from the sector-bounded condition; f(x̂_k) is the activation function evaluated at the state estimate at time k; the starred entries denote the transposes of H_12, H_13, Θ_12, Θ_13, Θ_14, Θ_15, S_k, T_k and W_k, and of Ψ_12, Ψ_13, Ψ_14, Ψ_23, Ψ_25, Ψ_27, Ψ_38 and Ψ_39; H_1, H_2, H_3, H_4 and H_5 are the first to fifth metric matrices; N_{1k}, N_{2k}, N_{3k}, N_{4k} and N_{5k} are the metric matrices of known appropriate dimensions of the first to fifth components at time k; H_{ij}, Θ_{ij}, Ξ_{ij} and Ψ_{ij} denote the block in row i and column j of H, Θ, Ξ and Ψ, respectively; S_k, T_k and W_k are the first, second and third norm-bounded weight matrices at time k; the given first semi-positive definite matrix appears, γ is the given positive scalar, and the starred terms also include the transposes of D_k, K_k, ΔA_k, H_k, ΔB_k, E_k, C_k, Σ_12 and R_{3k}; the semi-positive definite matrices at times k and k-d are given; μ_k is the known adjusting positive constant; x̂_k is the neuron state estimate at time k; G_{k+1} is the first update matrix at time k+1; G_k is the upper-bound matrix of the estimation error and tr(G_k) is its trace at time k; G_{k-d} is the upper-bound matrix at time k-d; σ is the adjusting weight coefficient; the H_i and N_{ik} are known real-valued weight matrices; F_k is an unknown matrix satisfying F_k^T F_k ≤ I, with F_k^T its transpose; and 0 denotes a block whose elements are all 0.
In the invention, the reasoning behind steps three and four is as follows:
First, the H∞ analysis problem is considered and a corresponding easily solvable criterion is given; second, the upper bound of the covariance matrix X_k is discussed and a sufficient condition is given; by combining the two results, a sufficient condition is obtained which guarantees that the estimation error system satisfies the given H∞ performance requirement and that the error covariance is bounded, and the value of the estimator gain matrix K_k is then obtained by solving a series of linear matrix inequalities.
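The gain-design step can be prototyped with a semidefinite-programming toolbox such as CVXPY, as in the sketch below. The single Schur-complement LMI shown is a toy stand-in for the recited inequalities (9) to (11), and all matrices are assumed values, so the sketch only illustrates the mechanics of solving for K_k and the next bound G_{k+1} at each time step.

```python
import numpy as np
import cvxpy as cp

# Toy recursion of the design idea: at time k, find a gain K and a new bound
# G_next such that G_next dominates (A_bar - K D_k) G_k (A_bar - K D_k)^T,
# expressed as one Schur-complement LMI. Not the patent's inequalities.
n, m = 2, 1
G_k = np.eye(n)                                  # current upper-bound matrix (assumed)
A_bar = np.array([[-0.4, 0.0], [0.0, -0.3]])     # assumed nominal matrix
D_k = np.array([[1.0, 0.8]])                     # assumed measurement matrix

K = cp.Variable((n, m))
G_next = cp.Variable((n, n), symmetric=True)
A_cl = A_bar - K @ D_k                           # closed-loop error matrix (affine in K)
lmi = cp.bmat([[G_next, A_cl @ G_k],
               [(A_cl @ G_k).T, G_k]])
problem = cp.Problem(cp.Minimize(cp.trace(G_next)),
                     [lmi >> 1e-9 * np.eye(2 * n)])
problem.solve(solver=cp.SCS)
K_k = K.value                                    # gain used at time k
G_k_plus_1 = G_next.value                        # bound propagated to time k+1
```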
Examples:
This embodiment takes a memristive neural network with an H∞ performance constraint and sensor energy harvesting as an example. Such a network can be applied to associative memory, pattern recognition and combinatorial optimization, and the method provided by the invention is used to simulate a face recognition case:
The relevant system parameters of the state model, measurement output model and controlled output model of the memristive neural network with an H∞ performance constraint and sensor energy harvesting under the coding and decoding mechanism are selected as follows:
the corresponding adjustment matrix is given according to the state of the face:
the measurement adjustment matrix is:
the controlled output adjustment matrix is:
the state weight matrix is:
the weight matrix and the adjustment parameters of the nonlinear function are as follows:
the excitation function is taken as:
where x_k = [x_{1,k}  x_{2,k}]^T is the state vector of the memristive neurons, and x_{1,k} and x_{2,k} are the first and second components of the state x_k at time k.
The other simulation initial values are selected as follows:
Disturbance attenuation level γ = 0.7, the first semi-positive definite matrix and the upper-bound matrix as specified, covariances V_{1k} = V_{2k} = 1, and initial state x_0 = [-2.4  2]^T.
Solving the values of the associated estimator gain matrix using the recursive linear matrix inequality, the partial values are as follows:
case one (CaseI): λ=0.1;
K 1 =[1.2595 -0.1230] T ,K 2 =[0.8933 0.0535] T ,K 3 =[1.2687 -0.0400] T ,
case two (CaseII): λ=1;
K 1 =[0.3525 -0.7586] T ,K 2 =[0.1521 -0.1137] T ,K 3 =[0.3446 0.1180] T ,
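To connect the listed gains with the estimator recursion of step 2-7, a rough reproduction sketch is shown below. The plant and estimator matrices are assumed values, because the example's numerical matrices are given only in the original figures, so the trajectories produced are illustrative rather than a reproduction of FIG. 2.

```python
import numpy as np

# Plugging the case-one gains (lambda = 0.1) into an innovation-type estimator
# recursion; the system matrices below are assumptions, not the example's values.
gains_case1 = [np.array([[1.2595], [-0.1230]]),
               np.array([[0.8933], [0.0535]]),
               np.array([[1.2687], [-0.0400]])]
A_bar = np.diag([-0.4, -0.3])          # assumed nominal matrices
Ad_bar = 0.05 * np.ones((2, 2))
B_bar = 0.25 * np.eye(2)
D_k = np.array([[1.0, 0.8]])           # assumed measurement matrix
d = 2                                  # assumed network delay

x_hat = [np.zeros(2)] * (d + 1)        # zero initial estimates
for K_k in gains_case1:
    y_tilde_k = np.zeros(1)            # stand-in for the decoder output
    innovation = y_tilde_k - D_k @ x_hat[-1]
    x_hat.append(A_bar @ x_hat[-1] + Ad_bar @ x_hat[-1 - d]
                 + B_bar @ np.tanh(x_hat[-1]) + (K_k @ innovation))
```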
state estimator effect:
As can be seen from FIG. 2, the state estimator designed with the method of the invention can effectively estimate the target state of a memristive neural network with sensor energy harvesting and a variance constraint under the coding and decoding mechanism.
As can be seen from FIG. 3, FIG. 4 and FIG. 5, the estimation performance deteriorates as the probability λ increases.

Claims (3)

1. A memristive neural network state estimation method under a coding and decoding mechanism, characterized in that the method is used for face recognition and comprises the following steps:
step one, establishing, under the coding and decoding mechanism, a memristive neural network dynamic model with an H∞ performance constraint and sensor energy harvesting, wherein:
the state-space form of the memristive neural network dynamic model with an H∞ performance constraint and sensor energy harvesting under the coding and decoding mechanism is:
x_{k+1} = A(x_k)x_k + A_d(x_k)x_{k-d} + B(x_k)f(x_k) + C_k v_{1k}
z_k = H_k x_k
where x_k, x_{k+1} and x_{k-d} ∈ R^n are the neuron state variables of the memristive neural network at times k, k+1 and k-d, respectively, R^n being the n-dimensional Euclidean space of the memristive neural network state; z_k ∈ R^r is the controlled output at time k, R^r being the r-dimensional Euclidean space of the controlled output; x_k for k = -d, -d+1, …, 0 is the initial condition, and d is the discrete constant network time delay; A(x_k) = diag_n{a_i(x_{i,k})} is the state-dependent self-feedback diagonal matrix of the memristive neural network at time k, where diag{·} denotes a diagonal matrix, a_i(x_{i,k}) is the i-th diagonal entry of A(x_k) and n is the dimension; A_d(x_k) = {a_{ij,d}(x_{i,k})}_{n×n} is the delay-related system matrix of known dimension at time k, with entries a_{ij,d}(x_{i,k}); B(x_k) = {b_{ij}(x_{i,k})}_{n×n} is the known connection weight matrix of the activation function at time k, with entries b_{ij}(x_{i,k}); f(x_k) is the nonlinear activation function at time k; C_k is the known noise distribution matrix of the system at time k; H_k is the known adjustment matrix at time k; and v_{1k} is a Gaussian white noise sequence with zero mean and covariance V_1 > 0;
the corresponding adjustment matrix is given according to the state of the face;
step two, performing state estimation, under the coding and decoding mechanism, on the memristive neural network dynamic model established in step one, with the following specific steps:
step 2-1, the measurement output of the time-delay memristive neural network takes the form:
y_k = D_k x_k + E_k v_{2k}
where y_k ∈ R^m is the measurement output of the memristive neural network at time k, R^m being the m-dimensional real space of the output of the memristive neural network dynamic model; x_k ∈ R^n is the neuron state variable of the memristive neural network at time k, R^n being the n-dimensional real space; D_k and E_k are known measurement matrices at time k; and v_{2k} is a Gaussian white noise sequence with zero mean and covariance V_{2k} > 0;
step 2-2, at time k the energy level of the sensor is q_k ∈ {0, 1, 2, …, S}, where S is the maximum number of energy units the sensor can store, and the energy harvested at time k is denoted by h_k;
step 2-3, at time k the sensor can transmit its measurement to the state estimator only when it stores a non-zero number of energy units, and the sensor consumes 1 unit of energy if and only if such a transmission occurs; the energy dynamics of the sensor are expressed as
q_{k+1} = min{q_k + h_k - 1_{{q_k > 0}}, S}
where q_0, q_k and q_{k+1} are the energy levels of the sensor at times 0, k and k+1, min{·,·} takes the smaller of the two quantities, h_k is the energy harvested at time k, 1_{{q_k > 0}} accounts for the 1 unit of energy consumed by the sensor when q_k > 0, and S is the maximum number of energy units the sensor can store;
the measurement received by the state estimator is expressed as
ȳ_k = μ_k y_k
where ȳ_k is the measurement actually received by the state estimator at time k, y_k is the measurement the state estimator would receive in the ideal case at time k, and μ_k is the indicator function defined by μ_k = 1 if q_k > 0 and μ_k = 0 otherwise;
step 2-4, the encoding rule is defined with the following quantities: the internal operating state of the encoder at time 0; the internal operating state of the encoder at time k; the known scaling parameter δ_k at time k; the output of the encoder at time k+1, which lies in the n-dimensional real space of the memristive neural network dynamic model output; a shift matrix of known appropriate dimension at time k; the selected uniform quantizer; and the measurement actually received by the estimator at time k+1;
step 2-5, the decoding rule is defined with the following quantities: the measurement output of the decoder at time 0; the measurement output of the decoder at time k; the measurement output of the decoder at time k+1; the known scaling parameter δ_k at time k; the output of the encoder at time k+1, which lies in the n-dimensional real space of the memristive neural network dynamic model output; and a shift matrix of known appropriate dimension at time k;
step 2-6, the decoding error is defined as the difference between the decoder output and the measurement actually received by the state estimator, whereby η_k is the measurement decoding error at time k, ỹ_k is the measurement output of the decoder at time k, ȳ_k is the measurement actually received by the state estimator at time k, y_k is the measurement the state estimator would receive in the ideal case at time k, δ_k is the known scaling parameter at time k, the encoder output at time k+1 lies in the n-dimensional Euclidean space of the memristive neural network state, and the remaining quantities are the shift matrix of known appropriate dimension at time k and the selected uniform quantizer;
the decoding error satisfies the condition that its infinity norm ||η_k||_∞ is bounded in terms of the granularity l of the quantization levels and the known scaling parameter δ_k at time k;
the nonlinear function f(·) satisfies a sector-bounded condition of the form
[f(s_1) - f(s_2) - R_1(s_1 - s_2)]^T [f(s_1) - f(s_2) - R_2(s_1 - s_2)] ≤ 0
where R_1 and R_2 are the first and second known real matrices of appropriate dimensions at time k;
step 2-7, in order to estimate the state of the time-delay memristive neural network, a time-varying state estimator is constructed from the available measurement information, in which x̂_k, x̂_{k+1} and x̂_{k-d} ∈ R^n are the state estimates of the memristive neural network at times k, k+1 and k-d, R^n being the n-dimensional Euclidean space of the state; d is the constant network time delay; ẑ_k ∈ R^r is the estimate of the controlled output at time k, R^r being the r-dimensional real space of the neural network dynamic model state; Ā, Ā_d and B̄ are the defined interval midpoint matrices; f(x̂_k) is the nonlinear activation function evaluated at the state estimate at time k; H_k is the known adjustment matrix at time k; D_k is the known measurement matrix at time k; ỹ_k is the measurement output of the decoder at time k; μ_k is the indicator function; and K_k is the estimator gain matrix to be solved;
step three, given the H∞ performance index γ, the first semi-positive definite matrix, the second semi-positive definite matrix and the initial conditions x_0 and x̂_0, calculating the upper bound of the error covariance matrix of the memristive neural network and the H∞ performance constraint, with the following specific steps:
step 3-1, the H∞ analysis problem is considered and a corresponding easily solvable criterion is given, in which the given first semi-positive definite matrix and a semi-positive definite matrix at time k appear, γ is the given positive scalar, the starred entries denote the transposes of D_k, K_k, ΔA_k, H_k, ΔB_k, E_k, C_k, Σ_12 and R_{3k}, μ_k is the adjusting positive constant, Σ_{11}, Σ_{12}, Σ_{22}, Σ_{33}, Σ_{44}, Σ_{55}, Σ_{66} and Σ_{77} denote the blocks of Σ in the indicated row and column, and 0 denotes a block whose elements are all 0;
step 3-2, the upper bound of the covariance matrix X_k is considered and a sufficient condition is given, where G_k is the upper bound of the error covariance matrix at time k; the starred entries denote the transposes of D_k, K_k, ΔA_k, H_k, ΔB_k, E_k and C_k; G_{k+1} is the upper-bound matrix solved for at time k; G_{k-d} is the upper bound of the error covariance matrix at time k-d; tr(G_k) is the trace of the upper-bound matrix of the error covariance matrix at time k; X_k = E{e_k e_k^T} is the error covariance at time k, with e_k the estimation error at time k; x̂_k is the state estimate at time k; ρ ∈ (0, 1) is a known adjusting positive constant; R_1 and R_2 are the known real matrices of appropriate dimensions from the sector-bounded condition; tr(·) denotes the trace of a matrix; and μ_k is a known adjusting positive constant;
step four, solving the estimator gain matrix K_k from a series of linear matrix inequalities by means of a stochastic analysis method, so as to perform state estimation, under the coding and decoding mechanism, on the memristive neural network with the H∞ performance constraint and sensor energy harvesting; judging whether k+1 reaches the total duration N: if k+1 < N, return to step two, otherwise end, wherein:
by solving the following series of recursive linear matrix inequalities, which provide a sufficient condition for the estimation error system to satisfy the H∞ performance requirement and the boundedness of the error covariance simultaneously, the values of the estimator gain matrix can be computed, the update matrices being partially given by
H_33 = diag{-ε_{4,k} I, -ε_{4,k} I, -ε_{5,k} I, -ε_{5,k} I},
H_22 = diag{-ε_{1,k} I, -ε_{2,k} I, -ε_{2,k} I, -ε_{3,k} I, -ε_{3,k} I},
Ξ_22 = diag{-G_k, -G_k, -G_k, -I},  Ξ_33 = diag{-G_{k-d}, -G_{k-d}, -I, -tr(G_k) I},
Ξ_55 = diag{-I, -I, -V_{2k}, -V_{1k}},
where ε_{1,k}, ε_{2,k}, ε_{3,k}, ε_{4,k}, ε_{5,k} and ε_{6,k} are the first to sixth adjusting positive constants at time k; I is the identity matrix; the first, second and third weight matrices at time k are given; R_1 and R_2 are the first and second known real matrices of appropriate dimensions at time k; f(x̂_k) is the activation function evaluated at the state estimate at time k; the starred entries denote the transposes of H_12, H_13, Θ_12, Θ_13, Θ_14, Θ_15, S_k, T_k and W_k, and of Ψ_12, Ψ_13, Ψ_14, Ψ_23, Ψ_25, Ψ_27, Ψ_38 and Ψ_39; H_1, H_2, H_3, H_4 and H_5 are the first to fifth metric matrices; N_{1k}, N_{2k}, N_{3k}, N_{4k} and N_{5k} are the metric matrices of known appropriate dimensions of the first to fifth components at time k; H_{ij}, Θ_{ij}, Ξ_{ij} and Ψ_{ij} denote the block in row i and column j of H, Θ, Ξ and Ψ, respectively; S_k, T_k and W_k are the first, second and third norm-bounded weight matrices at time k; the given first semi-positive definite matrix appears, γ is the given positive scalar, and the starred terms also include the transposes of D_k, K_k, ΔA_k, H_k, ΔB_k, E_k, C_k, Σ_12 and R_{3k}; the semi-positive definite matrices at times k and k-d are given; μ_k is the known adjusting positive constant; x̂_k is the neuron state estimate at time k; G_{k+1} is the first update matrix at time k+1; G_k is the upper-bound matrix of the estimation error and tr(G_k) is its trace at time k; G_{k-d} is the upper-bound matrix at time k-d; σ is the adjusting weight coefficient; the H_i and N_{ik} are known real-valued weight matrices; F_k is an unknown matrix satisfying F_k^T F_k ≤ I, with F_k^T its transpose; and 0 denotes a block whose elements are all 0.
2. The memristive neural network state estimation method under a coding and decoding mechanism according to claim 1, characterized in that the probability distribution of h_k is:
Prob(h_k = i) = p_i, (i = 0, 1, 2, …)
where q_k is the energy level of the sensor at time k, S is the maximum number of energy units the sensor can store, h_k is the energy harvested at time k, p_i is the probability of the sensor harvesting energy, i is the amount of harvested energy, 0 ≤ p_i ≤ 1, and the probabilities p_i sum to 1.
3. The memristive neural network state estimation method under a coding and decoding mechanism according to claim 1, characterized in that the uniform quantizer is described as follows: the quantizer acts componentwise on an augmented signal vector, where T denotes the transpose, ζ is a signal vector with h-th component ζ_h, l is the granularity of the quantization levels, the quantizer index takes positive integer values, and M is the number of quantization levels.
CN202211386982.3A (priority date 2022-11-07, filing date 2022-11-07): Memristor neural network state estimation method under coding and decoding mechanism; status: Active; granted as CN115935787B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211386982.3A CN115935787B (en) 2022-11-07 2022-11-07 Memristor neural network state estimation method under coding and decoding mechanism

Publications (2)

Publication Number Publication Date
CN115935787A CN115935787A (en) 2023-04-07
CN115935787B (en) 2023-09-01

Family

ID=86651817

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211386982.3A Active CN115935787B (en) 2022-11-07 2022-11-07 Memristor neural network state estimation method under coding and decoding mechanism

Country Status (1)

Country Link
CN (1) CN115935787B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117077748B (en) * 2023-06-15 2024-03-22 盐城工学院 Coupling synchronous control method and system for discrete memristor neural network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108959808A (en) * 2018-07-23 2018-12-07 哈尔滨理工大学 A kind of Optimum distribution formula method for estimating state based on sensor network
CN109088749A (en) * 2018-07-23 2018-12-25 哈尔滨理工大学 The method for estimating state of complex network under a kind of random communication agreement
CN110879533A (en) * 2019-12-13 2020-03-13 福州大学 Scheduled time projection synchronization method of delay memristive neural network with unknown disturbance resistance
CN111025914A (en) * 2019-12-26 2020-04-17 东北石油大学 Neural network system remote state estimation method and device based on communication limitation
CN112132924A (en) * 2020-09-29 2020-12-25 北京理工大学 CT reconstruction method based on deep neural network
CN113516601A (en) * 2021-06-17 2021-10-19 西南大学 Image restoration technology based on deep convolutional neural network and compressed sensing


Also Published As

Publication number Publication date
CN115935787A (en) 2023-04-07

Similar Documents

Publication Publication Date Title
Papageorgiou et al. Fuzzy cognitive map learning based on nonlinear Hebbian rule
US20140046885A1 (en) Method and apparatus for optimized representation of variables in neural systems
CN109088749B (en) State estimation method of complex network under random communication protocol
CN115935787B (en) Memristor neural network state estimation method under coding and decoding mechanism
KR101700145B1 (en) Automated method for modifying neural dynamics
CN103105246A (en) Greenhouse environment forecasting feedback method of back propagation (BP) neural network based on improvement of genetic algorithm
CN116227324B (en) Fractional order memristor neural network estimation method under variance limitation
CN112215446A (en) Neural network-based unit dynamic fire risk assessment method
CN112578089B (en) Air pollutant concentration prediction method based on improved TCN
CN112434888A (en) PM2.5 prediction method of bidirectional long and short term memory network based on deep learning
CN117371321A (en) Internal plasticity depth echo state network soft measurement modeling method based on Bayesian optimization
CN117407675A (en) Lightning arrester leakage current prediction method based on multi-variable reconstruction combined dynamic weight
CN117194866A (en) Distributed filtering method based on mass spring damping system
Censi et al. Real-valued average consensus over noisy quantized channels
CN109217844B (en) Hyper-parameter optimization method based on pre-training random Fourier feature kernel LMS
CN116403054A (en) Image optimization classification method based on brain-like network model
El-Shafie et al. Generalized versus non-generalized neural network model for multi-lead inflow forecasting at Aswan High Dam
CN109474258B (en) Nuclear parameter optimization method of random Fourier feature kernel LMS (least mean square) based on nuclear polarization strategy
Zhang MADALINE neural network for parameter estimation of LTI MIMO systems
Thangarasa et al. Differentiable Hebbian plasticity for continual learning
Ding Improved BP neural network controller based on GA optimization
Rios et al. Image compression with a dynamic autoassociative neural network
CN111709140B (en) Ship motion forecasting method based on intrinsic plasticity echo state network
Yang et al. ELM weighted hybrid modeling and its online modification
Gao et al. A variance-constrained method to encoding–decoding H∞ state estimation for memristive neural networks with energy harvesting sensor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant