CN116227324A - Fractional order memristor neural network estimation method under variance limitation - Google Patents
- Publication number: CN116227324A (application CN202211559637.5A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
- G06F2111/04—Constraint-based CAD
- G06F2111/08—Probabilistic or stochastic CAD
Abstract
The invention discloses a fractional order memristor neural network estimation method under variance limitation, which comprises the following steps: step one, establishing a fractional order memristor neural network dynamic model; step two, carrying out state estimation on the dynamic model under an amplify-and-forward protocol; step three, calculating the upper bound of the error covariance matrix of the fractional order memristor neural network and the H∞ performance constraint; step four, solving the estimator gain matrix K_k from a linear matrix inequality by means of stochastic analysis, thereby realizing state estimation of the dynamic model under the amplify-and-forward protocol, then judging whether k+1 reaches the total duration N: if k+1 < N, step two is executed again, otherwise the method ends. The invention solves the problem that existing state estimation methods cannot simultaneously handle, under an amplify-and-forward protocol, the H∞ performance constraint and the state estimation of the variance-constrained fractional order memristor neural network, which leads to low estimation accuracy, and thereby improves the accuracy of the estimation performance.
Description
Technical Field
The invention relates to a state estimation method for neural networks, and in particular to a state estimation method for fractional order memristor neural networks with an H∞ performance constraint and variance limitation under an amplify-and-forward protocol.
Background
A neural network is an information processing system modeled on the structure and function of nerve cells in the human brain, and offers strong associative ability, adaptivity, fault tolerance and related advantages. Such networks can efficiently address practical system modeling and analysis tasks such as pattern recognition, signal processing and image recognition.
Over the past decades, the state estimation problem for recurrent neural networks has become an attractive topic and has been successfully applied in a wide range of fields such as associative memory, pattern recognition and combinatorial optimization. In practical applications, however, the information of the neurons is often not fully measurable, so efficient estimation methods are needed. Many different types of neural network state estimation problems have been studied to date. Notably, the existing results apply only to the steady-state case, which may limit their application.
Existing state estimation methods cannot simultaneously handle, under variance limitation, the H∞ performance constraint and the state estimation problem of the fractional order memristor neural network under the amplify-and-forward protocol, which leads to low estimation accuracy.
Disclosure of Invention
The invention provides a fractional order memristor neural network estimation method under variance limitation for time-varying systems. The method addresses the inability of existing state estimation methods to handle simultaneously, under an amplify-and-forward protocol, the H∞ performance constraint and the state estimation problem of the fractional order memristor neural network, as well as the loss of estimation performance that arises when the transmitted information cannot incorporate information from other time instants under the protocol, and can be used in the field of memristor neural network state estimation.
The invention aims at realizing the following technical scheme:
a fractional order memristor neural network estimation method under variance limitation comprises the following steps:
step one, establishing a fractional order memristor neural network dynamic model under an amplification forwarding protocol;
step two, under an amplifying and forwarding protocol, performing state estimation on the fractional order memristor neural network dynamic model established in the step one;
Step three, given the H∞ performance index γ, the first semi-positive definite weight matrix, the second semi-positive definite weight matrix and the initial conditions, calculating the upper bound of the error covariance matrix of the fractional order memristor neural network and the H∞ performance constraint;
step four, solving the estimator gain matrix K_k from a linear matrix inequality by means of stochastic analysis, realizing state estimation of the fractional order memristor neural network dynamic model under the amplify-and-forward protocol, judging whether k+1 reaches the total duration N, executing step two if k+1 < N, and otherwise ending.
In the invention, the neural network may be a network formed by vehicle suspensions, by particle-spring systems, by spacecraft or by radar, and it has important applications in biology, mathematics, computing, associative memory, pattern recognition, combinatorial optimization, image processing and other multidisciplinary fields.
Compared with the prior art, the invention has the following advantages:
1. In contrast to existing neural network state estimation methods, the invention considers simultaneously the H∞ performance constraint and the variance limitation under the amplify-and-forward protocol. Using inequality processing techniques and stochastic analysis, the method makes comprehensive use of the effective information in the estimation error covariance matrix, so that the error system satisfies both the upper bound on the estimation error covariance and the given H∞ performance requirement at the same time, thereby suppressing disturbances and improving estimation accuracy; by contrast, existing results are only suitable for the steady-state situation, which may cause limitations in application.
2. The invention solves the problem that existing state estimation methods cannot simultaneously handle, under the amplify-and-forward protocol, the H∞ performance constraint and the state estimation of the variance-constrained fractional order memristor neural network, thereby improving the accuracy of the estimation performance. The simulation graphs show that as the transmission power decreases, the state estimation performance of the fractional order memristor neural network gradually degrades and the estimation error becomes relatively larger. In addition, the feasibility and effectiveness of the proposed state estimation method are verified.
Drawings
FIG. 1 is a flow chart of a fractional order memristive neural network state estimation method under an amplification forwarding protocol of the present invention;
FIG. 2 shows the actual state trajectory z_k of the fractional order memristive neural network and the state estimation trajectories in two different cases, z_k being the state variable of the neural network at time k; the legend distinguishes the true system state trajectory from the state estimation trajectories in case one and case two;
FIG. 3 is an error comparison plot of the controlled output estimation error trajectories of the neural network in the two different cases; the legend distinguishes the controlled output estimation error trajectories in case one and case two;
FIG. 4 is a plot of the actual state error covariance of the neural network and the trajectory of the first component of the error covariance upper bound; the legend distinguishes the variance-constrained (upper bound) trajectory from the actual error covariance trajectory;
Detailed Description
The invention is described below with reference to the accompanying drawings, but is not limited to this description; any modification or equivalent substitution that does not depart from the spirit and scope of the invention shall be included in the scope of protection of the invention.
The invention provides a fractional order memristor neural network estimation method under variance limitation. Using stochastic analysis and inequality processing techniques, it first establishes, separately, sufficient conditions under which the estimation error system satisfies the H∞ performance constraint and under which the error covariance has an upper bound; it then derives a joint criterion under which the estimation error system satisfies the H∞ performance constraint and the error covariance bound simultaneously; finally, the estimator gain matrices are obtained by solving a series of linear matrix inequalities, so that estimation performance is preserved when the H∞ performance constraint and the variance limitation hold simultaneously under the amplify-and-forward protocol, improving estimation accuracy. As shown in fig. 1, the method comprises the following specific steps:
step one, establishing a fractional order memristor neural network dynamic model under an amplifying and forwarding protocol. The method comprises the following specific steps:
First, the Grunwald–Letnikov fractional derivative definition is presented in a discrete form suitable for numerical implementation. Reconstructed from the surrounding definitions, the order-α Grunwald–Letnikov difference of a sequence x at sampling instant k is

Δ^α x_k = (1/h^α) Σ_{ι=0}^{k} (−1)^ι C(α, ι) x_{k−ι},

where Δ^α denotes the Grunwald–Letnikov fractional difference of order α, h is the sampling interval (taken as 1, with the limit h → 0 understood in the continuous definition), ι runs from 0 to k, and C(α, ι) is the generalized binomial coefficient, which for integer arguments reduces to α!/(ι!(α−ι)!).
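As a numerical illustration of the discrete Grunwald–Letnikov definition above, the following sketch computes the order-α fractional difference of a scalar sequence using the standard recursion for the signed binomial coefficients; the function names are illustrative, not from the patent.

```python
def gl_coefficients(alpha, k):
    """Signed generalized binomial coefficients (-1)^j * C(alpha, j)
    for j = 0..k, via the standard recursion:
    c_0 = 1,  c_j = c_{j-1} * (1 - (alpha + 1) / j)."""
    c = [1.0]
    for j in range(1, k + 1):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / j))
    return c

def gl_fractional_difference(x, alpha, h=1.0):
    """Order-alpha Grunwald-Letnikov difference of the scalar sequence
    x = [x_0, ..., x_k] at the last instant k, with sampling interval
    h (h = 1 in the patent's model)."""
    k = len(x) - 1
    c = gl_coefficients(alpha, k)
    return sum(c[j] * x[k - j] for j in range(k + 1)) / h ** alpha
```

For α = 1 this reduces to the first-order backward difference x_k − x_{k−1}, and for α = 0 it returns x_k unchanged.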
According to the Grunwald–Letnikov fractional derivative definition, the state-space form of the fractional order memristor neural network dynamic model is as follows:

wherein: Δ^{α_j} denotes the fractional difference operator of order α_j (j = 1, 2, …, n), with n the dimension; x_k ∈ R^n is the state vector of the fractional order memristor neural network at time k, and x_{k−ι+1}, x_{k−d} and x_{k+1} are the state vectors at times k−ι+1, k−d and k+1; z_k ∈ R^r is the controlled measurement output at time k; the initial sequence is given, and d is a discrete fixed network time delay. A(x_k) = diag_n{a_i(x_{i,k})} is the state-dependent self-feedback diagonal matrix at time k, where diag{·} denotes a diagonal matrix and a_i(x_{i,k}) is its i-th diagonal entry; A_d(x_k) = {a_{ij,d}(x_{i,k})}_{n×n} is the time-delay-related system matrix of known dimension at time k, with entries a_{ij,d}(x_{i,k}); B(x_k) = {b_{ij}(x_{i,k})}_{n×n} is the known connection weight matrix of the excitation function at time k, with entries b_{ij}(x_{i,k}); f(x_k) is the nonlinear excitation function at time k; C_{1k} and C_{2k} are the known noise distribution matrices of the first and second components at time k; H_k is the known measurement adjustment matrix and D_k the known measurement matrix at time k; v_{1k} and v_{2k} are zero-mean Gaussian white noise sequences at time k with covariances V_1 > 0 and V_2 > 0, respectively; and the summation runs over ι = 1 to k+1.
The state-dependent matrix parameters a_i(x_{i,k}), a_{ij,d}(x_{i,k}) and b_{ij}(x_{i,k}) satisfy the following switching conditions:

where a_i(x_{i,k}), a_{ij,d}(x_{i,k}) and b_{ij}(x_{i,k}) are the components of A(x_k), A_d(x_k) and B(x_k), respectively; ω_i > 0 is a known switching threshold; and each parameter switches between a known pair of stored values (upper and lower stored values for a_i, and corresponding pairs of known stored values for a_{ij,d} and b_{ij}) according to the threshold.
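The state-dependent switching just described can be sketched as follows; which stored value applies on which side of the threshold ω_i is an assumption here, since the text only states that each parameter switches between two known stored values at the threshold.

```python
def memristive_param(x_i, omega_i, val_inside, val_outside):
    """State-dependent memristive parameter: returns one stored value
    when |x_i| lies within the switching threshold omega_i and the
    other stored value otherwise. Which stored value applies on which
    side of the threshold is an illustrative assumption."""
    return val_inside if abs(x_i) <= omega_i else val_outside
```

Applied entrywise, this is how A(x_k), A_d(x_k) and B(x_k) become piecewise-constant functions of the state.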
Definition:
wherein the smaller and larger of the two stored values of each parameter are collected, min{·} denoting the minimum and max{·} the maximum of the two stored matrices; from these extremes the diagonal matrices A⁻, A⁺, A_d⁻, A_d⁺, B⁻ and B⁺ (the first through sixth defined diagonal matrices) are formed, where n is the dimension.
wherein the matrices of the defined left and right intervals (the first, second and third interval matrices) and their perturbations satisfy the norm-bounded uncertainty condition:

where ΔA_k, ΔA_{dk} and ΔB_k are the first, second and third matrices satisfying norm-bounded uncertainty, the associated weight matrices are all known real-valued matrices, and the remaining matrix is unknown and satisfies the usual contraction bound (its transpose times itself does not exceed the identity).
Step two, under the amplify-and-forward protocol, perform state estimation on the fractional order memristor neural network dynamic model established in step one. The specific steps are as follows:
Step 2.1: to complete the remote data transmission task smoothly, an amplify-and-forward repeater is placed in the wireless network channel to supplement the energy consumed by data transmission. Let p_{s,k} and n_{s,k} denote the random energies of the sensor and of the amplify-and-forward relay, respectively; the signal received by the relay, denoted ỹ_k, satisfies the following equation:
wherein the known channel attenuation matrix at time k is diagonal, diag{·} denoting a diagonal matrix with one component per channel m; y_k is the ideal measurement output at time k; ỹ_k is the actual measurement received at time k; the sensor–repeater channel noise at time k is a zero-mean white noise sequence with known second moment, E{·} denoting mathematical expectation; and the random energy p_{s,k} of the sensor at time k satisfies the following statistical characteristics:

where Pr{·} denotes probability, the probabilities of all energy levels sum to 1 and each lies in the interval [0, 1], the expected value of the random energy possessed by the sensor at time k is as indicated, and Φ denotes the number of all channels.
The output value of the amplify-and-forward repeater can be expressed as:

where χ_k > 0 is the amplification factor at time k; the known attenuation matrix of the repeater–estimator channel at time k is diagonal with one component per channel m; n_{s,k} is the random transmission energy at time k; ȳ_k is the actual measurement output received at time k; and the repeater–estimator channel noise at time k is a zero-mean white noise signal with known second moment, E{·} denoting mathematical expectation. Similarly, the random energy n_{s,k} has the following statistical characteristics:

where the probabilities of all energy levels sum to 1 and each lies in the interval [0, 1], the expected value of the transmission random energy at time k is as indicated, and Ψ denotes the number of all channels.
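A minimal scalar sketch of one amplify-and-forward channel follows, under assumptions: the sensor-to-relay link attenuates the ideal measurement and adds white noise, and the relay amplifies by χ_k and forwards over a second noisy attenuated link. The random transmission energies of the model are folded into the attenuation factors for simplicity, and all names are illustrative.

```python
import random

def af_relay_measurement(y, chi, lam_sr, lam_re, sigma_sr=0.1, sigma_re=0.1):
    """One scalar channel of an amplify-and-forward link (illustrative):
    the sensor-to-relay channel attenuates the ideal measurement y by
    lam_sr and adds zero-mean white noise with std sigma_sr; the relay
    amplifies the received signal by chi and forwards it over a second
    channel with attenuation lam_re and its own zero-mean white noise
    with std sigma_re."""
    received = lam_sr * y + random.gauss(0.0, sigma_sr)             # sensor -> relay
    return lam_re * (chi * received) + random.gauss(0.0, sigma_re)  # relay -> estimator
```

With both noise levels set to zero the link reduces to the deterministic product χ·λ_re·λ_sr·y, which is the gain the estimator must compensate for.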
The nonlinear excitation function f(·) satisfies the following sector-bounded condition:

where the two matrices involved are known real matrices of appropriate dimensions at time k.

Step 2.2: based on the available measurement information, construct the following time-varying state estimator:

where x̂_k, x̂_{k−d} and x̂_{k−ι+1} are the estimates of the neural network state at times k, k−d and k−ι+1, taking values in R^n with n the dimension; χ_k is the amplification factor at time k and d the fixed network time delay; ẑ_k ∈ R^r is the estimate of the controlled output at time k; the defined interval matrices of the first, second and third kind, the nonlinear excitation function f(·), the known measurement adjustment matrix H_k and the known measurement matrix D_k are as above; ȳ_k is the measurement output received by the estimator at time k; K_k is the estimator gain matrix at time k; the expected values of the sensor and relay random energies at time k enter through the estimator's compensation terms; diag{·} denotes a diagonal matrix and α_j (j = 1, 2, …, n) are the fractional orders.

Step 2.3: define the estimation error e_k = x_k − x̂_k and the controlled output estimation error z̃_k = z_k − ẑ_k; the estimation error system is then obtained:

where e_k, e_{k+1} and e_{k−d} are the estimation errors at times k, k+1 and k−d; z̃_k is the controlled output estimation error at time k; f(x_k) and f(x̂_k) are the excitation function evaluated at the true and the estimated state; χ_k is the amplification factor and K_k the estimator gain matrix at time k; the square roots of the expected sensor and relay energies at time k appear as indicated; ΔA_k, ΔA_{dk} and ΔB_k are the first, second and third norm-bounded uncertainty matrices, and the defined interval matrices are as above; A(x_k) = diag_n{a_i(x_{i,k})} is the self-feedback diagonal matrix, A_d(x_k) the time-delay-related system matrix and B(x_k) the connection weight matrix of the excitation function at time k; C_{1k} and C_{2k} are the known noise distribution matrices, H_k the known measurement adjustment matrix and D_k the known measurement matrix at time k; v_{1k} and v_{2k} are zero-mean Gaussian white noise sequences with covariances V_1 > 0 and V_2 > 0; the sensor–repeater and repeater–estimator channel noises are zero-mean white sequences with known second moments, E{·} denoting mathematical expectation; the known channel attenuation matrices at time k are diagonal, diag{·} denoting a diagonal matrix and m indexing the channels; and the summation runs over ι = 1 to k+1.
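The time-varying estimator constructed above follows the usual copy-of-the-dynamics-plus-innovation pattern. The exact fractional-history, delay and amplification-compensation terms are rendered as images in the original, so the following scalar sketch is only a generic Luenberger-type illustration with illustrative names.

```python
def estimator_step(x_hat, x_hat_delay, y_bar, y_bar_pred, K, A_hat, Ad_hat, B_hat, f):
    """One scalar step of a generic time-varying Luenberger-type
    estimator: propagate the nominal dynamics (self-feedback A_hat,
    delayed term Ad_hat, excitation weight B_hat acting on f) and
    correct with the gain-weighted innovation y_bar - y_bar_pred.
    All parameter names are illustrative, not the patent's exact terms."""
    innovation = y_bar - y_bar_pred
    return A_hat * x_hat + Ad_hat * x_hat_delay + B_hat * f(x_hat) + K * innovation
```

In the patent, the gain K corresponds to K_k and the predicted measurement would include the amplification factor χ_k and the expected channel energies.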
The main purpose of this step is to design the time-varying state estimator (2) under the amplify-and-forward protocol so that the estimation error system simultaneously meets the following two performance constraint requirements:
(1) For a given disturbance attenuation level γ > 0, given semi-positive definite weight matrices, and initial state e_0, the controlled output estimation error z̃_k satisfies the following H∞ performance constraint (stated in the standard finite-horizon form):

Σ_{k=0}^{N−1} E{‖z̃_k‖²} ≤ γ² Σ_{k=0}^{N−1} E{‖v_k‖²} + γ² e_0ᵀ W e_0,

where N is the finite number of nodes, E{·} denotes mathematical expectation, W stands for the given weight matrix acting on the initial error e_0 (the estimation error at time 0), γ > 0 is the given disturbance attenuation level, v_k is the augmented vector of the noises v_{1k} and v_{2k}, ‖·‖ denotes the norm and ‖·‖² its square, and e_kᵀ is the transpose of e_k.
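The finite-horizon H∞ constraint of requirement (1) can be checked empirically on scalar sequences as in the sketch below, assuming the standard form in which the weighted initial-error term e_0ᵀWe_0 is passed in as a single number; names are illustrative.

```python
def h_inf_satisfied(z_tilde, v, e0_weighted, gamma):
    """Empirical check of the finite-horizon H-infinity constraint
    sum_k ||z~_k||^2 <= gamma^2 * (sum_k ||v_k||^2 + e0' W e0)
    for scalar sequences z_tilde (output estimation errors) and v
    (augmented noises); e0_weighted stands for the weighted
    initial-error term e0' W e0, supplied as one number."""
    lhs = sum(z * z for z in z_tilde)
    rhs = gamma ** 2 * (sum(x * x for x in v) + e0_weighted)
    return lhs <= rhs
```

A smaller γ demands stronger disturbance attenuation, which is why γ enters the linear matrix inequalities of step three as a design parameter.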
(2) The estimation error covariance satisfies the following upper bound constraint:

E{e_k e_kᵀ} ≤ Π_k, 0 ≤ k < N,

where e_kᵀ is the transpose of e_k at time k and Π_k (0 ≤ k < N) is a series of predetermined acceptable estimation accuracy matrices at time k.
Step three, given the H∞ performance index γ, the two semi-positive definite weight matrices and the initial conditions, calculate the upper bound of the error covariance matrix of the fractional order memristor neural network and the H∞ performance constraint. The specific steps are as follows:
step three, prove H according to the following ∞ The problem is analyzed and the corresponding discriminant criterion easy to solve is given:
wherein:
where γ is a given positive scalar; the semi-positive definite matrices and the transposes of D_k, K_k, E_{t,k}, C_{t,k}, ΔA_k, H_k, ΔB_k, E_k, C_k and R_{3k} appear as indicated; Y_{11}, Y_{22}, …, Y_{99} are the diagonal blocks of Y (its (1,1) through (9,9) block matrices) and Y_{12} its (1,2) block; the expected sensor and relay energies at time k, the norm-bounded uncertainty matrices ΔA_k, ΔA_{dk} and ΔB_k, the defined interval matrices, the nonlinear excitation function, the noise distribution matrices C_{1k} and C_{2k}, the known measurement adjustment matrix H_k and measurement matrix D_k, the known diagonal channel attenuation matrices (with m indexing the channels) and the summation over ι = 1 to k+1 are as defined above; five related scaling coefficients (the first through fifth) appear; and 0 denotes a zero matrix block.
Step three, the boundedness of the error covariance matrix X_k is discussed, and the following sufficient condition is given:
S_{k+1} ≥ Ω(S_k), (4)
where:
e_k is the error matrix at the kth time; the state estimate at the kth time is as defined in step two; ρ ∈ (0, 1) is a known positive tuning constant; S_k is the upper bound of the error covariance matrix at the kth time; Θ_{1k}^T, C_{1k}^T, Φ_ι^T, C_{t,k}^T and E_{t,k}^T are the transposes of Θ_{1k}, C_{1k}, Φ_ι, C_{t,k} and E_{t,k}, respectively; ζ is the adjustment coefficient; Ω(S_k) is the upper-bound matrix solved at the kth time; S_{k-d} is the upper-bound matrix of the error covariance matrix at the (k-d)th time; tr(S_k) is the trace of the upper bound of the error covariance matrix at the kth time; tr(·) denotes the trace of a matrix; X_k = e_k e_k^T is the error bound at the kth time; I is the identity matrix; and the first and second real matrices of known appropriate dimensions correspond to the 1st and 2nd components at the kth time.
Combining these two results yields sufficient conditions under which the estimation error system satisfies both the given H∞ performance requirement and the boundedness of the error covariance.
Step four, using the stochastic analysis method, the estimator gain matrix K_k is obtained by solving a series of linear matrix inequalities, thereby realizing state estimation of the fractional order memristor neural network dynamic model under the amplify-and-forward protocol. It is then judged whether k+1 has reached the total duration N; if k+1 < N, step two is executed again, otherwise the procedure ends.
In this step, the values of the estimator gain matrix are calculated by solving the series of recursive linear matrix inequalities (5) to (7), which constitute sufficient conditions for the estimation error system to simultaneously satisfy the H∞ performance requirement and the boundedness of the error covariance:
S_{k+1} - Ω_{k+1} ≤ 0 (7)
the update matrix is:
wherein:
Ω_{22} = diag{-ε_{1,k}I, -ε_{2,k}I, -ε_{2,k}I, -ε_{3,k}I, -ε_{3,k}I},
Ω_{33} = diag{-ε_{4,k}I, -ε_{4,k}I, -ε_{5,k}I, -ε_{5,k}I},
where Ω_{11}, Ω_{12}, Ω_{13}, Ω_{22} and Ω_{33} are the block matrices of Ω in row 1 column 1, row 1 column 2, row 1 column 3, row 2 column 2 and row 3 column 3, respectively; L_{15}, L_{16}, L_{22}, L_{33}, L_{44}, L_{55} and L_{66}, together with the blocks in row 1 columns 1 through 4, are the corresponding block matrices of L; G_{12}, G_{14}, G_{22}, G_{24}, G_{33} and G_{39}, together with the remaining row and column blocks through row 10 column 10, are the corresponding block matrices of G; D_k^T, K_k^T, E_{t,k}^T, C_{t,k}^T, ΔA_k^T, H_k^T, ΔB_k^T, ΔA_{dk}^T, E_k^T, C_k^T and R_{3k}^T are the transposes of D_k, K_k, E_{t,k}, C_{t,k}, ΔA_k, H_k, ΔB_k, ΔA_{dk}, E_k, C_k and R_{3k}, respectively; the first, second and third matrices defining the sector bounds of the nonlinearity are as defined in step two; f(x_k) is the nonlinear excitation function at the kth time; C_{1k} and C_{2k} are the known noise distribution matrices of the system for the first and second components at the kth time; H_k is the adjustment matrix of the known measurement at the kth time; D_k is the metric matrix of the known measurement at the kth time; Σ_{ι=1}^{k+1} denotes summation over ι = 1 to k+1; C_{t,k} and E_{t,k} are the known channel attenuation matrices at the kth time; diag{·} denotes a diagonal matrix; m denotes the mth channel; ρ ∈ (0, 1) is the known positive tuning constant; S_k is the upper bound of the error covariance matrix at the kth time; Θ_{1k}^T, C_{1k}^T, Φ_ι^T, C_{t,k}^T and E_{t,k}^T are the transposes of Θ_{1k}, C_{1k}, Φ_ι, C_{t,k} and E_{t,k}; ζ is the adjustment coefficient; Ω(S_k) is the upper-bound matrix solved at the kth time; S_{k-d} is the upper-bound matrix of the error covariance matrix at the (k-d)th time; tr(S_k) is the trace of the upper bound of the error covariance matrix at the kth time; tr(·) denotes the trace of a matrix; I is the identity matrix; the first, second and third weight matrices at the kth time are given weighting matrices; R_{3k}^T is the transpose of R_{3k} at the kth time; the first and second real matrices of known appropriate dimensions correspond to the 1st and 2nd components at the kth time; the state estimate of the nonlinear excitation function at the kth time is as defined in step two; N_1, N_2, N_3, N_4 and N_5 are the first through fifth metric matrices of known appropriate dimensions at the kth time; M_1, M_2, M_3, M_4 and M_5 are the first through fifth metric matrices; the neuron state estimate at the kth time is as defined in step two; the semi-positive definite matrices at the kth and (k-d)th times are the corresponding upper-bound matrices; S_{k+1} is the first update matrix at the (k+1)th time; S_k is the upper-bound matrix of the estimation error, with tr(S_k) its trace at the kth time; κ is the adjusted weight coefficient; the real-valued weight matrices are all known; the unknown matrix satisfies the stated norm-bounded condition; γ is a given positive scalar; the first semi-positive definite matrix is given; Ω_{12}^T, Ω_{13}^T, G_{12}^T, G_{14}^T, G_{24}^T and G_{410}^T are the transposes of Ω_{12}, Ω_{13}, G_{12}, G_{14}, G_{24} and G_{410}; M_1^T through M_5^T and N_1^T through N_5^T are the transposes of M_1 through M_5 and N_1 through N_5; the first through fifth related scaling coefficients are as defined above; and 0 denotes a matrix block whose elements are all 0.
The reasoning underlying steps three and four of the invention is as follows:
First, the H∞ analysis problem is addressed and a corresponding, easily solvable discrimination criterion is given. Next, the upper-bound problem of the covariance matrix X_k is considered and a sufficient condition is given. Combining the two results yields sufficient conditions under which the estimation error system satisfies both the given H∞ performance requirement and the error covariance constraint; the estimator gain matrix K_k is then computed by solving a series of linear matrix inequalities.
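The overall recursion of steps two through four (model update, innovation-driven estimate, boundedness check) can be sketched in scalar form as follows. This is an illustrative toy only: `gain_stub` is a hypothetical Kalman-like stand-in for the LMI-based gain K_k of step four, and the scalar model, the tanh nonlinearity and all numerical values are assumptions rather than the parameters of the invention.

```python
import math

def gl_coeffs(alpha, kmax):
    # Grunwald-Letnikov coefficients psi_i = (-1)^i * binom(alpha, i),
    # generated by the standard recursion psi_0 = 1, psi_i = psi_{i-1}*(i-1-alpha)/i.
    psi = [1.0]
    for i in range(1, kmax + 1):
        psi.append(psi[-1] * (i - 1 - alpha) / i)
    return psi

def gain_stub(P, C, R):
    # Hypothetical stand-in for the LMI-derived gain K_k of step four:
    # a scalar Kalman-like gain, NOT the patented construction.
    return P * C / (C * P * C + R)

def run_estimation(N=50, alpha=0.9, a=-0.5, C=1.0, R=0.1):
    # Scalar fractional-order recursion
    #   x_{k+1} = a*tanh(x_k) - sum_{i=1}^{k+1} psi_i * x_{k-i+1},
    # estimated by a copy of the model driven by the innovation y_k - C*xhat_k.
    psi = gl_coeffs(alpha, N + 1)
    xs, xhats = [1.0], [0.0]
    P = 1.0
    for k in range(N):
        frac = sum(psi[i] * xs[k - i + 1] for i in range(1, k + 2))
        frac_hat = sum(psi[i] * xhats[k - i + 1] for i in range(1, k + 2))
        y = C * xs[k]                      # noise-free measurement for the sketch
        K = gain_stub(P, C, R)
        xs.append(a * math.tanh(xs[k]) - frac)
        xhats.append(a * math.tanh(xhats[k]) - frac_hat + K * (y - C * xhats[k]))
        P = (1.0 - K * C) * P + 0.01       # crude recursive covariance bound
    return xs, xhats, P
```

The fractional sum over the ψ_ι coefficients reproduces the Σ_{ι=1}^{k+1} term of the model, and the crude recursion for P plays the role of the error covariance upper bound checked in step three.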
Examples:
In this embodiment, a fractional order memristor neural network with H∞ performance constraints and variance constraints is taken as an example; such networks can be applied to associative memory, pattern recognition and combinatorial optimization. A speech recognition case is simulated using the method provided by the invention:
Under the amplify-and-forward protocol, the relevant system parameters of the fractional order memristor neural network state model, measurement output model and controlled output model with H∞ performance constraints and variance constraints are selected as follows:
the corresponding adjustment matrix is given according to the state of the voice of the person:
C 1k =[-1.2-0.35sin(2k)] T ,
the measurement adjustment matrix is:
the controlled output adjustment matrix is:
H k =[-0.01-0.01sin(2k)]
the state weight matrix is:
the weight matrix and the adjustment parameters of the nonlinear function are as follows:
case I: the probability distribution of the following transmission powers is given:
Prob{p_{t,k} = 1} = 0.1, Prob{p_{t,k} = 1.5} = 0.3, Prob{p_{t,k} = 2} = 0.6,
Prob{n_{t,k} = 1} = 0.2, Prob{n_{t,k} = 1.5} = 0.4, Prob{n_{t,k} = 2} = 0.4,
case II: the probability distribution of the following transmission powers is given:
Prob{p_{t,k} = 1} = 0.6, Prob{p_{t,k} = 1.5} = 0.3, Prob{p_{t,k} = 2} = 0.1,
Prob{n_{t,k} = 1} = 0.4, Prob{n_{t,k} = 1.5} = 0.4, Prob{n_{t,k} = 2} = 0.2.
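Before comparing the two cases, the expected transmission powers implied by these distributions can be computed directly; the probabilities below are exactly those of Cases I and II above.

```python
def expected_power(dist):
    # dist maps a power level to its probability; returns E[power].
    assert abs(sum(dist.values()) - 1.0) < 1e-9  # sanity check: valid distribution
    return sum(level * prob for level, prob in dist.items())

case1_p = {1: 0.1, 1.5: 0.3, 2: 0.6}   # Prob{p_{t,k} = .} in Case I
case1_n = {1: 0.2, 1.5: 0.4, 2: 0.4}   # Prob{n_{t,k} = .} in Case I
case2_p = {1: 0.6, 1.5: 0.3, 2: 0.1}   # Prob{p_{t,k} = .} in Case II
case2_n = {1: 0.4, 1.5: 0.4, 2: 0.2}   # Prob{n_{t,k} = .} in Case II

print(expected_power(case1_p), expected_power(case1_n))  # approximately 1.75 and 1.6
print(expected_power(case2_p), expected_power(case2_n))  # approximately 1.25 and 1.4
```

Case II shifts probability mass toward the lower power levels, which is consistent with the degraded estimation performance reported for figs. 3 to 5.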
the excitation function is taken as:
wherein x_k = [x_{1,k} x_{2,k}]^T is the state vector of the neuron, the amplification factor is χ_s = 1, x_{1,k} is the first component of x_k at the kth time, and x_{2,k} is the second component of x_k at the kth time.
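The concrete excitation function of this example appears in the original formula images; as an illustration of a typical sector-bounded choice (an assumption, not the patented function), a componentwise saturation nonlinearity could be used:

```python
def saturation(v, level=1.0):
    # Piecewise-linear saturation: a standard sector-bounded excitation,
    # lying in the sector [0, 1] between the zero function and the identity.
    return max(-level, min(level, v))

def excitation(x):
    # Apply the (assumed) excitation componentwise to x = [x1, x2].
    return [saturation(x[0]), saturation(x[1])]

print(excitation([0.3, -2.0]))  # -> [0.3, -1.0]
```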
Other simulation initial values were selected as follows:
disturbance attenuation level γ = 0.7; the first semi-positive definite weighting matrix as given; upper-bound matrices {Ω_k}_{1≤k≤N} = diag{0.2, 0.2} and the corresponding covariance as given; initial states as given; channel parameters C_{t,s} = 0.38 and E_{t,s} = 0.12; and the noise covariances of the sensor-to-relay channel and the relay-to-estimator channel, respectively, as given.
Solving the recursive linear matrix inequalities (5) to (7) yields, in part, the following numerical values. Case one (Case I):
Case two (Case II):
state estimator effect:
as can be seen from fig. 2, there is H under the protocol for amplify-and-forward ∞ The method for designing the state estimator can effectively estimate the target state.
As can be seen from fig. 3, 4, and 5, the estimation error effect becomes worse as the power decreases for each time.
Claims (9)
1. The fractional order memristor neural network estimation method under the limitation of variance is characterized by comprising the following steps:
step one, establishing a fractional order memristor neural network dynamic model under an amplification forwarding protocol;
step two, under an amplifying and forwarding protocol, performing state estimation on the fractional order memristor neural network dynamic model established in the step one;
step three, given the H∞ performance index γ, the first semi-positive definite matrix, the second semi-positive definite matrix and the initial conditions, calculating the upper bound of the error covariance matrix of the fractional order memristive neural network and the H∞ performance constraint;
step four, using the stochastic analysis method, solving a linear matrix inequality to obtain the estimator gain matrix K_k, thereby realizing state estimation of the fractional order memristor neural network dynamic model under the amplify-and-forward protocol; judging whether k+1 reaches the total duration N; if k+1 < N, executing step two, otherwise ending.
2. The method for estimating a fractional memristor neural network under variance constraint according to claim 1, wherein in the first step, according to the definition of the fractional derivative of Grunwald-Letnikov, the state space of the dynamic model of the fractional memristor neural network is in the form of:
wherein:
wherein: the differential operator denotes the Grunwald-Letnikov fractional difference of order α_j (j = 1, 2, …, n), n being the dimension; the state vectors of the fractional order memristive neural network at the kth, (k-ι+1)th, (k-d)th and (k+1)th times belong to the real domain of dimension n of the neural network dynamic model state; the controlled measurement output at the kth time belongs to the real domain of dimension r of the controlled output state; a given initial sequence is specified, and d is the discrete fixed network time lag; A(x_k) = diag_n{a_i(x_{i,k})} is the neural network self-feedback diagonal matrix at the kth time, where diag{·} denotes a diagonal matrix and a_i(x_{i,k}) is the ith component of A(x_k); A_d(x_k) = {a_{ij,d}(x_{i,k})}_{n×n} is the known time-lag-related system matrix at the kth time, with a_{ij,d}(x_{i,k}) its (i,j)th component; B(x_k) = {b_{ij}(x_{i,k})}_{n×n} is the known connection weight matrix of the excitation function at the kth time, with b_{ij}(x_{i,k}) its (i,j)th component; f(x_k) is the nonlinear excitation function at the kth time; C_{1k} and C_{2k} are the known noise distribution matrices of the system for the first and second components at the kth time; H_k is the adjustment matrix of the known measurement at the kth time; D_k is the metric matrix of the known measurement at the kth time; v_{1k} and v_{2k} are zero-mean Gaussian white noise sequences at the kth time with covariances V_1 > 0 and V_2 > 0, respectively; and Σ_{ι=1}^{k+1} denotes summation over ι = 1 to k+1.
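As a numerical illustration of the Grunwald-Letnikov definition invoked here (a self-contained sketch, not the patented model), the discrete GL fractional difference of a scalar sequence can be computed from the standard coefficient recursion; for order α = 1 it recovers the ordinary backward difference x_k − x_{k−1}:

```python
def gl_weights(alpha, n):
    # w_i = (-1)^i * binom(alpha, i) via w_0 = 1, w_i = w_{i-1} * (i - 1 - alpha) / i
    w = [1.0]
    for i in range(1, n + 1):
        w.append(w[-1] * (i - 1 - alpha) / i)
    return w

def gl_difference(seq, alpha):
    # GL fractional difference of order alpha at the last sample:
    # Delta^alpha x_k = sum_{i=0}^{k} w_i * x_{k-i}
    k = len(seq) - 1
    w = gl_weights(alpha, k)
    return sum(w[i] * seq[k - i] for i in range(k + 1))

print(gl_difference([1.0, 2.0, 4.0, 7.0], 1.0))  # backward difference: 7 - 4 = 3.0
```

For 0 < α < 1 the weights decay slowly, so every past state contributes to the update, which is exactly why the model above carries the Σ_{ι=1}^{k+1} memory term.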
3. The fractional order memristive neural network estimation method under variance constraint of claim 2, characterized in that a_i(x_{i,k}), a_{ij,d}(x_{i,k}) and b_{ij}(x_{i,k}) satisfy the following conditions:
wherein a_i(x_{i,k}), a_{ij,d}(x_{i,k}) and b_{ij}(x_{i,k}) are the components of A(x_k), A_d(x_k) and B(x_k), respectively; ω_i > 0 is a known switching threshold; and, for the ith and (i,j)th entries, the known upper and lower storage variable values (and, correspondingly, the known left/right and inner/outer storage variable values) are taken according to the switching condition.
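A minimal sketch of the state-dependent switching in this claim follows. The side assignment of the two stored levels relative to the threshold, and all numerical values, are illustrative assumptions; the claim's concrete switching rule is given by the conditions above.

```python
def memristive_weight(x_i, omega_i, inner_value, outer_value):
    # State-dependent memristive parameter: one stored value when the state
    # component is inside the switching threshold, the other outside.
    # Which stored value goes with which side is an assumption here.
    return inner_value if abs(x_i) <= omega_i else outer_value

print(memristive_weight(0.5, 1.0, -0.8, 0.3))  # inside threshold -> -0.8
print(memristive_weight(1.5, 1.0, -0.8, 0.3))  # outside threshold -> 0.3
```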
4. The method for estimating fractional order memristor neural network under variance limitation of claim 1, wherein the specific steps of the step two are as follows:
step 2.1, let p_{s,k} and n_{s,k} denote the random energies of the sensor and of the amplify-and-forward relay, respectively; the signal received by the amplify-and-forward relay satisfies the following equation:
where C_{t,k} is the known channel attenuation matrix at the kth time, diag{·} denotes a diagonal matrix, and the mth diagonal entry corresponds to the mth channel; y_k is the ideal measurement output at the kth time, and the actual measurement output received at the kth time is as expressed above; θ_{s1,k} is the white noise sequence of the sensor-repeater channel at the kth time with zero mean and known variance, (θ_{s1,k})^T is the transpose of θ_{s1,k}, and E{·} denotes mathematical expectation; p_{s,k} denotes the random energy possessed by the sensor at the kth time;
the output value of the amplify-and-forward repeater is expressed as:
where χ_k > 0 denotes the amplification factor at the kth time; E_{t,k} is the known channel attenuation matrix at the kth time, its mth diagonal entry corresponding to the mth channel; n_{s,k} is the random transmission energy variable at the kth time; the actual measurement output at the kth time is as defined above; θ_{s2,k} is the white noise signal of the repeater-estimator channel at the kth time with zero mean and known variance, (θ_{s2,k})^T is the transpose of θ_{s2,k}, and E{·} denotes mathematical expectation;
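The two-hop signal path of the relay equations above can be sketched end to end in scalar, single-channel form. The placement of the square roots of the random energies, the zero noise terms, and the specific function names are assumptions for illustration; the channel values C_t = 0.38 and E_t = 0.12 and the unit amplification factor are taken from the embodiment, and the energies are the Case I expectations.

```python
def sensor_to_relay(y, p, c_t, theta1=0.0):
    # Relay input: energy-weighted measurement through the sensor-to-relay
    # channel (attenuation c_t) plus channel noise theta1 (assumed form).
    return c_t * (p ** 0.5) * y + theta1

def relay_to_estimator(r, chi, n, e_t, theta2=0.0):
    # Relay output: amplified (factor chi) and energy-weighted signal through
    # the relay-to-estimator channel (attenuation e_t) plus noise theta2.
    return e_t * chi * (n ** 0.5) * r + theta2

y = 1.0                                         # ideal measurement
r = sensor_to_relay(y, p=1.75, c_t=0.38)        # Case I expected sensor energy
z = relay_to_estimator(r, chi=1.0, n=1.6, e_t=0.12)
```

With zero noise, the signal reaching the estimator is a deterministically attenuated copy of the measurement, which is what the gain K_k must compensate for.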
step 2.2, based on the available measurement information, the following time-varying state estimator is constructed:
where the estimates of the neural network state at the kth, (k-d)th and (k-ι+1)th times belong to the real domain of dimension n of the neural network dynamic model state; χ_k denotes the amplification factor at the kth time; d is the fixed network time lag; the state estimate of the controlled output at the kth time belongs to the real domain of dimension r of the controlled output state; the first, second and third matrices defining the sector bounds of the nonlinearity are as in claim 2; the nonlinear excitation function is evaluated at the estimated state at the kth time; H_k is the adjustment matrix of the known measurement at the kth time; D_k is the metric matrix of the known measurement at the kth time; the measured output received by the estimator at the kth time is as defined above; K_k is the estimator gain matrix at the kth time; the expectations of the random energies possessed by the sensor and provided by the relay at the kth time enter through their sums; the diagonal matrix of the binomial terms is formed from the fractional orders α_j (j = 1, 2, …, n), n being the dimension, with diag{·} denoting a diagonal matrix;
step 2.3, defining the estimation error and the controlled output estimation error, the following estimation error system is obtained:
where the excitation function and its estimate at the kth time are as defined; the estimates of the neural network state at the kth, (k-d)th and (k-ι+1)th times are as in step 2.2, belonging to the real domain of dimension n; χ_k denotes the amplification factor at the kth time; K_k is the estimator gain matrix at the kth time; the expectations of the random energy possessed by the sensor and of the random energy provided by the relay at the kth time are as in claims 5 and 6; ΔA_k, ΔA_{dk} and ΔB_k are the first, second and third matrices satisfying norm-bounded uncertainty; the first, second and third matrices defining the sector bounds of the nonlinearity are as in claim 2; e_k, e_{k+1} and e_{k-d} are the estimation errors at the kth, (k+1)th and (k-d)th times, and the controlled output estimation error at the kth time is as defined; A(x_k) = diag_n{a_i(x_{i,k})} is the neural network self-feedback diagonal matrix at the kth time, with a_i(x_{i,k}) its ith component and n the dimension; A_d(x_k) is the known time-lag-related system matrix at the kth time; B(x_k) is the known connection weight matrix of the excitation function at the kth time; f(x_k) is the nonlinear excitation function at the kth time; C_{1k} and C_{2k} are the known noise distribution matrices of the system for the first and second components at the kth time; H_k is the adjustment matrix of the known measurement at the kth time; D_k is the metric matrix of the known measurement at the kth time; v_{1k} and v_{2k} are zero-mean Gaussian white noise sequences at the kth time with covariances V_1 > 0 and V_2 > 0; Σ_{ι=1}^{k+1} denotes summation over ι = 1 to k+1; θ_{s1,k} and θ_{s2,k} are the white noise sequences of the sensor-repeater and repeater-estimator channels at the kth time, with zero means and known variances, (θ_{s1,k})^T and (θ_{s2,k})^T are their transposes, and E{·} denotes mathematical expectation; C_{t,k} and E_{t,k} are the known channel attenuation matrices at the kth time, diag{·} is a diagonal matrix, and m denotes the mth channel.
5. The method for estimating a fractional order memristive neural network under variance constraint of claim 4, wherein p_{s,k} satisfies the following statistical properties:
6. The fractional order memristor neural network estimation method under variance constraint of claim 4, characterized in that the random energy n_{s,k} has the following statistical properties:
7. The fractional order memristive neural network estimation method under variance constraint of claim 4, characterized in that the estimation error system satisfies the following two performance constraint requirements simultaneously:
(1) For a disturbance attenuation level γ > 0 and given first and second semi-positive definite matrices, and for the initial state e_0, the controlled output estimation error satisfies the following H∞ performance constraint:
wherein N is the finite time horizon; E{·} denotes mathematical expectation; the first weight matrices are given; e_0 is the estimation error at time 0; γ > 0 is the given disturbance attenuation level; the augmented vector collects the noises v_{1k} and v_{2k}; and ‖·‖ denotes the norm, with ‖·‖² its square;
(2) The estimation error covariance satisfies the following upper-bound constraint:
8. The method for estimating fractional order memristor neural network under variance limitation of claim 1, wherein the specific steps of the third step are as follows:
step 3.1, the H∞ analysis problem is addressed and the corresponding, easily solvable discrimination criterion is given as follows:
wherein:
wherein γ is a given positive scalar; the first and second semi-positive definite matrices are given; D_k^T, K_k^T, E_{t,k}^T, C_{t,k}^T, ΔA_k^T, H_k^T, ΔB_k^T, ΔA_{dk}^T, E_k^T, C_k^T and R_{3k}^T denote the transposes of D_k, K_k, E_{t,k}, C_{t,k}, ΔA_k, H_k, ΔB_k, ΔA_{dk}, E_k, C_k and R_{3k}, respectively; Y_{11} is the block matrix in row 1, column 1 of Y, Y_{12} is the block matrix in row 1, column 2 of Y, and Y_{22}, Y_{33}, Y_{44}, Y_{55}, Y_{66}, Y_{77}, Y_{88} and Y_{99} are the diagonal block matrices in rows and columns 2 through 9 of Y; the expectations of the random energy possessed by the sensor and of the random energy provided by the relay at the kth time are as in claims 5 and 6; ΔA_k, ΔA_{dk} and ΔB_k are the first, second and third matrices satisfying norm-bounded uncertainty; the first, second and third matrices defining the sector bounds of the nonlinearity are as in claim 2; f(x_k) is the nonlinear excitation function at the kth time; C_{1k} and C_{2k} are the known noise distribution matrices of the system for the first and second components at the kth time; H_k is the adjustment matrix of the known measurement at the kth time; D_k is the metric matrix of the known measurement at the kth time; Σ_{ι=1}^{k+1} denotes summation over ι = 1 to k+1; C_{t,k} and E_{t,k} are the known channel attenuation matrices at the kth time; diag{·} denotes a diagonal matrix; m denotes the mth channel; the first through fifth related scaling coefficients are as defined; and 0 denotes a matrix block whose elements are all 0;
S_{k+1} ≥ Ω(S_k), (4)
where:
e_k is the error matrix at the kth time; the state estimate at the kth time is as defined in claim 4; ρ ∈ (0, 1) is a known positive tuning constant; S_k is the upper bound of the error covariance matrix at the kth time; Θ_{1k}^T, C_{1k}^T, Φ_ι^T, C_{t,k}^T and E_{t,k}^T are the transposes of Θ_{1k}, C_{1k}, Φ_ι, C_{t,k} and E_{t,k}, respectively; ζ is the adjustment coefficient; Ω(S_k) is the upper-bound matrix solved at the kth time; S_{k-d} is the upper-bound matrix of the error covariance matrix at the (k-d)th time; tr(S_k) is the trace of the upper bound of the error covariance matrix at the kth time; tr(·) denotes the trace of a matrix; the error bound at the kth time is formed from e_k; I is the identity matrix; and the first and second real matrices of known appropriate dimensions correspond to the 1st and 2nd components at the kth time.
9. The method for estimating a fractional order memristor neural network under variance constraint of claim 1, wherein in step four the values of the estimator gain matrix are calculated by solving the series of recursive linear matrix inequalities (5) to (7), which constitute sufficient conditions for the estimation error system to simultaneously satisfy the H∞ performance requirement and the boundedness of the error covariance:
S_{k+1} - Ω_{k+1} ≤ 0 (7)
the update matrix is:
wherein:
Ω_{22} = diag{-ε_{1,k}I, -ε_{2,k}I, -ε_{2,k}I, -ε_{3,k}I, -ε_{3,k}I},
Ω_{33} = diag{-ε_{4,k}I, -ε_{4,k}I, -ε_{5,k}I, -ε_{5,k}I},
where Ω_{11}, Ω_{12}, Ω_{13}, Ω_{22} and Ω_{33} are the block matrices of Ω in row 1 column 1, row 1 column 2, row 1 column 3, row 2 column 2 and row 3 column 3, respectively; L_{15}, L_{16}, L_{22}, L_{33}, L_{44}, L_{55} and L_{66}, together with the blocks in row 1 columns 1 through 4, are the corresponding block matrices of L; G_{12}, G_{14}, G_{22}, G_{24}, G_{33} and G_{39}, together with the remaining row and column blocks through row 10 column 10, are the corresponding block matrices of G; D_k^T, K_k^T, E_{t,k}^T, C_{t,k}^T, ΔA_k^T, H_k^T, ΔB_k^T, ΔA_{dk}^T, E_k^T, C_k^T and R_{3k}^T are the transposes of D_k, K_k, E_{t,k}, C_{t,k}, ΔA_k, H_k, ΔB_k, ΔA_{dk}, E_k, C_k and R_{3k}, respectively; the first, second and third matrices defining the sector bounds of the nonlinearity are as in claim 2; f(x_k) is the nonlinear excitation function at the kth time; C_{1k} and C_{2k} are the known noise distribution matrices of the system for the first and second components at the kth time; H_k is the adjustment matrix of the known measurement at the kth time; D_k is the metric matrix of the known measurement at the kth time; Σ_{ι=1}^{k+1} denotes summation over ι = 1 to k+1; C_{t,k} and E_{t,k} are the known channel attenuation matrices at the kth time; diag{·} denotes a diagonal matrix; m denotes the mth channel; ρ ∈ (0, 1) is the known positive tuning constant; S_k is the upper bound of the error covariance matrix at the kth time; Θ_{1k}^T, C_{1k}^T, Φ_ι^T, C_{t,k}^T and E_{t,k}^T are the transposes of Θ_{1k}, C_{1k}, Φ_ι, C_{t,k} and E_{t,k}; ζ is the adjustment coefficient; Ω(S_k) is the upper-bound matrix solved at the kth time; S_{k-d} is the upper-bound matrix of the error covariance matrix at the (k-d)th time; tr(S_k) is the trace of the upper bound of the error covariance matrix at the kth time; tr(·) denotes the trace of a matrix; I is the identity matrix; the first, second and third weight matrices at the kth time are given weighting matrices; R_{3k}^T is the transpose of R_{3k} at the kth time; the first and second real matrices of known appropriate dimensions correspond to the 1st and 2nd components at the kth time; the state estimate of the nonlinear excitation function at the kth time is as in claim 4; N_1, N_2, N_3, N_4 and N_5 are the first through fifth metric matrices of known appropriate dimensions at the kth time; M_1, M_2, M_3, M_4 and M_5 are the first through fifth metric matrices; the neuron state estimate at the kth time is as in claim 4; the semi-positive definite matrices at the kth and (k-d)th times are the corresponding upper-bound matrices; S_{k+1} is the first update matrix at the (k+1)th time; S_k is the upper-bound matrix of the estimation error, with tr(S_k) its trace at the kth time; κ is the adjusted weight coefficient; the real-valued weight matrices are all known; the unknown matrix satisfies the stated norm-bounded condition; γ is a given positive scalar; the first semi-positive definite matrix is given; Ω_{12}^T, Ω_{13}^T, G_{12}^T, G_{14}^T, G_{24}^T and G_{410}^T are the transposes of Ω_{12}, Ω_{13}, G_{12}, G_{14}, G_{24} and G_{410}; M_1^T through M_5^T and N_1^T through N_5^T are the transposes of M_1 through M_5 and N_1 through N_5; the first through fifth related scaling coefficients are as defined; and 0 denotes a matrix block whose elements are all 0.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211559637.5A CN116227324B (en) | 2022-12-06 | 2022-12-06 | Fractional order memristor neural network estimation method under variance limitation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116227324A true CN116227324A (en) | 2023-06-06 |
CN116227324B CN116227324B (en) | 2023-09-19 |
Family
ID=86584853
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211559637.5A Active CN116227324B (en) | 2022-12-06 | 2022-12-06 | Fractional order memristor neural network estimation method under variance limitation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116227324B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107436411A (en) * | 2017-07-28 | 2017-12-05 | 南京航空航天大学 | Battery SOH On-line Estimation methods based on fractional order neural network and dual-volume storage Kalman |
CN109088749A (en) * | 2018-07-23 | 2018-12-25 | 哈尔滨理工大学 | The method for estimating state of complex network under a kind of random communication agreement |
CN111025914A (en) * | 2019-12-26 | 2020-04-17 | 东北石油大学 | Neural network system remote state estimation method and device based on communication limitation |
US11449754B1 (en) * | 2021-09-12 | 2022-09-20 | Zhejiang University | Neural network training method for memristor memory for memristor errors |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117077748A (en) * | 2023-06-15 | 2023-11-17 | 盐城工学院 | Coupling synchronous control method and system for discrete memristor neural network |
CN117077748B (en) * | 2023-06-15 | 2024-03-22 | 盐城工学院 | Coupling synchronous control method and system for discrete memristor neural network |
CN117949897A (en) * | 2024-01-09 | 2024-04-30 | 哈尔滨理工大学 | Multifunctional radar working mode identification method based on time sequence segmentation and clustering |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN116227324B (en) | Fractional order memristor neural network estimation method under variance limitation | |
CN109088749B (en) | State estimation method of complex network under random communication protocol | |
CN112115419B (en) | System state estimation method and system state estimation device | |
CN107102969A (en) | The Forecasting Methodology and system of a kind of time series data | |
Liu et al. | State estimation for neural networks with Markov-based nonuniform sampling: The partly unknown transition probability case | |
CN112116138A (en) | Power system prediction state estimation method and system based on data driving | |
CN109995031B (en) | Probability power flow deep learning calculation method based on physical model | |
CN110443724B (en) | Electric power system rapid state estimation method based on deep learning | |
CN111025914B (en) | Neural network system remote state estimation method and device based on communication limitation | |
Gospodinov et al. | Minimum distance estimation of possibly noninvertible moving average models | |
CN113240105B (en) | Power grid steady state discrimination method based on graph neural network pooling | |
CN105355198A (en) | Multiple self-adaption based model compensation type speech recognition method | |
CN107276561A (en) | Based on the Hammerstein system identifying methods for quantifying core least mean-square error | |
CN113435595A (en) | Two-stage optimization method for extreme learning machine network parameters based on natural evolution strategy | |
Lin | Wavelet neural networks with a hybrid learning approach | |
CN105808962A (en) | Assessment method considering voltage probabilities of multiple electric power systems with wind power output randomness | |
CN109217844B (en) | Hyper-parameter optimization method based on pre-training random Fourier feature kernel LMS | |
CN115935787B (en) | Memristor neural network state estimation method under coding and decoding mechanism | |
CN116304940A (en) | Analog circuit fault diagnosis method based on long-short-term memory neural network | |
Horváth et al. | Sample autocovariances of long-memory time series | |
CN109474258B (en) | Nuclear parameter optimization method of random Fourier feature kernel LMS (least mean square) based on nuclear polarization strategy | |
CN111416595B (en) | Big data filtering method based on multi-core fusion | |
Mustapha et al. | Data selection and fuzzy-rules generation for short-term load forecasting using ANFIS | |
CN113447818B (en) | Identification method and system of battery equivalent circuit model | |
Bermeo et al. | Artificial Neural Network and Monte Carlo Simulation in a hybrid method for time series forecasting with generation of L-scenarios |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||