CN116227324A - Fractional order memristor neural network estimation method under variance limitation - Google Patents

Fractional order memristor neural network estimation method under variance limitation Download PDF

Info

Publication number
CN116227324A
CN116227324A CN202211559637.5A CN202211559637A CN116227324A CN 116227324 A CN116227324 A CN 116227324A CN 202211559637 A CN202211559637 A CN 202211559637A CN 116227324 A CN116227324 A CN 116227324A
Authority
CN
China
Prior art keywords
matrix
time
kth
row
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211559637.5A
Other languages
Chinese (zh)
Other versions
CN116227324B (en
Inventor
胡军
高岩
贾朝清
于浍
范淑婷
杨硕
陈宇
罗若楠
刘浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin University of Science and Technology filed Critical Harbin University of Science and Technology
Priority to CN202211559637.5A priority Critical patent/CN116227324B/en
Publication of CN116227324A publication Critical patent/CN116227324A/en
Application granted granted Critical
Publication of CN116227324B publication Critical patent/CN116227324B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/27Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00Details relating to CAD techniques
    • G06F2111/04Constraint-based CAD
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00Details relating to CAD techniques
    • G06F2111/08Probabilistic or stochastic CAD

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Computational Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Biomedical Technology (AREA)
  • Pure & Applied Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Algebra (AREA)
  • Medical Informatics (AREA)
  • Computer Hardware Design (AREA)
  • Neurology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Complex Calculations (AREA)
  • Measurement Of Resistance Or Impedance (AREA)

Abstract

The invention discloses a fractional order memristor neural network estimation method under variance limitation, which comprises the following steps: step one, establishing a fractional order memristor neural network dynamic model; step two, carrying out state estimation on the fractional order memristor neural network dynamic model under an amplification forwarding protocol; step three, calculating the upper bound and H of an error covariance matrix of the fractional order memristor neural network Performance constraints; step four, solving an estimator gain matrix K by solving a linear matrix inequality by utilizing a random analysis method k Is realized under the amplifying and forwarding protocolAnd (3) estimating the state of the fractional order memristor neural network dynamic model, judging whether k+1 reaches the total duration N, if k+1 is less than N, executing the second step, and otherwise, ending. The invention solves the problem that the prior state estimation method can not process H under the amplifying and forwarding protocol at the same time The problem of low estimation performance accuracy caused by the performance constraint and the state estimation of the variance-limited fractional order memristor neural network is solved, so that the accuracy of the estimation performance is improved.

Description

Fractional order memristor neural network estimation method under variance limitation
Technical Field
The invention relates to a state estimation method of a neural network, in particular to a method for estimating the state of a neural network with H under an amplifying and forwarding protocol A state estimation method of fractional order memristor neural network with limited performance constraint and variance.
Background
The neural network is an information processing system which is simulated according to the structure and the function of nerve cells in the brain of a human body, and has the advantages of stronger association capability, self-adaption, fault tolerance capability and the like. In many real world networks, such networks can efficiently address practical system modeling and analysis aspects such as pattern recognition, signal processing, and image recognition.
In the last decades, the problem of state estimation of recurrent neural networks has become an attractive topic, and has been successfully applied to a wide range of fields such as associative memory, pattern recognition and combinatorial optimization. However, in practical applications, the information of neurons is often not fully measurable, so efficient estimation methods are needed to estimate them. Up to now many different types of neural network state estimation problems have been studied. It is worth noting that the current results are only applicable in steady cases, which may lead to limitations in application.
The existing state estimation method can not simultaneously process H under the condition of variance limitation The performance constraint and the state estimation problem of the fractional order memristor neural network of the amplification forwarding protocol lead to low accuracy of estimation performance.
Disclosure of Invention
The invention provides a fractional order memristor neural network estimation method under variance limitation aiming at a time-varying system. The method solves the problem that the prior state estimation method can not process H under the amplifying and forwarding protocol at the same time The problem of low estimation accuracy is caused by the state estimation problem of the fractional order memristive neural network of the performance constraint, and the problem of low estimation performance accuracy is caused under the condition that information cannot receive other moment information under the amplification forwarding protocol, so that the method can be used in the field of state estimation of the memristive neural network.
The invention aims at realizing the following technical scheme:
a fractional order memristor neural network estimation method under variance limitation comprises the following steps:
step one, establishing a fractional order memristor neural network dynamic model under an amplification forwarding protocol;
step two, under an amplifying and forwarding protocol, performing state estimation on the fractional order memristor neural network dynamic model established in the step one;
Step three, giving H Performance index gamma, semi-positive definite matrix number one
Figure BDA0003984088630000021
Semi-positive definite matrix number two->
Figure BDA0003984088630000022
Initial conditions
Figure BDA0003984088630000023
Calculating upper bound and H of error covariance matrix of fractional order memristive neural network Performance constraints;
step four, solving the estimation by solving the inequality of the linear matrix by utilizing a random analysis methodGain matrix K of counter k And (3) realizing state estimation of a fractional order memristor neural network dynamic model under an amplification forwarding protocol, judging whether k+1 reaches the total duration N, if k+1 is less than N, executing the second step, and otherwise ending.
In the invention, the neural network can be a network formed by vehicle suspension, a network formed by particle springs, a network formed by spacecraft or a network formed by radar, and has important application in the fields of biology, mathematics, computers, associative memory, pattern recognition, combination optimization, image processing and other multidisciplinary fields.
Compared with the prior art, the invention has the following advantages:
1. the invention also considers that the H is provided under the amplifying and forwarding protocol The method comprehensively considers the effective information of the estimated error covariance matrix by utilizing an inequality processing technology and a random analysis method, and compared with the prior neural network state estimation method, the fractional order memristor neural network state estimation method simultaneously considers that the method has H under an amplifying and forwarding protocol The state estimation problem of the fractional order memristor neural network with limited performance constraint and variance is solved, and the error system simultaneously meets the upper bound of the estimated error covariance and the given H The fractional order memristor neural network state estimation method with the performance requirement achieves the purposes of suppressing disturbance and improving estimation precision, and the current result is only suitable for the situation of steady state, which may cause limitation in application.
2. The invention solves the problem that the prior state estimation method can not process H under the amplifying and forwarding protocol at the same time The problem of low estimation performance accuracy caused by the performance constraint and the state estimation of the variance-limited fractional order memristor neural network is solved, so that the accuracy of the estimation performance is improved. From the simulation graph, the smaller the power is, the state estimation performance of the fractional order memristor neural network is gradually reduced, and the estimation error is relatively larger. In addition, the feasibility and the effectiveness of the state estimation method provided by the invention are verified.
Drawings
FIG. 1 is a flow chart of a fractional order memristive neural network state estimation method under an amplification forwarding protocol of the present invention;
FIG. 2 is a fractional order memristive neural network actual state trace z k State estimation trajectory in two different situations
Figure BDA0003984088630000031
Is z k A state variable of the neural network at the kth moment; wherein->
Figure BDA0003984088630000032
Is a system status track,/->
Figure BDA0003984088630000033
Is the state estimation trace in the case, +.>
Figure BDA0003984088630000034
The state estimation track in the second case;
FIG. 3 is an error contrast plot of a neural network control output estimation error trajectory plot in two different scenarios; wherein the method comprises the steps of
Figure BDA0003984088630000035
The control output in the case of yes estimates the error trajectory, < >>
Figure BDA0003984088630000036
The control output under the second condition estimates the error track;
FIG. 4 is a graph of the actual state error covariance of the neural network and the trace of the first component of the error covariance upper bound; wherein the method comprises the steps of
Figure BDA0003984088630000037
Is a variance-constrained trajectory, ++>
Figure BDA0003984088630000038
Is the locus of the actual error covariance;
FIG. 5 is a graph of the actual state error covariance of the neural network and the trace of the second component of the error covariance upper bound; wherein the method comprises the steps of
Figure BDA0003984088630000039
Is a variance-constrained trajectory, ++>
Figure BDA00039840886300000310
Is the locus of the actual error covariance.
Detailed Description
The following description of the present invention is provided with reference to the accompanying drawings, but is not limited to the following description, and any modifications or equivalent substitutions of the present invention should be included in the scope of the present invention without departing from the spirit and scope of the present invention.
The invention provides a fractional order memristor neural network estimation method under variance limitation, which utilizes a random analysis method and an inequality processing technology, firstly, respectively considers an estimation error system to meet H Performance constraint conditions and sufficient conditions with upper bound on error covariance; then, the estimated error system is obtained simultaneously to satisfy H Performance constraint conditions and discrimination conditions with upper bound error covariance; finally, the values of the gain matrix of the estimator are obtained by solving a series of inequalities of the linear matrix, thereby realizing H under the amplifying and forwarding protocol Performance estimation is not affected under the condition that performance constraint and variance limitation occur simultaneously, so that estimation accuracy is improved. As shown in fig. 1, the method specifically comprises the following steps:
step one, establishing a fractional order memristor neural network dynamic model under an amplifying and forwarding protocol. The method comprises the following specific steps:
first, the Grunwald-Letnikov fractional derivative definition is presented, which is a form suitable for numerical implementation and application. The discrete form of this definition is expressed as:
Figure BDA0003984088630000041
Figure BDA0003984088630000042
in the formula delta α The Grunwald-Letnikov fractional derivative definition, representing the alpha order, h is the corresponding sampling interval, assuming a sampling interval of 1, k is the sampling instant,
Figure BDA0003984088630000045
All limit values representing h.fwdarw.0, i-! All layers of i, +.>
Figure BDA0003984088630000043
Representing all the sum values of i=0 to k.
According to the definition of the Grunwald-Letnikov fractional derivative, the state space form of the fractional memristor neural network dynamic model is as follows:
Figure BDA0003984088630000044
wherein:
Figure BDA0003984088630000051
Figure BDA0003984088630000052
Figure BDA0003984088630000053
here the number of the elements is the number,
Figure BDA0003984088630000054
representing differential operator +_>
Figure BDA0003984088630000055
For the fractional order (j=1, 2,…, n), n being the dimension, +.>
Figure BDA0003984088630000056
Is the state vector of the fractional order memristive neural network at the kth moment, +.>
Figure BDA0003984088630000057
Is the state vector of the fractional order memristive neural network at time k-iota+1, +.>
Figure BDA0003984088630000058
Is the state vector of the fractional order memristive neural network at time k-d, +.>
Figure BDA0003984088630000059
Is the state vector of the fractional order memristive neural network at time k+1, +.>
Figure BDA00039840886300000510
The neural network dynamic model is a real number domain of the state of the neural network dynamic model, and the dimension of the neural network dynamic model is n;
Figure BDA00039840886300000511
For the controlled measurement output at time k, < > in->
Figure BDA00039840886300000512
The real number domain of the controlled output state of the neural network dynamic model is provided, and the dimension of the real number domain is r;
Figure BDA00039840886300000513
Is a given initial sequence, d is a discrete fixed network time lag; a (x) k )=diag n {a i (x i,k ) The symbol "is the neural network self-feedback diagonal matrix at the kth time, n is the dimension, diag {.cndot } represents the diagonal matrix, a i (x i,k ) Is A (x) k ) N is the dimension; a is that d (x k )={a ij,d (x i,k )} n*n A is a system matrix with known dimension at the kth moment and related to time lag ij,d (x i,k ) To at the kth time A d (x k ) Is the i-th component form of (a); b (x) k )={b ij (x i,k )} n*n Weight matrix for a known connected excitation function at the kth moment, b ij (x i,k ) For at the kth time B (x k ) Is the i-th component form of (a); f (x) k ) Is a nonlinear excitation function at the kth time; c (C) 1k For knowing the noise distribution matrix of the system for the first component at time k, C 2k For knowing the noise distribution matrix of the system for the second component at time k, H k An adjustment matrix that is a known measurement at a kth time; d (D) k A metric matrix that is a known measurement at a kth time; v 1k Is zero at the k-th moment and the covariance is V 1 Gaussian white noise sequence > 0, v 2k Is zero at the k-th moment and the covariance is V 2 Gaussian white noise sequence > 0,>
Figure BDA00039840886300000514
indicated are values for the sum of iota=1 to k+1.
State-dependent matrix parameter a i (x i,k )、a ij,d (x i,k ) And b ij (x i,k ) The method meets the following conditions:
Figure BDA00039840886300000515
wherein a is i (x i,k )、a ij,d (x i,k ) And b ij (x i,k ) Respectively A (x) k ),A d (x k ) And B (x) k ) Is the ith component, omega i The value > 0 is a known handover threshold value,
Figure BDA0003984088630000061
for the i-th known upper storage variable matrix, is->
Figure BDA0003984088630000062
For the i-th known lower storage variable matrix, is +.>
Figure BDA0003984088630000063
For ij, d known left storage variable matrix,
Figure BDA0003984088630000064
For ij, d known right storage variable matrix, +.>
Figure BDA0003984088630000065
For the ij-th known memory variable matrix, < >>
Figure BDA0003984088630000066
The variable matrix is stored externally for the ij-th known.
Definition:
Figure BDA0003984088630000067
Figure BDA0003984088630000068
Figure BDA0003984088630000069
in the method, in the process of the invention,
Figure BDA00039840886300000610
first number metric matrix stored for the ith minimum,/th metric matrix>
Figure BDA00039840886300000611
An upper storage interval variable matrix known as the i < th >>
Figure BDA00039840886300000612
For the i-th known lower storage interval variable matrix, min { · } represents the minimum value in the two storage matrices, max { · } represents the maximum value in the two storage matrices, and +_>
Figure BDA00039840886300000613
First metric matrix stored for the ith maximum,/th metric matrix>
Figure BDA00039840886300000614
Second metric matrix stored for ij, d least, +.>
Figure BDA00039840886300000615
Second metric matrix stored for ith maximum,/second metric matrix>
Figure BDA00039840886300000616
For ij, d known left storage variable matrix,
Figure BDA00039840886300000617
For ij, d known right storage variable matrix, +.>
Figure BDA00039840886300000618
Third metric matrix stored for the ij-th minimum,/metric matrix>
Figure BDA00039840886300000619
Third metric matrix stored for the ij-th maximum, +.>
Figure BDA00039840886300000620
For the ij-th known memory variable matrix, < >>
Figure BDA00039840886300000621
The external storage variable matrix known as ij is diag { - To define a first number diagonal matrix, A + For the defined second diagonal matrix +.>
Figure BDA00039840886300000622
For the defined third diagonal matrix +.>
Figure BDA00039840886300000623
To define a fourth diagonal matrix, B - To define a fifth diagonal matrix, B + For the defined diagonal matrix number six, n is the dimension.
It is easy to derive A (x k )∈[A - ,A + ]、
Figure BDA00039840886300000624
And B (x) k )∈[B - ,B + ]. Let->
Figure BDA00039840886300000625
Figure BDA00039840886300000626
And->
Figure BDA00039840886300000627
Then there are:
Figure BDA00039840886300000628
In the method, in the process of the invention,
Figure BDA00039840886300000629
first number matrix for defined left and right section, < > for>
Figure BDA00039840886300000630
Second matrix for defined left and right section, < > for the first matrix>
Figure BDA0003984088630000071
Third matrix for defined left and right section, < ->
Figure BDA0003984088630000072
And->
Figure BDA0003984088630000073
Meeting the norm bounded uncertainty:
Figure BDA0003984088630000074
wherein DeltaA k To satisfy the first order matrix of norm bounded uncertainty, ΔA dk To satisfy the second matrix of norm bounded uncertainty, ΔB k To satisfy the third matrix of norm bounded uncertainty,
Figure BDA0003984088630000075
and->
Figure BDA0003984088630000076
Are all known real-valued weight matrices, < ->
Figure BDA0003984088630000077
Is an unknown matrix and satisfies->
Figure BDA0003984088630000078
And step two, under the amplifying and forwarding protocol, performing state estimation on the fractional order memristor neural network dynamic model established in the step one. The method comprises the following specific steps:
in order to smoothly complete the task of remote data transmission, an amplifying-forwarding repeater is arranged in a wireless network channel so as to supplement the energy consumed by data transmission. Let p s,k And n s,k Representing the random energy of the sensor and the amplifying-repeating relay, respectively, the output signal of the amplifying-repeating relay is composed of
Figure BDA0003984088630000079
Expressed, it satisfies the following equation:
Figure BDA00039840886300000710
in the method, in the process of the invention,
Figure BDA00039840886300000711
indicating that at the kth time is knownChannel attenuation matrix, < >>
Figure BDA00039840886300000712
Diag {.cndot } represents a diagonal matrix, y, in the form of the m channel components of the attenuation matrix for the known channel k Is the ideal measurement output at time k, +.>
Figure BDA00039840886300000713
Is the actual measured output at time k, +.>
Figure BDA00039840886300000722
Is the white noise sequence at the sensor-repeater channel at time k and satisfies +.>
Figure BDA00039840886300000714
Figure BDA00039840886300000715
Representing mathematical expectations +.>
Figure BDA00039840886300000721
Is +.>
Figure BDA00039840886300000720
Transpose of p s,k The random energy of the sensor at the kth time is shown to satisfy the following statistical characteristics:
Figure BDA00039840886300000716
where Pr {.cndot. } represents a mathematical probability,
Figure BDA00039840886300000717
the sum value representing all probabilities is 1 and the probability satisfies the interval +.>
Figure BDA00039840886300000718
Figure BDA00039840886300000719
For the expected value of random energy possessed by the sensor at the kth time, phi represents the number of all channels.
The output value of the amp-repeater can be expressed as:
Figure BDA0003984088630000081
in χ k > 0 represents the amplification factor at the kth time,
Figure BDA0003984088630000082
is the attenuation matrix of the known channel at time k, < >>
Figure BDA0003984088630000083
M-th channel, n, is the component form of m channels of the attenuation matrix of the known channel s,k Is a variable of the transmission random energy at the kth moment,/->
Figure BDA0003984088630000084
Is the actual measured output at time k, +.>
Figure BDA00039840886300000819
Is a white noise signal at the repeater-estimator channel at time k and satisfies +.>
Figure BDA0003984088630000085
Figure BDA0003984088630000086
Representing mathematical expectations +.>
Figure BDA00039840886300000820
Is +.>
Figure BDA00039840886300000821
Is a transpose of (a). Similarly, random energy n s,k Has the following systemThe meter characteristics are as follows: />
Figure BDA0003984088630000087
In the method, in the process of the invention,
Figure BDA0003984088630000088
the sum value representing all probabilities is 1 and the probability satisfies the interval +. >
Figure BDA0003984088630000089
Figure BDA00039840886300000810
For the expected value of the transmission random energy at the kth time, ψ represents the number of all channels.
The nonlinear function f(s) satisfies the following fan-shaped bounded condition:
Figure BDA00039840886300000811
in the method, in the process of the invention,
Figure BDA00039840886300000812
is the first real matrix of known appropriate dimension of the 1 st component at time k,/->
Figure BDA00039840886300000813
Is the second real matrix of known appropriate dimensions for component 2 at time k.
Step two, based on the available measurement information, constructing a time-varying state estimator as follows:
Figure BDA00039840886300000814
in the method, in the process of the invention,
Figure BDA00039840886300000815
is a neural networkState estimation at time k +.>
Figure BDA00039840886300000816
Is an estimate of the state of the neural network at the kth time, and (2)>
Figure BDA00039840886300000817
Is an estimate of the state of the neural network at time k-d,/and>
Figure BDA00039840886300000818
the neural network dynamic model is a real number domain of the state of the neural network dynamic model, and the dimension of the neural network dynamic model is n; x-shaped articles k Indicating the amplification factor at the kth time, d is a fixed network time lag, +.>
Figure BDA0003984088630000091
For the state estimation of the controlled output at the kth moment,/->
Figure BDA0003984088630000092
The real number domain of the controlled output state of the neural network dynamic model is provided with the dimension of r,/or->
Figure BDA0003984088630000093
First number matrix for defined left and right section, < > for>
Figure BDA0003984088630000094
Second matrix for defined left and right section, < > for the first matrix>
Figure BDA0003984088630000095
Third matrix for defined left and right section, < ->
Figure BDA0003984088630000096
H being a nonlinear excitation function at the kth time instant k For the adjustment matrix of known measurements at the kth time, D k Is a metric matrix of known measurements at time k,/for the measurement of the first time>
Figure BDA0003984088630000097
Is the measured output of the decoder at the kth time, K k Is the estimator gain matrix at time k,/i>
Figure BDA0003984088630000098
Summation of random energy expectations for the sensor>
Figure BDA0003984088630000099
Indicating the desire for random energy possessed by the sensor at the kth moment, < >>
Figure BDA00039840886300000910
Summation of random energy expectations for the sensor>
Figure BDA00039840886300000911
Indicating the desire for random energy possessed by the sensor at the kth time,
Figure BDA00039840886300000912
diagonal matrix for all binomials, < ->
Figure BDA00039840886300000913
For the fractional order (j=1, 2, …, n), n is the dimension, diag {.cndot } represents a diagonal matrix, χ k Indicating the amplification factor at the kth time.
Step two, step three, define the estimated error
Figure BDA00039840886300000914
And control output estimation error +.>
Figure BDA00039840886300000915
Further, an estimation error system can be obtained:
Figure BDA00039840886300000916
Figure BDA00039840886300000917
in the method, in the process of the invention,
Figure BDA00039840886300000918
for the excitation function at the kth moment, +.>
Figure BDA00039840886300000919
For a nonlinear excitation function at the kth moment, < +.>
Figure BDA00039840886300000920
Is an estimate of the state of the neural network at the kth time, and (2)>
Figure BDA00039840886300000921
Is an estimate of the state of the neural network at time k-d,/and>
Figure BDA00039840886300000922
is the state estimation of the neural network at time k-iota+1,/and>
Figure BDA00039840886300000923
is the real number domain of the state of the neural network dynamic model, n is the dimension, χ k Indicating the amplification factor at time k, +.>
Figure BDA00039840886300000924
The value of the open root number, K k Is the estimator gain matrix at time k,/i >
Figure BDA00039840886300000925
Indicating the desire for random energy possessed by the sensor at the kth moment, < >>
Figure BDA0003984088630000101
Indicating the expectation of random energy provided by the sensor at the kth time, ΔA k To satisfy the first order matrix of norm bounded uncertainty, ΔA dk To satisfy the second matrix of norm bounded uncertainty, ΔB k Is full ofThird matrix of foot norm bounded uncertainty,>
Figure BDA0003984088630000102
first number matrix for defined left and right section, < > for>
Figure BDA0003984088630000103
Second matrix for defined left and right section, < > for the first matrix>
Figure BDA0003984088630000104
Third matrix e for defined left and right section k Is the estimated error at the kth time, e k+1 Is the estimated error at time k+1, e k-d Is the estimated error at the k-d time, is->
Figure BDA0003984088630000105
Is the controlled output estimation error at the kth time, a (x k )=diag n {a i (x ik ) The diagonal matrix is expressed by diag {.cndot } which is the self-feedback diagonal matrix of the neural network at the kth moment, a i (x ik ) Is A (x) k ) N is the dimension; a is that d (x k ) B (x) is a system matrix of known dimension and time-lag correlation at time k k ) A weight matrix for the connected excitation function known at time k; f (x) k ) Is a nonlinear excitation function at the kth time; c (C) 1k For knowing the noise distribution matrix of the system for the first component at time k, C 2k For knowing the noise distribution matrix of the system for the second component at time k, H k An adjustment matrix that is a known measurement at a kth time; d (D) k A metric matrix that is a known measurement at a kth time; v 1k Is zero at the k-th moment and the covariance is V 1 Gaussian white noise sequence > 0, v 2k Is zero at the k-th moment and the covariance is V 2 Gaussian white noise sequence > 0,>
Figure BDA0003984088630000106
representing iota=1 to k+1Value of sum->
Figure BDA00039840886300001020
Is the white noise sequence at the sensor-repeater channel at time k and satisfies +.>
Figure BDA0003984088630000107
Figure BDA0003984088630000108
Representing mathematical expectations +.>
Figure BDA00039840886300001021
Is +.>
Figure BDA00039840886300001016
Transpose of->
Figure BDA00039840886300001017
Is a white noise signal at the repeater-estimator channel at time k and satisfies
Figure BDA0003984088630000109
Figure BDA00039840886300001010
Representing mathematical expectations +.>
Figure BDA00039840886300001018
Is +.>
Figure BDA00039840886300001019
Is to be used in the present invention,
Figure BDA00039840886300001011
indicating that at time k the channel attenuation matrix is known, diag {.cndot }, is a diagonal matrix,
Figure BDA00039840886300001012
is the known channel attenuation matrix at time k and m represents the mth channel.
The main purpose of this step is to design a time-varying state estimator (2) based on an amplification forwarding protocol, so that the estimation error system meets the following two performance constraint requirements simultaneously:
(1) The disturbance attenuation level gamma is more than 0, and the first semi-positive definite matrix number and the second semi-positive definite matrix number are respectively
Figure BDA00039840886300001013
And
Figure BDA00039840886300001014
for initial state e 0 Control output estimation error +. >
Figure BDA00039840886300001015
Satisfies the following H Performance constraints:
Figure BDA0003984088630000111
wherein N is a limited number of nodes,
Figure BDA0003984088630000112
representing mathematical expectations +.>
Figure BDA0003984088630000113
Is the first weight matrix,/->
Figure BDA0003984088630000114
Is a first number weight matrix, e 0 Is the estimated error at time 0, gamma > 0 is the given disturbance attenuation level, +.>
Figure BDA0003984088630000115
Is noise v 1k And v 2k Augmented vector,/->
Figure BDA0003984088630000116
Is at the kth time e k Is represented by the transpose ofForm of norm 2 Represented is in the form of a norm square.
(2) The estimated error covariance satisfies the upper bound constraint as follows:
Figure BDA0003984088630000117
in the method, in the process of the invention,
Figure BDA0003984088630000118
is at the kth time e k Is transposed of pi k (0.ltoreq.k < N) is a series of predetermined acceptable estimation accuracy matrices at time k.
Step three, giving H Performance index gamma, semi-positive definite matrix number one
Figure BDA0003984088630000119
Semi-positive definite matrix number two->
Figure BDA00039840886300001110
Initial conditions
Figure BDA00039840886300001111
Calculating upper bound and H of error covariance matrix of fractional order memristive neural network Performance constraints. The method comprises the following specific steps:
step three, prove H according to the following The problem is analyzed and the corresponding discriminant criterion easy to solve is given:
Figure BDA00039840886300001112
wherein:
Figure BDA0003984088630000121
Figure BDA0003984088630000122
Figure BDA0003984088630000123
Figure BDA0003984088630000124
Figure BDA0003984088630000125
Figure BDA0003984088630000126
Figure BDA0003984088630000127
Figure BDA0003984088630000128
Figure BDA0003984088630000129
Figure BDA00039840886300001210
Figure BDA00039840886300001211
Figure BDA00039840886300001212
Figure BDA00039840886300001213
Figure BDA00039840886300001214
wherein γ is a given positive scalar;
Figure BDA00039840886300001215
the number of the matrix is determined for a half positive value,
Figure BDA00039840886300001216
Figure BDA00039840886300001217
respectively->
Figure BDA00039840886300001218
D k 、K k 、E t,k 、C t,k 、ΔA k 、H k
Figure BDA00039840886300001219
ΔB k
Figure BDA00039840886300001220
ΔA k
Figure BDA00039840886300001221
E k 、K k 、C k 、R 3k Is a transpose of (2);
Figure BDA00039840886300001222
A semi-positive definite matrix; y is Y 11 Is a 1 st row and 1 st column block matrix of Y 12 Is a 1 st row and 2 nd column block matrix of Y 22 Is the 2 nd row and 2 nd column block matrix of Y 33 Is the 3 rd row and 3 rd column block matrix of Y 44 Is the 4 th row and 4 th column block matrix of Y 55 Is the 5 th row and 5 th column block matrix of Y 66 Line 6 of Y6 column block matrix, Y 77 Is the 7 th row and 7 th column block matrix of Y 88 Is the 8 th row and 8 th column block matrix of Y 99 Is the 9 th row and 9 th column block matrix of Y,>
Figure BDA00039840886300001223
indicating the desire for random energy possessed by the sensor at the kth moment, < >>
Figure BDA00039840886300001224
Indicating the expectation of random energy provided by the sensor at the kth time, ΔA k To satisfy the first order matrix of norm bounded uncertainty, ΔA dk To satisfy the second matrix of norm bounded uncertainty, ΔB k Third matrix for satisfying norm bounded uncertainty,
Figure BDA00039840886300001225
First number matrix for defined left and right section, < > for>
Figure BDA00039840886300001226
Second matrix for defined left and right section, < > for the first matrix>
Figure BDA00039840886300001227
Third matrix for defined left and right section, < ->
Figure BDA0003984088630000131
Is a nonlinear excitation function at the kth time; c (C) 1k For knowing the noise distribution matrix of the system for the first component at time k, C 2k For knowing the noise distribution matrix of the system for the second component at time k, H k An adjustment matrix that is a known measurement at a kth time; d (D) k A metric matrix that is a known measurement at a kth time; / >
Figure BDA0003984088630000132
Representing the sum of iota=1 to k+1, and +.>
Figure BDA0003984088630000133
Indicating that at time k the channel attenuation matrix is known, diag {.cndot } is a diagonal matrix,/->
Figure BDA0003984088630000134
Is the known channel attenuation matrix at time k, m represents the mth channel,
Figure BDA0003984088630000135
and->
Figure BDA0003984088630000136
The first, second, third, fourth and fifth related scaling coefficients, respectively, 0 representing the elements of the matrix block as 0.
Step three, discussing covariance matrix χ k Is set, and gives sufficient conditions as follows:
S k+1 ≥Ω(S k ), (4)
in the method, in the process of the invention,
Figure BDA0003984088630000137
Figure BDA0003984088630000138
in the formula e k Is an error matrix at the kth time;
Figure BDA0003984088630000139
for state estimation at the kth time, ρ∈ (0, 1) is a known adjusting positive constant; s is S k Is the upper bound of the error covariance matrix at the kth time;
Figure BDA00039840886300001310
Θ 1k T
Figure BDA00039840886300001311
Figure BDA00039840886300001312
Respectively->
Figure BDA00039840886300001313
Θ 1k
Figure BDA00039840886300001314
C 1k 、Φ ι 、C t,k 、E t,k Is a transpose of (2); zeta is the adjustment coefficient, omega (S) k ) The upper bound matrix solved at the kth moment is obtained; s is S k-d An upper bound matrix that is an error covariance matrix at the kth-d time; tr (S) k ) Trace of the upper bound of the error covariance matrix at the kth time; tr () is the trace of the matrix χ k =e k e k T For the upper error bound at the kth time, e k For the error matrix at the kth time, I is the identity matrix,>
Figure BDA0003984088630000141
is the first real matrix of known appropriate dimension of the 1 st component at time k,/->
Figure BDA0003984088630000142
Is the second real matrix of known appropriate dimensions for the 2 nd component at time k.
By analyzing the two results, the method ensures that the estimation error system meets the given H Performance requirements and error covariance are sufficient conditions for a bounded nature.
Step four, solving an estimator gain matrix K by solving a series of linear matrix inequalities by utilizing a random analysis method k The solution of the method is that the state estimation is carried out on a fractional order memristor neural network dynamic model under an amplification forwarding protocol; judging whether k+1 reaches the total duration N, if k+1 is less than N, executing the second step, and otherwise, ending.
In the step, a series of recursive linear matrix inequalities of (5) to (7) are solved to give an estimated error system which simultaneously satisfies H The values of the estimator gain matrix can be calculated under the sufficient condition that the performance requirement and the error covariance are bounded:
Figure BDA0003984088630000143
Figure BDA0003984088630000144
S k+1k+1 ≤0 (7)
the update matrix is:
Figure BDA0003984088630000145
wherein:
Ω 22 =diag{-ε 1,k I,-ε 2,k I,-ε 2,k I,-ε 3,k I,-ε 3,k I},
Ω 33 =diag{-ε 4,k I,-ε 4,k I,-ε 5,k I,-ε 5,k I},
Figure BDA0003984088630000146
Figure BDA0003984088630000151
Figure BDA0003984088630000152
Figure BDA0003984088630000153
Figure BDA0003984088630000154
Figure BDA0003984088630000155
Figure BDA0003984088630000156
Figure BDA0003984088630000157
Figure BDA0003984088630000158
Figure BDA0003984088630000159
Figure BDA00039840886300001510
Figure BDA00039840886300001511
Figure BDA00039840886300001512
Figure BDA00039840886300001513
Figure BDA00039840886300001514
Figure BDA00039840886300001515
Figure BDA0003984088630000161
Figure BDA0003984088630000162
Figure BDA0003984088630000163
Figure BDA0003984088630000164
Figure BDA0003984088630000165
Figure BDA0003984088630000166
Figure BDA0003984088630000167
Figure BDA0003984088630000168
in omega 11 Is a 1 st row and 1 st column block matrix, Ω 12 Is a 1 st row and 2 nd column block matrix, Ω 13 Is a 1 st row and 3 rd column block matrix, Ω 22 Is a 2 nd row and 2 nd column block matrix, Ω 33 Is a 3 rd row and 3 rd column blocking matrix,
Figure BDA0003984088630000169
is the first1 row 1 column block matrix->
Figure BDA00039840886300001610
Is a 1 st row and 2 nd column block matrix, < >>
Figure BDA00039840886300001611
Is a 1 st row and 3 rd column block matrix, < >>
Figure BDA00039840886300001612
Is a 1 st row and 4 th column block matrix, L 15 Is a 1 st row and 5 th column block matrix, L 16 Is a 1 st row and 6 th column block matrix, L 22 Is a 2 nd row and 2 nd column block matrix, L 33 Is a 3 rd row and 3 rd column block matrix, L 44 Is a 4 th row and 4 th column block matrix, L 55 Is a 5 th row and 5 th column block matrix, L 66 Is a 6 th row and 6 th column block matrix, < >>
Figure BDA00039840886300001613
Is a 1 st row and 1 st column block matrix, G 12 Is a 1 st row and 2 nd column block matrix, G 14 Is a 1 st row 4 th column block matrix,/, a>
Figure BDA00039840886300001614
Is a 1 st row and 5 th column block matrix, G 22 Is a 2 nd row and 2 nd column block matrix, G 24 Is a 2 nd row and 4 th column block matrix, < >>
Figure BDA00039840886300001615
Is a 2 nd row and 6 th column block matrix, < >>
Figure BDA00039840886300001616
Is a 2 nd row 7 th column block matrix,/, a>
Figure BDA00039840886300001617
Is a 2 nd row and 8 th column block matrix, G 33 Is a 3 rd row and 3 rd column block matrix, G 39 Is a 3 rd row and 9 th column block matrix, < >>
Figure BDA00039840886300001618
Is a 4 th row and 10 th column block matrix, < >>
Figure BDA00039840886300001619
Is a 4 th row and 4 th column block matrix, < >>
Figure BDA00039840886300001620
Is a 5 th row and 5 th column block matrix, < >>
Figure BDA00039840886300001621
Is a 6 th row and 6 th column block matrix, < >>
Figure BDA00039840886300001622
Is a 7 th row 7 th column block matrix,/, a>
Figure BDA00039840886300001623
Is a 8 th row 8 th column block matrix, < >>
Figure BDA0003984088630000171
Is a 9 th row and 9 th column block matrix, < >>
Figure BDA0003984088630000172
Is a 10 th row 10 th column block matrix,
Figure BDA0003984088630000173
respectively is
Figure BDA0003984088630000174
D k ,K k ,E t,k ,C t,k ,ΔA k ,H k
Figure BDA0003984088630000175
ΔB k
Figure BDA0003984088630000176
ΔA k
Figure BDA0003984088630000177
E k ,K k ,C k ,R 3k Transpose of->
Figure BDA0003984088630000178
First number matrix for defined left and right section, < > for>
Figure BDA0003984088630000179
Second matrix for defined left and right section, < > for the first matrix>
Figure BDA00039840886300001710
Third matrix for defined left and right section, < ->
Figure BDA00039840886300001711
Is a nonlinear excitation function at the kth time; c (C) 1k For knowing the noise distribution matrix of the system for the first component at time k, C 2k For knowing the noise distribution matrix of the system for the second component at time k, H k An adjustment matrix that is a known measurement at a kth time; d (D) k A metric matrix that is a known measurement at a kth time;
Figure BDA00039840886300001712
Representing the sum of iota=1 to k+1, and +.>
Figure BDA00039840886300001713
Indicating that at time k the channel attenuation matrix is known, diag {.cndot } is a diagonal matrix,/->
Figure BDA00039840886300001714
Is the known channel attenuation matrix at the kth moment, m represents the mth channel, ρ e (0, 1) is the known adjusting positive constant; s is S k Is the upper bound of the error covariance matrix at the kth time;
Figure BDA00039840886300001715
Θ 1k T
Figure BDA00039840886300001716
Figure BDA00039840886300001717
respectively->
Figure BDA00039840886300001718
Θ 1k
Figure BDA00039840886300001719
C 1k ,Φ ι ,C t,k ,E t,k Is a transpose of (2); zeta is the adjustment coefficient, omega (S) k ) The upper bound matrix solved at the kth moment is obtained; s is S k-d An upper bound matrix that is an error covariance matrix at the kth-d time; tr (S) k ) Trace of the upper bound of the error covariance matrix at the kth time; tr () is the trace of the matrix, and I is the identity matrix;
Figure BDA00039840886300001720
A first weight matrix at the kth moment;
Figure BDA00039840886300001721
A second weight matrix at the kth moment;
Figure BDA00039840886300001722
A third weight matrix at the kth moment;
Figure BDA00039840886300001723
Is at the kth time R 3k Is a transpose of (2);
Figure BDA00039840886300001724
Is the first real matrix of known appropriate dimension of the 1 st component at time k,/- >
Figure BDA00039840886300001725
Is the known appropriate dimension of the 2 nd component at time kA second real matrix;
Figure BDA00039840886300001726
State estimation for a nonlinear excitation function at a kth time;
Figure BDA00039840886300001727
Is a first number metric matrix of known appropriate dimensions for component 1 at time k;
Figure BDA00039840886300001728
A second number metric matrix of known appropriate dimensions for the 2 nd component at time k;
Figure BDA00039840886300001729
A third metric matrix of known appropriate dimensions for component 3 at time k;
Figure BDA00039840886300001730
A third metric matrix of known appropriate dimension for the 4 th component at time k; n (N) 5 A third metric matrix of known appropriate dimensions for the 5 th component at time k; m is M 1 ,M 2 ,M 3 ,M 4 And M 5 The metrics are the metrics of first, second, third, fourth and fifth, respectively,
Figure BDA0003984088630000181
For the neuronal status estimation at time k, is->
Figure BDA0003984088630000182
A semi-positive definite matrix at the kth moment;
Figure BDA0003984088630000183
A semi-positive definite matrix at the kth moment;
Figure BDA0003984088630000184
A semi-positive definite matrix at the k-d time;
Figure BDA0003984088630000185
For the first update matrix at time k+1, S k To estimate the upper bound matrix of the error, tr (S k ) To estimate the error upper bound matrix S at the kth time k Is a trace of (1); s is S k-d For the upper matrix at time k-d, κ is the adjusted weight coefficient, ++>
Figure BDA0003984088630000186
And
Figure BDA0003984088630000187
are all known real-valued weight matrices, < ->
Figure BDA0003984088630000188
Is an unknown matrix and satisfies- >
Figure BDA0003984088630000189
Figure BDA00039840886300001810
Is that
Figure BDA00039840886300001811
Gamma is a given positive scalar;
Figure BDA00039840886300001812
A first number is determined for a given semi-positive matrix;
Figure BDA00039840886300001813
respectively is omega 12 ,Ω 13
Figure BDA00039840886300001814
Is a transpose of (2);
Figure BDA00039840886300001815
Figure BDA00039840886300001816
G is respectively 12 ,G 14
Figure BDA00039840886300001817
G 24 ,G 410
Figure BDA00039840886300001818
Is a transpose of (2);
Figure BDA00039840886300001819
Respectively M 1 ,M 2 ,M 3 ,M 4 ,M 5 Is a transpose of (2); n (N) 1 ,N 2 ,N 3 ,N 4 ,N 5 Are respectively->
Figure BDA00039840886300001820
Is a transpose of (2);
Figure BDA00039840886300001821
And->
Figure BDA00039840886300001822
The first, second, third, fourth and fifth related scaling coefficients, respectively, 0 representing the elements of the matrix block as 0.
In the invention, the theory in the third and fourth steps is as follows:
first, prove H Analyzing the problem and giving out corresponding discrimination criteria which are easy to solve; next, consider covariance matrix X k Is a problem of the upper bound of the (c) is to be solved, and the following sufficient conditions are given; by analyzing the two results, the method ensures that the estimation error system meets the given H Sufficient conditions for performance requirements and error covariance constraints, solving for a solution to an estimator gain matrix by solving a series of linear matrix inequalities, and computing an estimator gain matrix K k Is a solution to (a).
Examples:
the embodiment is provided with H The fractional order memristor neural network with performance constraint and variance constraint is taken as an example, and can be applied to associative memoryIn the process of memory, pattern recognition and combination optimization, the method provided by the invention is adopted to simulate a voice recognition case:
Having H under an amplify-and-forward protocol The relative system parameters of the fractional order memristor neural network state model, the measurement output model and the controlled output model with the performance constraint and the variance constraint are selected as follows:
the corresponding adjustment matrix is given according to the state of the voice of the person:
Figure BDA0003984088630000191
Figure BDA0003984088630000192
Figure BDA0003984088630000193
Figure BDA0003984088630000194
Figure BDA0003984088630000195
Figure BDA0003984088630000196
C 1k =[-1.2-0.35sin(2k)] T ,
the measurement adjustment matrix is:
C 2k =[-0.2-0.1sin(3k)] T ,
Figure BDA0003984088630000197
the controlled output adjustment matrix is:
H k =[-0.01-0.01sin(2k)]
the state weight matrix is:
Figure BDA0003984088630000198
Figure BDA0003984088630000199
the weight matrix and the adjustment parameters of the nonlinear function are as follows:
Figure BDA00039840886300001910
case I: the probability distribution of the following transmission powers is given:
Prob{p t,k =1}=0.1,Prob{p t,k =1.5}=0.3,
Prob{p t,k =2}=0.6,Prob{n t,k =1}=0.2,
Prob{n t,k =1.5}=0.4,Prob{n t,k =2}=0.4,
case II: the probability distribution of the following transmission powers is given:
Prob{p t,k =1}=0.6,Prob{p t,k =1.5}=0.3,
Prob{p t,k =2}=0.1,Prob{n t,k =1}=0.4,
Prob{n t,k =1.5}=0.4,Prob{n t,k =2}=0.2.
the excitation function is taken as:
Figure BDA0003984088630000201
wherein x is k =[x 1,k x 2,k ] T Is the state vector of the neuron and has an amplification factor of χ s =1,x 1,k For at the kth time x k Is the first component specific gravity matrix, x 2,k For at the kth time x k A second component specific gravity matrix of (c).
Other simulation initial values were selected as follows:
disturbance attenuation level γ=0.7, semi-positive definite matrix number one
Figure BDA0003984088630000202
Upper bound matrix { Ω } k } 1≤k≤N =diag {0.2,0.2} and covariance +.>
Figure BDA0003984088630000203
Initial state->
Figure BDA0003984088630000204
Channel parameter C t,s =0.38 and E t,s Taking the noise covariance of the sensor to relay channel and relay to estimator channel, respectively =0.12>
Figure BDA0003984088630000205
And->
Figure BDA0003984088630000206
Solving linear matrix inequalities (5) to (7) by using recursive linear matrix inequalities, wherein partial numerical values are as follows: case one (Case I):
Figure BDA0003984088630000207
Figure BDA0003984088630000208
Figure BDA0003984088630000209
Case two (Case II):
Figure BDA00039840886300002010
Figure BDA0003984088630000211
Figure BDA0003984088630000212
state estimator effect:
as can be seen from fig. 2, there is H under the protocol for amplify-and-forward The method for designing the state estimator can effectively estimate the target state.
As can be seen from fig. 3, 4, and 5, the estimation error effect becomes worse as the power decreases for each time.

Claims (9)

1. The fractional order memristor neural network estimation method under the limitation of variance is characterized by comprising the following steps:
step one, establishing a fractional order memristor neural network dynamic model under an amplification forwarding protocol;
step two, under an amplifying and forwarding protocol, performing state estimation on the fractional order memristor neural network dynamic model established in the step one;
step three, giving H Performance index gamma, semi-positive definite matrix number one
Figure FDA0003984088620000011
Semi-positive definite matrix number two->
Figure FDA0003984088620000012
Initial conditions->
Figure FDA0003984088620000013
Calculating upper bound and H of error covariance matrix of fractional order memristive neural network Performance constraints;
step four, solving an estimator gain matrix K by solving a linear matrix inequality by utilizing a random analysis method k And (3) realizing state estimation of a fractional order memristor neural network dynamic model under an amplification forwarding protocol, judging whether k+1 reaches the total duration N, if k+1 is less than N, executing the second step, and otherwise ending.
2. The method for estimating a fractional memristor neural network under variance constraint according to claim 1, wherein in the first step, according to the definition of the fractional derivative of Grunwald-Letnikov, the state space of the dynamic model of the fractional memristor neural network is in the form of:
Figure FDA0003984088620000014
wherein:
Figure FDA0003984088620000015
Figure FDA0003984088620000016
Figure FDA0003984088620000017
here the number of the elements is the number,
Figure FDA0003984088620000021
representing differential operator +_>
Figure FDA0003984088620000022
For fractional order (j=1, 2, …, n), n is dimension, +.>
Figure FDA0003984088620000023
Is at the firstState vector of fractional order memristive neural network at time k, +.>
Figure FDA0003984088620000024
Is the state vector of the fractional order memristive neural network at time k-iota+1, +.>
Figure FDA0003984088620000025
Is the state vector of the fractional order memristive neural network at time k-d, +.>
Figure FDA0003984088620000026
Is the state vector of the fractional order memristive neural network at time k+1, +.>
Figure FDA0003984088620000027
The neural network dynamic model is a real number domain of the state of the neural network dynamic model, and the dimension of the neural network dynamic model is n;
Figure FDA0003984088620000028
for the controlled measurement output at time k, < > in->
Figure FDA0003984088620000029
The real number domain of the controlled output state of the neural network dynamic model is provided, and the dimension of the real number domain is r;
Figure FDA00039840886200000210
Is a given initial sequence, d is a discrete fixed network time lag; a (x) k )=diag n {a i (x i,k ) The symbol "is the neural network self-feedback diagonal matrix at the kth time, n is the dimension, diag {.cndot } represents the diagonal matrix, a i (x i,k ) Is A (x) k ) N is the dimension; a is that d (x k )={a ij,d (x i,k )} n*n A is a system matrix with known dimension at the kth moment and related to time lag ij,d (x i,k ) To at the kth time A d (x k ) Is the i-th component form of (a); b (x) k )={b ij (x i,k )} n*n Weight matrix for a known connected excitation function at the kth moment, b ij (x i,k ) For at the kth time B (x k ) Is the i-th component form of (a); f (x) k ) Is a nonlinear excitation function at the kth time; c (C) 1k For knowing the noise distribution matrix of the system for the first component at time k, C 2k For knowing the noise distribution matrix of the system for the second component at time k, H k An adjustment matrix that is a known measurement at a kth time; d (D) k A metric matrix that is a known measurement at a kth time; v 1k Is zero at the k-th moment and the covariance is V 1 Gaussian white noise sequence > 0, v 2k Is zero at the k-th moment and the covariance is V 2 Gaussian white noise sequence > 0,>
Figure FDA00039840886200000211
indicated are values for the sum of iota=1 to k+1.
3. The fractional order memristive neural network under variance constraint of claim 2, characterized in that the a i (x i,k )、a ij,d (x i,k ) And b ij (x i,k ) The method meets the following conditions:
Figure FDA00039840886200000212
wherein a is i (x i,k )、a ij,d (x i,k ) And b ij (x i,k ) Respectively A (x) k ),A d (x k ) And B (x) k ) Is the ith component, omega i The value > 0 is a known handover threshold value,
Figure FDA00039840886200000213
for the i-th known upper storage variable matrix, is->
Figure FDA00039840886200000214
For the i-th known lower storage variable matrix, is +.>
Figure FDA00039840886200000215
For ij, d known left storage variable matrix,
Figure FDA00039840886200000216
For ij, d known right storage variable matrix, +. >
Figure FDA0003984088620000031
For the ij-th known memory variable matrix, < >>
Figure FDA0003984088620000032
The variable matrix is stored externally for the ij-th known.
4. The method for estimating fractional order memristor neural network under variance limitation of claim 1, wherein the specific steps of the step two are as follows:
step two, let p s,k And n s,k Representing the random energy of the sensor and the amplifying-repeating relay, respectively, the output signal of the amplifying-repeating relay is composed of
Figure FDA0003984088620000033
Expressed, it satisfies the following equation:
Figure FDA0003984088620000034
in the method, in the process of the invention,
Figure FDA0003984088620000035
indicating that at time k the channel attenuation matrix is known, < >>
Figure FDA0003984088620000036
Diag {.cndot } represents a diagonal matrix, y, in the form of the m channel components of the attenuation matrix for the known channel k Is the ideal measurement output at time k, +.>
Figure FDA0003984088620000037
Is the actual measurement output at the kth time, θ s1,k Is the white noise sequence at the sensor-repeater channel at time k and satisfies +.>
Figure FDA0003984088620000038
Figure FDA0003984088620000039
Representing the mathematical expectation (θ) s1,k ) T Is at the kth time theta s1,k Transpose of p s,k Representing random energy possessed by the sensor at the kth time;
the output value of the amplify-and-forward repeater is expressed as:
Figure FDA00039840886200000310
in χ k > 0 represents the amplification factor at the kth time,
Figure FDA00039840886200000311
is the attenuation matrix of the known channel at time k, < >>
Figure FDA00039840886200000312
M-th channel, n, is the component form of m channels of the attenuation matrix of the known channel s,k Is a variable of the transmission random energy at the kth moment,/->
Figure FDA00039840886200000313
Is the actual measurement output at the kth time, θ s2,k White noise signal at the repeater-estimator channel at time kNumber and satisfy->
Figure FDA00039840886200000314
Figure FDA00039840886200000315
Representing the mathematical expectation (θ) s2,k ) T Is at the kth time theta s2,k Is a transpose of (2);
step two, based on the available measurement information, constructing a time-varying state estimator as follows:
Figure FDA00039840886200000316
in the method, in the process of the invention,
Figure FDA00039840886200000317
is an estimate of the state of the neural network at the kth time, and (2)>
Figure FDA00039840886200000318
Is an estimate of the state of the neural network at the kth time, and (2)>
Figure FDA00039840886200000319
Is an estimate of the state of the neural network at time k-d,/and>
Figure FDA00039840886200000320
the neural network dynamic model is a real number domain of the state of the neural network dynamic model, and the dimension of the neural network dynamic model is n; x-shaped articles k Indicating the amplification factor at the kth time, d is a fixed network time lag, +.>
Figure FDA0003984088620000041
For the state estimation of the controlled output at the kth moment,/->
Figure FDA0003984088620000042
Is a neural network dynamic model quiltThe real number domain of the output state is controlled and its dimension is r +.>
Figure FDA0003984088620000043
First number matrix for defined left and right section, < > for>
Figure FDA0003984088620000044
Second matrix for defined left and right section, < > for the first matrix>
Figure FDA0003984088620000045
Third matrix for defined left and right section, < ->
Figure FDA0003984088620000046
H being a nonlinear excitation function at the kth time instant k For the adjustment matrix of known measurements at the kth time, D k Is a metric matrix of known measurements at time k,/for the measurement of the first time >
Figure FDA0003984088620000047
Is the measured output of the decoder at the kth time, K k Is the estimator gain matrix at time k,/i>
Figure FDA0003984088620000048
Summation of random energy expectations for the sensor>
Figure FDA0003984088620000049
Indicating the desire for random energy possessed by the sensor at the kth moment, < >>
Figure FDA00039840886200000410
Summation of random energy expectations for the sensor>
Figure FDA00039840886200000411
Indicating the desire for random energy possessed by the sensor at the kth moment, < >>
Figure FDA00039840886200000412
Diagonal matrix for all binomials, < ->
Figure FDA00039840886200000413
For the fractional order (j=1, 2, …, n), n is the dimension, diag {.cndot } represents a diagonal matrix, χ k Indicating the amplification factor at the kth time;
step two, step three, define the estimated error
Figure FDA00039840886200000414
And control output estimation error +.>
Figure FDA00039840886200000415
Obtaining an estimation error system:
Figure FDA00039840886200000416
Figure FDA00039840886200000417
in the method, in the process of the invention,
Figure FDA00039840886200000418
for the excitation function at the kth moment, +.>
Figure FDA00039840886200000419
For a nonlinear excitation function at the kth moment, < +.>
Figure FDA00039840886200000420
Is an estimate of the state of the neural network at the kth time, and (2)>
Figure FDA00039840886200000421
Is a neural netState estimation of complex at k-d time, < >>
Figure FDA00039840886200000422
Is the state estimation of the neural network at time k-iota+1,/and>
Figure FDA00039840886200000423
is the real number domain of the state of the neural network dynamic model, n is the dimension, χ k Indicating the amplification factor at time k, +.>
Figure FDA00039840886200000424
The value of the open root number, K k Is the estimator gain matrix at time k,/i>
Figure FDA0003984088620000051
Indicating the desire for random energy possessed by the sensor at the kth moment, < > >
Figure FDA0003984088620000052
Indicating the expectation of random energy provided by the sensor at the kth time, ΔA k To satisfy the first order matrix of norm bounded uncertainty, ΔA dk To satisfy the second matrix of norm bounded uncertainty, ΔB k Third matrix for satisfying norm bounded uncertainty,
Figure FDA0003984088620000053
First number matrix for defined left and right section, < > for>
Figure FDA0003984088620000054
Second matrix for defined left and right section, < > for the first matrix>
Figure FDA0003984088620000055
Third matrix e for defined left and right section k Is the estimated error at the kth time, e k+1 Is the estimated error at time k+1, e k-d Is the estimated error at the k-d time, is->
Figure FDA0003984088620000056
Is the controlled output estimation error at the kth time, a (x k )=diag n {a i (x ik ) The diagonal matrix is expressed by diag {.cndot } which is the self-feedback diagonal matrix of the neural network at the kth moment, a i (x ik ) Is A (x) k ) N is the dimension; a is that d (x k ) B (x) is a system matrix of known dimension and time-lag correlation at time k k ) A weight matrix for the connected excitation function known at time k; f (x) k ) Is a nonlinear excitation function at the kth time; c (C) 1k For knowing the noise distribution matrix of the system for the first component at time k, C 2k For knowing the noise distribution matrix of the system for the second component at time k, H k An adjustment matrix that is a known measurement at a kth time; d (D) k A metric matrix that is a known measurement at a kth time; v 1k Is zero at the k-th moment and the covariance is V 1 Gaussian white noise sequence > 0, v 2k Is zero at the k-th moment and the covariance is V 2 Gaussian white noise sequence > 0,>
Figure FDA0003984088620000057
representing the sum of iota=1 to k+1, θ s1,k Is the white noise sequence at the sensor-repeater channel at time k and satisfies +.>
Figure FDA0003984088620000058
Figure FDA0003984088620000059
Representing the mathematical expectation (θ) s1,k ) T Is at the kth time theta s1,k Transpose of θ s2,k Is a white noise signal at the repeater-estimator channel at time k and satisfies
Figure FDA00039840886200000510
Figure FDA00039840886200000511
Representing the mathematical expectation (θ) s2,k ) T Is at the kth time theta s2,k Is to be used in the present invention,
Figure FDA00039840886200000512
indicating that at time k the channel attenuation matrix is known, diag {.cndot }, is a diagonal matrix,
Figure FDA00039840886200000513
is the known channel attenuation matrix at time k and m represents the mth channel.
5. The method for estimating fractional order memristive neural network under variance constraint of claim 4, wherein the p is s,k The following statistical properties are satisfied:
Figure FDA00039840886200000514
where Pr {.cndot. } represents a mathematical probability,
Figure FDA0003984088620000061
the sum value representing all probabilities is 1 and the probability satisfies the interval +.>
Figure FDA0003984088620000062
Figure FDA0003984088620000063
For the expected value of random energy possessed by the sensor at the kth time, phi represents the number of all channels.
6. The fractional order memristor neural network estimation method under variance constraint of claim 4Characterized in that the random energy n s,k Has the following statistical properties:
Figure FDA0003984088620000064
in the method, in the process of the invention,
Figure FDA0003984088620000065
the sum value representing all probabilities is 1 and the probability satisfies the interval +.>
Figure FDA0003984088620000066
Figure FDA0003984088620000067
For the expected value of the transmission random energy at the kth time, ψ represents the number of all channels.
7. The fractional order memristive neural network estimation method under variance constraint of claim 4, characterized in that the estimation error system satisfies the following two performance constraint requirements simultaneously:
(1) The disturbance attenuation level gamma is more than 0, and the first semi-positive definite matrix number and the second semi-positive definite matrix number are respectively
Figure FDA0003984088620000068
And->
Figure FDA0003984088620000069
For initial state e 0 Control output estimation error +.>
Figure FDA00039840886200000610
Satisfies the following H Performance constraints: />
Figure FDA00039840886200000611
Wherein N is a limited number of nodes,
Figure FDA00039840886200000612
representing mathematical expectations +.>
Figure FDA00039840886200000613
Is the first weight matrix,/->
Figure FDA00039840886200000614
Is a first number weight matrix, e 0 Is the estimated error at time 0, gamma > 0 is the given disturbance attenuation level, +.>
Figure FDA00039840886200000615
Is noise v 1k And v 2k Augmented vector,/->
Figure FDA00039840886200000616
Is at the kth time e k Is represented by the norm form, I.I 2 Expressed in terms of norm square;
(2) The estimated error covariance satisfies the upper bound constraint as follows:
Figure FDA00039840886200000617
In the method, in the process of the invention,
Figure FDA00039840886200000618
is at the kth time e k Is transposed of pi k (0.ltoreq.k < N) is a series of predetermined acceptable estimation accuracy matrices at time k.
8. The method for estimating fractional order memristor neural network under variance limitation of claim 1, wherein the specific steps of the third step are as follows:
step three, prove H according to the following The problem is analyzed and the corresponding discriminant criterion easy to solve is given:
Figure FDA0003984088620000071
wherein:
Figure FDA0003984088620000072
Figure FDA0003984088620000073
Figure FDA0003984088620000074
Figure FDA0003984088620000075
Figure FDA0003984088620000076
Figure FDA0003984088620000077
Figure FDA0003984088620000078
Figure FDA0003984088620000079
Figure FDA00039840886200000710
Figure FDA00039840886200000711
Figure FDA00039840886200000712
Figure FDA00039840886200000713
Figure FDA00039840886200000714
Figure FDA00039840886200000715
wherein γ is a given positive scalar;
Figure FDA00039840886200000716
for half positive definite matrix one ++>
Figure FDA00039840886200000717
Figure FDA0003984088620000081
Respectively->
Figure FDA0003984088620000082
D k 、K k 、E t,k 、C t,k 、ΔA k 、H k
Figure FDA0003984088620000083
ΔB k
Figure FDA0003984088620000084
ΔA k
Figure FDA0003984088620000085
E k 、K k 、C k 、R 3k Is a transpose of (2);
Figure FDA0003984088620000086
A semi-positive definite matrix; y is Y 11 Is a 1 st row and 1 st column block matrix of Y 12 Is a 1 st row and 2 nd column block matrix of Y 22 Is the 2 nd row and 2 nd column block matrix of Y 33 Is the 3 rd row and 3 rd column block matrix of Y 44 Is the 4 th row and 4 th column block matrix of Y 55 Is the 5 th row and 5 th column block matrix of Y 66 Is a 6 th row and 6 th column block matrix of Y 77 Is the 7 th row and 7 th column block matrix of Y 88 Is the 8 th row and 8 th column block matrix of Y 99 Is the 9 th row and 9 th column block matrix of Y,>
Figure FDA0003984088620000087
indicating the desire for random energy possessed by the sensor at the kth moment, < >>
Figure FDA0003984088620000088
Indicating the expectation of random energy provided by the sensor at the kth time, ΔA k To satisfy the first order matrix of norm bounded uncertainty, ΔA dk To satisfy the second matrix of norm bounded uncertainty, ΔB k Third matrix for satisfying norm bounded uncertainty,
Figure FDA0003984088620000089
First number matrix for defined left and right section,
Figure FDA00039840886200000810
Second matrix for defined left and right section, < > for the first matrix>
Figure FDA00039840886200000811
Third matrix for defined left and right section, < ->
Figure FDA00039840886200000812
Is a nonlinear excitation function at the kth time; c (C) 1k For knowing the noise distribution matrix of the system for the first component at time k, C 2k For knowing the noise distribution matrix of the system for the second component at time k, H k An adjustment matrix that is a known measurement at a kth time; d (D) k A metric matrix that is a known measurement at a kth time;
Figure FDA00039840886200000813
Representing the sum of iota=1 to k+1, and +.>
Figure FDA00039840886200000814
Indicating that at time k the channel attenuation matrix is known, diag {.cndot } is a diagonal matrix,/->
Figure FDA00039840886200000815
Is the known channel attenuation matrix at time k, m represents the mth channel,
Figure FDA00039840886200000816
and->
Figure FDA00039840886200000817
The first, second, third, fourth and fifth related scaling coefficients, respectively, 0 representing elements of the matrix block as 0;
step three, two, discuss covariance matrix
Figure FDA00039840886200000818
Is set, and gives sufficient conditions as follows:
S k+1 ≥Ω(S k ), (4)
in the method, in the process of the invention,
Figure FDA0003984088620000091
Figure FDA0003984088620000092
in the formula e k Is an error matrix at the kth time;
Figure FDA0003984088620000093
for state estimation at the kth time, ρ∈ (0, 1) is a known adjusting positive constant; s is S k Is the upper bound of the error covariance matrix at the kth time;
Figure FDA0003984088620000094
Θ 1k T
Figure FDA0003984088620000095
Figure FDA0003984088620000096
Respectively->
Figure FDA0003984088620000097
Θ 1k
Figure FDA0003984088620000098
C 1k 、Φ ι 、C t,k 、E t,k Is a transpose of (2); zeta is the adjustment coefficient, omega (S) k ) The upper bound matrix solved at the kth moment is obtained; s is S k-d An upper bound matrix that is an error covariance matrix at the kth-d time; tr (S) k ) Trace of the upper bound of the error covariance matrix at the kth time; tr () is the trace of the matrix,
Figure FDA0003984088620000099
for the upper error bound at the kth time, e k For the error matrix at the kth time, I is the identity matrix,>
Figure FDA00039840886200000910
is the first real matrix of known appropriate dimension of the 1 st component at time k,/->
Figure FDA00039840886200000911
Is the second real matrix of known appropriate dimensions for the 2 nd component at time k.
9. The method for estimating a fractional order memristor neural network under variance constraint of claim 1, wherein in the fourth step, the estimation error system is given by solving a series of recursive linear matrix inequalities (5) to (7) while satisfying H The values of the estimator gain matrix can be calculated under the sufficient condition that the performance requirement and the error covariance are bounded:
Figure FDA00039840886200000912
Figure FDA0003984088620000101
S k+1k+1 ≤0 (7)
the update matrix is:
Figure FDA0003984088620000102
wherein:
Figure FDA0003984088620000103
Ω 22 =diag{-ε 1,k I,-ε 2,k I,-ε 2,k I,-ε 3,k I,-ε 3,k I},
Ω 33 =diag{-ε 4,k I,-ε 4,k I,-ε 5,k I,-ε 5,k I},
Figure FDA0003984088620000104
Figure FDA0003984088620000105
Figure FDA0003984088620000106
Figure FDA0003984088620000107
Figure FDA0003984088620000108
Figure FDA0003984088620000109
Figure FDA00039840886200001010
Figure FDA0003984088620000111
Figure FDA0003984088620000112
Figure FDA0003984088620000113
Figure FDA0003984088620000114
Figure FDA0003984088620000115
Figure FDA0003984088620000116
Figure FDA0003984088620000117
Figure FDA0003984088620000118
Figure FDA0003984088620000119
Figure FDA00039840886200001110
Figure FDA00039840886200001111
Figure FDA00039840886200001112
Figure FDA00039840886200001113
Figure FDA00039840886200001114
Figure FDA00039840886200001115
Figure FDA00039840886200001116
in omega 11 Is a 1 st row and 1 st column block matrix, Ω 12 Is a 1 st row and 2 nd column block matrix, Ω 13 Is a 1 st row and 3 rd column block matrix, Ω 22 Is a 2 nd row and 2 nd column block matrix, Ω 33 Is a 3 rd row and 3 rd column blocking matrix,
Figure FDA00039840886200001117
is a row 1 and column 1 block matrix,
Figure FDA0003984088620000121
is a 1 st row and 2 nd column block matrix, < >>
Figure FDA0003984088620000122
Is a 1 st row and 3 rd column block matrix, < >>
Figure FDA0003984088620000123
Is a 1 st row and 4 th column block matrix, L 15 Is a 1 st row and 5 th column block matrix, L 16 Is a 1 st row and 6 th column block matrix, L 22 Is a 2 nd row and 2 nd column block matrix, L 33 Is a 3 rd row and 3 rd column block matrix, L 44 Is a 4 th row and 4 th column block matrix, L 55 Is a 5 th row and 5 th column block matrix, L 66 Is a 6 th row and 6 th column block matrix, < >>
Figure FDA0003984088620000124
Is a 1 st row and 1 st column block matrix, G 12 Is a 1 st row and 2 nd column block matrix, G 14 Is a 1 st row 4 th column block matrix,/, a>
Figure FDA0003984088620000125
Is a 1 st row and 5 th column block matrix, G 22 Is a 2 nd row and 2 nd column block matrix, G 24 Is a 2 nd row and 4 th column block matrix, < >>
Figure FDA0003984088620000126
Is a 2 nd row and 6 th column block matrix, < >>
Figure FDA0003984088620000127
Is a 2 nd row 7 th column block matrix,/, a>
Figure FDA0003984088620000128
Is a 2 nd row and 8 th column block matrix, G 33 Is a 3 rd row and 3 rd column block matrix, G 39 Is a 3 rd row and 9 th column block matrix, < >>
Figure FDA0003984088620000129
Is a 4 th row and 10 th column block matrix, < >>
Figure FDA00039840886200001210
Is a 4 th row and 4 th column block matrix, < >>
Figure FDA00039840886200001211
Is a 5 th row and 5 th column block matrix, < >>
Figure FDA00039840886200001212
Is a 6 th row and 6 th column block matrix, < >>
Figure FDA00039840886200001213
Is a 7 th row 7 th column block matrix,/, a>
Figure FDA00039840886200001214
Is a 8 th row 8 th column block matrix, < >>
Figure FDA00039840886200001215
Is a 9 th row and 9 th column block matrix, < >>
Figure FDA00039840886200001216
Is a 10 th row 10 th column block matrix,
Figure FDA00039840886200001217
Respectively is
Figure FDA00039840886200001218
D k ,K k ,E t,k ,C t,k ,ΔA k ,H k
Figure FDA00039840886200001219
ΔB k
Figure FDA00039840886200001220
ΔA k
Figure FDA00039840886200001221
E k ,K k ,C k ,R 3k Transpose of->
Figure FDA00039840886200001222
First number matrix for defined left and right section, < > for>
Figure FDA00039840886200001223
Second matrix for defined left and right section, < > for the first matrix>
Figure FDA00039840886200001224
Third matrix for defined left and right section, < ->
Figure FDA00039840886200001225
Is a nonlinear excitation function at the kth time; c (C) 1k For knowing the noise distribution matrix of the system for the first component at time k, C 2k For knowing the noise distribution matrix of the system for the second component at time k, H k An adjustment matrix that is a known measurement at a kth time; d (D) k A metric matrix that is a known measurement at a kth time;
Figure FDA00039840886200001226
Representing the sum of iota=1 to k+1, and +.>
Figure FDA00039840886200001227
Indicating that at time k the channel attenuation matrix is known, diag {.cndot } is a diagonal matrix,/->
Figure FDA00039840886200001228
Is the known channel attenuation matrix at the kth moment, m represents the mth channel, ρ e (0, 1) is the known adjusting positive constant; s is S k Is the upper bound of the error covariance matrix at the kth time;
Figure FDA00039840886200001229
Θ 1k T
Figure FDA00039840886200001230
Figure FDA00039840886200001231
Respectively->
Figure FDA00039840886200001232
Θ 1k
Figure FDA00039840886200001233
C 1k ,Φ ι ,C t,k ,E t,k Is a transpose of (2); zeta is the adjustment coefficient, omega (S) k ) The upper bound matrix solved at the kth moment is obtained; s is S k-d An upper bound matrix that is an error covariance matrix at the kth-d time; tr (S) k ) Trace of the upper bound of the error covariance matrix at the kth time; tr () is the trace of the matrix, and I is the identity matrix;
Figure FDA0003984088620000131
A first weight matrix at the kth moment;
Figure FDA0003984088620000132
A second weight matrix at the kth moment;
Figure FDA0003984088620000133
A third weight matrix at the kth moment;
Figure FDA0003984088620000134
Is at the kth time R 3k Is a transpose of (2);
Figure FDA0003984088620000135
Is the first real matrix of known appropriate dimension of the 1 st component at time k,/->
Figure FDA0003984088620000136
A second real matrix of known appropriate dimensions for the 2 nd component at time k;
Figure FDA0003984088620000137
State estimation for a nonlinear excitation function at a kth time;
Figure FDA0003984088620000138
Is a first number metric matrix of known appropriate dimensions for component 1 at time k;
Figure FDA0003984088620000139
A second number metric matrix of known appropriate dimensions for the 2 nd component at time k;
Figure FDA00039840886200001310
A third metric matrix of known appropriate dimensions for component 3 at time k;
Figure FDA00039840886200001311
A third metric matrix of known appropriate dimension for the 4 th component at time k;
Figure FDA00039840886200001312
A third metric matrix of known appropriate dimensions for the 5 th component at time k; m is M 1 ,M 2 ,M 3 ,M 4 And M 5 The metrics are the metrics of first, second, third, fourth and fifth, respectively,
Figure FDA00039840886200001313
For the neuronal status estimation at time k, is->
Figure FDA00039840886200001314
A semi-positive definite matrix at the kth moment;
Figure FDA00039840886200001315
A semi-positive definite matrix at the kth moment;
Figure FDA00039840886200001316
A semi-positive definite matrix at the k-d time;
Figure FDA00039840886200001317
For the first update matrix at time k+1, S k To estimate the upper bound matrix of the error, tr (S k ) To estimate the error upper bound matrix S at the kth time k Is a trace of (1); s is S k-d For the upper matrix at time k-d, κ is the adjusted weight coefficient, ++>
Figure FDA00039840886200001318
And
Figure FDA00039840886200001319
are all known real-valued weight matrices, < ->
Figure FDA00039840886200001320
Is an unknown matrix and satisfies->
Figure FDA00039840886200001321
Figure FDA00039840886200001322
Is that
Figure FDA00039840886200001323
Gamma is a given positive scalar;
Figure FDA00039840886200001324
A first number is determined for a given semi-positive matrix;
Figure FDA00039840886200001325
Respectively is omega 12 ,Ω 13
Figure FDA00039840886200001326
Is a transpose of (2);
Figure FDA00039840886200001327
Figure FDA00039840886200001328
G is respectively 12 ,G 14
Figure FDA00039840886200001329
G 24 ,G 410
Figure FDA00039840886200001330
Is a transpose of (2);
Figure FDA00039840886200001331
Respectively M 1 ,M 2 ,M 3 ,M 4 ,M 5 Is a transpose of (2); n (N) 1 ,N 2 ,N 3 ,N 4 ,N 5 Respectively are
Figure FDA00039840886200001332
Is a transpose of (2);
Figure FDA00039840886200001333
And->
Figure FDA00039840886200001334
The first, second, third, fourth and fifth related scaling coefficients, respectively, 0 representing the elements of the matrix block as 0./>
CN202211559637.5A 2022-12-06 2022-12-06 Fractional order memristor neural network estimation method under variance limitation Active CN116227324B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211559637.5A CN116227324B (en) 2022-12-06 2022-12-06 Fractional order memristor neural network estimation method under variance limitation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211559637.5A CN116227324B (en) 2022-12-06 2022-12-06 Fractional order memristor neural network estimation method under variance limitation

Publications (2)

Publication Number Publication Date
CN116227324A true CN116227324A (en) 2023-06-06
CN116227324B CN116227324B (en) 2023-09-19

Family

ID=86584853

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211559637.5A Active CN116227324B (en) 2022-12-06 2022-12-06 Fractional order memristor neural network estimation method under variance limitation

Country Status (1)

Country Link
CN (1) CN116227324B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117077748A (en) * 2023-06-15 2023-11-17 盐城工学院 Coupling synchronous control method and system for discrete memristor neural network
CN117949897A (en) * 2024-01-09 2024-04-30 哈尔滨理工大学 Multifunctional radar working mode identification method based on time sequence segmentation and clustering

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107436411A (en) * 2017-07-28 2017-12-05 南京航空航天大学 Battery SOH On-line Estimation methods based on fractional order neural network and dual-volume storage Kalman
CN109088749A (en) * 2018-07-23 2018-12-25 哈尔滨理工大学 The method for estimating state of complex network under a kind of random communication agreement
CN111025914A (en) * 2019-12-26 2020-04-17 东北石油大学 Neural network system remote state estimation method and device based on communication limitation
US11449754B1 (en) * 2021-09-12 2022-09-20 Zhejiang University Neural network training method for memristor memory for memristor errors

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107436411A (en) * 2017-07-28 2017-12-05 南京航空航天大学 Battery SOH On-line Estimation methods based on fractional order neural network and dual-volume storage Kalman
CN109088749A (en) * 2018-07-23 2018-12-25 哈尔滨理工大学 The method for estimating state of complex network under a kind of random communication agreement
CN111025914A (en) * 2019-12-26 2020-04-17 东北石油大学 Neural network system remote state estimation method and device based on communication limitation
US11449754B1 (en) * 2021-09-12 2022-09-20 Zhejiang University Neural network training method for memristor memory for memristor errors

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117077748A (en) * 2023-06-15 2023-11-17 盐城工学院 Coupling synchronous control method and system for discrete memristor neural network
CN117077748B (en) * 2023-06-15 2024-03-22 盐城工学院 Coupling synchronous control method and system for discrete memristor neural network
CN117949897A (en) * 2024-01-09 2024-04-30 哈尔滨理工大学 Multifunctional radar working mode identification method based on time sequence segmentation and clustering

Also Published As

Publication number Publication date
CN116227324B (en) 2023-09-19

Similar Documents

Publication Publication Date Title
CN116227324B (en) Fractional order memristor neural network estimation method under variance limitation
CN109088749B (en) State estimation method of complex network under random communication protocol
CN112115419B (en) System state estimation method and system state estimation device
CN107102969A (en) The Forecasting Methodology and system of a kind of time series data
Liu et al. State estimation for neural networks with Markov-based nonuniform sampling: The partly unknown transition probability case
CN112116138A (en) Power system prediction state estimation method and system based on data driving
CN109995031B (en) Probability power flow deep learning calculation method based on physical model
CN110443724B (en) Electric power system rapid state estimation method based on deep learning
CN111025914B (en) Neural network system remote state estimation method and device based on communication limitation
Gospodinov et al. Minimum distance estimation of possibly noninvertible moving average models
CN113240105B (en) Power grid steady state discrimination method based on graph neural network pooling
CN105355198A (en) Multiple self-adaption based model compensation type speech recognition method
CN107276561A (en) Based on the Hammerstein system identifying methods for quantifying core least mean-square error
CN113435595A (en) Two-stage optimization method for extreme learning machine network parameters based on natural evolution strategy
Lin Wavelet neural networks with a hybrid learning approach
CN105808962A (en) Assessment method considering voltage probabilities of multiple electric power systems with wind power output randomness
CN109217844B (en) Hyper-parameter optimization method based on pre-training random Fourier feature kernel LMS
CN115935787B (en) Memristor neural network state estimation method under coding and decoding mechanism
CN116304940A (en) Analog circuit fault diagnosis method based on long-short-term memory neural network
Horváth et al. Sample autocovariances of long-memory time series
CN109474258B (en) Nuclear parameter optimization method of random Fourier feature kernel LMS (least mean square) based on nuclear polarization strategy
CN111416595B (en) Big data filtering method based on multi-core fusion
Mustapha et al. Data selection and fuzzy-rules generation for short-term load forecasting using ANFIS
CN113447818B (en) Identification method and system of battery equivalent circuit model
Bermeo et al. Artificial Neural Network and Monte Carlo Simulation in a hybrid method for time series forecasting with generation of L-scenarios

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant