CN115935787A - Memristor neural network state estimation method under coding and decoding mechanism
- Publication number: CN115935787A
- Application number: CN202211386982.3A
- Authority
- CN
- China
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Classifications
- Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a memristive neural network state estimation method under a coding and decoding mechanism, which comprises the following steps: step one, establishing a dynamic model of a memristive neural network with H∞ performance constraints and sensor energy harvesting; step two, performing state estimation on the memristive neural network dynamic model under the coding and decoding mechanism; step three, calculating the upper bound of the error covariance matrix of the memristive neural network and the H∞ performance constraint; step four, solving for the estimator gain matrix K_k using a stochastic analysis method and a series of linear matrix inequalities, thereby realizing state estimation of the memristive neural network; then judging whether k+1 has reached the total duration N; if k+1 < N, returning to step two, otherwise ending. The invention solves the problem that existing state estimation methods cannot simultaneously handle, under a coding and decoding mechanism, state estimation of memristive neural networks with H∞ performance constraints and limited variance, which leads to low estimation accuracy, and thereby improves estimation performance.
Description
Technical Field
The invention relates to a memristive neural network state estimation method, and in particular to a state estimation method, under a coding and decoding mechanism, for memristive neural networks with H∞ performance constraints and variance limits.
Background
Neural networks are dynamic networks composed of large numbers of interconnected neurons. They are widely used to model and analyze practical systems such as pattern recognition, optimization problems and associative memory.
The memristor is the fourth basic passive circuit element, alongside the resistor, the capacitor and the inductor, and is realized as a novel nanoscale information device. Compared with existing devices, memristors offer low energy consumption, small size and non-volatility. In fact, memristors closely resemble biological synapses in both structure and function, so a growing number of researchers choose to replace the synapses in artificial neural networks with memristors.
In much engineering practice, and especially in current network environments, delays, bandwidth limitations and the like inevitably occur during information transmission due to machine faults, communication channel congestion and similar causes. It is therefore necessary to design a state estimation method for memristive neural networks with H∞ performance constraints and limited variance that is suited to the coding and decoding mechanism, especially when the H∞ performance constraint and the variance limit must be considered simultaneously.
Existing state estimation methods cannot simultaneously handle, under the coding and decoding mechanism, state estimation of memristive neural networks with H∞ performance constraints and limited variance, resulting in low estimation accuracy.
Disclosure of Invention
The invention aims to provide a memristive neural network state estimation method under a coding and decoding mechanism. It solves the problem that existing state estimation methods cannot simultaneously handle, under such a mechanism, state estimation of memristive neural networks with H∞ performance constraints and limited variance, which leads to low estimation accuracy, as well as the performance degradation caused when information cannot be received at certain instants under the coding and decoding mechanism. The method can be used in the field of memristive neural network state estimation.
The purpose of the invention is realized by the following technical scheme:
a memristor neural network state estimation method under a coding and decoding mechanism comprises the following steps:
step one, establishing that H is arranged under a coding and decoding mechanism ∞ A dynamic model of a memristive neural network for performance constraint and sensor energy harvesting;
secondly, performing state estimation on the memristor neural network dynamic model established in the first step under an encoding and decoding mechanism;
step three, giving H ∞ Performance index gamma, semi-positive definite matrix number oneHalf positive definite matrix two number->And initial conditions x 0 And &>Calculating the upper bound of the error covariance matrix H of the memristive neural network ∞ Performance constraints;
fourthly, solving the gain matrix K of the estimator by utilizing a random analysis method and solving a series of linear matrix inequalities k To have H under the coding and decoding mechanism ∞ Performing state estimation on a memristive neural network for performance constraint and sensor energy harvesting; and D, judging whether k +1 reaches the total duration N, if k +1 is less than N, executing the step two, and otherwise, ending.
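The four-step recursion above can be sketched as a simple control loop. Everything inside the steps (the model, the covariance bound, the LMI solve) is abstracted behind callables, since those details are developed later in the description; this is a hedged illustration of the flow only, not the patented computation:

```python
# Hedged sketch of the four-step recursion: step_two/step_three/step_four are
# placeholder callables (assumptions) standing in for the operations the
# patent specifies; only the k -> k+1 control flow is illustrated.

def estimate(N, step_two, step_three, step_four):
    """Run steps 2-4 repeatedly until k+1 reaches the total duration N."""
    k = 0
    while True:
        step_two(k)        # state estimation under the codec mechanism
        step_three(k)      # error-covariance upper bound and H-infinity check
        step_four(k)       # solve the LMIs for the estimator gain K_k
        if k + 1 >= N:     # stop once k+1 reaches the total duration N
            break
        k += 1
    return k               # last processed time instant
```

The loop mirrors the termination rule in step four: continue with step two while k+1 < N, otherwise end.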
In the invention, the neural network may be a network formed by mass-spring systems, a network formed by vehicle suspensions, a nonlinear truck-trailer model, a network formed by spacecraft, or a network formed by radars.
Compared with the prior art, the invention has the following advantages:
1. The invention provides a state estimation method, under a coding and decoding mechanism, for memristive neural networks with H∞ performance constraints and variance limits. It simultaneously accounts for the effects of the H∞ performance constraint, sensor energy harvesting and the variance limit on estimation performance under the coding and decoding mechanism, and, by using a stochastic analysis method and inequality-processing techniques, it makes full use of the information contained in the estimation error covariance matrix. Compared with existing time-delay neural network state estimation methods, this method treats the H∞-constrained, variance-limited estimation problem under the coding and decoding mechanism as a whole; the resulting error system simultaneously satisfies an upper bound on the estimation error covariance and the given H∞ performance requirement, thereby suppressing disturbances and improving estimation accuracy at the same time.
2. Using a stochastic analysis method, the invention first derives, separately, sufficient conditions under which the estimation error system satisfies the H∞ performance constraint and under which the error covariance admits an upper bound; it then obtains a criterion under which the estimation error system satisfies both simultaneously; finally, the value of the estimator gain matrix is obtained by solving a series of linear matrix inequalities. As a result, estimation performance is preserved under the coding and decoding mechanism even when the H∞ performance constraint and the variance limit hold at the same time, and estimation accuracy is improved.
3. The invention solves the problem that existing state estimation methods cannot simultaneously handle, under the coding and decoding mechanism, state estimation of memristive neural networks with H∞ performance constraints and limited variance, which causes low estimation accuracy, and thereby improves estimation performance. The simulation figures show that as λ grows, the state estimation performance of the memristive neural network gradually degrades and the estimation error becomes relatively large, further verifying the feasibility and effectiveness of the proposed state estimation method.
Drawings
FIG. 1 is a flow chart of the memristive neural network state estimation method under the coding and decoding mechanism of the invention;
FIG. 2 compares the actual state trajectory z_k of the memristive neural network with the state estimation trajectories in two different cases, z_k being the state variable of the memristive neural network at time k;
FIG. 3 compares the controlled-output estimation errors of the memristive neural network in the two different cases;
FIG. 4 shows the trajectory of the error covariance of the actual state of the memristive neural network and the trajectory of its upper bound;
Detailed Description
The technical solution of the invention is further described below with reference to the accompanying drawings, but is not limited thereto; any modification or equivalent replacement that does not depart from the spirit and scope of the technical solution of the invention shall be covered by the protection scope of the invention.
The invention provides a memristive neural network state estimation method under a coding and decoding mechanism, comprising the following steps.
Step one, establishing, under the coding and decoding mechanism, a dynamic model of a memristive neural network with H∞ performance constraints and sensor energy harvesting. The specific steps are as follows:
Under the coding and decoding mechanism, the state-space form of the dynamic model of a memristive neural network with H∞ performance constraints and sensor energy harvesting is:
x_{k+1} = A(x_k) x_k + A_d(x_k) x_{k-d} + B(x_k) f(x_k) + C_k v_{1,k}   (1)
z_k = H_k x_k   (2)
where x_k, x_{k+1}, x_{k-d} ∈ R^n are the neuron state variables of the memristive neural network at times k, k+1 and k-d, R^n being the Euclidean space of memristive neural network states with dimension n; z_k ∈ R^r is the controlled measurement output at time k, R^r being the Euclidean space of controlled outputs with dimension r; χ_k, k = -d, -d+1, …, 0, are the initial values, d being the discrete fixed network time delay; A(x_k) = diag_n{a_i(x_{i,k})} is the self-feedback diagonal matrix of the memristive neural network at time k, where diag{·} denotes a diagonal matrix and a_i(x_{i,k}) is the i-th diagonal entry of A(x_k); A_d(x_k) = {a_{ij,d}(x_{i,k})}_{n×n} is the known time-delay-dependent system matrix at time k with entries a_{ij,d}(x_{i,k}); B(x_k) = {b_{ij}(x_{i,k})}_{n×n} is the known connection weight matrix of the excitation function at time k with entries b_{ij}(x_{i,k}); f(x_k) is the nonlinear excitation function at time k; C_k is the known noise distribution matrix at time k; H_k is the known measurement adjustment matrix at time k; and v_{1,k} is a Gaussian white noise sequence with zero mean and covariance V_1 > 0.
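As a concrete illustration, the dynamics (1)-(2) can be simulated with small placeholder matrices. The state-dependent switching of A(x_k), A_d(x_k) and B(x_k) is frozen to constant values here, and every matrix value, the dimensions and the choice of tanh as excitation function are assumptions; this is a minimal sketch of the model structure only:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, T = 2, 2, 10                      # state dim, time delay, horizon (assumed)
A  = np.array([[0.3, 0.1], [0.0, 0.2]]) # placeholder for A(x_k), held constant here
Ad = 0.1 * np.eye(n)                    # placeholder for A_d(x_k)
B  = 0.2 * np.eye(n)                    # placeholder for B(x_k)
C  = 0.05 * np.eye(n)                   # noise distribution matrix C_k
H  = np.eye(n)                          # measurement adjustment matrix H_k
f  = np.tanh                            # a common sector-bounded excitation function

x = [np.zeros(n)] * (d + 1)             # initial values chi_k for k = -d, ..., 0
for k in range(T):
    v1 = rng.standard_normal(n)         # zero-mean Gaussian white noise v_{1,k}
    x_next = A @ x[-1] + Ad @ x[-1 - d] + B @ f(x[-1]) + C @ v1   # eq. (1)
    x.append(x_next)
z = [H @ xk for xk in x[d:]]            # controlled output z_k = H_k x_k, eq. (2)
```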
The state-dependent matrix parameters a_i(x_{i,k}), a_{ij,d}(x_{i,k}) and b_{ij}(x_{i,k}) satisfy a memristive switching rule: each switches between two known stored constant values according to whether |x_{i,k}| crosses the known switching threshold Γ_i > 0. Here a_i(x_{i,k}), a_{ij,d}(x_{i,k}) and b_{ij}(x_{i,k}) are the entries of A(x_k), A_d(x_k) and B(x_k) at time k; for each i, the two stored values of a_i are the i-th known upper and lower storage values, the two stored values of a_{ij,d} are the (i,j)-th known left and right storage values, and the two stored values of b_{ij} are the (i,j)-th known inner and outer storage values.
Defining:
A^- = diag_n{ min{ stored values of a_i } },  A^+ = diag_n{ max{ stored values of a_i } },
A_d^- = { min{ stored values of a_{ij,d} } }_{n×n},  A_d^+ = { max{ stored values of a_{ij,d} } }_{n×n},
B^- = { min{ stored values of b_{ij} } }_{n×n},  B^+ = { max{ stored values of b_{ij} } }_{n×n},
where min{·,·} takes the smaller and max{·,·} the larger of the two stored values; A^- and A^+ are the entrywise lower- and upper-bound diagonal matrices of A(x_k), A_d^- and A_d^+ the corresponding bound matrices of A_d(x_k), B^- and B^+ the corresponding bound matrices of B(x_k), and n is the dimension.
Further decompose A(x_k) = Ā + ΔA_k, A_d(x_k) = Ā_d + ΔA_{d,k} and B(x_k) = B̄ + ΔB_k, where Ā = (A^+ + A^-)/2, Ā_d = (A_d^+ + A_d^-)/2 and B̄ = (B^+ + B^-)/2 are the interval midpoint matrices, and the perturbations ΔA_k (the first uncertainty matrix), ΔA_{d,k} (the second) and ΔB_k (the third) satisfy the norm-bounded uncertainty condition: each can be expressed through known real-valued weight matrices and an unknown matrix F_k at time k satisfying F_k^T F_k ≤ I, where F_k^T is the transpose of F_k.
Step two, performing state estimation on the memristive neural network dynamic model established in step one under the coding and decoding mechanism. The specific steps are as follows:
Step 2.1, the measurement output of the time-delay memristive neural network takes the form:
y_k = D_k x_k + E_k v_{2,k}
where y_k ∈ R^m is the measurement output of the memristive neural network at time k, R^m being the Euclidean measurement space with dimension m; x_k ∈ R^n is the neuron state variable of the memristive neural network at time k, R^n being the Euclidean state space with dimension n; D_k and E_k are known measurement matrices at time k; and v_{2,k} is a Gaussian white noise sequence with zero mean and covariance V_{2,k} > 0.
Step 2.2, at time k the energy level of the sensor is q_k ∈ {0, 1, 2, …, S}, where S is the maximum number of energy units the sensor can store. The energy h_k harvested at time k is an independent and identically distributed random process with probability distribution:
Prob(h_k = i) = p_i,  i = 0, 1, 2, …
where q_k is the sensor energy level at time k, S is the maximum number of energy units the sensor can store, h_k is the energy harvested at time k, and p_i is the probability that i units of energy are harvested, with 0 ≤ p_i ≤ 1 and Σ_i p_i = 1.
Step 2.3, at time k the sensor can transmit its measurement to the state estimator only when it stores a nonzero number of energy units, and 1 unit of energy is consumed if and only if such a transmission occurs. The energy dynamics of the sensor can therefore be expressed as:
q_{k+1} = min{ q_k + h_k - 1_{(q_k > 0)}, S }
with given initial level q_0, where q_0, q_k and q_{k+1} are the sensor energy levels at times 0, k and k+1, min{·,·} takes the smaller of the two quantities, h_k is the energy harvested at time k, 1_{(q_k > 0)} is the indicator of the 1 unit of energy consumed by the sensor when q_k > 0, and S is the maximum number of energy units the sensor can store.
The measurement received by the state estimator can be expressed as:
ȳ_k = 1_{(q_k > 0)} y_k
where ȳ_k is the measurement actually received by the state estimator at time k, y_k is the measurement the state estimator would receive in the ideal case, and 1_{(q_k > 0)} is the indicator function that equals 1 when q_k > 0 and 0 otherwise.
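Steps 2.2 and 2.3 together describe an energy-harvesting sensor that transmits only while its stored energy is nonzero and spends one unit per transmission. A hedged sketch, in which the harvesting distribution p_i, the capacity S and the horizon are all assumed values:

```python
import numpy as np

rng = np.random.default_rng(1)
S = 5                                   # max energy units the sensor can store
p = np.array([0.3, 0.4, 0.2, 0.1])      # assumed harvesting pmf: Prob(h_k = i) = p_i
q = 0                                   # initial energy level q_0

received = []
for k in range(20):
    transmit = q > 0                    # measurement is sent iff stored energy > 0
    received.append(bool(transmit))     # ybar_k = y_k if transmit, else nothing arrives
    h = rng.choice(len(p), p=p)         # i.i.d. harvested energy h_k
    q = min(q + h - (1 if transmit else 0), S)   # energy dynamics, capped at S
    assert 0 <= q <= S                  # the level stays in {0, ..., S}
```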
Step 2.4, the encoding rule is defined as follows: the encoder maintains an internal operating state with a given value at time 0; at each time k it forms the innovation between the actually received measurement and the prediction of its internal state obtained through a known shift matrix of appropriate dimension, scales the innovation by a known scaling parameter δ_k, quantizes it with a selected uniform quantizer, and updates its internal state accordingly; the result at time k+1 is the measurement output of the encoder, which lies in the n-dimensional Euclidean state space, and a defined amplification matrix enters the update in component form (T denoting the transpose). The uniform quantizer acts componentwise on a signal vector ζ: each component ζ_h is mapped to a quantization level, where l is the interval length of the quantization step and the number of quantization levels is a given positive integer.
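A componentwise uniform quantizer of this kind can be sketched as follows. The exact map and saturation rule of the patent are not fully recoverable from the text, so rounding each component to the nearest multiple of the step length l, with saturation at M levels on each side, is an assumption:

```python
import numpy as np

def uniform_quantize(zeta, l, M):
    """Componentwise uniform quantizer (a common form; the patent's exact map
    is an assumption here): round each entry of the signal vector zeta to the
    nearest multiple of the step length l, then saturate to M levels per side."""
    q = l * np.round(np.asarray(zeta, dtype=float) / l)
    return np.clip(q, -M * l, M * l)

z = uniform_quantize([0.26, -1.9, 7.0], l=0.5, M=4)
# each entry is within l/2 of the input unless it saturates at +/- M*l
```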
Step 2.5, the decoding rule is defined symmetrically: the decoder maintains a measurement output with a given value at time 0, and at each time k it updates this output from the received codeword using the same known shift matrix of appropriate dimension, the same selected uniform quantizer and the same known scaling parameter δ_k, so that the decoder reproduces the internal operating state of the encoder. The measurement decoding error at time k is η_k, the difference between the decoder's measurement output and the measurement y_k that the state estimator would ideally receive. The decoding error satisfies the condition that its infinity norm ||η_k||_∞ is bounded by a quantity proportional to l δ_k, where ||·||_∞ is the infinity norm, l is the interval length of the quantization step, and δ_k is the known scaling parameter at time k.
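The encoder/decoder pair can be sketched as follows: the encoder quantizes the scaled innovation against its internal state, and the decoder mirrors that state update, so the two states coincide and the decoding error reduces to the quantization residual. The concrete recursions, the shift matrix R and the constant scaling δ are assumptions patterned on the symbols in the text:

```python
import numpy as np

def quantize(v, l=0.1):
    # uniform quantizer with step length l (rounding form is an assumption)
    return l * np.round(np.asarray(v) / l)

class Codec:
    """Hedged sketch of the encode/decode pair: only the quantized, scaled
    innovation is transmitted, and the decoder mirrors the encoder's internal
    state, so their states coincide exactly."""
    def __init__(self, dim, R, delta):
        self.R, self.delta = R, delta
        self.state = np.zeros(dim)      # internal operating state at time 0

    def encode(self, y):
        s = quantize((y - self.R @ self.state) / self.delta)  # codeword sent
        self.state = self.R @ self.state + self.delta * s     # encoder update
        return s

    def decode(self, s, dec_state):
        return self.R @ dec_state + self.delta * s            # decoder mirror

R = 0.5 * np.eye(2)
enc = Codec(2, R, delta=1.0)
dec_state = np.zeros(2)
for y in (np.array([1.0, -0.5]), np.array([0.8, 0.2])):
    s = enc.encode(y)
    dec_state = enc.decode(s, dec_state)
# decoder output tracks y up to the quantization-induced decoding error
err = np.max(np.abs(dec_state - np.array([0.8, 0.2])))
```

Because the decoder applies the same update as the encoder, `dec_state` equals `enc.state` at every step; the residual `err` stays within the quantizer resolution, matching the infinity-norm decoding-error bound described above.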
The nonlinear function f(s) satisfies a sector-bounded condition: there exist a first and a second known real matrix of appropriate dimensions at time k, denoted F_{1,k} and F_{2,k}, such that f lies in the sector they determine, i.e. in the standard form (f(s) - F_{1,k} s)^T (f(s) - F_{2,k} s) ≤ 0.
Step 2.6, to estimate the state of the time-delay memristive neural network, the following time-varying state estimator is constructed on the basis of the available measurement information:
x̂_{k+1} = Ā x̂_k + Ā_d x̂_{k-d} + B̄ f(x̂_k) + K_k ( y̌_k - μ_k D_k x̂_k ),  ẑ_k = H_k x̂_k   (5)
where x̂_k, x̂_{k+1}, x̂_{k-d} ∈ R^n are the state estimates of the memristive neural network at times k, k+1 and k-d, R^n being the Euclidean state space with dimension n; d is the fixed network time delay; ẑ_k ∈ R^r is the estimate of the controlled output at time k, R^r being the Euclidean space of controlled outputs with dimension r; Ā, Ā_d and B̄ are the interval midpoint matrices defined above; f is the nonlinear excitation function at time k; H_k is the known measurement adjustment matrix at time k; D_k is the known measurement matrix at time k; y̌_k is the measurement output of the decoder at time k; μ_k is the mathematical expectation of the indicator function; and K_k is the estimator gain matrix to be solved.
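One update of a time-varying estimator with this structure can be sketched as follows. The correction term K_k(y̌_k - μ_k D_k x̂_k), the use of tanh as the excitation function and all matrix values are assumptions patterned on the symbols listed above:

```python
import numpy as np

def estimator_step(xhat_k, xhat_kd, ydec_k, K_k, mats, mu_k):
    """One update of a time-varying estimator of the hedged form
    xhat_{k+1} = A xhat_k + Ad xhat_{k-d} + B f(xhat_k)
                 + K_k (ydec_k - mu_k * D xhat_k),
    where ydec_k is the decoder output and mu_k is the expectation of the
    transmission indicator. Exact structure is an assumption, not the patent's."""
    A, Ad, B, D, H = mats
    f = np.tanh                                   # assumed excitation function
    innov = ydec_k - mu_k * (D @ xhat_k)          # innovation from decoded data
    xhat_next = A @ xhat_k + Ad @ xhat_kd + B @ f(xhat_k) + K_k @ innov
    zhat_k = H @ xhat_k                           # controlled-output estimate
    return xhat_next, zhat_k
```

Iterating this step over k, with the gain K_k supplied by the LMI solve of step four, produces the state estimation trajectory.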
The main purpose of this step is to design the time-varying state estimator (5) based on the coding and decoding mechanism so that the estimation error system simultaneously satisfies the following two performance constraint requirements:
(1) Let the disturbance attenuation level be γ > 0 and let the first and second semi-positive definite weighting matrices be given. For the initial estimation error e_0, the controlled-output estimation error z̃_k satisfies the H∞ performance constraint, which in the standard finite-horizon form reads:
E{ Σ_{k=0}^{N-1} ||z̃_k||² } ≤ γ² Σ_{k=0}^{N-1} ||v_k||² + γ² e_0^T S e_0
where N is the finite horizon length, E{·} denotes mathematical expectation, v_k is the stacked vector of the noises v_{1,k} and v_{2,k}, S is a weighting matrix formed from the two given semi-positive definite matrices, e_0 is the estimation error at time 0, γ > 0 is the given disturbance attenuation level, ||·|| denotes the norm and ||·||² its square.
(2) The estimation error covariance satisfies the following upper-bound constraint:
E{ e_k e_k^T } ≤ Ψ_k
where e_k^T is the transpose of e_k and Ψ_k denotes the pre-given acceptable estimation accuracy matrix at time k.
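The upper-bound constraint is an inequality in the positive semi-definite ordering: writing Psi_k for the pre-given accuracy matrix, E{e_k e_k^T} ≤ Psi_k means Psi_k minus the covariance has no negative eigenvalue. A small numerical check with illustrative stand-in matrices (the values are assumptions):

```python
import numpy as np

def satisfies_upper_bound(X_k, Psi_k, tol=1e-10):
    """Check the variance constraint X_k <= Psi_k in the positive semi-definite
    ordering: Psi_k - X_k must have no eigenvalue below -tol. The matrices here
    are illustrative stand-ins for the patent's quantities."""
    diff = Psi_k - X_k
    return bool(np.min(np.linalg.eigvalsh(diff)) >= -tol)

X   = np.array([[0.2, 0.05], [0.05, 0.1]])   # sample error covariance
Psi = 0.5 * np.eye(2)                        # pre-given accuracy matrix
ok  = satisfies_upper_bound(X, Psi)          # Psi - X is PSD here
```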
Step three, given the H∞ performance index γ, the first and second semi-positive definite weighting matrices, and the initial conditions x_0 and x̂_0, calculating the upper bound of the error covariance matrix of the memristive neural network and the H∞ performance constraint.
The specific steps are as follows:
Step 3.1, the H∞ performance analysis problem is established and a corresponding tractable criterion is given in the form of a block matrix inequality involving a block matrix Σ, where: the given semi-positive definite matrix, the given positive scalar γ, the semi-positive definite matrix at time k, the transposes of the off-diagonal blocks, and the known adjustment constant μ_k all enter Σ; Σ_11 is the block in row 1, column 1 of Σ, Σ_12 the block in row 1, column 2, Σ_22 the block in row 2, column 2, and Σ_33, Σ_44, Σ_55, Σ_66 and Σ_77 the corresponding diagonal blocks; 0 denotes a matrix block whose elements are all 0.
Step 3.2, the covariance matrix X_k = E{e_k e_k^T} is analyzed and the following sufficient condition is given: a matrix recursion yields, at each time k, an upper-bound matrix G_k such that X_k ≤ G_k, where G_k is the upper bound of the error covariance matrix at time k and G_{k-d} the upper-bound matrix at time k-d; tr(G_k) is the trace of the upper-bound matrix at time k; e_k is the estimation error matrix at time k; ρ ∈ (0,1) and μ_k are known adjustment constants; the two known real matrices of appropriate dimensions from the sector condition also enter the recursion; and tr(·) denotes the matrix trace.
By combining the two results above, sufficient conditions are obtained which guarantee that the estimation error system satisfies the given H∞ performance requirement and that the error covariance is bounded.
Step four, solving for the estimator gain matrix K_k using a stochastic analysis method and a series of linear matrix inequalities, so as to perform state estimation of the memristive neural network with H∞ performance constraints and sensor energy harvesting under the coding and decoding mechanism; judging whether k+1 has reached the total duration N; if k+1 < N, returning to step two, otherwise ending.
In this step, solving the series of recursive linear matrix inequalities (9) to (11) provides sufficient conditions under which the estimation error system simultaneously satisfies the H∞ performance requirement and a bounded error covariance, and allows the value of the estimator gain matrix to be calculated.
the update matrix is:
in the formula (I), the compound is shown in the specification,
H 22 =diag{-ε 1,k I,-ε 2,k I,-ε 2,k I,-ε 3,k I,-ε 3,k I},
in the formula, epsilon 1,k ,ε 2,k ,ε 3,k ,ε 4,k ,ε 5,k And ε 6,k The first, second, third, fourth, fifth and sixth adjusted normal numbers at the kth moment; i is an identity matrix;the weight matrix is the first weight matrix at the kth moment; />Is the second weight matrix at the kth moment;the weight matrix is the third weight matrix at the kth moment; />Real matrix # 1 of known appropriate dimension at time k, which is the 1 st component>A second real matrix of known appropriate dimensions at time k for the 2 nd component; />A state estimate for the nonlinear excitation function at the kth time;are each H 12 ,H 13 ,Θ 12 ,Θ 13 ,Θ 14 ,Θ 15 ,S k ,T k ,W k Transposing; />Are each Ψ 12 ,Ψ 13 ,Ψ 14 ,Ψ 23 ,Ψ 25 ,Ψ 27 ,Ψ 38 ,Ψ 39 Transposing; h 1 ,H 2 ,H 3 ,H 4 And H 5 A measurement matrix of number one, number two, number three, number four and number five, respectively, N 1k A first number metric matrix of known appropriate dimension at time k that is the 1 st component; n is a radical of 2k A second matrix of metrics of known appropriate dimension at time k for the 2 nd component; n is a radical of 3k Metric matrix # iii of known appropriate dimension at time k for the 3 rd component; n is a radical of 4k Metric matrix # III of known appropriate dimension at time k for the 4 th component; n is a radical of 5k A metric matrix # III of known appropriate dimension at time k for the 5 th component; h 11 Is a row 1, column 1, block matrix, H 12 Is a row 1, column 2 block matrix, H 13 Is a row 1, column 3 block matrix, H 22 Is a 2 nd row 2 nd column block matrix, H 33 Is a row 3, column 3 blocking matrix, Θ 11 Is a row 1, column 1 blocking matrix, Θ 12 Is a row 1, column 2 block matrix, Θ 13 Is a row 1, column 3 blocking matrix, Θ 14 Is a row 1, column 4 block matrix, Θ 15 Is a row 1, column 5 block matrix, Θ 22 Is a row 1, column 1 blocking matrix, Θ 33 Is a row 3, column 3 blocking matrix, Θ 44 Is the row 4 column 4 block matrix, Θ 55 Is a 5 th row and 5 th column 
block matrix, S k Is the norm-first bounded weight matrix at time k, T k Is the second norm bounded weight matrix, W, at time k k Is the norm number three bounded weight matrix at the kth time, xi 11 Is the 1 st row and 1 st column block matrix xi 23 Is the 2 nd row and 3 rd column block matrix xi 25 Is the 2 nd row and 5 th column block matrix xi 27 Is the 2 nd row and 7 th column part matrix xi 38 Is the block matrix of row 3, column 8. Xi 33 Is the 3 rd row and 3 rd column block matrix xi 44 Is the 4 th row and 4 th column block matrix xi 55 Is the 5 th row and 5 th column block matrix xi 66 Is the 6 th row and 6 th column block matrix xi 77 Is the 7 th row and 7 th column part matrix xi 88 Is the 8 th row 8 th column block matrix, xi 99 Is the 9 th row and 9 th column block matrix, Ψ 11 Is a row 1, column 1 block matrix, Ψ 13 Is a row 1, column 3 block matrix, Ψ 14 Is a row 1, column 4 block matrix, Ψ 22 Is a row 2, column 2 block matrix, Ψ 39 Is row 3, column 9 block matrix, based on the number of blocks selected>Determining a matrix number for a given semi-positive; gamma is a given positive scalar; /> Are respectively asR 3k Transposing; w k A semi-positive definite matrix at the kth moment; />A semi-positive definite matrix at the k-d moment; mu.s k In order to be known to regulate the normal number,for the neuron state estimate at the k-th instant, a decision is made whether to predict a neuron state based on the measured values>For the first update matrix at the time k +1, G k To estimate the upper bound matrix of errors, tr (G) k ) For estimating the error upper bound matrix G at the k-th time k The trace of (2); g k-d Is an upper bound matrix at time k-d, σ is an adjusted weight coefficient, and>and &>Are all made ofKnown real-valued weight matrix, based on a weighted value of the sum of the weighted values>Is an unknown matrix and satisfies >> Is/>And 0 represents that all elements in the matrix block are 0.
The theory underlying steps three and four of the invention is as follows:
First, the H∞ performance analysis problem is addressed and a corresponding, easily solvable judgment criterion is given; next, the upper-bound constraint problem of the covariance matrix X_k is discussed and sufficient conditions are given. Combining these two results yields sufficient conditions under which the estimation error system satisfies the given H∞ performance requirement and the error covariance is bounded; the value of the estimator gain matrix K_k is then calculated by solving a series of linear matrix inequalities.
Example:
This embodiment takes a memristive neural network with H∞ performance constraints and sensor energy harvesting as an example; the method can also be applied to associative memory, pattern recognition and combinatorial optimization. Here the method is used to simulate a face recognition case:
Under the coding and decoding mechanism, the relevant system parameters of the state model, measurement output model and controlled output model of the memristive neural network with H∞ performance constraints and sensor energy harvesting are selected as follows:
According to the state of the human face, the corresponding adjustment matrices are given as follows:
the measurement adjustment matrix is:
the controlled output adjustment matrix is:
the state weight matrix is:
the weight matrix and tuning parameters of the nonlinear function are:
the excitation function is taken as:
In the formula, x_k = [x_{1,k} x_{2,k}]^T is the state vector of the memristive neurons, x_{1,k} is the first component of the state x_k at time k, and x_{2,k} is the second component of the state x_k at time k.
Other simulation initial values are selected as follows:
Disturbance attenuation level γ = 0.7; semi-positive definite matrix number one, upper-bound matrix and covariances V_{1k} = V_{2k} = 1; initial state x_0 = [-2.4 2]^T.
The values of the relevant estimator gain matrices are solved using recursive linear matrix inequalities; partial numerical values are as follows:
Case I: λ = 0.1;
K_1 = [1.2595 -0.1230]^T, K_2 = [0.8933 0.0535]^T, K_3 = [1.2687 -0.0400]^T,
Case II: λ = 1;
K_1 = [0.3525 -0.7586]^T, K_2 = [0.1521 -0.1137]^T, K_3 = [0.3446 0.1180]^T,
Effect of the state estimator:
As can be seen from Fig. 2, the state estimator design method of the invention can effectively estimate the target state of a memristive neural network with sensor energy harvesting and variance limitation under the encoding and decoding mechanism.
As can be seen from Figs. 3, 4 and 5, the estimation error at each time instant becomes worse as the probability λ increases.
Claims (9)
1. A memristive neural network state estimation method under a coding and decoding mechanism is characterized by comprising the following steps:
Step one, establishing a dynamic model of a memristive neural network with H∞ performance constraints and sensor energy harvesting under a coding and decoding mechanism;
Step two, performing state estimation on the memristive neural network dynamic model established in step one under the coding and decoding mechanism;
Step three, given the H∞ performance index γ, semi-positive definite matrices number one and number two, and the initial conditions x_0 and the initial state estimate, calculating the upper bound of the error covariance matrix of the memristive neural network and the H∞ performance constraint;
Step four, solving a series of linear matrix inequalities to obtain the estimator gain matrix K_k, and using it to perform state estimation on the memristive neural network with H∞ performance constraints and sensor energy harvesting under the coding and decoding mechanism; judging whether k+1 reaches the total duration N: if k+1 < N, returning to step two; otherwise, ending.
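The four-step procedure above can be sketched as a simple control loop. The helper functions named below (build_model, estimate_state, compute_bounds, solve_lmis) are hypothetical placeholders standing in for steps one to four, since the concrete models and linear matrix inequalities are specified only later in the claims:

```python
# Hedged sketch of the step-one-to-step-four recursion; all four helper
# functions are hypothetical placeholders, not APIs defined by the patent.
def build_model():
    return {}                        # step 1: codec + energy-harvesting model

def estimate_state(model, k):
    return {}                        # step 2: state estimation at time k

def compute_bounds(model, est, k):
    return None                      # step 3: covariance upper bound and H-inf check

def solve_lmis(model, est, k):
    return None                      # step 4: gain K_k from recursive LMIs

def run_estimation(N: int) -> int:
    """Run the recursion until k + 1 reaches the total duration N."""
    k = 0
    model = build_model()
    while True:
        est = estimate_state(model, k)
        compute_bounds(model, est, k)
        solve_lmis(model, est, k)
        if k + 1 >= N:               # stop condition from step four
            return k
        k += 1                       # otherwise return to step two
```

The loop mirrors the claim's stopping rule: the recursion returns to step two while k+1 < N and terminates once the total duration is reached.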
2. The method for estimating the state of a memristive neural network under an encoding and decoding mechanism according to claim 1, wherein the neural network is a biometric identification network, a network formed by mass-spring systems, a network formed by vehicle suspensions, a nonlinear truck-trailer model, a network formed by spacecraft, or a network formed by radar.
3. The method according to claim 1, wherein in step one the state space form of the dynamic model of the memristive neural network with H∞ performance constraints and sensor energy harvesting under the coding and decoding mechanism is:
x_{k+1} = A(x_k)x_k + A_d(x_k)x_{k-d} + B(x_k)f(x_k) + C_k v_{1k}
z_k = H_k x_k
In the formula, x_k, x_{k+1} and x_{k-d} are the neuron state variables of the memristive neural network at times k, k+1 and k-d, respectively, lying in the Euclidean space of the memristive neural network state, whose dimension is n; z_k is the controlled output at time k, lying in the Euclidean space of the controlled output state of the memristive neural network, whose dimension is r; χ_k, k = -d, -d+1, …, 0 is the initial value at time k, where d is a discrete fixed network time lag; A(x_k) = diag_n{a_i(x_{ik})} is the self-feedback diagonal matrix of the memristive neural network at time k, where n is the dimension and diag{·} denotes a diagonal matrix, with a_i(x_{ik}) the ith component form of A(x_k) at time k; A_d(x_k) = {a_{ij,d}(x_{i,k})}_{n×n} is the time-lag-related system matrix of known dimension at time k, with a_{ij,d}(x_{i,k}) the ij-th component form of A_d(x_k) at time k; B(x_k) = {b_{ij}(x_{i,k})}_{n×n} is the known weight matrix of the connected excitation function at time k, with b_{ij}(x_{i,k}) the ij-th component form of B(x_k) at time k; f(x_k) is the nonlinear excitation function at time k; C_k is the known system noise distribution matrix at time k; H_k is the known measurement adjustment matrix at time k; v_{1k} is a Gaussian white noise sequence with zero mean and covariance V_1 > 0 at time k.
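As an illustration only, the state-space model above can be simulated with placeholder parameter values. The patent's actual matrices A(x_k), A_d(x_k), B(x_k), C_k and H_k are given elsewhere; every numeric value below is an assumption made for the sketch:

```python
import numpy as np

# Hypothetical 2-neuron example; all matrices are illustrative stand-ins,
# not the patent's actual parameter values.
n, d, N = 2, 2, 50
A = lambda x: np.diag([-0.3 if abs(x[0]) > 1 else -0.2,
                       -0.25 if abs(x[1]) > 1 else -0.35])  # state-dependent A(x_k)
A_d = lambda x: 0.1 * np.eye(n)      # time-lag matrix A_d(x_k)
B = lambda x: 0.2 * np.eye(n)        # excitation weight matrix B(x_k)
C = 0.05 * np.eye(n)                 # noise distribution matrix C_k
H = np.array([[1.0, 0.5]])           # controlled-output matrix H_k
f = np.tanh                          # nonlinear excitation function

rng = np.random.default_rng(0)
x_hist = [np.array([-2.4, 2.0])] * (d + 1)   # initial values chi_k, k = -d..0
z = []
for k in range(N):
    x_k, x_kd = x_hist[-1], x_hist[-1 - d]
    v1 = rng.normal(0.0, 1.0, n)             # Gaussian white noise v_{1k}
    x_next = A(x_k) @ x_k + A_d(x_k) @ x_kd + B(x_k) @ f(x_k) + C @ v1
    x_hist.append(x_next)
    z.append(H @ x_next)                     # controlled output z_k = H_k x_k
```

The history list keeps the d delayed states needed by the x_{k-d} term.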
4. The method for estimating the state of the memristive neural network under the coding and decoding mechanism of claim 3, wherein the state-dependent matrix parameters a_i(x_{i,k}), a_{ij,d}(x_{i,k}) and b_{ij}(x_{i,k}) satisfy the following conditions:
In the formula, a_i(x_{i,k}), a_{ij,d}(x_{i,k}) and b_{ij}(x_{i,k}) are the components of A(x_k), A_d(x_k) and B(x_k) at time k, respectively; Γ_i > 0 is a known switching threshold; the remaining quantities are, respectively, the ith known upper storage variable matrix, the ith known lower storage variable matrix, the ij,d-th known left storage variable matrix, the ij,d-th known right storage variable matrix, the ij-th known inner storage variable matrix and the ij-th known outer storage variable matrix.
5. The method for estimating the state of the memristive neural network under the coding and decoding mechanism according to claim 1, wherein the specific steps of the second step are as follows:
Step two-one, the measurement output form of the time-lag memristive neural network is:
y_k = D_k x_k + E_k v_{2k}
In the formula, y_k is the measurement output of the memristive neural network at time k, lying in the real domain of dimension m output by the memristive neural network dynamic model; x_k is the neuron state variable of the memristive neural network at time k, lying in the real domain of dimension n output by the dynamic model; D_k and E_k are known measurement matrices at time k; v_{2k} is a Gaussian white noise sequence with zero mean and covariance V_{2k} > 0;
Step two, at the time k, the energy level of the sensor is q k E {0,1,2, …, S } represents, wherein S is the maximum energy unit number capable of being stored by the sensor, and the energy collected at the moment k is represented by h k Represents;
step two, at time k, when the sensor stores non-zero units of energy, the sensor is able to transmit the measurement to the state estimator, and if and only if such transmission occurs, the sensor will consume 1 unit of energy, the energy dynamic equation of the sensor being expressed as:
in the formula, q 0 、q k 、q k+1 The energy levels of the sensors at the 0 th moment, the k th moment and the k +1 th moment respectively, min {. DEG } represents the minimum value of the two energy levels, h k Representing the energy collected at the time of the k-th instant,is represented by q k Transfer under the precondition of not less than 01 unit of energy consumed by the sensor, S being the maximum number of energy units that the sensor can store;
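The energy dynamics of steps two-two and two-three can be sketched as follows. The harvesting distribution used here is an assumed Poisson law, purely for illustration; the patent only requires a valid probability law for h_k:

```python
import numpy as np

# Sketch of the sensor energy dynamics q_{k+1} = min{q_k - 1_{q_k >= 1} + h_k, S},
# with an assumed (illustrative) Poisson harvesting law for h_k.
S = 5                      # maximum number of energy units the sensor can store
rng = np.random.default_rng(1)

def step_energy(q_k: int, h_k: int):
    """One step of the energy equation; returns (q_{k+1}, transmitted)."""
    transmitted = q_k >= 1          # sensor transmits iff it holds nonzero energy
    consumed = 1 if transmitted else 0   # exactly 1 unit per transmission
    q_next = min(q_k - consumed + h_k, S)
    return q_next, transmitted

q, sent = 0, []
for k in range(20):
    h_k = rng.poisson(0.8)          # illustrative harvesting distribution
    q, tx = step_energy(q, h_k)
    sent.append(tx)
```

Note that the buffer saturates at S: harvested energy beyond the storage capacity is discarded, which is exactly what the min{·, S} in the energy equation expresses.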
The measurement received by the state estimator is expressed as:
In the formula, the measured value actually received by the state estimator at time k and the measurement y_k ideally received by the state estimator at time k appear; the indicator function satisfies the stated condition and is defined accordingly;
Step two-four, the coding rule is defined as follows:
In the formula, the internal operating state of the encoder at time 0 and at time k, the measurement output of the encoder at time k+1, a shift matrix of known appropriate dimension at time k, the selected uniform quantizer, and the measured value actually received by the estimator at time k+1 appear; δ_k is a known scaling parameter at time k; the encoder output lies in the real domain of dimension n output by the memristive neural network dynamic model;
Step two-five, the decoding rule is defined as follows:
In the formula, the measurement output of the decoder at times 0, k and k+1 appears; δ_k is a known scaling parameter at time k; the measurement output of the encoder at time k+1 lies in the real domain of dimension n output by the memristive neural network dynamic model; the shift matrix of known appropriate dimension at time k is as above;
In the formula, η_k is the measurement decoding error at time k; the measurement output of the decoder at time k, the measured value actually received by the state estimator at time k, and the measurement y_k ideally received by the state estimator at time k appear; δ_k is a known scaling parameter at time k; the measurement output of the encoder at time k+1, the shift matrix of known appropriate dimension at time k, and the selected uniform quantizer are as above;
the decoding error satisfies the following condition:
In the formula, ||·||_∞ is the infinity norm, l is the interval length of the quantization step, and δ_k is a known scaling parameter at time k;
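The quantized coding/decoding scheme and the decoding-error bound can be illustrated with a generic uniform quantizer of step length l and scaling parameter δ_k. The encode/decode functions below are a hedged stand-in for the patent's exact rules, chosen only so that the stated bound ||η_k||_∞ ≤ l·δ_k/2 holds:

```python
import numpy as np

# Illustrative uniform quantizer with step length l; the patent's concrete
# coding/decoding rules are not reproduced here, so this is a generic
# difference-encoding sketch that satisfies the stated decoding-error bound.
l = 0.5                    # quantization step length
def q_uniform(v):
    """Round each component to the nearest multiple of l."""
    return l * np.round(np.asarray(v, dtype=float) / l)

def encode(y_k, xi_k, delta_k):
    """Encoder: quantize the scaled innovation y_k - xi_k."""
    return q_uniform((y_k - xi_k) / delta_k)

def decode(s_k, xi_k, delta_k):
    """Decoder: rescale the codeword and add back the shared state xi_k."""
    return xi_k + delta_k * s_k

rng = np.random.default_rng(2)
y = rng.normal(size=3)
xi = np.zeros(3)           # internal state shared by encoder and decoder
delta = 0.8                # scaling parameter delta_k
y_hat = decode(encode(y, xi, delta), xi, delta)
eta = y_hat - y            # measurement decoding error eta_k
```

Since rounding to the nearest multiple of l errs by at most l/2, the decoded value deviates from y_k by at most δ_k·l/2 in each component, matching the stated infinity-norm bound.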
The nonlinear function f(s) satisfies the following sector-bounded condition:
In the formula, the first real matrix of known appropriate dimension for the 1st component at time k and the second real matrix of known appropriate dimension for the 2nd component at time k appear.
Step two-six, in order to estimate the state of the time-lag memristive neural network, the following time-varying state estimator is constructed based on the available measurement information:
In the formula, the state estimates of the memristive neural network at times k, k+1 and k-d lie in the Euclidean space of the memristive neural network state, whose dimension is n; d is the fixed network time delay; the state estimate of the controlled output at time k lies in the real domain of dimension r of the neural network dynamic model state; the number one, number two and number three matrices of defined left and right intervals appear; the nonlinear excitation function at time k, the known measurement adjustment matrix H_k at time k, the known measurement matrix D_k at time k, the measurement output of the decoder at time k, and μ_k, the mathematical expectation of the indicator function, appear; K_k is the estimator gain matrix to be solved.
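A minimal sketch of such a time-varying estimator recursion might look as follows, with illustrative placeholder matrices and the sample gain K_1 from Case I of the embodiment. The exact estimator equation of step two-six is not reproduced by the translation, so this only conveys the structure: a copy of the nominal dynamics driven by the decoded-measurement innovation through the gain K_k:

```python
import numpy as np

# Hedged sketch of the step two-six estimator structure; all matrices below
# are illustrative placeholders, not the patent's parameters.
n, d = 2, 2
A_hat  = np.array([[-0.25, 0.0], [0.0, -0.3]])   # nominal state matrix
Ad_hat = 0.1 * np.eye(n)                         # nominal time-lag matrix
B_hat  = 0.2 * np.eye(n)                         # nominal excitation weights
D = np.array([[1.0, 0.0]])                       # measurement matrix D_k
mu = 0.9                 # mu_k: expectation of the reception indicator
f = np.tanh              # nonlinear excitation function

def estimator_step(xhat_hist, y_decoded, K_k):
    """One recursion of the estimator; xhat_hist holds d+1 past estimates."""
    xh_k, xh_kd = xhat_hist[-1], xhat_hist[-1 - d]
    innovation = y_decoded - mu * (D @ xh_k)     # decoded-measurement residual
    return A_hat @ xh_k + Ad_hat @ xh_kd + B_hat @ f(xh_k) + K_k @ innovation

xhat_hist = [np.zeros(n)] * (d + 1)
K_k = np.array([[1.2595], [-0.1230]])            # sample gain K_1 from Case I
xhat_hist.append(estimator_step(xhat_hist, np.array([0.5]), K_k))
```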
6. The method for estimating the state of a memristive neural network under a coding and decoding mechanism according to claim 5, wherein the probability distribution of h_k is as follows:
Prob(h_k = i) = p_i, (i = 0, 1, 2, …)
In the formula, q_k is the energy level of the sensor at time k, S is the maximum number of energy units the sensor can store, h_k denotes the energy harvested at time k, and p_i is the probability of harvesting i units of energy, with 0 ≤ p_i ≤ 1 and the p_i summing to 1.
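Sampling h_k from a distribution of this form can be sketched as follows; the truncated Poisson law used here is only one admissible illustrative choice of the p_i:

```python
import numpy as np
from math import factorial

# Sketch of sampling harvested energy h_k from Prob(h_k = i) = p_i;
# the truncated Poisson law below is an assumed, illustrative choice.
lam, imax = 0.8, 10
i = np.arange(imax + 1)
p = np.array([np.exp(-lam) * lam**j / factorial(j) for j in i])
p[-1] += 1.0 - p.sum()          # fold the tail mass so the p_i sum to exactly 1
rng = np.random.default_rng(3)
h = rng.choice(i, size=1000, p=p)   # i.i.d. draws of h_k
```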
7. The method according to claim 5, wherein the uniform quantizer is described in the following form:
In the formula, the defined amplification matrix appears, together with its transpose, where T denotes the transposed form; for the quantizer one has:
8. The method for estimating the state of the memristive neural network under the coding and decoding mechanism according to claim 1, wherein the specific steps of the third step are as follows:
Step three-one, the H∞ performance analysis problem is established according to the following formula and a corresponding easily solvable discriminant criterion is given:
in the formula:
In the formula, a given semi-positive definite matrix number one and a given positive scalar γ appear; the transposes of D_k, K_k, ΔA_k, H_k, ΔB_k, ΔA_k, E_k, K_k, C_k, Σ_12 and R_{3k} appear; the semi-positive definite matrix at time k and the known adjusted positive constant μ_k appear; Σ_11 is the block matrix of Σ in row 1, column 1; Σ_12 in row 1, column 2; Σ_22 in row 2, column 2; Σ_33 in row 3, column 3; Σ_44 in row 4, column 4; Σ_55 in row 5, column 5; Σ_66 in row 6, column 6; Σ_77 in row 7, column 7; 0 denotes that the elements in the corresponding matrix block are all 0;
Step three-two, the upper-bound constraint problem of the covariance matrix X_k is discussed and the following sufficient conditions are given:
In the formula,
G_k is the upper bound of the error covariance matrix at time k; the transposes of D_k, K_k, ΔA_k, H_k, ΔB_k, ΔA_k, E_k, K_k and C_k appear; the upper-bound matrix solved at time k and the upper-bound matrix G_{k-d} of the error covariance matrix at time k-d appear; tr(G_k) is the trace of the upper-bound matrix of the error covariance matrix at time k; X_k = e_k e_k^T is the error upper bound at time k, where e_k is the error matrix at time k; the state estimate at time k appears; ρ ∈ (0,1) is a known adjusted positive constant; the first real matrix of known appropriate dimension for the 1st component at time k and the second real matrix of known appropriate dimension for the 2nd component at time k appear; tr(·) denotes the trace of a matrix; μ_k is a known adjusted positive constant.
9. The method for estimating the state of the memristive neural network under the coding and decoding mechanism according to claim 1, wherein in step four, under the sufficient conditions that the estimation error system satisfies the H∞ performance requirement and the error covariance has an upper bound, the value of the estimator gain matrix is calculated by solving the following series of recursive linear matrix inequalities:
the update matrix is:
In the formula,
H_33 = diag{-ε_{4,k}I, -ε_{4,k}I, -ε_{5,k}I, -ε_{5,k}I},
H_22 = diag{-ε_{1,k}I, -ε_{2,k}I, -ε_{2,k}I, -ε_{3,k}I, -ε_{3,k}I},
Ξ_22 = diag{-G_k, -G_k, -G_k, -I}, Ξ_33 = diag{-G_{k-d}, -G_{k-d}, -I, -tr(G_k)I},
In the formula, ε_{1,k}, ε_{2,k}, ε_{3,k}, ε_{4,k}, ε_{5,k} and ε_{6,k} are the first, second, third, fourth, fifth and sixth adjusted positive constants at time k; I is the identity matrix; the first, second and third weight matrices at time k, the first real matrix of known appropriate dimension for the 1st component at time k, the second real matrix of known appropriate dimension for the 2nd component at time k, and the state estimate of the nonlinear excitation function at time k appear; the transposes of H_12, H_13, Θ_12, Θ_13, Θ_14, Θ_15, S_k, T_k, W_k and of Ψ_12, Ψ_13, Ψ_14, Ψ_23, Ψ_25, Ψ_27, Ψ_38, Ψ_39 appear; H_1, H_2, H_3, H_4 and H_5 are the number one to number five measurement matrices, respectively; N_{1k}, N_{2k}, N_{3k}, N_{4k} and N_{5k} are the metric matrices of known appropriate dimension at time k for the 1st to 5th components, respectively; H_11 is the block matrix in row 1, column 1; H_12 in row 1, column 2; H_13 in row 1, column 3; H_22 in row 2, column 2; H_33 in row 3, column 3; Θ_11 is the block matrix in row 1, column 1; Θ_12 in row 1, column 2; Θ_13 in row 1, column 3; Θ_14 in row 1, column 4; Θ_15 in row 1, column 5; Θ_22 in row 2, column 2; Θ_33 in row 3, column 3; Θ_44 in row 4, column 4; Θ_55 in row 5, column 5; S_k is the number one norm-bounded weight matrix at time k, T_k the number two and W_k the number three; Ξ_11 is the block matrix in row 1, column 1; Ξ_23 in row 2, column 3; Ξ_25 in row 2, column 5; Ξ_27 in row 2, column 7; Ξ_38 in row 3, column 8; Ξ_33 in row 3, column 3; Ξ_44 in row 4, column 4; Ξ_55 in row 5, column 5; Ξ_66 in row 6, column 6; Ξ_77 in row 7, column 7; Ξ_88 in row 8, column 8; Ξ_99 in row 9, column 9; Ψ_11 is the block matrix in row 1, column 1; Ψ_13 in row 1, column 3; Ψ_14 in row 1, column 4; Ψ_22 in row 2, column 2; Ψ_39 in row 3, column 9; a given semi-positive definite matrix number one and a given positive scalar γ appear; the transposes of D_k, K_k, ΔA_k, H_k, ΔB_k, ΔA_k, E_k, K_k, C_k, Σ_12 and R_{3k} appear; a semi-positive definite matrix at time k and a semi-positive definite matrix at time k-d appear; μ_k is a known adjusted positive constant; the neuron state estimate at time k, the number one update matrix at time k+1, the upper-bound matrix G_k of the estimation error and its trace tr(G_k) at time k, the upper-bound matrix G_{k-d} at time k-d, the adjusted weight coefficient σ, the known real-valued weight matrices, and an unknown matrix satisfying the stated norm-bounded condition appear; 0 denotes that all elements in the corresponding matrix block are 0.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211386982.3A CN115935787B (en) | 2022-11-07 | 2022-11-07 | Memristor neural network state estimation method under coding and decoding mechanism |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115935787A true CN115935787A (en) | 2023-04-07 |
CN115935787B CN115935787B (en) | 2023-09-01 |
Family
ID=86651817
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211386982.3A Active CN115935787B (en) | 2022-11-07 | 2022-11-07 | Memristor neural network state estimation method under coding and decoding mechanism |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115935787B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117077748A (en) * | 2023-06-15 | 2023-11-17 | 盐城工学院 | Coupling synchronous control method and system for discrete memristor neural network |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108959808A (en) * | 2018-07-23 | 2018-12-07 | 哈尔滨理工大学 | A kind of Optimum distribution formula method for estimating state based on sensor network |
CN109088749A (en) * | 2018-07-23 | 2018-12-25 | 哈尔滨理工大学 | The method for estimating state of complex network under a kind of random communication agreement |
CN110879533A (en) * | 2019-12-13 | 2020-03-13 | 福州大学 | Scheduled time projection synchronization method of delay memristive neural network with unknown disturbance resistance |
CN111025914A (en) * | 2019-12-26 | 2020-04-17 | 东北石油大学 | Neural network system remote state estimation method and device based on communication limitation |
CN112132924A (en) * | 2020-09-29 | 2020-12-25 | 北京理工大学 | CT reconstruction method based on deep neural network |
CN113516601A (en) * | 2021-06-17 | 2021-10-19 | 西南大学 | Image restoration technology based on deep convolutional neural network and compressed sensing |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117077748A (en) * | 2023-06-15 | 2023-11-17 | 盐城工学院 | Coupling synchronous control method and system for discrete memristor neural network |
CN117077748B (en) * | 2023-06-15 | 2024-03-22 | 盐城工学院 | Coupling synchronous control method and system for discrete memristor neural network |
Also Published As
Publication number | Publication date |
---|---|
CN115935787B (en) | 2023-09-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Papageorgiou et al. | Fuzzy cognitive map learning based on nonlinear Hebbian rule | |
US20210149349A9 (en) | Networked control system time-delay compensation method based on predictive control | |
CN109829577B (en) | Rail train running state prediction method based on deep neural network structure model | |
CN116757534B (en) | Intelligent refrigerator reliability analysis method based on neural training network | |
CN113723007B (en) | Equipment residual life prediction method based on DRSN and sparrow search optimization | |
CN110705692A (en) | Method for predicting product quality of industrial nonlinear dynamic process by long-short term memory network based on space and time attention | |
CN109088749B (en) | State estimation method of complex network under random communication protocol | |
CN112085254B (en) | Prediction method and model based on multi-fractal cooperative measurement gating circulation unit | |
CN110824914B (en) | Intelligent wastewater treatment monitoring method based on PCA-LSTM network | |
CN115935787A (en) | Memristor neural network state estimation method under coding and decoding mechanism | |
CN112734002B (en) | Service life prediction method based on data layer and model layer joint transfer learning | |
CN110542748B (en) | Knowledge-based robust effluent ammonia nitrogen soft measurement method | |
CN116227324B (en) | Fractional order memristor neural network estimation method under variance limitation | |
CN109155001B (en) | Signal processing method and device based on impulse neural network | |
CN107704426A (en) | Water level prediction method based on extension wavelet-neural network model | |
CN112434888A (en) | PM2.5 prediction method of bidirectional long and short term memory network based on deep learning | |
CN115410372A (en) | Reliable prediction method for highway traffic flow based on Bayesian LSTM | |
CN115687995A (en) | Big data environmental pollution monitoring method and system | |
Bonassi et al. | Towards lifelong learning of recurrent neural networks for control design | |
CN117371321A (en) | Internal plasticity depth echo state network soft measurement modeling method based on Bayesian optimization | |
CN112785056A (en) | Short-term load prediction method based on fusion of Catboost and LSTM models | |
CN115865702A (en) | Distributed fusion estimation method with data attenuation under network scheduling strategy | |
Censi et al. | Real-valued average consensus over noisy quantized channels | |
CN115659201A (en) | Gas concentration detection method and monitoring system for Internet of things | |
CN113627687A (en) | Water supply amount prediction method based on ARIMA-LSTM combined model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |