CN114491400A - Method for solving time-varying Sylvester equation by noise-suppression adaptive-coefficient zeroing neural network based on error norm - Google Patents

Method for solving time-varying Sylvester equation by noise-suppression adaptive-coefficient zeroing neural network based on error norm

Info

Publication number
CN114491400A
Authority
CN
China
Prior art keywords: model, equation, adaptive coefficient, neural network, time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210035500.3A
Other languages
Chinese (zh)
Inventor
李坤键
姜丞泽
肖秀春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Ocean University
Original Assignee
Guangdong Ocean University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Ocean University filed Critical Guangdong Ocean University
Priority to CN202210035500.3A priority Critical patent/CN114491400A/en
Publication of CN114491400A publication Critical patent/CN114491400A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/11 Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Operations Research (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a method for solving a time-varying Sylvester equation by an error-norm-based noise-suppression adaptive-coefficient zeroing neural network, comprising the following steps: step A, establishing a mathematical model for solving the time-varying Sylvester equation; step B, defining the error-norm-based noise-suppression adaptive-coefficient zeroing neural network and discussing its convergence; step C, adding different noises to the model and discussing its robustness under their influence; and step D, initializing parameters and verifying and analyzing the results. A noise-suppression adaptive coefficient based on the error norm is introduced into the ZNN model, the global convergence of the method is analyzed theoretically, different noises are introduced into the model, and the stability of the model under their influence is analyzed; it is verified that the error quickly converges to zero under the influence of different noises, which demonstrates the effectiveness and superiority of the method.

Description

Method for solving time-varying Sylvester equation by noise-suppression adaptive-coefficient zeroing neural network based on error norm
Technical Field
The invention relates to the technical field of the time-varying Sylvester equation and neural networks, and in particular to a method for solving the time-varying Sylvester equation with an error-norm-based noise-suppression adaptive-coefficient zeroing neural network (ACZNN for short).
Background
The time-varying Sylvester equation is an important branch of matrix theory. It is commonly used in scientific research and engineering applications, such as robot kinematics, control theory, digital image processing, and communication engineering. Because of these important applications in many fields, accurately solving the time-varying Sylvester equation becomes all the more important.
In recent research, some numerical algorithms have been proposed to solve the above problem. For example, Newton's iteration method has been used to compute the time-varying Sylvester equation. Newton's iteration method is a classical numerical algorithm for solving discrete-time problems. From the viewpoint of control theory, Newton's iteration method is a proportional feedback controller. Clearly, according to control theory, a controller using only proportional feedback cannot control a system with time-varying parameters in a predictive manner, resulting in lag errors. For solving such zeroing problems, a recurrent neural network (RNN) is typically designed as an ordinary differential equation. The RNN model needs to evolve from an arbitrary initial value, continue in a given direction, and recursively compute an estimate at each point until it converges to the required accuracy. Therefore, the evolution direction of the algorithm needs to be modified according to the input state, forcing the residual to decrease to zero over time. The zeroing neural network (ZNN), as a parallel computing method, is an important member of the neural network family and plays an important role in linear computation and optimization problems. For example, Liao et al. proposed an adaptive-coefficient GNN model to solve the time-varying Sylvester equation, and Jin et al. proposed a finite-time recurrent neural network to solve this problem.
However, most neural networks used at the present stage for solving the time-varying Sylvester equation still have certain defects in convergence accuracy and noise resistance.
Disclosure of Invention
Based on the above discussion, the present invention proposes an error-based adaptive-coefficient ZNN model: it introduces the definition of an error-based adaptive coefficient and converts the constant coefficient of the original ZNN (OZNN) model into an error-related function. The error-based adaptive-coefficient ZNN model provided herein corrects the defect that the OZNN model cannot remain stable under the influence of noise; that is, the corrected ZNN model still solves the time-varying Sylvester equation accurately and converges within a finite time under the influence of noise. As a parallel computing model, it converts the problem of solving the time-varying Sylvester equation into the problem of driving the error of an equivalent linear equation to zero.
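For intuition, consider the scalar case of the conventional design formula with a constant coefficient λ > 0 under a constant additive noise ξ: the perturbed error dynamics are ė(t) = -λe(t) + ξ, whose solution e(t) = (e(0) - ξ/λ)e^(-λt) + ξ/λ settles at the nonzero bias ξ/λ rather than at zero. This residual bias is what the error-norm-dependent coefficient and the noise-suppression term introduced below are designed to remove.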
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
The method for solving the time-varying Sylvester equation by the error-norm-based noise-suppression adaptive-coefficient zeroing neural network comprises the following steps:
A. firstly, establishing a mathematical model for solving the time-varying Sylvester equation;
B. defining the error-norm-based noise-suppression adaptive-coefficient zeroing neural network, and discussing its convergence;
C. adding different noises into the model, and discussing the robustness of the model under the influence of different noises;
D. initializing parameters, and verifying and analyzing results.
Preferably, the establishing of the mathematical model for solving the time-varying Sylvester equation in step A includes the following steps:
A1. The Sylvester equation is expressed as
A(t)X(t)-X(t)B(t)+C(t)=0
A2. Vectorizing both sides of the equation simultaneously gives
vec(A(t)X(t)-X(t)B(t))=-vec(C(t))
A3. From the Kronecker product property vec(AXB) = (B^T ⊗ A)vec(X), where the symbol ⊗ represents the Kronecker product, the vectorized equation becomes
(I ⊗ A(t) - B^T(t) ⊗ I)vec(X(t)) + vec(C(t)) = 0
Letting P(t) = I ⊗ A(t) - B^T(t) ⊗ I, x(t) = vec(X(t)) and b(t) = vec(C(t)), with I an identity matrix of appropriate dimension, gives the following formula
P(t)x(t)+b(t)=0
Then the error function is written as
e(t)=P(t)x(t)+b(t)
A4. From the OZNN model one obtains
Figure BDA0003468177500000034
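As a concrete illustration of step A, the sketch below (Python with NumPy; the function name and the random test matrices are illustrative choices, not taken from the patent) builds P(t) and b(t) at one time instant and checks that P·vec(X) + b reproduces vec(A X - X B + C) under the column-stacking vec convention.

```python
import numpy as np

def vectorize_sylvester(A, B, C):
    """Build P and b so that A X - X B + C = 0 is equivalent to P vec(X) + b = 0.

    Uses vec(A X) = (I_n kron A) vec(X) and vec(X B) = (B^T kron I_m) vec(X)
    for X of size m x n, with the column-stacking vec convention.
    """
    m, n = A.shape[0], B.shape[0]
    P = np.kron(np.eye(n), A) - np.kron(B.T, np.eye(m))
    b = C.flatten(order="F")           # b = vec(C)
    return P, b

# Consistency check on random matrices (illustrative only).
rng = np.random.default_rng(0)
m, n = 3, 2
A = rng.normal(size=(m, m))
B = rng.normal(size=(n, n))
C = rng.normal(size=(m, n))
X = rng.normal(size=(m, n))
P, b = vectorize_sylvester(A, B, C)
lhs = (A @ X - X @ B + C).flatten(order="F")   # vec(A X - X B + C)
rhs = P @ X.flatten(order="F") + b             # P vec(X) + b
assert np.allclose(lhs, rhs)
```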
Preferably, the specific process of defining the error-norm-based noise-suppression adaptive-coefficient zeroing neural network and establishing its convergence in step B is as follows:
B1. the adaptive coefficient based on the error norm is specifically
a. Exponential adaptive coefficient
Figure BDA0003468177500000035
b. Logarithmic adaptive coefficient
λ(e(t)) = r|log₂‖e(t)‖₂| + r,
c. Fractional adaptive coefficient
Figure BDA0003468177500000041
Wherein r >1 is a constant;
B2. defining the error-norm-based noise-suppression adaptive-coefficient zeroing neural network as follows:
Figure BDA0003468177500000042
where μ > 0 is a constant and φ(·) denotes an excitation function, specified as
Figure BDA0003468177500000043
Figure BDA0003468177500000044
Figure BDA0003468177500000045
B3. Let
Figure BDA0003468177500000046
and differentiate it with respect to time to obtain
Figure BDA0003468177500000047
Thus, one can obtain
Figure BDA0003468177500000048
Define a Lyapunov candidate function as
Figure BDA0003468177500000049
where κ > 0, so that G_i(t) is positive; the time derivative of G_i(t) is described as
Figure BDA0003468177500000051
Let
Figure BDA0003468177500000052
Suppose there is a certain time at which G_i(t) ≤ G_i(0); then there are the following two formulas
Figure BDA0003468177500000053
Figure BDA0003468177500000054
Rearranging yields
Figure BDA0003468177500000055
Figure BDA0003468177500000056
According to the mean value theorem, one obtains
Figure BDA0003468177500000057
Since the excitation function is a monotonically increasing function, it follows that
Figure BDA0003468177500000058
Define the interval D_1 as
Figure BDA0003468177500000059
Figure BDA00034681775000000510
When q_i(t) ∈ D_1, let
Figure BDA00034681775000000511
Then the following inequalities hold:
|φ(q_i(t))| ≤ S_1|q_i(t)|
|∈_i(t)φ(q_i(t))| ≤ S_1|∈_i(t)||q_i(t)|
Likewise, let
Figure BDA00034681775000000512
and S_3 is defined as
Figure BDA00034681775000000513
Figure BDA0003468177500000061
Similarly, one obtains
|φ(q_i(t))| ≥ S_3|q_i(t)|
|φ(∈_i(t))| ≤ S_2|∈_i(t)|
Combining the above,
Figure BDA0003468177500000062
and rearranging gives
Figure BDA0003468177500000063
which always holds, thus demonstrating the convergence of the model.
Preferably, the step C of establishing the model with the introduced noise and analyzing its stability includes the following specific steps:
C1. After the introduction of noise, the basic model is
Figure BDA0003468177500000064
C2. If ξ_i(t) is constant noise, one obtains
Figure BDA0003468177500000065
The Lyapunov candidate function is defined as follows:
Figure BDA0003468177500000066
Differentiating it yields:
Figure BDA0003468177500000067
From the above analysis, the model still remains stable under the influence of constant noise;
C3. If ξ_i(t) is linear noise, the following derivation holds. Define a function
Figure BDA0003468177500000071
Since
Figure BDA0003468177500000072
then
Figure BDA0003468177500000073
The following cases are discussed:
a. When ξ_i(t)q_i(t) ≤ 0,
Figure BDA00034681775000000711
b. When ξ_i(t)q_i(t) ≥ 0, as |q_i(t)| increases, |-μφ(q_i(t))+ξ_i(t)| decreases until -μφ(q_i(t))+ξ_i(t)=0;
at this time u_i(t) attains its minimum value, so the following inequality is obtained
Figure BDA0003468177500000074
Because |φ⁻¹(·)| ≤ |·|,
Figure BDA0003468177500000075
If a certain moment t_1 satisfies the equation
Figure BDA0003468177500000076
and at the same time a moment t_2 satisfies
Figure BDA0003468177500000077
with t_2 - t_1 = δ, and if from t_1 to t_2 there always holds
Figure BDA0003468177500000078
then
Figure BDA0003468177500000079
and thereafter
Figure BDA00034681775000000710
Combining the above two formulas,
Figure BDA0003468177500000081
Because
Figure BDA0003468177500000082
it follows that
Figure BDA0003468177500000083
The left side of the above formula is an increasing function of δ and the right side is a fixed value, which gives
Figure BDA0003468177500000084
When
Figure BDA0003468177500000085
one similarly obtains
Figure BDA0003468177500000086
Figure BDA0003468177500000087
Finally, the error norm converges to
Figure BDA0003468177500000088
The model is therefore also proved to be stable in the presence of linear noise disturbances.
Preferably, the result verification and analysis in step D specifically comprises the following steps:
D1. given as an example
Figure BDA0003468177500000089
Figure BDA00034681775000000810
Figure BDA00034681775000000811
Adjust the values of r and μ and substitute them, together with the example, into the model; the experimental results then accord with expectations, proving the effectiveness of the model.
The invention has the following beneficial effects: the invention discloses a method for solving a time-varying Sylvester equation by an error-norm-based noise-suppression adaptive-coefficient zeroing neural network, with the following improvements:
1) An error-based adaptive coefficient is defined, breaking through the limitation that the coefficient of the original ZNN model is a constant and replacing it with a variable coefficient related to the error, so that the model adapts better to the changing conditions encountered in real applications.
2) The error-norm-based noise-suppression adaptive-coefficient zeroing neural network corrects the defect that the original ZNN model cannot remain stable under noise interference; that is, the corrected ZNN model still accurately solves the time-varying Sylvester equation under noise interference.
3) As a parallel computation model, it converts the problem of solving the time-varying Sylvester equation into the problem of driving the error of an equivalent linear equation to zero.
Drawings
FIG. 1 is a flow chart of the ACZNN method of the present invention.
FIG. 2 is a diagram illustrating a comparison between the computed solution and the theoretical solution trajectory of the ACZNN model, according to an embodiment of the present invention.
FIG. 3 shows simulation results of the ACZNN model using three different adaptive coefficients when r = μ = 3, according to an embodiment of the present invention.
FIG. 4 shows simulation results of the ACZNN model under the influence of constant noise when r = μ = 3, according to an embodiment of the present invention.
FIG. 5 shows simulation results of the ACZNN model under the influence of linear noise when r = μ = 3, according to an embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the following further describes the technical solution of the present invention with reference to the drawings and the embodiments.
With reference to FIGS. 1-5, the method for solving a time-varying Sylvester equation by the error-norm-based noise-suppression adaptive-coefficient zeroing neural network comprises:
A. firstly, establishing a mathematical model for solving the time-varying Sylvester equation;
A1. The Sylvester equation is expressed as
A(t)X(t)-X(t)B(t)+C(t)=0
A2. Vectorizing both sides of the equation simultaneously gives
vec(A(t)X(t)-X(t)B(t))=-vec(C(t))
A3. From the Kronecker product property vec(AXB) = (B^T ⊗ A)vec(X), where the symbol ⊗ represents the Kronecker product, the vectorized equation becomes
(I ⊗ A(t) - B^T(t) ⊗ I)vec(X(t)) + vec(C(t)) = 0
Letting P(t) = I ⊗ A(t) - B^T(t) ⊗ I, x(t) = vec(X(t)) and b(t) = vec(C(t)), with I an identity matrix of appropriate dimension, gives the following formula
P(t)x(t)+b(t)=0
Then the error function is written as
e(t)=P(t)x(t)+b(t)
A4. From the OZNN model one obtains
Figure BDA0003468177500000104
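For reference, the explicit OZNN dynamics appear here only as a formula image; the sketch below assumes the conventional OZNN design formula ė(t) = -λe(t), which for e(t) = P(t)x(t) + b(t) gives P(t)ẋ(t) = -Ṗ(t)x(t) - ḃ(t) - λe(t), and integrates one explicit Euler step (the function names, the finite-difference approximation of Ṗ and ḃ, and the default λ are illustrative assumptions, not the patent's exact formulation).

```python
import numpy as np

def oznn_step(x, t, dt, A_fun, B_fun, C_fun, lam=3.0, eps=1e-3):
    """One explicit-Euler step of the conventional OZNN dynamics

    P(t) x_dot = -P_dot(t) x - b_dot(t) - lam * (P(t) x + b(t)),

    which follows from the classical design formula e_dot = -lam * e applied
    to e(t) = P(t) x + b(t).  P_dot and b_dot are approximated here by
    forward finite differences with step eps (an illustrative choice).
    """
    def P_b(tau):
        A, B, C = A_fun(tau), B_fun(tau), C_fun(tau)
        m, n = A.shape[0], B.shape[0]
        P = np.kron(np.eye(n), A) - np.kron(B.T, np.eye(m))
        return P, C.flatten(order="F")

    P, b = P_b(t)
    P2, b2 = P_b(t + eps)
    e = P @ x + b
    x_dot = np.linalg.solve(P, -(P2 - P) / eps @ x - (b2 - b) / eps - lam * e)
    return x + dt * x_dot
```

In use, this step would be called repeatedly inside a time loop, e.g. `x = oznn_step(x, t, dt, A_fun, B_fun, C_fun)` with callables `A_fun`, `B_fun`, `C_fun` returning the coefficient matrices at a given time.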
B. Defining the error-norm-based noise-suppression adaptive-coefficient zeroing neural network, and discussing its convergence;
B1. the adaptive coefficient based on the error norm is specifically
a. Exponential adaptive coefficient
Figure BDA0003468177500000105
b. Logarithmic adaptive coefficient
λ(e(t)) = r|log₂‖e(t)‖₂| + r,
c. Fractional adaptive coefficient
Figure BDA0003468177500000111
where r > 1 is a constant.
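Of the three adaptive coefficients, only the logarithmic one is spelled out in the text above (the exponential and fractional forms appear as formula images), so the sketch below implements just that one; the function name and the default r = 3 (the value used in the embodiment) are illustrative.

```python
import numpy as np

def log_adaptive_coeff(e, r=3.0):
    """Logarithmic error-norm-based adaptive coefficient:
    lambda(e) = r * |log2(||e||_2)| + r, with r > 1.
    """
    return r * abs(np.log2(np.linalg.norm(e))) + r

# The coefficient is large both when the error is large and when it is
# already very small, e.g. ||e|| = 8 and ||e|| = 1/8 both give 4*r = 12:
print(log_adaptive_coeff(np.array([8.0, 0.0])),
      log_adaptive_coeff(np.array([0.125, 0.0])))
```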
B2. Defining the error-norm-based noise-suppression adaptive-coefficient zeroing neural network as follows:
Figure BDA0003468177500000112
where μ > 0 is a constant and φ(·) denotes an excitation function, specified as
Figure BDA0003468177500000113
Figure BDA0003468177500000114
Figure BDA0003468177500000115
B3. Let
Figure BDA0003468177500000116
and differentiate it with respect to time to obtain
Figure BDA0003468177500000117
Thus, one can obtain
Figure BDA0003468177500000118
Define a Lyapunov candidate function as
Figure BDA0003468177500000121
where κ > 0, so that G_i(t) is positive; the time derivative of G_i(t) is described as
Figure BDA0003468177500000122
Let
Figure BDA0003468177500000123
Suppose there is a certain time at which G_i(t) ≤ G_i(0); then there are the following two formulas
Figure BDA0003468177500000124
Figure BDA0003468177500000125
Rearranging yields
Figure BDA0003468177500000126
Figure BDA0003468177500000127
According to the mean value theorem, one obtains
Figure BDA0003468177500000128
Since the excitation function is a monotonically increasing function, it follows that
Figure BDA0003468177500000129
Define the interval D_1 as
Figure BDA00034681775000001210
When q_i(t) ∈ D_1, let
Figure BDA00034681775000001211
Then the following inequalities hold:
|φ(q_i(t))| ≤ S_1|q_i(t)|
|∈_i(t)φ(q_i(t))| ≤ S_1|∈_i(t)||q_i(t)|
Likewise, let
Figure BDA00034681775000001212
and S_3 is defined as
Figure BDA0003468177500000131
Figure BDA0003468177500000132
Similarly, one obtains
|φ(q_i(t))| ≥ S_3|q_i(t)|
|φ(∈_i(t))| ≤ S_2|∈_i(t)|
Combining the above,
Figure BDA0003468177500000133
and rearranging gives
Figure BDA0003468177500000134
This always holds, thereby demonstrating the convergence of the model.
C. Adding different noises into the model, and discussing the robustness of the model under the influence of different noises;
C1. After the introduction of noise, the basic model is
Figure BDA0003468177500000135
C2. If ξ_i(t) is constant noise, one obtains
Figure BDA0003468177500000136
The Lyapunov candidate function is defined as follows:
Figure BDA0003468177500000137
Differentiating it yields:
Figure BDA0003468177500000138
From the above analysis, the model still remains stable under the influence of constant noise.
C3. If ξ_i(t) is linear noise, the following derivation holds. Define a function
Figure BDA0003468177500000141
Since
Figure BDA0003468177500000142
then
Figure BDA0003468177500000143
The following cases are discussed:
a. When ξ_i(t)q_i(t) ≤ 0,
Figure BDA0003468177500000144
b. When ξ_i(t)q_i(t) ≥ 0, as |q_i(t)| increases, |-μφ(q_i(t))+ξ_i(t)| decreases until -μφ(q_i(t))+ξ_i(t)=0;
at this time u_i(t) attains its minimum value, so the following inequality is obtained
Figure BDA0003468177500000145
Because |φ⁻¹(·)| ≤ |·|,
Figure BDA0003468177500000146
If a certain moment t_1 satisfies the equation
Figure BDA0003468177500000147
and at the same time a moment t_2 satisfies
Figure BDA0003468177500000148
with t_2 - t_1 = δ, and if from t_1 to t_2 there always holds
Figure BDA0003468177500000149
then
Figure BDA00034681775000001410
and thereafter
Figure BDA00034681775000001411
Combining the above two formulas,
Figure BDA0003468177500000151
Because
Figure BDA0003468177500000152
it follows that
Figure BDA0003468177500000153
The left side of the above formula is an increasing function of δ and the right side is a fixed value, which gives
Figure BDA0003468177500000154
When
Figure BDA0003468177500000155
one similarly obtains
Figure BDA0003468177500000156
Figure BDA0003468177500000157
Finally, the error norm converges to
Figure BDA0003468177500000158
The model is therefore also proved to be stable in the presence of linear noise disturbances.
D. Initializing parameters, and verifying and analyzing results.
D1. Given as an example
Figure BDA0003468177500000159
Figure BDA00034681775000001510
Figure BDA00034681775000001511
Adjust the values of r and μ and substitute them, together with the example, into the model; the experimental results then accord with expectations, proving the effectiveness of the model.
Examples
The method for solving the time-varying Sylvester equation based on the zeroing neural network is used for calculation; the example is as follows:
Figure BDA0003468177500000161
Figure BDA0003468177500000162
Figure BDA0003468177500000163
(1) In this example, r = μ = 3.
(2) The example is substituted into the ACZNN models with different adaptive coefficients for calculation. The comparison between the computed solution and the theoretical solution trajectory and the corresponding error plots are shown in FIG. 2 and FIG. 3 (FIG. 2 compares the computed solution of the ACZNN model with the theoretical solution trajectory; FIG. 3(a) is the residual plot under the exponential adaptive coefficient, FIG. 3(b) the residual plot under the logarithmic adaptive coefficient, and FIG. 3(c) the residual plot under the fractional adaptive coefficient). Under the influence of noise, the error plots of the ACZNN model for the target problem are shown in FIG. 4 and FIG. 5 (FIG. 4(a) is the residual plot under constant noise on a linear scale, FIG. 4(b) the residual plot under constant noise on a logarithmic scale; FIG. 5(a) is the residual plot under linear noise on a linear scale, FIG. 5(b) the residual plot under linear noise on a logarithmic scale). It can be seen from the figures that, with the ACZNN model, the computed solution quickly converges to the theoretical solution and the error also quickly converges to zero, which demonstrates the effectiveness and superiority of the ACZNN model.
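Because the example matrices A(t), B(t), C(t) and the explicit ACZNN formula appear in this embodiment only as formula images, the following sketch is an illustrative stand-in rather than a reproduction of the experiment: it integrates the conventional ZNN dynamics driven by the logarithmic error-norm adaptive coefficient with r = 3 on hypothetical time-varying matrices, omits the patent's noise-suppression term, and only shows how the residual ‖e(t)‖ can be tracked numerically.

```python
import numpy as np

# Hypothetical stand-ins for the time-varying coefficient matrices; the
# patent's own example matrices are given only as formula images.
A_fun = lambda t: np.array([[np.sin(t), np.cos(t)], [-np.cos(t), np.sin(t)]])
B_fun = lambda t: np.array([[2 + np.sin(t), 0.0], [0.0, 2 + np.cos(t)]])
C_fun = lambda t: np.array([[np.cos(2 * t), np.sin(2 * t)],
                            [np.sin(2 * t), np.cos(2 * t)]])

def P_b(t):
    A, B, C = A_fun(t), B_fun(t), C_fun(t)
    m, n = A.shape[0], B.shape[0]
    return np.kron(np.eye(n), A) - np.kron(B.T, np.eye(m)), C.flatten(order="F")

def lam(e, r=3.0):
    # Logarithmic adaptive coefficient; small offset guards log2(0).
    return r * abs(np.log2(np.linalg.norm(e) + 1e-12)) + r

t, dt, eps = 0.0, 1e-3, 1e-4
x = np.zeros(4)                        # arbitrary initial state vec(X(0))
for _ in range(5000):                  # simulate 5 seconds
    P, b = P_b(t)
    P2, b2 = P_b(t + eps)
    e = P @ x + b
    # Conventional ZNN dynamics with adaptive gain (not the patent's exact model):
    x_dot = np.linalg.solve(P, -(P2 - P) / eps @ x - (b2 - b) / eps - lam(e) * e)
    x, t = x + dt * x_dot, t + dt
print("residual ||e(t)|| after 5 s:", np.linalg.norm(P_b(t)[0] @ x + P_b(t)[1]))
```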
The principal features of the invention and advantages of the invention have been shown and described. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, and that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (5)

1. A method for solving a time-varying Sylvester equation by an error-norm-based noise-suppression adaptive-coefficient zeroing neural network, characterized by comprising the following steps:
step A, establishing a mathematical model for solving the time-varying Sylvester equation;
step B, defining the error-norm-based noise-suppression adaptive-coefficient zeroing neural network, and establishing its convergence;
step C, adding different noises into the model, and discussing the robustness of the model under the influence of different noises;
and D, initializing parameters, and verifying and analyzing results.
2. The method for solving the time-varying Sylvester equation by the error-norm-based noise-suppression adaptive-coefficient zeroing neural network of claim 1, wherein said establishing a mathematical model for solving the time-varying Sylvester equation in step A comprises the following steps:
A1. The Sylvester equation is expressed as
A(t)X(t)-X(t)B(t)+C(t)=0
A2. Vectorizing both sides of the equation simultaneously gives
vec(A(t)X(t)-X(t)B(t))=-vec(C(t))
A3. From the Kronecker product property vec(AXB) = (B^T ⊗ A)vec(X), where the symbol ⊗ represents the Kronecker product, and letting P(t) = I ⊗ A(t) - B^T(t) ⊗ I, x(t) = vec(X(t)) and b(t) = vec(C(t)), with I an identity matrix of appropriate dimension, one then has the following formula
P(t)x(t)+b(t)=0
Then the error function is written as
e(t)=P(t)x(t)+b(t)
A4. From the OZNN model one obtains
Figure FDA0003468177490000021
3. The method according to claim 1, wherein the step B of defining the error-norm-based noise-suppression adaptive-coefficient zeroing neural network comprises:
B1. the adaptive coefficient based on the error norm is specifically
a. Exponential adaptive coefficient
Figure FDA0003468177490000022
b. Logarithmic adaptive coefficient
λ(e(t)) = r|log₂‖e(t)‖₂| + r,
c. Fractional adaptive coefficient
Figure FDA0003468177490000023
Wherein r >1 is a constant;
B2. defining the error-norm-based noise-suppression adaptive-coefficient zeroing neural network as follows:
Figure FDA0003468177490000024
where μ > 0 is a constant and φ(·) denotes an excitation function, specified as
Figure FDA0003468177490000031
Figure FDA0003468177490000032
Figure FDA0003468177490000033
B3. Let
Figure FDA0003468177490000034
and differentiate it with respect to time to obtain
Figure FDA0003468177490000035
Thus, one can obtain
Figure FDA0003468177490000036
Define a Lyapunov candidate function as
Figure FDA0003468177490000037
where κ > 0, so that G_i(t) is positive; the time derivative of G_i(t) is described as
Figure FDA0003468177490000038
Let
Figure FDA0003468177490000039
Suppose there is a certain time at which G_i(t) ≤ G_i(0); then there are the following two formulas
Figure FDA00034681774900000310
Figure FDA00034681774900000311
Rearranging yields
Figure FDA0003468177490000041
Figure FDA0003468177490000042
According to the mean value theorem, one obtains
Figure FDA0003468177490000043
Since the excitation function is a monotonically increasing function, it follows that
Figure FDA0003468177490000044
Define the interval D_1 as
Figure FDA0003468177490000045
Figure FDA0003468177490000046
When q_i(t) ∈ D_1, let
Figure FDA0003468177490000047
Then the following inequalities hold:
|φ(q_i(t))| ≤ S_1|q_i(t)|
|∈_i(t)φ(q_i(t))| ≤ S_1|∈_i(t)||q_i(t)|
Likewise, let
Figure FDA0003468177490000048
and S_3 is defined as
Figure FDA0003468177490000049
Figure FDA00034681774900000410
Similarly, one obtains
|φ(q_i(t))| ≥ S_3|q_i(t)|
|φ(∈_i(t))| ≤ S_2|∈_i(t)|
Combining the above,
Figure FDA00034681774900000411
and rearranging gives
Figure FDA0003468177490000051
Figure FDA0003468177490000052
which always holds, thus demonstrating the convergence of the model.
4. The method for solving the time-varying Sylvester equation by the error-norm-based noise-suppression adaptive-coefficient zeroing neural network of claim 1, wherein establishing the model with added noise in step C and analyzing its stability comprises the following specific steps:
C1. After the introduction of noise, the basic model is
Figure FDA0003468177490000053
C2. If ξ_i(t) is constant noise, one obtains
Figure FDA0003468177490000054
The Lyapunov candidate function is defined as follows:
Figure FDA0003468177490000055
Differentiating it yields:
Figure FDA0003468177490000056
From the above analysis, the model still remains stable under the influence of constant noise;
C3. If ξ_i(t) is linear noise, the following derivation holds. Define a function
Figure FDA0003468177490000057
Since
Figure FDA0003468177490000058
then
Figure FDA0003468177490000059
The following cases are discussed:
a. When ξ_i(t)q_i(t) ≤ 0,
Figure FDA00034681774900000510
b. When ξ_i(t)q_i(t) ≥ 0, as |q_i(t)| increases, |-μφ(q_i(t))+ξ_i(t)| decreases until -μφ(q_i(t))+ξ_i(t)=0;
at this time u_i(t) attains its minimum value, so the following inequality is obtained
Figure FDA0003468177490000061
Because |φ⁻¹(·)| ≤ |·|,
Figure FDA0003468177490000062
If a certain moment t_1 satisfies the equation
Figure FDA0003468177490000063
and at the same time a moment t_2 satisfies
Figure FDA0003468177490000064
with t_2 - t_1 = δ, and if from t_1 to t_2 there always holds
Figure FDA0003468177490000065
then
Figure FDA0003468177490000066
Figure FDA0003468177490000067
and thereafter
Figure FDA0003468177490000068
Combining the above two formulas,
Figure FDA0003468177490000069
Because
Figure FDA00034681774900000610
it follows that
Figure FDA00034681774900000611
The left side of the above formula is an increasing function of δ and the right side is a fixed value, which gives
Figure FDA00034681774900000612
When
Figure FDA00034681774900000613
one similarly obtains
Figure FDA00034681774900000614
Figure FDA0003468177490000071
Finally, the error norm converges to
Figure FDA0003468177490000072
The model is therefore also proved to be stable in the presence of linear noise disturbances.
5. The method for solving the time-varying Sylvester equation by the error-norm-based noise-suppression adaptive-coefficient zeroing neural network as set forth in claim 1, wherein the result verification and analysis in step D specifically comprises the following steps:
D1. given as an example
Figure FDA0003468177490000073
Figure FDA0003468177490000074
Figure FDA0003468177490000075
Adjust the values of r and μ and substitute them, together with the example, into the model; the experimental results then accord with expectations, proving the effectiveness of the model.
CN202210035500.3A 2022-01-13 2022-01-13 Method for solving time-varying Sylvester equation by noise-suppression adaptive-coefficient zeroing neural network based on error norm Pending CN114491400A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210035500.3A CN114491400A (en) 2022-01-13 2022-01-13 Method for solving time-varying Sylvester equation by noise-suppression adaptive-coefficient zeroing neural network based on error norm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210035500.3A CN114491400A (en) 2022-01-13 2022-01-13 Method for solving time-varying Sylvester equation by noise-suppression adaptive-coefficient zeroing neural network based on error norm

Publications (1)

Publication Number Publication Date
CN114491400A true CN114491400A (en) 2022-05-13

Family

ID=81512821

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210035500.3A Pending CN114491400A (en) 2022-01-13 2022-01-13 Method for solving time-varying Sylvester equation by noise-suppression adaptive-coefficient zeroing neural network based on error norm

Country Status (1)

Country Link
CN (1) CN114491400A (en)

Similar Documents

Publication Publication Date Title
CN112101530B (en) Neural network training method, device, equipment and storage medium
CN108132599B (en) Design method of UDE control system based on iterative feedback setting
CN112445131A (en) Self-adaptive optimal tracking control method for linear system
GB2471758A (en) Methods and Apparatus to Compensate First Principle-Based Simulation Models
US20040181300A1 (en) Methods, apparatus and computer program products for adaptively controlling a system by combining recursive system identification with generalized predictive control
WO2004090782A1 (en) Accurate linear parameter estimation with noisy inputs
Wang et al. Kernel recursive least squares with multiple feedback and its convergence analysis
JPH07104715B2 (en) How to identify parameters
Sun et al. Filtered multi‐innovation‐based iterative identification methods for multivariate equation‐error ARMA systems
US5129038A (en) Neural network with selective error reduction to increase learning speed
Gao et al. Transient performance analysis of zero-attracting Gaussian kernel LMS algorithm with pre-tuned dictionary
Jing Identification of a deterministic Wiener system based on input least squares algorithm and direct residual method
CN114491400A (en) Method for solving time-varying Siemens equation by noise suppression adaptive coefficient zero-ization neural network based on error norm
CN113282873A (en) Method for solving time-varying continuous algebraic Riccati equation based on zero-degree neural network
JP2541044B2 (en) Adaptive filter device
US20140006321A1 (en) Method for improving an autocorrector using auto-differentiation
CN113297540A (en) APP resource demand prediction method, device and system under edge Internet of things agent service
CN112379601A (en) MFA control system design method based on industrial process
CN111416595A (en) Big data filtering method based on multi-core fusion
CN110598226A (en) Nonlinear system construction method based on collective estimation and neural network
Rao et al. Efficient total least squares method for system modeling using minor component analysis
CN114897188B (en) Large-scale data processing method
JP2541040B2 (en) Coefficient updating method in adaptive filter device
CN114357359A (en) Time-varying Lyapunov equation solving based on EACFZNN model
CN114167728B (en) Self-adaptive control method and device of multi-agent system with dead zone constraint

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination