CN113780543B - FPGA implementation method for PID closed-loop control of neural network


Info

Publication number: CN113780543B
Application number: CN202111052881.8A
Authority: CN (China)
Prior art keywords: layer, output, neurons, neural network, input
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN113780543A
Inventors: 王俊, 李谋道, 林瑞全, 程长春, 谢鑫, 林剑峰, 谢欢, 章敏
Current Assignee: Fuzhou University
Original Assignee: Fuzhou University
Application filed by: Fuzhou University
Priority and filing date: 2021-09-08
Publication of CN113780543A: 2021-12-10
Application granted, publication of CN113780543B: 2024-02-02

Classifications

    • G06N3/00 Computing arrangements based on biological models; G06N3/02 Neural networks (Section G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS)
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons; G06N3/063 Physical realisation using electronic means
    • G06N3/04 Architecture, e.g. interconnection topology; G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/048 Activation functions
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08 Learning methods; G06N3/084 Backpropagation, e.g. using gradient descent


Abstract

The invention relates to an FPGA implementation method for PID closed-loop control of a neural network, comprising the following steps: in each training period, the motion state quantities are taken as inputs and passed through the input layer, the hidden layer, and the multiplier-adders and activation function of the output layer; a PWM wave is then calculated through an incremental PID algorithm and applied to the controlled object; whether the expected output and the actual output are equal is judged, and if they are not equal, the weights of the neurons of each layer are corrected according to the gradient descent principle; the rotating speed of the direct-current motor is controlled by adjusting the duty cycle of the PWM wave, the total number of pulses over a period of time is measured by quadruple-frequency counting, and the actual rotating speed is then obtained through mathematical conversion; through repeated learning and training, the control parameters of the system can be changed in real time according to the environment to achieve the optimal control effect. The invention can find the optimal control strategy, can automatically adjust the control parameters to achieve adaptive control in a complex environment, and has the characteristics of reliable performance and high real-time performance.

Description

FPGA implementation method for PID closed-loop control of neural network
Technical Field
The invention relates to the field of motion control, in particular to an FPGA implementation method for PID closed-loop control of a neural network.
Background
With the rapid development of artificial intelligence, intelligent devices are spreading rapidly. Neural network models and learning algorithms have achieved great success in software implementations, but a CPU can only execute instructions sequentially and therefore cannot exploit the inherently parallel nature of neural network computation. As artificial intelligence continues to develop, neural network algorithms become more and more complex, and in many applications with high real-time requirements, such as automatic driving, data analysis and industrial control, sequential instruction execution greatly limits the capability of neural networks. A BP neural network implemented with serial software instructions has always suffered from slow network convergence and poor real-time performance. The emergence of the programmable logic device, the FPGA, provides an effective hardware implementation for neural networks: an FPGA can complete many operations in a single cycle through parallel computation, and its programmability and reconfigurability greatly shorten the design cycle of a neural network, making it feasible to implement large-scale neural networks in hardware.
Disclosure of Invention
The invention aims to provide an FPGA implementation method for PID closed-loop control of a neural network, which is beneficial to improving the accuracy and real-time performance of motor speed control.
In order to achieve the above purpose, the technical scheme of the invention is as follows: an FPGA implementation method for PID closed-loop control of a neural network, comprising the following steps:
step S1, acquiring the expected rotating speed through external equipment, obtaining the error, the expected rotating speed and the actual rotating speed through calculation, and then entering step S2;
step S2, carrying out forward propagation calculation with the BP neural network algorithm, calculating the performance index function, and then entering step S3;
step S3, carrying out error back propagation and correcting the weights of the neurons of each layer according to the gradient descent principle, and then entering step S4;
step S4, after each training, calculating with the closed-loop PID algorithm, and controlling and measuring the speed through PWM technology and quadruple-frequency pulse counting.
In an embodiment of the present invention, in step S1, error calculation is performed by the formula e(k) = r(k) - y(k), where e(k), r(k) and y(k) represent the current error, the expected rotation speed and the actual rotation speed at time k respectively, and are used as the inputs of the BP neural network.
In an embodiment of the present invention, in step S2, the inputs obtained in step S1 are fed into the neural network and pass through the neurons of the input layer, the hidden layer and the output layer, where the neurons comprise multipliers, accumulators, weight registers and activation functions; the result is the Kp, Ki and Kd control parameters of the PID controller, as shown in formula (1):
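(The drawing of formula (1) is not reproduced in this text; the following is a reconstruction from the surrounding definitions, in the standard three-layer BP-PID form.)
o^(1)(i) = x(i), i = 1, ..., M (the network inputs, here e(k), r(k) and y(k))
net^(2)(j) = Σ_i w_ij·o^(1)(i),  o^(2)(j) = g(net^(2)(j))
net^(3)(l) = Σ_j w_jl·o^(2)(j),  o^(3)(l) = f(net^(3)(l)),  l = 1, 2, 3    (1)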
In formula (1), net^(2)(j) and net^(3)(l) represent the input of the hidden layer and the input of the output layer respectively; o^(1)(i), o^(2)(j) and o^(3)(l) respectively represent the output of the input layer, the output of the hidden layer and the output of the output layer; g(x) and f(x) represent the activation function of the hidden layer and the activation function of the output layer, respectively; w_ij and w_jl represent the weights of the neurons of the hidden layer and the weights of the neurons of the output layer, respectively; Σ represents the accumulation symbol, i.e. the weight of the i-th neuron of the current layer is multiplied by the outputs of all neurons of the previous layer and accumulated.
In an embodiment of the present invention, in step S3, the weights of the neurons in the hidden layer and the output layer are corrected using the gradient descent principle; the weights are changed through multiple iterations during training, a minimum point of the performance index function is finally found through a large number of iterations, and the weight calculation is performed as shown in formula (2):
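(The drawing of formula (2) is likewise not reproduced; reconstructed from the definitions below, a gradient term plus an inertia term gives:)
w_jl(k+1) = w_jl(k) + η·δ^(3)(l)·o^(2)(j) + γ·Δw_jl(k),  Δw_jl(k) = w_jl(k) - w_jl(k-1)
w_ij(k+1) = w_ij(k) + η·δ^(2)(j)·o^(1)(i) + γ·Δw_ij(k),  Δw_ij(k) = w_ij(k) - w_ij(k-1)    (2)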
where η is the learning rate (η > 0) and γ is the inertia coefficient (γ > 0); w_ij(k) and w_jl(k) represent the hidden-layer and output-layer weights of the previous training step; o^(1)(i) and o^(2)(j) represent the output of the input layer and the output of the hidden layer respectively; Δw_jl(k) and Δw_ij(k) are the difference between the current weight and the previous weight for the output layer and for the hidden layer respectively; δ^(3)(l) and δ^(2)(j) represent the local gradient values of the neurons of the output layer and of the hidden layer respectively, as given by formula (3) and formula (4):
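(Formulas (3) and (4) are also images in the original; the standard BP-PID local gradients consistent with the explanation below are:)
δ^(3)(l) = e(k)·sgn(∂y/∂Δu)·(∂Δu(k)/∂o^(3)(l))·f'(net^(3)(l)),  l = 1, 2, 3    (3)
δ^(2)(j) = g'(net^(2)(j))·Σ_l δ^(3)(l)·w_jl(k)    (4)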
In formulas (3) and (4), e(k) is the difference between the given rotation speed and the actual rotation speed; the sign function sgn(∂y/∂Δu) is used in place of the partial derivative of the controlled object's output with respect to its input, and the inaccuracy introduced by this substitution is compensated by the learning rate; f'(net^(3)(l)) represents the derivative of the activation function of the output layer, where the argument net^(3)(l) is the input value of the output layer; ∂Δu(k)/∂o^(3)(l) is the product term formed by the proportional, integral and derivative parts of the incremental PID algorithm; g'(net^(2)(j)) is the derivative of the activation function of the hidden layer, where the argument net^(2)(j) is the input value of the neurons of the hidden layer; Σ_l δ^(3)(l)·w_jl(k) is the accumulation of the weights of the output-layer neurons multiplied by the local gradient values of the output-layer neurons.
In an embodiment of the present invention, in step S4, the control coefficients obtained by training the neural network are used as the proportional, integral and derivative coefficients of the incremental PID algorithm, and formula (5) is the incremental PID control formula; quadruple-frequency pulse counting is used to count the number of pulses in a time period, and the actual motor speed is then calculated by the M method of speed measurement; PWM technology is adopted to adjust the duty cycle and thereby control the motor speed;
Δu(k) = Kp[e(k) - e(k-1)] + Ki·e(k) + Kd[e(k) - 2e(k-1) + e(k-2)]    (5)
where Kp, Ki and Kd are the outputs of the BP neural network output layer, corresponding to the proportional, integral and derivative of the PID control algorithm; e(k), e(k-1) and e(k-2) are the errors at the current time, the previous time and the time before that, respectively; Δu(k) is the increment of the input applied to the controlled object.
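As an illustrative software sketch only (the patent implements this in FPGA logic; the function and variable names below are assumptions, not part of the original), formula (5) can be expressed as:
    def pid_increment(kp, ki, kd, e_k, e_k1, e_k2):
        """Incremental PID of formula (5): returns the change Δu(k) of the control quantity."""
        return (kp * (e_k - e_k1)
                + ki * e_k
                + kd * (e_k - 2.0 * e_k1 + e_k2))

    # The new control quantity is the previous one plus the increment; in step S4 it is
    # then mapped to a PWM duty cycle:  u = u_prev + pid_increment(kp, ki, kd, e_k, e_k1, e_k2)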
In an embodiment of the present invention, the tanh function and the Sigmoid function are selected as the activation function of the hidden layer and the activation function of the output layer respectively, and both are implemented with look-up tables; the tanh function takes values in [-1, 1] and is symmetrical about the zero point, while the Sigmoid function takes values in [0, 1], as shown in formula (6):
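(The drawing of formula (6) is not reproduced in this text; with the value ranges stated above, the usual forms would be:)
g(x) = tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))
f(x) = 1 / (1 + e^(-x))    (6)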
Here e is the base of the natural logarithm, approximately equal to 2.71828182; g(x) denotes the hidden-layer activation function and f(x) denotes the output-layer activation function. The purpose of the activation functions is to introduce a nonlinear factor, improving the expressive capacity of the neural network so that it can solve problems that a linear model cannot.
Compared with the prior art, the invention has the following beneficial effects: the invention can find the optimal control strategy, has the function of automatically adjusting the control parameters to achieve self-adaptive control in a complex environment, and has the characteristics of reliable performance and high real-time performance.
Drawings
Fig. 1 is a flowchart of a BP neural network algorithm according to an embodiment of the present invention.
FIG. 2 is a flowchart of a PID algorithm according to an embodiment of the invention.
Fig. 3 is a system configuration diagram of an embodiment of the present invention.
Fig. 4 is a state machine design diagram of an embodiment of the present invention.
Fig. 5 is a diagram of a single neuron structure according to an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is specifically described below with reference to the accompanying drawings.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the present application. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments in accordance with the present application. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
As shown in fig. 1 to 5, the present embodiment provides an FPGA implementation method for PID closed-loop control of a neural network, which specifically includes the following steps:
step S1: acquiring an expected rotating speed through external equipment, and obtaining an error, the expected rotating speed and an actual rotating speed through calculation;
step S2: forward propagation calculation is carried out by utilizing a BP neural network algorithm, and a performance index function is calculated;
step S3: errors are back-propagated and the weights of the neurons of each layer are corrected by utilizing the gradient descent principle;
step S4: after each training, the control and speed measurement are performed by using a closed-loop PID algorithm and through a PWM technology and quadruple frequency pulse counting.
In this embodiment, in step S1, the desired rotating speed is entered through the external device, and error calculation is then performed through the formula e(k) = r(k) - y(k), where e(k), r(k) and y(k) represent the current error, the desired rotational speed and the actual rotational speed at time k respectively; these PID motion state quantities are used as the inputs of the system.
In this embodiment, in step S2, the PID motion state quantities are taken as the input of the neural network and then pass through the neurons of the input layer, the hidden layer and the output layer; the design of a single neuron mainly comprises modules such as a multiplier, an accumulator, a weight register and an activation function, and the proportional, integral and differential control parameters of the PID controller are obtained as the result.
In formula (1) above, net^(2)(j) and net^(3)(l) represent the input of the hidden layer and the input of the output layer respectively; o^(1)(i), o^(2)(j) and o^(3)(l) respectively represent the output of the input layer, the hidden layer and the output layer; g(x) and f(x) represent the activation functions of the hidden layer and the output layer, respectively; w_ij and w_jl represent the weights of the neurons of the hidden layer and of the output layer respectively; Σ represents the summation symbol, i.e. the weight of each neuron of the current layer is multiplied by the outputs of all neurons of the previous layer and summed.
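To illustrate the single-neuron structure described above (weight register, multiplier, accumulator and activation function), a minimal behavioural model in software is sketched below; the class and names are illustrative assumptions, and the actual design is a parallel FPGA circuit rather than sequential code:
    import math

    class Neuron:
        """Behavioural model of one neuron: weight register + multiply-accumulate + activation."""
        def __init__(self, weights, activation):
            self.weights = list(weights)   # contents of the weight register
            self.activation = activation   # e.g. math.tanh for a hidden-layer neuron

        def forward(self, prev_outputs):
            # Multiply-accumulate: weight(i) times the output of neuron i of the previous layer.
            acc = 0.0
            for w, o in zip(self.weights, prev_outputs):
                acc += w * o
            return self.activation(acc)

    # Example: one hidden-layer neuron fed by the three inputs e(k), r(k), y(k).
    hidden_neuron = Neuron(weights=[0.1, -0.2, 0.05], activation=math.tanh)
    o2_j = hidden_neuron.forward([0.3, 1.0, 0.7])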
In this embodiment, in step S3, the gradient descent principle is adopted to correct the weights of the neurons in the hidden layer and the output layer; the weights of the neurons of each layer are changed through multiple iterations in the training process, and a minimum point of the performance index function is finally found through a large number of iterations, with the weight calculation performed as shown in formula (2) above.
In formula (2), η is the learning rate (η > 0) and γ is the inertia coefficient (γ > 0); w_ij(k) and w_jl(k) represent the hidden-layer and output-layer weights of the previous training step; Δw_jl(k) and Δw_ij(k) are the difference between the current weight and the previous weight for the output layer and for the hidden layer respectively; δ^(3)(l) and δ^(2)(j) represent the local gradient values of each neuron of the output layer and of the hidden layer, as given by formulas (3) and (4) above.
In formulas (3) and (4), e(k) is the difference between the given rotation speed and the actual rotation speed; the sign function sgn(∂y/∂Δu) is used in place of the partial derivative of the controlled object's output with respect to its input, and the inaccuracy introduced by this substitution is compensated by the learning rate; f'(net^(3)(l)) represents the derivative of the activation function of the output layer, where the argument net^(3)(l) is the input value of the output layer; ∂Δu(k)/∂o^(3)(l) is the product term formed by the proportional, integral and derivative parts of the incremental PID algorithm; g'(net^(2)(j)) is the derivative of the activation function of the hidden layer, where the argument net^(2)(j) is the input value of the individual neurons of the hidden layer; Σ_l δ^(3)(l)·w_jl(k) is the accumulation of the weights of the output-layer neurons multiplied by the local gradient values of the output-layer neurons.
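A software sketch of the weight correction of formulas (2)-(4) for the output layer is given below, again only as an illustration of the arithmetic (the function and argument names are assumptions); the hidden-layer update is analogous, using δ^(2)(j) and o^(1)(i):
    def update_output_weights(w_jl, dw_prev, delta3, o2, eta, gamma):
        """Gradient-descent step with inertia term for the output-layer weights (formula (2)).
        w_jl[j][l]    : current weight from hidden neuron j to output neuron l
        dw_prev[j][l] : weight change of the previous training step
        delta3[l]     : local gradient of output neuron l (formula (3))
        o2[j]         : output of hidden neuron j
        eta, gamma    : learning rate and inertia coefficient, both > 0
        """
        for j in range(len(w_jl)):
            for l in range(len(w_jl[j])):
                dw = eta * delta3[l] * o2[j] + gamma * dw_prev[j][l]
                w_jl[j][l] += dw
                dw_prev[j][l] = dw
        return w_jl, dw_prev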
In this embodiment, in step S4, the control coefficients obtained by training the neural network are used as the proportional, integral and derivative coefficients of the incremental PID algorithm, and formula (5) is the incremental PID control formula. Quadruple-frequency pulse counting is used to count the number of pulses in a time period, and the actual motor speed is then calculated by the M method of speed measurement. PWM technology is adopted to adjust the duty cycle and thereby control the motor speed.
Δu(k) = Kp[e(k) - e(k-1)] + Ki·e(k) + Kd[e(k) - 2e(k-1) + e(k-2)]    (5)
where Kp, Ki and Kd are the outputs of the BP neural network output layer, corresponding to the proportional, integral and derivative of the PID control algorithm; e(k), e(k-1) and e(k-2) are the errors at the current time, the previous time and the time before that, respectively; Δu(k) is the increment of the input applied to the controlled object.
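For reference, the M-method speed calculation from the quadruple-frequency pulse count and the duty-cycle mapping can be sketched as follows; the encoder resolution, window length and saturation limit are illustrative assumptions not given in the patent:
    def speed_rpm(edge_count, lines_per_rev, window_s):
        """M-method: rotational speed from the quadrature edge count in a fixed window.
        edge_count    : edges counted with quadruple-frequency (4x) decoding during the window
        lines_per_rev : encoder lines per mechanical revolution (device specific)
        window_s      : length of the counting window in seconds
        """
        revolutions = edge_count / (4.0 * lines_per_rev)   # four counted edges per encoder line
        return revolutions / window_s * 60.0               # revolutions per minute

    def pwm_duty(u, u_max):
        """Map the saturated control quantity to a PWM duty cycle in [0, 1]."""
        u = max(0.0, min(u, u_max))
        return u / u_max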
In this embodiment, the tanh function and the Sigmoid function are selected as the activation functions of the hidden layer and the output layer respectively, and both are implemented with look-up tables; the tanh function takes values in [-1, 1] and is symmetrical about the zero point, while the Sigmoid function takes values in [0, 1], as shown in formula (6) above.
Here e is the base of the natural logarithm, approximately equal to 2.71828182; g(x) denotes the hidden-layer activation function and f(x) denotes the output-layer activation function. The activation functions introduce a nonlinear factor, improving the expressive capacity of the neural network so that it can solve problems that a linear model cannot.
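Since exponentials are costly in FPGA logic, the activation functions are read from look-up tables; the sketch below shows one simple way such tables could be built and addressed in software, with the table depth and input ranges chosen arbitrarily for illustration:
    import math

    def build_lut(func, x_min, x_max, depth=1024):
        """Pre-compute a look-up table of func over [x_min, x_max] with 'depth' entries."""
        step = (x_max - x_min) / (depth - 1)
        return [func(x_min + i * step) for i in range(depth)], x_min, step

    def lut_read(table, x_min, step, x):
        """Return the table entry nearest to x, saturating at the table boundaries."""
        idx = int(round((x - x_min) / step))
        idx = max(0, min(idx, len(table) - 1))
        return table[idx]

    tanh_table, t0, dt = build_lut(math.tanh, -4.0, 4.0)                              # hidden layer g(x)
    sigm_table, s0, ds = build_lut(lambda x: 1.0 / (1.0 + math.exp(-x)), -8.0, 8.0)   # output layer f(x)
    g_value = lut_read(tanh_table, t0, dt, 0.73)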
The above description is only a preferred embodiment of the present invention and is not intended to limit the invention in any way; any person skilled in the art may use the disclosed technical content to make modifications or alterations into equivalent embodiments. However, any simple modification, equivalent change or variation made to the above embodiments according to the technical substance of the present invention still falls within the protection scope of the technical solution of the present invention.

Claims (4)

1. The FPGA implementation method for the PID closed-loop control of the neural network is characterized by comprising the following steps of:
step S1, acquiring the expected rotating speed through external equipment, obtaining the error, the expected rotating speed and the actual rotating speed through calculation, and then entering step S2;
step S2, carrying out forward propagation calculation with the BP neural network algorithm, calculating the performance index function, and then entering step S3;
step S3, carrying out error back propagation and correcting the weights of the neurons of each layer according to the gradient descent principle, and then entering step S4;
step S4, after each training, calculating with the closed-loop PID algorithm, and controlling and measuring the speed through PWM technology and quadruple-frequency pulse counting;
in step S2, the inputs obtained in step S1 are fed into the neural network and pass through the neurons of an input layer, a hidden layer and an output layer, wherein the neurons comprise multipliers, accumulators, weight registers and activation functions, and the result is the Kp, Ki and Kd control parameters of a PID controller, as given by formula (1);
in formula (1), net^(2)(j) and net^(3)(l) represent the input of the hidden layer and the input of the output layer respectively; o^(1)(i), o^(2)(j) and o^(3)(l) respectively represent the output of the input layer, the output of the hidden layer and the output of the output layer; g(x) and f(x) represent the activation function of the hidden layer and the activation function of the output layer, respectively; w_ij and w_jl represent the weights of the neurons of the hidden layer and the weights of the neurons of the output layer, respectively; Σ represents the summation symbol, i.e. the weight of the i-th neuron of the current layer is multiplied by the outputs of all neurons of the previous layer and summed;
step S3, the weights of the neurons of the hidden layer and the output layer are corrected by adopting the gradient descent principle, the weights of the neurons of each layer are changed through multiple iterations in the training process, a minimum point of the function is finally found through a large number of iterations, and the weight calculation is carried out as shown in formula (2);
where η is the learning rate (η > 0) and γ is the inertia coefficient (γ > 0); w_ij(k) and w_jl(k) represent the hidden-layer and output-layer weights of the previous training step; o^(1)(i) and o^(2)(j) represent the output of the input layer and the output of the hidden layer respectively; Δw_jl(k) and Δw_ij(k) are the difference between the current weight and the previous weight for the output layer and for the hidden layer respectively; δ^(3)(l) and δ^(2)(j) represent the local gradient values of the neurons of the output layer and of the hidden layer respectively, as given by formula (3) and formula (4);
in formulas (3) and (4), e(k) is the difference between the given rotation speed and the actual rotation speed; the sign function sgn(∂y/∂Δu) is used in place of the partial derivative of the controlled object's output with respect to its input, and the inaccuracy introduced by this substitution is compensated by the learning rate; f'(net^(3)(l)) represents the derivative of the activation function of the output layer, where the argument net^(3)(l) is the input value of the output layer; ∂Δu(k)/∂o^(3)(l) is the product term formed by the proportional, integral and derivative parts of the incremental PID algorithm; g'(net^(2)(j)) is the derivative of the activation function of the hidden layer, where the argument net^(2)(j) is the input value of the neurons of the hidden layer; Σ_l δ^(3)(l)·w_jl(k) is the accumulation of the weights of the output-layer neurons multiplied by the local gradient values of the output-layer neurons.
2. The method according to claim 1, wherein in step S1, error calculation is performed by the formula e(k) = r(k) - y(k), where e(k), r(k) and y(k) represent the current error, the expected rotational speed and the actual rotational speed at time k respectively, and are used as inputs to the BP neural network.
3. The FPGA implementation method for PID closed-loop control of a neural network according to claim 1, wherein in step S4, the control coefficients obtained by training the neural network are used as the proportional, integral and differential coefficients of an incremental PID algorithm, and formula (5) is the incremental PID control formula; quadruple-frequency pulse counting is used to count the number of pulses in a time period, and the actual motor speed is then calculated by the M method of speed measurement; PWM technology is adopted to adjust the duty cycle and thereby control the motor speed;
Δu(k) = Kp[e(k) - e(k-1)] + Ki·e(k) + Kd[e(k) - 2e(k-1) + e(k-2)]    (5)
wherein Kp, Ki and Kd are the outputs of the BP neural network output layer, corresponding to the proportional, integral and derivative of the PID control algorithm; e(k), e(k-1) and e(k-2) are the errors at the current time, the previous time and the time before that, respectively; Δu(k) is the increment of the input applied to the controlled object.
4. The method according to claim 1, wherein the tanh function and the Sigmoid function are selected as the activation function of the hidden layer and the activation function of the output layer respectively, both being implemented with a look-up table; the tanh function takes values in [-1, 1] and is symmetrical about the zero point, and the Sigmoid function takes values in [0, 1], as shown in formula (6);
e is the base of the natural logarithm, approximately equal to 2.71828182; g(x) represents the hidden-layer activation function and f(x) represents the output-layer activation function; the activation function is used to add a nonlinear factor, improving the expressive capacity of the neural network so that it can solve problems that a linear model cannot.
CN202111052881.8A 2021-09-08 2021-09-08 FPGA implementation method for PID closed-loop control of neural network Active CN113780543B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111052881.8A CN113780543B (en) 2021-09-08 2021-09-08 FPGA implementation method for PID closed-loop control of neural network

Publications (2)

Publication Number Publication Date
CN113780543A CN113780543A (en) 2021-12-10
CN113780543B (en) 2024-02-02

Family

ID=78842013

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111052881.8A Active CN113780543B (en) 2021-09-08 2021-09-08 FPGA implementation method for PID closed-loop control of neural network

Country Status (1)

Country Link
CN (1) CN113780543B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117389134A (en) * 2023-12-07 2024-01-12 中国汽车技术研究中心有限公司 Automobile field test mobile platform PID parameter calibration method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3051429A1 (en) * 2018-08-08 2020-02-08 Applied Brain Research Inc. Digital circuits for evaluating neural engineering framework style neural networks

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103410662A (en) * 2013-08-06 2013-11-27 江苏科技大学 Neural network compensation control method for capturing maximum wind energy in wind power system
CN104612898A (en) * 2014-11-27 2015-05-13 江苏科技大学 Wind power variable-pitch multi-variable fuzzy neural network PID control method
CN109670580A (en) * 2018-12-21 2019-04-23 浙江工业大学 A kind of data recovery method based on time series
WO2021099942A1 (en) * 2019-11-18 2021-05-27 Immervision Inc. Using imager with on-purpose controlled distortion for inference or training of an artificial intelligence neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Switching Techniques and Intelligent Controllers for Induction Motor Drive: Issues and Recommendations; M. A. Hannan, J. A. Ali, P. J. Ker, A. Mohamed, M. S. H. Lipu and A. Hussain; IEEE Access; Vol. 6; 47489-47510 *
Tian Danlan. Research and Design of a Brushless DC Motor Speed Regulation System Based on a BP Neural Network. Master's Theses Electronic Journal, Engineering Science and Technology II. 2019, (No. 1), 11-29. *
Top-down Development of a PID Control System Based on DSP Builder; Ji Fang, Lei Yong, Wang Jun; Modern Electronics Technique (No. 09); 127-129 *

Also Published As

Publication number Publication date
CN113780543A (en) 2021-12-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant