CN108393892A - Robot feedforward torque compensation method - Google Patents

Robot feedforward torque compensation method

Info

Publication number
CN108393892A
CN108393892A (application CN201810178273.3A)
Authority
CN
China
Prior art keywords
robot
action
state
neural network
current time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810178273.3A
Other languages
Chinese (zh)
Other versions
CN108393892B (en)
Inventor
刘暾东
吴晓敏
高凤强
贺苗
邵桂芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen University
Original Assignee
Xiamen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen University filed Critical Xiamen University
Priority to CN201810178273.3A
Publication of CN108393892A
Application granted
Publication of CN108393892B
Legal status: Active (granted)


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1628 Programme controls characterised by the control loop
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1602 Programme controls characterised by the control system, structure, architecture

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Feedback Control In General (AREA)
  • Manipulator (AREA)

Abstract

The present invention relates to a feedforward torque compensation method for an industrial robot, comprising the following steps. S1: establish an action-value neural network. S2: generate a training trajectory. S3: according to the state at the current time, select the corresponding action, integrate the feedforward torque increment of the selected action, output the result onto the feedforward path of the motor current loop, and obtain the immediate return at the current time and the state at the next time. S4: take the state at the current time, the selected feedforward torque increment, the immediate return and the state at the next time as a training sample for the neural network; normalize the sample and store it in a queue. S5: randomly select a part of the training samples from the queue and train the action-value neural network by stochastic gradient descent, until the maximum number of training iterations is reached or the joint tracking error falls below an error threshold. The present invention compensates industrial robot joint torque in real time without establishing a complicated dynamic model, realizing high-precision control.

Description

Robot feedforward torque compensation method
Technical field
The present invention relates to the technical field of robot control, and more particularly to a method of robot feedforward torque compensation.
Background technology
With its flexibility, versatility, high precision and low cost, the robot has become one of the most widely used pieces of equipment in engineering and machinery manufacturing. As robot applications keep expanding and modern industry develops at high speed, high speed and high precision have become the main trends of robot development, and robot torque feedforward compensation is the key to improving kinematic accuracy. Accurate torque feedforward compensation is therefore of great importance for realizing high-speed, high-precision robot control.
Robot feedforward torque compensation generally calculates the feedforward torque from a dynamic model. In theory this method can achieve a good compensation effect, but in practical applications it is often difficult to obtain ideal results. Its common problems are: (1) the algorithm depends too heavily on the accuracy of the dynamic model; (2) the dynamic model has too many parameters, which are difficult to identify; (3) during long-term operation of the robot, wear, temperature changes and load changes reduce the accuracy of the dynamic model or even invalidate it.
Summary of the invention
To solve the above problems, the present invention provides a robot feedforward torque compensation method that compensates robot joint torque in real time and achieves high-precision control without establishing a complicated dynamic model. Because the method uses reinforcement learning, it can dynamically adjust the feedforward torque output during operation to adapt to changes in the environment. The method also has wide applicability and is suitable for robots of different models.
The concrete scheme is as follows:
A robot feedforward torque compensation method comprises the following steps:
S1: establish an action-value neural network and determine the input, output and activation functions of the neural network;
S2: generate a training trajectory for the robot;
S3: according to the state of the robot at the current time, select the corresponding action, integrate the feedforward torque increment of the selected action, output the result onto the feedforward path of the robot's motor current loop, and obtain the immediate return of the robot at the current time and the state at the next time;
S4: take the state at the current time, the selected feedforward torque increment, the immediate return and the state at the next time as a training sample for the action-value neural network; normalize the training sample and store it in the replay experience queue;
S5: randomly select a part of the training samples from the replay experience queue and train the action-value neural network by stochastic gradient descent, until the maximum number of training iterations is reached or the joint tracking error falls below the error threshold.
Further, step S1 specifically comprises:
S11: establish the action-value neural network with one input layer, N hidden layers (N >= 2) and one output layer, with full connections between different layers;
S12: set the input of the input layer to be the state of the robot;
S13: set the output of the output layer to be the values of all actions;
S14: use the linear activation function f(x) = x for the first layer of the action-value neural network, and the Sigmoid function f(x) = 1/(1 + e^(-x)) for the other layers;
S15: set the initialization parameters of the action-value neural network: the weights are initialized with Var(W) = 2/(n_in + n_out), where n_in is the number of neurons in the preceding layer and n_out is the number of neurons in the following layer, and the biases are initialized with Var(b) = 0.
Further, step S12 specifically comprises: denote the state at the n-th time as s_n and the current time as t, so that the state at the current time is s_t, and set s_t = (q_t, q̇_t, q̈_t, e_t), where q_t, q̇_t, q̈_t and e_t respectively represent the joint position, joint velocity, joint acceleration and joint tracking error at the current time.
Further, step S13 specifically comprises: set 9 output actions, namely Q1, Q2, Q3, Q4, Q5, Q6, Q7, Q8, Q9, which respectively correspond to the following changes of the robot feedforward torque: "decrease 4%", "decrease 3%", "decrease 2%", "decrease 1%", "unchanged", "increase 1%", "increase 2%", "increase 3%", "increase 4%", where each percentage a% is a percentage of the rated motor torque.
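For illustration only (not part of the claimed method), these nine actions map naturally to signed fractions of the rated motor torque; the Python tuple below is the editor's encoding:

```python
# Illustrative encoding of the nine discrete actions Q1..Q9 as signed
# fractions of the rated motor torque ("decrease 4%" ... "increase 4%").
ACTIONS = (-0.04, -0.03, -0.02, -0.01, 0.00, +0.01, +0.02, +0.03, +0.04)
```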
Further, step S2 specifically comprises: use a sinusoidal velocity trajectory of the robot joint as the training trajectory of the neural network.
Further, the sinusoidal velocity trajectory is generated by randomly producing sinusoidal velocity signals of different frequencies, with a maximum amplitude of 6000 r/min and a frequency range of [0.1, 1] Hz.
Further, the detailed process of selecting the corresponding feedforward torque increment in step S3 is: generate a random number ε_a in (0, 1) and judge whether ε_a > ε, where ε is the greedy selection probability; if so, randomly select an action a_t, otherwise select the action with the maximum value, i.e.:
a_t = argmax(Q1_t, Q2_t, Q3_t, Q4_t, Q5_t, Q6_t, Q7_t, Q8_t, Q9_t)
where Q1_t, Q2_t, ..., Q9_t are the values of the corresponding 9 output actions at the current time.
Further, the immediate return of the robot at the current time in step S3 is denoted r_t, and its calculation formula is:
r_t = -|e_{t+1}|
where e_{t+1} is the joint tracking error at the next time, after the selected action has taken effect.
Further, training the action-value neural network by stochastic gradient descent in step S5 specifically comprises:
S51: calculate the return of each sample; the sample return y_j is calculated as:
y_j = r_j + γ · max_{a'} Q(s'_j, a')
where r_j is the immediate return after action a_j is executed, γ is the discount factor, s'_j is the state after action a_j is executed, and Q(s'_j, a') is the Q value of action a' in state s'_j.
S52: define the loss function loss and use stochastic gradient descent to update the weights and biases of the action-value network. The loss function is the squared error between the sample return and the network's prediction, loss = (y_j - Q(s_j, a_j))^2.
The present invention adopts the above technical solution and has the following advantageous effects:
1. Robot joint torque feedforward compensation can be realized without establishing a dynamic model.
2. The effect of the feedforward compensation can be continuously improved during operation.
3. Wide adaptability: the method is applicable to robots of various models.
Description of the drawings
Fig. 1 is a schematic diagram of the steps of the embodiment of the present invention.
Fig. 2 is a schematic diagram of the action-value network of the embodiment of the present invention.
Fig. 3 shows the experimental results of the embodiment of the present invention.
Specific embodiments
To further illustrate the embodiments, the present invention provides the accompanying drawings. These drawings are part of the disclosure; they mainly serve to illustrate the embodiments and, together with the description in the specification, to explain the operating principles of the embodiments. With reference to these contents, those of ordinary skill in the art will understand other possible embodiments and advantages of the present invention. The components in the figures are not drawn to scale, and similar reference symbols are conventionally used to indicate similar components.
The present invention is further described below in conjunction with the drawings and specific embodiments.
An embodiment of the present invention provides a robot feedforward torque compensation method. Fig. 1 is the flow diagram of the robot feedforward torque compensation method of this embodiment; the method may comprise the following steps:
Step 1: initialize the reinforcement learning parameters: learning rate α = 0.01, discount factor γ = 0.8, greedy selection probability ε = 0.9, and replay experience queue capacity N = 1000.
Step 2: establish the action-value network Q. The action-value network is a BP neural network with one input layer, three hidden layers and one output layer, fully connected between adjacent layers, as shown in Fig. 2.
In the action-value network, the input of the input layer is the state of the robot.
In this embodiment, the state at the n-th time is denoted s_n and the current time is denoted t, so the state at the current time is s_t, with s_t = (q_t, q̇_t, q̈_t, e_t), where q_t is the joint position, q̇_t the joint velocity, q̈_t the joint acceleration and e_t the joint tracking error. Each hidden layer has 10 neurons. The output of the output layer is the values of all actions; 9 output actions (Q1, Q2, ..., Q9) are set, corresponding respectively to the following increments of the robot joint feedforward torque: "decrease 4%", "decrease 3%", "decrease 2%", "decrease 1%", "unchanged", "increase 1%", "increase 2%", "increase 3%", "increase 4%", where each percentage a% is a percentage of the rated motor torque.
The first layer of the action-value network uses the linear activation function f(x) = x; the other layers use the Sigmoid function f(x) = 1/(1 + e^(-x)).
The Sigmoid function is a common S-shaped function in biology, also called the S-shaped growth curve. In information science, because it is monotonically increasing and has a monotonically increasing inverse, the Sigmoid function is often used as the threshold function of a neural network, mapping variables into the interval (0, 1).
The initialization parameters of the action-value neural network are set as follows: the weights are initialized with Var(W) = 2/(n_in + n_out), where n_in is the number of neurons in the preceding layer and n_out is the number of neurons in the following layer, and the biases are initialized with Var(b) = 0. A sketch of such a network is given below.
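As a minimal sketch, assuming PyTorch, such an action-value network could be written as follows; the class name is the editor's, and the layer sizes follow the embodiment (4 state inputs, three hidden layers of 10 neurons, 9 output values):

```python
# A sketch of the embodiment's action-value network: first layer linear,
# remaining layers Sigmoid, Xavier-initialised weights, zero biases.
import torch
import torch.nn as nn

class ActionValueNet(nn.Module):
    def __init__(self, n_state=4, n_hidden=10, n_actions=9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_state, n_hidden),                  # first layer: f(x) = x
            nn.Linear(n_hidden, n_hidden), nn.Sigmoid(),   # hidden layers: Sigmoid
            nn.Linear(n_hidden, n_hidden), nn.Sigmoid(),
            nn.Linear(n_hidden, n_actions), nn.Sigmoid(),  # one value per action
        )
        for m in self.net:
            if isinstance(m, nn.Linear):
                nn.init.xavier_normal_(m.weight)   # Var(W) = 2 / (n_in + n_out)
                nn.init.zeros_(m.bias)             # Var(b) = 0

    def forward(self, state):      # state: tensor of shape (batch, 4)
        return self.net(state)     # -> tensor of shape (batch, 9)
```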
Step 3: randomly generate a joint motion trajectory within the robot's ranges of joint position, velocity and acceleration.
In this embodiment, a sinusoidal velocity trajectory of an industrial robot joint is used as the training trajectory of the neural network. The sinusoidal velocity trajectory is generated by randomly producing sinusoidal velocity signals of different frequencies, with a maximum amplitude of 6000 r/min and a frequency range of [0.1, 1] Hz; a generator along these lines is sketched below.
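A sketch of such a generator, assuming NumPy; the duration and sampling step dt are the editor's assumptions, only the amplitude and frequency bounds come from the embodiment:

```python
# Random sinusoidal joint velocity trajectory: frequency drawn from
# [0.1, 1] Hz, amplitude bounded by 6000 r/min.
import numpy as np

def sinusoidal_velocity_trajectory(duration_s=5.0, dt=0.001, seed=None):
    rng = np.random.default_rng(seed)
    freq = rng.uniform(0.1, 1.0)            # Hz
    amplitude = rng.uniform(0.0, 6000.0)    # r/min
    t = np.arange(0.0, duration_s, dt)
    return t, amplitude * np.sin(2.0 * np.pi * freq * t)

t, v_ref = sinusoidal_velocity_trajectory(seed=0)   # one training trajectory
```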
Step 4: denote the action at the n-th time as a_n and the current time as t, so the action at the current time is a_t. Select the action a_t according to the state of the industrial robot at the current time, as follows.
Obtain the robot state information and input the current state s_t = (q_t, q̇_t, q̈_t, e_t) into the action-value network to obtain the corresponding vector of output values.
Generate a random number ε_a in (0, 1). If ε_a > ε, randomly select an action a_t; if ε_a ≤ ε, select the action with the maximum value, i.e. a_t = argmax(Q1_t, Q2_t, Q3_t, Q4_t, Q5_t, Q6_t, Q7_t, Q8_t, Q9_t), where Q1_t, ..., Q9_t are the values of the 9 output actions at the current time. This selection rule is sketched below.
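The ε-greedy rule of step 4, sketched in Python; q_values stands for the nine network outputs for s_t, and the function name is the editor's:

```python
# Epsilon-greedy action selection: with epsilon = 0.9, a random action is
# taken with probability 0.1, otherwise the highest-valued action is chosen.
import random

def select_action(q_values, epsilon=0.9):
    eps_a = random.random()                     # random number in (0, 1)
    if eps_a > epsilon:
        return random.randrange(len(q_values))  # explore: random action index
    return max(range(len(q_values)), key=lambda i: q_values[i])  # greedy: argmax
```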
Step 5: execute action a_t: integrate the feedforward torque increment corresponding to the current time and output the result onto the feedforward path of the motor current loop; then obtain the immediate return r_t at the current time and the robot state s_{t+1} at the next time. The immediate return r_t is calculated as:
r_t = -|e_{t+1}|.
Step 6: take the state at the current time, the selected feedforward torque increment, the immediate return and the state at the next time as a training sample for the action-value neural network, i.e. set the sample as φ = (s_t, a_t, r_t, s_{t+1}). Normalize the training sample and store it in the replay experience queue D; if the number of samples stored in D exceeds the capacity N, replace the oldest samples on a first-in, first-out basis (sketched below).
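The first-in, first-out behaviour of the queue can be sketched with a bounded deque; samples are assumed to be normalized before being passed to store_sample (a hypothetical helper name):

```python
# Replay experience queue with capacity N = 1000; a full deque silently
# discards its oldest entry, giving the first-in, first-out replacement.
from collections import deque

N = 1000
replay_queue = deque(maxlen=N)

def store_sample(s_t, a_t, r_t, s_next):
    replay_queue.append((s_t, a_t, r_t, s_next))   # phi = (s_t, a_t, r_t, s_{t+1})
```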
Step 7: randomly take 50 samples from the replay experience queue D (if D contains fewer than 50 samples, take them all), then calculate the return of each sample; the sample return y_j is calculated as:
y_j = r_j + γ · max_{a'} Q(s'_j, a')
where r_j is the immediate return after action a_j is executed, γ is the discount factor, s'_j is the state after action a_j is executed, and Q(s'_j, a') is the Q value of action a' in state s'_j.
Step 8: define the loss function loss and use stochastic gradient descent to update the weights and biases of the action-value network. The loss function is the squared error between the sample returns and the network's predictions, loss = (1/m) Σ_j (y_j - Q(s_j, a_j))^2, where m is the number of samples in the mini-batch. One such update is sketched below.
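Steps 7 and 8 combined into one update, sketched with PyTorch; the mean-squared-error loss is the editor's reading of the unreproduced formula, and the optimizer is assumed to be plain SGD with the learning rate α = 0.01 from step 1:

```python
# One mini-batch update: sample up to 50 experiences, form the targets
# y_j = r_j + gamma * max_a' Q(s'_j, a'), and take one SGD step on the
# squared error between y_j and Q(s_j, a_j).
import random
import torch

def train_step(qnet, optimizer, replay_queue, batch_size=50, gamma=0.8):
    batch = random.sample(list(replay_queue), min(batch_size, len(replay_queue)))
    s, a, r, s_next = zip(*batch)
    s = torch.tensor(s, dtype=torch.float32)
    a = torch.tensor(a, dtype=torch.int64)
    r = torch.tensor(r, dtype=torch.float32)
    s_next = torch.tensor(s_next, dtype=torch.float32)
    with torch.no_grad():
        y = r + gamma * qnet(s_next).max(dim=1).values    # sample returns y_j
    q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)      # Q(s_j, a_j)
    loss = torch.mean((y - q) ** 2)                       # assumed squared-error loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                      # update weights and biases
    return loss.item()
```

The optimizer would be constructed once, e.g. torch.optim.SGD(qnet.parameters(), lr=0.01).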
Step 9: repeat steps 4-8 until the robot stops moving.
Step 10: repeat steps 3-9 until the maximum number of training iterations is reached or the joint tracking error falls below the error threshold.
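Putting the sketches above together, the outer loops of steps 9 and 10 would look roughly as follows; read_state, apply_action and joint_error are hypothetical placeholders for the servo interface, which the patent does not specify, and the inner loop length stands in for "until the robot stops moving":

```python
# Structural sketch of steps 3-10, reusing ActionValueNet, select_action,
# store_sample, train_step and replay_queue from the sketches above.
import torch

def read_state():      return (0.0, 0.0, 0.0, 0.0)  # placeholder: (q, dq, ddq, e)
def apply_action(a):   pass                         # placeholder: write increment to current loop
def joint_error():     return 0.0                   # placeholder: next-time tracking error

qnet = ActionValueNet()
optimizer = torch.optim.SGD(qnet.parameters(), lr=0.01)   # alpha = 0.01

for episode in range(1000):                  # at most 1000 training runs
    # step 3: command a new random sinusoidal trajectory here
    s = read_state()
    for _ in range(5000):                    # steps 4-8, until the motion ends
        q_values = qnet(torch.tensor([s])).squeeze(0).tolist()
        a = select_action(q_values)          # step 4: epsilon-greedy choice
        apply_action(a)                      # step 5: apply feedforward increment
        r = -abs(joint_error())              # r_t = -|e_{t+1}|
        s_next = read_state()
        store_sample(s, a, r, s_next)        # step 6: store in replay queue
        train_step(qnet, optimizer, replay_queue)   # steps 7-8: SGD update
        s = s_next
```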
To verify the effectiveness of the proposed method, we conducted experiments on the first joint of a six-joint robot, with a maximum of 1000 training iterations; the experimental results are shown in Fig. 3. It can be seen that torque feedforward compensation with the method of the present invention effectively improves the joint tracking accuracy of the robot, reducing the root-mean-square tracking error by up to 82%.
Although the present invention has been specifically shown and described in conjunction with preferred embodiments, those skilled in the art should understand that various changes in form and detail may be made to the present invention without departing from the spirit and scope of the present invention as defined by the appended claims, and such changes fall within the protection scope of the present invention.

Claims (9)

  1. A robot feedforward torque compensation method, characterized by comprising the following steps:
    S1: establishing an action-value neural network and determining the input, output and activation functions of the neural network;
    S2: generating a training trajectory for the robot;
    S3: according to the state of the robot at the current time, selecting the corresponding action, integrating the feedforward torque increment of the selected action, outputting the result onto the feedforward path of the motor current loop of the robot, and obtaining the immediate return of the robot at the current time and the state at the next time;
    S4: taking the state at the current time, the selected feedforward torque increment, the immediate return and the state at the next time as a training sample for the action-value neural network, normalizing the training sample and storing it in a replay experience queue;
    S5: randomly selecting a part of the training samples from the replay experience queue and training the action-value neural network by stochastic gradient descent, until the maximum number of training iterations is reached or the joint tracking error is less than the error threshold.
  2. The robot feedforward torque compensation method according to claim 1, characterized in that step S1 specifically comprises:
    S11: establishing the action-value neural network with one input layer, N hidden layers (N >= 2) and one output layer, with full connections between different layers;
    S12: setting the input of the input layer to be the state of the robot;
    S13: setting the output of the output layer to be the values of all actions;
    S14: using the linear activation function f(x) = x for the first layer of the action-value neural network, and the Sigmoid function f(x) = 1/(1 + e^(-x)) for the other layers;
    S15: setting the initialization parameters of the action-value neural network, wherein the weights are initialized with Var(W) = 2/(n_in + n_out), n_in being the number of neurons in the preceding layer and n_out the number of neurons in the following layer, and the biases are initialized with Var(b) = 0.
  3. The robot feedforward torque compensation method according to claim 2, characterized in that step S12 specifically comprises: denoting the state at the n-th time as s_n and the current time as t, so that the state at the current time is s_t, and setting s_t = (q_t, q̇_t, q̈_t, e_t), where q_t, q̇_t, q̈_t and e_t respectively represent the joint position, joint velocity, joint acceleration and joint tracking error at the current time.
  4. The robot feedforward torque compensation method according to claim 3, characterized in that step S13 specifically comprises: setting 9 output actions, namely Q1, Q2, Q3, Q4, Q5, Q6, Q7, Q8, Q9, which respectively correspond to the following changes of the robot feedforward torque: "decrease 4%", "decrease 3%", "decrease 2%", "decrease 1%", "unchanged", "increase 1%", "increase 2%", "increase 3%", "increase 4%", where each percentage a% is a percentage of the rated motor torque.
  5. The robot feedforward torque compensation method according to claim 1, characterized in that step S2 specifically comprises: using a sinusoidal velocity trajectory of the robot joint as the training trajectory of the neural network.
  6. The robot feedforward torque compensation method according to claim 5, characterized in that the sinusoidal velocity trajectory is generated by randomly producing sinusoidal velocity signals of different frequencies, with a maximum amplitude of 6000 r/min and a frequency range of [0.1, 1] Hz.
  7. The robot feedforward torque compensation method according to claim 4, characterized in that the detailed process of selecting the corresponding feedforward torque increment in step S3 is: generating a random number ε_a in (0, 1) and judging whether ε_a > ε, where ε is the greedy selection probability; if so, randomly selecting an action a_t, otherwise selecting the action with the maximum value, i.e.:
    a_t = argmax(Q1_t, Q2_t, Q3_t, Q4_t, Q5_t, Q6_t, Q7_t, Q8_t, Q9_t),
    where Q1_t, Q2_t, ..., Q9_t are the values of the corresponding 9 output actions at the current time.
  8. The robot feedforward torque compensation method according to claim 1, characterized in that the immediate return of the robot at the current time in step S3 is denoted r_t and is calculated as:
    r_t = -|e_{t+1}|
    where e_{t+1} is the joint tracking error at the next time, after the selected action has taken effect.
  9. The robot feedforward torque compensation method according to claim 1, characterized in that training the action-value neural network by stochastic gradient descent in step S5 specifically comprises:
    S51: calculating the return of each sample, the sample return y_j being:
    y_j = r_j + γ · max_{a'} Q(s'_j, a')
    where r_j is the immediate return after action a_j is executed, γ is the discount factor, s'_j is the state after action a_j is executed, and Q(s'_j, a') is the Q value of action a' in state s'_j;
    S52: defining the loss function loss and using stochastic gradient descent to update the weights and biases of the action-value network, the loss function being loss = (y_j - Q(s_j, a_j))^2.
CN201810178273.3A 2018-03-05 2018-03-05 Robot feedforward torque compensation method Active CN108393892B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810178273.3A CN108393892B (en) 2018-03-05 2018-03-05 Robot feedforward torque compensation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810178273.3A CN108393892B (en) 2018-03-05 2018-03-05 Robot feedforward torque compensation method

Publications (2)

Publication Number Publication Date
CN108393892A 2018-08-14
CN108393892B 2020-07-24

Family

ID=63092202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810178273.3A Active CN108393892B (en) 2018-03-05 2018-03-05 Robot feedforward torque compensation method

Country Status (1)

Country Link
CN (1) CN108393892B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109397046A (en) * 2018-12-10 2019-03-01 佳奕筱安(上海)机器人科技有限公司 Milling robot and its application method based on electric current loop power control
CN109605377A * 2019-01-21 2019-04-12 厦门大学 Robot joint motion control method and system based on reinforcement learning
CN110569588A (en) * 2019-08-29 2019-12-13 华中科技大学 Industrial robot complete machine performance estimation method based on feedforward neural network
CN110580055A (en) * 2019-09-10 2019-12-17 深圳慧源创新科技有限公司 action track identification method and mobile terminal
CN110861090A (en) * 2019-12-03 2020-03-06 泉州华中科技大学智能制造研究院 Torque feedforward control system and method
CN111030299A (en) * 2019-12-16 2020-04-17 南方电网科学研究院有限责任公司 Side channel-based power grid embedded terminal safety monitoring method and system
CN111639749A (en) * 2020-05-25 2020-09-08 上海智殷自动化科技有限公司 Industrial robot friction force identification method based on deep learning
CN112545652A (en) * 2020-12-02 2021-03-26 哈尔滨工业大学 High-precision control method for minimally invasive surgery robot flexible wire transmission surgical instrument
CN113110493A (en) * 2021-05-07 2021-07-13 北京邮电大学 Path planning equipment and path planning method based on photonic neural network
CN114310917A (en) * 2022-03-11 2022-04-12 山东高原油气装备有限公司 Joint track error compensation method for oil pipe transfer robot

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06274227A (en) * 1993-03-19 1994-09-30 Nippon Telegr & Teleph Corp <Ntt> Method for calculating compensation torque of controlled target
CN102378669A (en) * 2009-01-30 2012-03-14 麻省理工学院 Model-based neuromechanical controller for a robotic leg
CN105137967A (en) * 2015-07-16 2015-12-09 北京工业大学 Mobile robot path planning method with combination of depth automatic encoder and Q-learning algorithm
CN106313044A (en) * 2016-09-20 2017-01-11 华南理工大学 Feedforward torque compensating method used for industrial robot
CN107065881A * 2017-05-17 2017-08-18 清华大学 Robot global path planning method based on deep reinforcement learning
US20170252922A1 (en) * 2016-03-03 2017-09-07 Google Inc. Deep machine learning methods and apparatus for robotic grasping

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06274227A (en) * 1993-03-19 1994-09-30 Nippon Telegr & Teleph Corp <Ntt> Method for calculating compensation torque of controlled target
CN102378669A (en) * 2009-01-30 2012-03-14 麻省理工学院 Model-based neuromechanical controller for a robotic leg
CN105137967A (en) * 2015-07-16 2015-12-09 北京工业大学 Mobile robot path planning method with combination of depth automatic encoder and Q-learning algorithm
US20170252922A1 (en) * 2016-03-03 2017-09-07 Google Inc. Deep machine learning methods and apparatus for robotic grasping
CN106313044A (en) * 2016-09-20 2017-01-11 华南理工大学 Feedforward torque compensating method used for industrial robot
CN107065881A * 2017-05-17 2017-08-18 清华大学 Robot global path planning method based on deep reinforcement learning

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109397046A (en) * 2018-12-10 2019-03-01 佳奕筱安(上海)机器人科技有限公司 Milling robot and its application method based on electric current loop power control
CN109605377A * 2019-01-21 2019-04-12 厦门大学 Robot joint motion control method and system based on reinforcement learning
CN110569588B (en) * 2019-08-29 2021-04-20 华中科技大学 Industrial robot complete machine performance estimation method based on feedforward neural network
CN110569588A (en) * 2019-08-29 2019-12-13 华中科技大学 Industrial robot complete machine performance estimation method based on feedforward neural network
CN110580055A (en) * 2019-09-10 2019-12-17 深圳慧源创新科技有限公司 action track identification method and mobile terminal
CN110580055B (en) * 2019-09-10 2023-02-10 深圳慧源创新科技有限公司 Action track identification method and mobile terminal
CN110861090A (en) * 2019-12-03 2020-03-06 泉州华中科技大学智能制造研究院 Torque feedforward control system and method
CN111030299A (en) * 2019-12-16 2020-04-17 南方电网科学研究院有限责任公司 Side channel-based power grid embedded terminal safety monitoring method and system
CN111639749A (en) * 2020-05-25 2020-09-08 上海智殷自动化科技有限公司 Industrial robot friction force identification method based on deep learning
CN112545652B (en) * 2020-12-02 2022-07-19 哈尔滨工业大学 High-precision off-line control method for flexible wire transmission surgical instrument of minimally invasive surgical robot
CN112545652A (en) * 2020-12-02 2021-03-26 哈尔滨工业大学 High-precision control method for minimally invasive surgery robot flexible wire transmission surgical instrument
CN113110493A (en) * 2021-05-07 2021-07-13 北京邮电大学 Path planning equipment and path planning method based on photonic neural network
CN114310917A (en) * 2022-03-11 2022-04-12 山东高原油气装备有限公司 Joint track error compensation method for oil pipe transfer robot
CN114310917B (en) * 2022-03-11 2022-06-14 山东高原油气装备有限公司 Oil pipe transfer robot joint track error compensation method

Also Published As

Publication number Publication date
CN108393892B (en) 2020-07-24

Similar Documents

Publication Publication Date Title
CN108393892A Robot feedforward torque compensation method
Su et al. Credit assigned CMAC and its application to online learning robust controllers
LU501897B1 (en) Vehicle speed tracking control method, device and equipment, and storage medium
CN112904728B (en) Mechanical arm sliding mode control track tracking method based on improved approach law
CN108942924A (en) Model uncertainty mechanical arm motion control method based on multilayer neural network
CN107608209A Feedforward and closed-loop composite control method and system for piezoelectric ceramic actuators
CN101508112B (en) Acquisition method of three freedom-degree transportation industrial robot multiple-objective optimization design parameter
CN107577146B (en) Neural network self-adaptive control method of servo system based on friction integral approximation
CN110083167A Path following method and device for a mobile robot
CN108388114A Composite control method for a flexible mechanical arm based on output redefinition
CN113110059B (en) Control method for actual tracking of single-link mechanical arm system based on event triggering
CN111047085A (en) Hybrid vehicle working condition prediction method based on meta-learning
CN113091768B (en) MIMU integral dynamic intelligent calibration compensation method
CN108445742A Intelligent PID control method for a gas suspension platform
CN110181510A Mechanical arm trajectory tracking control method based on time delay estimation and fuzzy logic
CN107765548B (en) Launching platform high-precision motion control method based on double observers
Chiang The velocity control of an electro‐hydraulic displacement‐controlled system using adaptive fuzzy controller with self‐tuning fuzzy sliding mode compensation
Deng et al. The Smith-PID Control of Three-Tank-System Based on Fuzzy Theory.
CN111241749B (en) Permanent magnet synchronous motor chaos prediction method based on reserve pool calculation
CN106059412A (en) Method for controlling rotating speed of separately excited DC motor based on belief rule base reasoning
CN114859725A (en) Self-adaptive event trigger control method and system for nonlinear system
CN113741469A (en) Output feedback trajectory tracking control method with preset performance and dead zone input constraint for electromechanical system
CN110187633A (en) A kind of BP ~ RNN modified integral algorithm of PID towards road simulation dynamometer
CN112684706B (en) Control method of direct-drive gantry motion platform
Dakheel Speed control of separately exited DC motor using artificial neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant