CN114111772A - Underwater robot soft operation hand position tracking method based on data gloves - Google Patents

Underwater robot soft operation hand position tracking method based on data gloves

Info

Publication number
CN114111772A
CN114111772A
Authority
CN
China
Prior art keywords
time
state
underwater
value
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111429539.5A
Other languages
Chinese (zh)
Other versions
CN114111772B (en)
Inventor
曾庆军
杨淦华
邱海洋
张永林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University of Science and Technology
Original Assignee
Jiangsu University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University of Science and Technology filed Critical Jiangsu University of Science and Technology
Priority to CN202111429539.5A priority Critical patent/CN114111772B/en
Publication of CN114111772A publication Critical patent/CN114111772A/en
Application granted granted Critical
Publication of CN114111772B publication Critical patent/CN114111772B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30Assessment of water resources

Abstract

The invention discloses a data-glove-based method for tracking the position of the soft operating hand of an underwater robot, aiming at the accuracy of motion capture with the data glove and at the inaccuracy of position tracking of the underwater soft operating hand. To address inaccurate joint angles when the data glove captures the motion of the human hand, an attitude fusion algorithm is applied to the attitude acquisition of the data glove, fusing the data of a three-axis magnetometer, a three-axis accelerometer and a three-axis gyroscope to solve the attitude angle. For tracking the hand position with the underwater soft operating hand, the invention adopts dynamic matrix predictive control: a trajectory tracking error performance optimization index is designed, tracking error constraint conditions are established, and the constrained performance optimization problem is converted into a quadratic programming problem for the control increment, so that predictive control satisfying the error constraints over the time domain is obtained, improving the accuracy of data glove motion capture and of position tracking of the underwater soft operating hand.

Description

Underwater robot soft operation hand position tracking method based on data gloves
Technical Field
The invention relates to the technical field of data glove motion capture and underwater soft operating hand position tracking, in particular to a data-glove-based method for tracking the position of the soft operating hand of an underwater robot.
Background
In recent years, with the development of tethered remotely operated vehicle (ROV) operation technology, ROVs have taken on more and more complex tasks, and human-machine interaction for underwater robots has become a research hotspot and a future direction of underwater robot operation technology. A data glove developed on the basis of motion capture technology can acquire the posture of the human fingers and convert the collected finger bending angles and arm posture into rotation angles of the manipulator's drive motors, thereby enabling remote control of the manipulator's posture. Motion capture technology converts the motion posture of the human body in three-dimensional space into digital information and is widely used in human-computer interaction fields such as computer animation, sports and education, interactive games, virtual reality and medical research. Classified by capture principle, the commonly used technologies fall into three broad categories: optical, mechanical and inertial-sensor-based. Optical motion capture requires optical markers to be placed on the captured object in advance; cameras then track the markers, and the recorded marker videos are analyzed to complete high-precision motion capture. However, optical motion capture systems are costly overall and place strict requirements on the lighting and reflection conditions of the environment, so they are mostly suited to scenarios such as 3D film shooting;
At present, the approach closest to this work is a hand pose estimation method and system based on the fusion of visual and inertial information. It first constructs hand gesture data. Feature extraction is then performed: visual features are extracted from the color images acquired by AR glasses through a ResNet50 residual network to obtain an image feature vector; inertial features are extracted by a convolutional neural network to obtain an inertial feature vector; and the two vectors are concatenated to obtain a fused feature vector. 2D and 3D hand pose estimation is then performed. After network training and testing, the trained hand pose estimation model is deployed to the AR glasses, and real-time hand pose estimation is carried out by calling the color camera and the data glove.
Although this closest technical scheme has obvious advantages and strong adaptability to uncertain internal parameters and the external environment, the accelerometer readily produces high-frequency noise during movement and has poor dynamic characteristics, while the gyroscope has poor low-frequency characteristics and its integration error grows over time. Moreover, a three-axis accelerometer and a three-axis gyroscope alone in an inertial measurement unit cannot measure the lateral movement of the fingers, so the algorithm cannot accurately capture the exact position of the hand and is difficult to apply in practical engineering.
Meanwhile, the target grasped by the underwater soft operating hand is often of high value, so the movement position of the underwater soft operating hand is of great significance for keeping the target safe and intact. With common sliding mode control or PID control, the position can exceed the bearing capacity of the target, so that uncoordinated motion injures both the target and the underwater soft operating hand; and if the position state exceeds the limits of the spatial environment, the underwater soft operating hand may also collide with surrounding objects.
Disclosure of Invention
The invention aims to provide a data-glove-based method for tracking the position of the soft operating hand of an underwater robot, so as to improve the ability of the underwater soft operating hand to grasp high-value targets and to protect its integrity. For tracking control of the underwater soft operating hand toward the hand position, the invention provides an improved dynamic matrix predictive control algorithm to improve the robustness and anti-interference capability of the system, and solves the position error between the underwater soft operating hand and the human hand by adding limiting conditions and rolling optimization. The data glove motion capture and underwater soft operating hand position tracking method designed by the invention enables a tethered underwater robot carrying the underwater soft operating hand to detect and collect high-value underwater objects.
The purpose of the invention is realized by the following technical scheme:
a method for tracking the position of a soft operating hand of an underwater robot based on data gloves comprises the following steps:
Step 1, building a complementary filter to eliminate the high-frequency noise of the accelerometer and magnetometer and the low-frequency noise of the gyroscope, designing a discrete Kalman filter, solving the quaternion differential equation with the fourth-order Runge-Kutta method, obtaining the attitude angle expressed as Euler angles at the current moment through the conversion relation between the quaternion and the Euler angles, and building the state equation of the process;
Step 2, reading the current gyroscope data into the state equation and the observation equation, calculating the prior estimate of the state quantity and the attitude angle data computed from the accelerometer and the magnetometer, and calculating the residual of the measurement process;
Step 3, calculating the Kalman gain, updating the state estimate and error covariance of the system, waiting for the sampling time Δt, and returning to the first step to estimate the angle at the next moment;
Step 4, building an incremental kinematic prediction model for control of the underwater soft operating hand, and designing a dynamic matrix predictive controller: within the time interval [0, t], the positions of the human hand joints captured by the data glove at n moments
Figure BDA0003379631240000031
are taken as the coordinate values at the current moment, and the position ψ of the underwater soft operating hand joint is taken as the coordinate value of the previous period;
Step 5, from the positions of the human hand joints captured by the data glove
Figure BDA0003379631240000032
subtracting the position ψ of the underwater soft operating hand joint to obtain the increment, and correcting the kinematic model of the underwater soft operating hand through the model parameters;
Step 6, inputting the value ψ into the kinematic model of the underwater soft operating hand, predicting the coordinate of the underwater soft operating hand at the next moment
Figure BDA0003379631240000033
then repeating these steps in the next period; by setting the limiting conditions and performing rolling optimization, position tracking control of the underwater soft operating hand is realized.
The object of the invention can be further achieved by the following technical measures:
further, the step (1) specifically comprises:
Step (1.1): establishing the quaternion differential equation from the angular velocity data output by the gyroscope, the calculation formula being:
dQ/dt = (1/2)·Q ⊗ (0, ωx, ωy, ωz) (1)
namely:
dq0/dt = (1/2)(−q1·ωx − q2·ωy − q3·ωz)
dq1/dt = (1/2)( q0·ωx + q2·ωz − q3·ωy)
dq2/dt = (1/2)( q0·ωy − q1·ωz + q3·ωx)
dq3/dt = (1/2)( q0·ωz + q1·ωy − q2·ωx) (2)
wherein: ω denotes the angular velocity, ωx, ωy, ωz represent its components, Q represents the attitude quaternion, and q0, q1, q2, q3 represent the quaternion components;
Step (1.2): solving the above differential equation by the fourth-order Runge-Kutta method to obtain the attitude quaternion at the current moment;
Step (1.3): the attitude angle expressed as Euler angles at the current moment is obtained through the conversion relation between the quaternion and the Euler angles.
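As an illustration of steps (1.1)–(1.3), the following is a minimal Python sketch of one fourth-order Runge-Kutta step for the quaternion differential equation and of the quaternion-to-Euler conversion; the scalar-first quaternion convention, the ZYX Euler sequence and the numerical values are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def quat_dot(q, w):
    """Quaternion derivative dq/dt = 0.5 * q ⊗ (0, w) for body angular rates w."""
    q0, q1, q2, q3 = q
    wx, wy, wz = w
    return 0.5 * np.array([
        -q1 * wx - q2 * wy - q3 * wz,
         q0 * wx + q2 * wz - q3 * wy,
         q0 * wy - q1 * wz + q3 * wx,
         q0 * wz + q1 * wy - q2 * wx,
    ])

def rk4_step(q, w, h):
    """One fourth-order Runge-Kutta step of length h, followed by renormalization."""
    k1 = quat_dot(q, w)
    k2 = quat_dot(q + 0.5 * h * k1, w)
    k3 = quat_dot(q + 0.5 * h * k2, w)
    k4 = quat_dot(q + h * k3, w)
    q_next = q + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return q_next / np.linalg.norm(q_next)

def quat_to_euler(q):
    """Roll, pitch, yaw (rad) from a unit quaternion (ZYX convention assumed)."""
    q0, q1, q2, q3 = q
    roll  = np.arctan2(2 * (q0 * q1 + q2 * q3), 1 - 2 * (q1**2 + q2**2))
    pitch = np.arcsin(np.clip(2 * (q0 * q2 - q3 * q1), -1.0, 1.0))
    yaw   = np.arctan2(2 * (q0 * q3 + q1 * q2), 1 - 2 * (q2**2 + q3**2))
    return roll, pitch, yaw

q = np.array([1.0, 0.0, 0.0, 0.0])                       # initial attitude quaternion
q = rk4_step(q, w=np.array([0.1, 0.0, 0.05]), h=0.01)    # one gyroscope sample
print(quat_to_euler(q))                                  # current Euler angles
```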
Further, the step (2) specifically comprises:
step (2.1): reading current gyroscope data in a state equation and an observation equation;
cabled underwater Robot (ROV) data glove kalman filter equation of state:
θk+1 = θk + [ωk − ωerr_k]·Δt + vk (3)
cabled underwater Robot (ROV) data glove kalman filter observation equation:
yk+1 = θk+1 + εk+1 (4)
wherein: θk is the attitude angle of the target at time k, ωk is the angular velocity output by the gyroscope at time k, ωerr_k is the error of the angular velocity output by the gyroscope at time k, vk is the input noise, Δt is the sampling period of the system, θk+1 is the angle value at time k+1 estimated from the gyroscope data given the angle θk at time k, εk is a random signal, and yk+1 is the observed value at time k+1;
step (2.2): because the error generated by the gyroscope for attitude calculation mainly comes from integral accumulation and the angular velocity measured by the gyroscope at the current moment is relatively stable, the angular velocity measurement error of the gyroscope can be regarded as a constant, namely
ωerr_k+1=ωerr_k (5)
Step (2.3): for the system, θkAnd ωerr_kFor the observed state of the system, ωkAnd if the system input variable is the system input variable, the system state matrix equation established by the gyroscope is as follows:
xk+1 = A·xk + B·ωk + vk, with xk = [θk, ωerr_k]T (6)
wherein: a and B are coefficient matrices, xk+1Is the system state vector at time k +1, vkIs a random signal, normally distributed white noise, vk~N(0,Q);
Step (2.4): the state matrix of the system can be obtained from the state equations (1) and (4) of the system as follows:
A = (1  −Δt; 0  1),  B = (Δt; 0) (7)
step (2.5): from the state matrix (7) of the system and the observation equation (4), a gain matrix of state quantities to observed quantities is obtained:
H=(1 0) (8)
wherein: h is the gain of the observed quantity;
Step (2.6): the attitude angle data yk from the accelerometer and magnetometer give the measurement equation:
yk = H·xk + εk (9)
wherein: xk is the system state vector at time k;
Step (2.7): residual of the measurement process:
sk = yk − H·x̂k∣k-1 (10)
wherein sk is the residual, H·x̂k∣k-1 is the predicted measurement, and x̂k∣k-1 is the prior estimate;
step (2.8): error covariance of prior estimate:
xk=Axk-1+BUk+vk (11)
wherein: a and B are coefficient matrices, UkInputting the quantity for the system;
according to the state equation, the system can make a prior estimate of the current state from the state xk-1 at time k-1; this is the prediction part of the Kalman filter, and the system state value obtained at this point is the prior estimate, which contains a certain error;
step (2.9): at this time, the measurement equation is obtained from (9):
yk = H·xk + εk (12)
wherein: h is a coefficient matrix, xkIs the system state vector at time k, εkIs a random signal, normally distributed white noise, epsilonk~N(0,R);
Step (2.10): the prior estimation of the k-time of the process based on the system k-1 time is now given by
Figure BDA0003379631240000055
The posterior estimate of the corrected state of which is made using the measurement equation is
Figure BDA0003379631240000056
Then there are:
Figure BDA0003379631240000057
in which the variable y is measuredkAnd the difference between their predictions
Figure BDA0003379631240000061
The method is an innovation of the measuring process and reflects the deviation degree between the predicted value and the true value. KkIs the kalman gain, whose role is to minimize the posterior estimation error covariance of the process;
Pk∣k-1=APk-1AT+Q (14)
wherein: pk∣k-1Is the error covariance of the a priori estimates.
Further, the step (3) specifically comprises:
step (3.1): kalman gain Kk
Kk=Pk∣k-1HT[HPk∣k-1HT+R]-1 (15)
Wherein: r is the radius of the adaptive receiving circle;
Step (3.2): updating the state, i.e. the current state
x̂k = x̂k∣k-1 + Kk·(yk − H·x̂k∣k-1) (16)
and the error covariance in this state is:
step (3.3): error covariance Pk
Pk=(1-KkH)Pk∣k-1 (17)
Step (3.4): and waiting for the sampling time, and returning to execute the first step to estimate the angle at the next moment.
Further, the step (4) specifically comprises:
establishing a finger unidirectional bending kinematics model:
Figure BDA0003379631240000064
Figure BDA0003379631240000065
Figure BDA0003379631240000066
Figure BDA0003379631240000067
wherein: jPij is the connecting point at the center of each joint hinge;
Figure BDA0003379631240000068
is the homogeneous coordinate transformation matrix expressing the transformation of the connecting point on the i-th finger from the j coordinate system to the j-1 coordinate system; θij is the rotation angle of the unidirectional bending joint; Lij is the length of each joint link; s and c are shorthand for sin and cos; xi and yi respectively represent the coordinate values of the origin of each finger coordinate system in the palm coordinate system.
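Since the transformation matrices of equations (18)–(21) are not reproduced here, the following Python sketch only illustrates the general construction: a finger whose phalanges bend unidirectionally in a plane, chained by homogeneous transforms from each joint frame to the previous one; the planar structure, the link lengths and the joint angles are assumptions for illustration.

```python
import numpy as np

def planar_transform(theta, L):
    """Homogeneous transform from frame j to frame j-1: rotation by theta about the
    joint, then translation along the link of length L."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, L * c],
                     [s,  c, L * s],
                     [0.0, 0.0, 1.0]])

def fingertip_position(thetas, lengths, origin=(0.0, 0.0)):
    """Chain the joint transforms and return the fingertip position (x, y) in the
    palm coordinate frame, whose origin for this finger is `origin`."""
    T = np.array([[1.0, 0.0, origin[0]],
                  [0.0, 1.0, origin[1]],
                  [0.0, 0.0, 1.0]])
    for theta, L in zip(thetas, lengths):
        T = T @ planar_transform(theta, L)
    return T[0, 2], T[1, 2]

# Example: three 30 mm phalanges bent by 20, 30 and 40 degrees (assumed values)
x, y = fingertip_position(np.deg2rad([20.0, 30.0, 40.0]), [0.03, 0.03, 0.03])
```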
Further, the step (5) specifically comprises:
Step (5.1): performing first-order Taylor expansion linearization of the finger unidirectional bending kinematic model at the expected path point
Figure BDA0003379631240000071
to obtain the following prediction model:
Figure BDA0003379631240000072
wherein:
Figure BDA0003379631240000073
Figure BDA0003379631240000074
A. b is a Jacobian matrix;
step (5.2): carrying out linearization and discretization on the model to obtain a state space model in a control increment form:
Figure BDA0003379631240000075
Figure BDA0003379631240000076
wherein:
Figure BDA0003379631240000077
Figure BDA0003379631240000078
Figure BDA0003379631240000079
Figure BDA00033796312400000710
γ represents an output amount.
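A rough Python sketch of step (5): the nonlinear kinematics ψ̇ = f(ψ, u) are linearized at an expected path point through numerical Jacobians and then discretized for the control-increment formulation; the example model f, the reference point and the forward-Euler discretization are illustrative assumptions.

```python
import numpy as np

def numerical_jacobian(f, x0, u0, wrt="x", eps=1e-6):
    """Finite-difference Jacobian of f(x, u) with respect to x or u."""
    base = f(x0, u0)
    var = x0 if wrt == "x" else u0
    J = np.zeros((base.size, var.size))
    for i in range(var.size):
        pert = var.copy()
        pert[i] += eps
        J[:, i] = ((f(pert, u0) if wrt == "x" else f(x0, pert)) - base) / eps
    return J

def incremental_model(f, psi_ref, u_ref, dt):
    """Discretized Jacobians used in the control-increment (delta-u) state space."""
    A = numerical_jacobian(f, psi_ref, u_ref, wrt="x")
    B = numerical_jacobian(f, psi_ref, u_ref, wrt="u")
    Ad = np.eye(psi_ref.size) + dt * A          # forward-Euler discretization
    Bd = dt * B
    return Ad, Bd

# Example with an assumed single-joint model psi_dot = u (a pure integrator)
f = lambda psi, u: u
Ad, Bd = incremental_model(f, psi_ref=np.array([0.3]), u_ref=np.array([0.0]), dt=0.02)
```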
Further, the step (6) specifically comprises:
step (6.1): inputting the psi value into the underwater soft body operation hand kinematic model, predicting the predicted value of the underwater soft body operation hand coordinate at the next moment
Figure BDA00033796312400000711
Then repeating the step in the next period;
step (6.2): setting constraint conditions, and limiting the control increment, the control quantity and the output quantity in the current time and the prediction time domain as follows:
Δumin ≤ Δu(k+i) ≤ Δumax, i=0,1,2,…,NC-1 (25)
umin ≤ u(k+i) ≤ umax, i=0,1,2,…,NC-1 (26)
γmin≤γ(k+i)≤γmax,i=0,1,2,…,Np (27)
wherein: n is a radical ofCRepresenting the control time domain, NpWhich represents the prediction time domain, is,
Figure BDA00033796312400000714
representing the control increment at time k + i,
Figure BDA00033796312400000715
represents the control quantity at the time k + i, γ (k + i) represents the output quantity at the time k + i,
Figure BDA00033796312400000716
which represents the minimum value of the control increment,
Figure BDA0003379631240000081
which represents the maximum value of the control increment,
Figure BDA0003379631240000082
the minimum value of the control amount is represented,
Figure BDA0003379631240000083
and the maximum value of the control quantity is selected according to the bending performance of the finger. Gamma rayminIndicating the minimum value of the output, gammamaxRepresenting a maximum output;
Step (6.3): performing rolling optimization to minimize the deviation of the controlled variable from the expected value over a future period of time:
J = Σ(i=1..Np) ‖γ(k+i) − γref(k+i)‖²Q + Σ(i=0..NC-1) ‖Δu(k+i)‖²R (28)
where γ represents the output value at the current time, γref represents the expected value after adaptive line-of-sight processing, Δu represents the control increment, Q and R are weight matrices chosen as diagonal matrices whose main-diagonal entries are integers less than 100, and J represents the performance index.
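A minimal Python sketch of the constrained rolling optimization of steps (6.2)–(6.3), with a single joint treated as a scalar system; the first-order response model, the horizons, the weights and the actuator limits are illustrative assumptions rather than values from the patent.

```python
import numpy as np
from scipy.optimize import minimize

a, b = 0.9, 0.1                          # assumed discrete-time response coefficients
Nc, Np = 3, 8                            # control and prediction time domains
Qw, Rw = 10.0, 1.0                       # output-error and control-increment weights
du_max, u_min, u_max = 0.05, 0.0, 1.5    # assumed increment and control limits (rad)

def cost(du, gamma0, u_prev, gamma_ref):
    """Performance index J = sum Q*(gamma - gamma_ref)^2 + sum R*du^2 over the horizons."""
    J, gamma, u = 0.0, gamma0, u_prev
    for i in range(Np):
        if i < Nc:                                      # moves only inside the control horizon
            u = np.clip(u + du[i], u_min, u_max)        # crude handling of the control limit
        gamma = a * gamma + b * u                       # predict the next output
        J += Qw * (gamma - gamma_ref) ** 2
    return J + Rw * np.sum(du ** 2)

def mpc_step(gamma0, u_prev, gamma_ref):
    """Solve the bounded minimization and apply only the first increment (receding horizon)."""
    res = minimize(cost, np.zeros(Nc), args=(gamma0, u_prev, gamma_ref),
                   bounds=[(-du_max, du_max)] * Nc, method="SLSQP")
    return u_prev + res.x[0]

# One closed-loop step: drive the joint output toward the angle captured by the data glove
u = mpc_step(gamma0=0.2, u_prev=0.3, gamma_ref=0.8)
```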
Compared with the prior art, the invention has the beneficial effects that:
1. The underwater robot data glove system has better anti-interference capability in motion capture: the first-order low-pass filter in the complementary filter effectively suppresses the high-frequency noise produced by the accelerometer during motion, and the first-order high-pass filter effectively suppresses the low-frequency noise of the gyroscope and removes the drawback that the integration error grows over time (a sketch is given after this list).
2. The underwater robot data glove system has more accurate acquisition capability in motion capture: a three-axis magnetometer is introduced to form a nine-axis inertial sensor together with the three-axis accelerometer and the three-axis gyroscope, which adds the ability to capture lateral hand movement while eliminating noise more effectively, so that motion capture is more accurate.
3. The underwater robot data glove system has better solving capability in motion capture: an attitude fusion algorithm is introduced that combines the advantages of the individual sensors, and together with the Kalman filtering algorithm the sensor data are solved more accurately, yielding accurate hand positions.
4. The underwater soft operating hand has more accurate position tracking capability when grasping a target: dynamic matrix predictive control is adopted, and the difference between the hand joint position and the underwater soft operating hand joint position is continuously reduced under the imposed limits, so that the joint position of the underwater soft operating hand approaches the hand position, ensuring grasping accuracy and reducing misoperation.
5. The controller of the underwater soft operating hand is designed with dynamic matrix predictive control; this method uses a rolling optimization strategy, has better dynamic control performance, and the closed-loop control system designed in this way has strong anti-interference capability.
6. The invention combines the data glove, the attitude fusion algorithm and the dynamic matrix control algorithm to perform position tracking control of the underwater soft operating hand, and can efficiently grasp high-value underwater targets while protecting their integrity.
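As a minimal illustration of the complementary filter described in point 1 above, the following Python sketch blends the high-passed integral of the gyroscope rate with the low-passed accelerometer/magnetometer angle in the usual first-order form; the time constant and sampling period are assumed values.

```python
class ComplementaryFilter:
    def __init__(self, dt, tau=0.5):
        self.dt = dt
        self.alpha = tau / (tau + dt)      # first-order low/high-pass blending coefficient
        self.angle = 0.0

    def update(self, gyro_rate, accel_mag_angle):
        """Fuse one sample: integrate the gyro rate, then blend with the accel/mag angle."""
        gyro_angle = self.angle + gyro_rate * self.dt
        self.angle = self.alpha * gyro_angle + (1.0 - self.alpha) * accel_mag_angle
        return self.angle

cf = ComplementaryFilter(dt=0.01)
angle = cf.update(gyro_rate=0.05, accel_mag_angle=0.02)   # one fused attitude sample
```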
Drawings
FIG. 1 is a block diagram of a method for tracking the position of a soft working hand of an underwater robot based on data gloves;
FIG. 2 is a flow chart of a pose fusion algorithm;
FIG. 3 is a Kalman filtering process diagram;
FIG. 4 is a flow chart of the underwater software worker position dynamic matrix predictive control.
Detailed description of the preferred embodiments
The invention is further described with reference to the following figures and specific examples.
According to the method shown in FIG. 1, a complementary filter is built to eliminate the high-frequency noise of the accelerometer and magnetometer and the low-frequency noise of the gyroscope, a discrete Kalman filter is designed, the quaternion differential equation is solved with the fourth-order Runge-Kutta method, the attitude angle expressed as Euler angles at the current moment is obtained through the conversion relation between the quaternion and the Euler angles, and the state equation of the process is built;
The quaternion differential equation is established from the angular velocity data output by the gyroscope, the calculation formula being:
dQ/dt = (1/2)·Q ⊗ (0, ωx, ωy, ωz) (29)
namely:
dq0/dt = (1/2)(−q1·ωx − q2·ωy − q3·ωz)
dq1/dt = (1/2)( q0·ωx + q2·ωz − q3·ωy)
dq2/dt = (1/2)( q0·ωy − q1·ωz + q3·ωx)
dq3/dt = (1/2)( q0·ωz + q1·ωy − q2·ωx) (30)
wherein ω denotes the angular velocity and Q denotes the attitude quaternion;
The above differential equation is solved by the fourth-order Runge-Kutta method to obtain the attitude quaternion at the current moment, and the attitude angle expressed as Euler angles at the current moment is obtained through the conversion relation between the quaternion and the Euler angles:
Qk+1 = Qk + (h/6)·(k1 + 2k2 + 2k3 + k4) (31)
wherein: h is the solution step size and ki are the Runge-Kutta coefficients.
As shown in FIG. 2, the current gyroscope data are read into the state equation and the observation equation, the prior estimate of the state quantity and the attitude angle data computed from the accelerometer and the magnetometer are calculated, and the residual of the measurement process is calculated:
1) cabled underwater Robot (ROV) data glove kalman filter equation of state:
θk+1 = θk + [ωk − ωerr_k]·Δt + vk (32)
2) cabled underwater Robot (ROV) data glove kalman filter observation equation:
yk+1 = θk+1 + εk+1 (33)
wherein: θk is the attitude angle of the target at time k, ωk is the angular velocity output by the gyroscope at time k, ωerr_k is the error of the angular velocity output by the gyroscope at time k, vk is the input noise, Δt is the sampling period of the system, and θk+1 is the angle value at time k+1 estimated from the gyroscope data given the angle θk at time k. Since the error produced by the gyroscope attitude calculation mainly comes from integral accumulation and the angular velocity measured by the gyroscope at the current moment is relatively stable, the angular velocity measurement error of the gyroscope can be regarded as a constant, i.e.
ωerr_k+1=ωerr_k (34)
For this system, θk and ωerr_k are the observed states of the system and ωk is the system input variable; the system state matrix equation established from the gyroscope is:
xk+1 = A·xk + B·ωk + vk, with xk = [θk, ωerr_k]T (35)
the state matrix of the system can be derived from the state equations (29) and (32) of the system as follows:
A = (1  −Δt; 0  1),  B = (Δt; 0) (36)
from the state matrix (35) of the system and the observation equation (32), a gain matrix of state quantities to observed quantities is available:
H=(1 0) (37)
attitude angle data y for accelerometers and magnetometerskNamely, the measurement equation is:
yk = H·xk + εk (38)
residue in the measurement process:
Figure BDA0003379631240000113
error covariance of prior estimate:
xk=Axk-1+BUk+vk (40)
wherein: a and B are coefficient matrices, xkIs the system state vector at time k, UkAs system input quantity, vkIs a random signal, normally distributed white noise, vk~N(0,Q)。
As shown in FIG. 3, according to the state equation the system can make a prior estimate of the current state from the state xk-1 at time k-1; this is the prediction part of the Kalman filter, and the system state value obtained at this point is the prior estimate, which contains a certain error;
at this time, the measurement equation is given by (37):
yk = H·xk + εk (41)
wherein: h is a coefficient matrix, xkIs the system state vector at time k, εkIs a random signal, normally distributed white noise, epsilonk~N(0,R);
The prior estimate of the process at time k based on time k-1 is denoted x̂k∣k-1, and the posterior estimate of the state corrected using the measurement equation is denoted x̂k; then:
x̂k = x̂k∣k-1 + Kk·(yk − H·x̂k∣k-1) (42)
where the difference between the measured variable yk and its prediction H·x̂k∣k-1 is the innovation of the measurement process and reflects the degree of deviation between the predicted value and the true value; Kk is the Kalman gain, whose role is to minimize the posterior estimation error covariance of the process;
wherein: pk∣k-1Is the error covariance of the a priori estimates.
Pk∣k-1=APk-1AT+Q (43)
Calculating Kalman gain, updating state estimation and error covariance of the system, waiting for sampling time delta t, and returning to execute the first step to estimate the angle at the next moment;
kalman gain Kk
Kk=Pk∣k-1HT[HPk∣k-1HT+R]-1 (44)
The state is updated, i.e. the current state
x̂k = x̂k∣k-1 + Kk·(yk − H·x̂k∣k-1) (45)
and the error covariance in this state is:
error covariance Pk
Pk=(1-KkH)Pk∣k-1 (46)
And waiting for the sampling time, and returning to execute the first step to estimate the angle at the next moment.
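Putting the above together, the following sketch shows the per-sample loop just described: predict from the gyroscope, correct with the angle computed from the accelerometer and magnetometer, then wait for the sampling period Δt before the next estimate. It reuses the AttitudeKalmanFilter class sketched after step 3; read_imu() and angle_from_accel_mag() are hypothetical stand-ins for the data glove's sensor interface, not functions of an actual library.

```python
import time
import numpy as np

dt = 0.01                                  # assumed sampling period (s)
kf = AttitudeKalmanFilter(dt)              # filter class from the earlier sketch

def read_imu():
    """Hypothetical stand-in for the glove's IMU: returns synthetic accel, gyro, mag samples."""
    return np.array([0.0, 0.1, 9.8]), np.array([0.02, 0.0, 0.0]), np.array([0.3, 0.0, 0.4])

def angle_from_accel_mag(acc, mag):
    """Attitude angle from accelerometer and magnetometer (roll from gravity shown here;
    the magnetometer would contribute the heading component)."""
    return np.arctan2(acc[1], acc[2])

for _ in range(100):                       # a short stretch of the sampling loop
    acc, gyro, mag = read_imu()
    kf.predict(gyro[0])                    # prior estimate from the angular rate
    theta = kf.update(angle_from_accel_mag(acc, mag))
    time.sleep(dt)                         # wait for the sampling time, then repeat
```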
As shown in FIG. 4, an incremental kinematic prediction model for control of the underwater soft operating hand is built and a dynamic matrix predictive controller is designed: within the time interval [0, t], the positions of the human hand joints captured by the data glove at n moments
Figure BDA0003379631240000127
are taken as the coordinate values at the current moment, and the position ψ of the underwater soft operating hand joint is taken as the coordinate value of the previous period;
establishing a finger unidirectional bending kinematics model:
Figure BDA0003379631240000128
Figure BDA0003379631240000131
Figure BDA0003379631240000132
Figure BDA0003379631240000133
wherein: jPij is the connecting point at the center of each joint hinge;
Figure BDA0003379631240000134
is the homogeneous coordinate transformation matrix expressing the transformation of the connecting point on the i-th finger from the j coordinate system to the j-1 coordinate system; θij is the rotation angle of the unidirectional bending joint; Lij is the length of each joint link; s and c are shorthand for sin and cos; xi and yi respectively represent the coordinate values of the origin of each finger coordinate system in the palm coordinate system.
From the hand joint position captured by the data glove
Figure BDA0003379631240000135
the position ψ of the underwater soft operating hand joint is subtracted to obtain the increment, and the kinematic model of the underwater soft operating hand is corrected through the model parameters;
to the finger one-way bending kinematics model at the expected path point
Figure BDA0003379631240000136
The first-order Taylor expansion linearization processing is carried out to obtain the following prediction model:
Figure BDA0003379631240000137
wherein:
Figure BDA0003379631240000138
Figure BDA0003379631240000139
A. b is a Jacobian matrix;
carrying out linearization and discretization on the model to obtain a state space model in a control increment form:
Figure BDA00033796312400001310
Figure BDA00033796312400001311
wherein:
Figure BDA00033796312400001312
Figure BDA00033796312400001313
Figure BDA00033796312400001314
Figure BDA00033796312400001315
γ represents an output amount.
The value ψ is input into the kinematic model of the underwater soft operating hand to predict the coordinate of the underwater soft operating hand at the next moment
Figure BDA0003379631240000141
Then repeating the steps in the next period to realize the position tracking control of the underwater soft manipulator;
and simultaneously setting constraint conditions, and limiting the control increment, the control quantity and the output quantity in the current time and the prediction time domain as follows:
Δumin ≤ Δu(k+i) ≤ Δumax, i=0,1,2,…,NC-1
umin ≤ u(k+i) ≤ umax, i=0,1,2,…,NC-1
γmin ≤ γ(k+i) ≤ γmax, i=0,1,2,…,Np
wherein: n is a radical ofCRepresenting the control time domain, NpWhich represents the prediction time domain, is,
Figure BDA0003379631240000145
representing the control increment at time k + i,
Figure BDA0003379631240000146
represents the control quantity at the time k + i, γ (k + i) represents the output quantity at the time k + i,
Figure BDA0003379631240000147
which represents the minimum value of the control increment,
Figure BDA0003379631240000148
which represents the maximum value of the control increment,
Figure BDA0003379631240000149
the minimum value of the control amount is represented,
Figure BDA00033796312400001410
and the maximum value of the control quantity is selected according to the bending performance of the finger. Gamma rayminIndicating the minimum value of the output, gammamaxThe output maximum value is indicated.
Performing rolling optimization to minimize the deviation of the controlled variable and the expected value in a future period of time;
J = Σ(i=1..Np) ‖γ(k+i) − γref(k+i)‖²Q + Σ(i=0..NC-1) ‖Δu(k+i)‖²R
where γ represents the output value at the current time, γref represents the expected value after adaptive line-of-sight processing, Δu represents the control increment, Q and R are weight matrices chosen as diagonal matrices whose main-diagonal entries are integers less than 100, and J represents the performance index.

Claims (7)

1. A method for tracking the position of a soft operating hand of an underwater robot based on data gloves is characterized by comprising the following steps:
step 1, building a complementary filter to eliminate the high-frequency noise of the accelerometer and magnetometer and the low-frequency noise of the gyroscope, designing a discrete Kalman filter, solving the quaternion differential equation with the fourth-order Runge-Kutta method, obtaining the attitude angle expressed as Euler angles at the current moment through the conversion relation between the quaternion and the Euler angles, and building the state equation of the process;
step 2, reading the current gyroscope data into the state equation and the observation equation, calculating the prior estimate of the state quantity and the attitude angle data computed from the accelerometer and the magnetometer, and calculating the residual of the measurement process;
step 3, calculating Kalman gain, updating state estimation and error covariance of the system, waiting for sampling time delta t, and returning to execute the first step to estimate the angle at the next moment;
step 4, building an incremental kinematic prediction model for control of the underwater soft operating hand, designing a dynamic matrix predictive controller, and, within the time interval [0, t], taking the positions of the human hand joints captured by the data glove at n moments
Figure FDA0003379631230000011
as the coordinate values at the current moment and the position ψ of the underwater soft operating hand joint as the coordinate value of the previous period;
step 5, from the positions of the human hand joints captured by the data glove
Figure FDA0003379631230000012
subtracting the position ψ of the underwater soft operating hand joint to obtain the increment, and correcting the kinematic model of the underwater soft operating hand through the model parameters;
step 6, inputting the value ψ into the kinematic model of the underwater soft operating hand, predicting the coordinate of the underwater soft operating hand at the next moment
Figure FDA0003379631230000013
then repeating this step in the next cycle, and setting limiting conditions and performing rolling optimization to realize position tracking control of the underwater soft operating hand.
2. The underwater robot soft operating hand position tracking method based on the data glove as claimed in claim 1, characterized in that in step 1 the attitude angle expressed as Euler angles at the current moment is obtained through the conversion relation between the quaternion and the Euler angles, and the specific steps of constructing the state equation of the process are as follows:
step 1.1: establishing a four-element differential equation according to angular velocity data output by the gyroscope, wherein the calculation formula is as follows:
Figure FDA0003379631230000014
namely:
Figure FDA0003379631230000015
wherein: ω denotes the angular velocity, ωx、ωy、ωzRepresenting the angular velocity component, Q representing the attitude angle,
Figure FDA0003379631230000021
representing four-element components;
step 1.2: solving the above equation of differential by a four-order Runge Kutta method to obtain four elements of the attitude at the current moment;
step 1.3: the attitude angle expressed by the Euler angle at the current time can be obtained through the conversion relation between the four elements and the Euler angle.
3. The underwater robot soft operating hand position tracking method based on the data glove as claimed in claim 1, wherein the step 2 of calculating the prior estimate of the state quantity and calculating the residual of the measurement process comprises the following specific steps:
step 2.1: reading current gyroscope data in a state equation and an observation equation;
cabled underwater Robot (ROV) data glove kalman filter equation of state:
θk+1 = θk + [ωk − ωerr_k]·Δt + vk (3)
cabled underwater Robot (ROV) data glove kalman filter observation equation:
Figure FDA0003379631230000022
wherein: thetakIs the attitude angle, ω, of the target at time kkAngular velocity, ω, of the gyroscope output at time kerr_kError of output angular velocity of gyroscope at k-th time, vkFor input noise, Δ t is the sampling period of the system, θk+1Is known as thetakAngle value, ω, at time k +1, estimated from gyroscope data in the case of angle of timekAngular velocity, ε, output by a gyroscope at time kkIs a random signal, yk+1The variable value at the moment k + 1;
step 2.2: because the error generated by the gyroscope for attitude calculation mainly comes from integral accumulation and the angular velocity measured by the gyroscope at the current moment is relatively stable, the angular velocity measurement error of the gyroscope can be regarded as a constant, namely
ωerr_k+1=ωerr_k (5)
Step 2.3: for the system, θkAnd ωerr_kFor the observed state of the system, ωkAnd if the system input variable is the system input variable, the system state matrix equation established by the gyroscope is as follows:
Figure FDA0003379631230000031
wherein: a and B are coefficient matrices, xk+1Is the system state vector at time k +1, vkIs a random signal, normally distributed white noise, vk~N(0,Q);
Step 2.4: the state matrix of the system can be obtained from the state equations (1) and (4) of the system as follows:
Figure FDA0003379631230000032
step 2.5: from the state matrix (7) of the system and the observation equation (4), a gain matrix of state quantities to observed quantities is obtained:
H=(1 0) (8)
wherein: h is the gain of the observed quantity;
step 2.6: attitude angle data y for accelerometers and magnetometerskNamely, the measurement equation is:
yk=Hxkk (9)
wherein: x is the number ofkIs a system state vector at the moment k;
step 2.7: residue in the measurement process:
Figure FDA0003379631230000033
wherein s iskAs a residue, the amount of the organic solvent,
Figure FDA0003379631230000034
in order to predict the difference between the two,
Figure FDA0003379631230000035
is a priori estimation;
step 2.8: error covariance of prior estimate:
xk=Axk-1+BUk+vk (11)
wherein: a and B are coefficient matrices, UkInputting the quantity for the system;
according to the state equation, the system can make a prior estimate of the current state from the state xk-1 at time k-1; this is the prediction part of the Kalman filter, and the system state value obtained at this point is the prior estimate, which contains a certain error;
step 2.9: at this time, the measurement equation is obtained from (9):
yk=Hxkk (12)
wherein: h is a coefficient matrix, xkIs the system state vector at time k, εkIs a random signal, normally distributed white noise, epsilonk~N(0,R);
Step 2.10: the prior estimation of the k-time of the process based on the system k-1 time is now given by
Figure FDA0003379631230000036
The posterior estimate of the corrected state of which is made using the measurement equation is
Figure FDA0003379631230000037
Then there are:
Figure FDA0003379631230000041
in which the variable y is measuredkAnd the difference between their predictions
Figure FDA0003379631230000042
The method is an innovation of the measuring process and reflects the deviation degree between the predicted value and the true value. KkIs the kalman gain, whose role is to minimize the posterior estimation error covariance of the process;
Pk∣k-1=APk-1AT+Q (14)
wherein: pk∣k-1Is the error covariance of the a priori estimates.
4. The underwater robot soft body operation hand position tracking method based on the data gloves as claimed in claim 1, wherein the step 3 of calculating kalman gain, updating state estimation and error covariance of the system, waiting for sampling time Δ t, and returning to execute the first step of estimating the angle at the next moment is specifically as follows:
step 3.1: kalman gain Kk
Kk=Pk∣k-1HT[HPk∣k-1HT+R]-1 (15)
Wherein: r is the radius of the adaptive receiving circle;
step 3.2: updating state, i.e. current state
Figure FDA0003379631230000043
And the error covariance in this state is:
Figure FDA0003379631230000044
step 3.3: error covariance Pk
Pk=(1-KkH)Pk∣k-1 (17)
Step 3.4: and waiting for the sampling time, and returning to execute the first step to estimate the angle at the next moment.
5. The underwater robot soft operating hand position tracking method based on the data glove as claimed in claim 1, characterized in that step 4 builds an incremental kinematic prediction model for control of the underwater soft operating hand and designs a dynamic matrix predictive controller: within the time interval [0, t], the positions of the human hand joints captured by the data glove at n moments
Figure FDA0003379631230000045
are taken as the coordinate values at the current moment and the position ψ of the underwater soft operating hand joint is taken as the coordinate value of the previous period; the specific steps are as follows:
establishing a finger unidirectional bending kinematics model:
Figure FDA0003379631230000046
Figure FDA0003379631230000051
Figure FDA0003379631230000052
Figure FDA0003379631230000053
wherein:jPijthe center point of each joint hinge is a connecting point;
Figure FDA0003379631230000054
a homogeneous coordinate transformation matrix for expressing the conversion of the connection point on the ith finger from a j coordinate system to a j-1 coordinate system; thetaijThe rotation angle of the unidirectional bending joint; l isijThe length of each joint rod, s and c are shorthand for sin and cos; x is the number ofiAnd yiAnd respectively representing the coordinate values of the origin of each finger coordinate system in the palm coordinate system.
6. The underwater robot soft operating hand position tracking method based on the data glove as claimed in claim 1, wherein in step 5 the human hand joint position captured by the data glove
Figure FDA0003379631230000055
minus the position ψ of the underwater soft operating hand joint gives the increment, and the kinematic model of the underwater soft operating hand is corrected through the model parameters; the specific steps are as follows:
step 5.1: performing first-order Taylor expansion linearization of the finger unidirectional bending kinematic model at the expected path point (ψR,
Figure FDA0003379631230000056
) to obtain the following prediction model:
Figure FDA0003379631230000057
wherein:
Figure FDA0003379631230000058
A. b is a Jacobian matrix;
step 5.2: carrying out linearization and discretization on the model to obtain a state space model in a control increment form:
Figure FDA0003379631230000059
Figure FDA00033796312300000510
wherein:
Figure FDA00033796312300000511
γ represents an output amount.
7. The underwater robot soft operating hand position tracking method based on the data glove as claimed in claim 1, wherein step 6 predicts the coordinate value at the next moment; the specific steps of limiting the control increment, the control quantity and the output quantity and carrying out rolling optimization are as follows:
step 6.1: inputting the psi value into the underwater soft body operation hand kinematic model, predicting the predicted value of the underwater soft body operation hand coordinate at the next moment
Figure FDA0003379631230000061
Then repeating the step in the next period;
step 6.2: setting constraint conditions, and limiting the control increment, the control quantity and the output quantity in the current time and the prediction time domain as follows:
Figure FDA0003379631230000062
Figure FDA0003379631230000063
γmin≤γ(k+i)≤γmax,i=0,1,2,…,Np (27)
wherein: n is a radical ofCRepresenting the control time domain, NpWhich represents the prediction time domain, is,
Figure FDA0003379631230000064
representing the control increment at time k + i,
Figure FDA0003379631230000065
represents the control quantity at the time k + i, γ (k + i) represents the output quantity at the time k + i,
Figure FDA0003379631230000066
which represents the minimum value of the control increment,
Figure FDA0003379631230000067
which represents the maximum value of the control increment,
Figure FDA0003379631230000068
the minimum value of the control amount is represented,
Figure FDA0003379631230000069
represents the maximum value of the control quantity, which is selected according to the bending performance of the finger; γmin represents the minimum output value and γmax represents the maximum output value;
step 6.3: the roll optimization is performed to minimize the deviation of the controlled variable from the desired value over a future period of time.
Figure FDA00033796312300000610
Where γ represents an output value at the current time, γrefRepresents the expected value after adaptive line-of-sight processing,
Figure FDA00033796312300000611
and expressing control increment, Q and R express weight matrixes, selecting a diagonal matrix with the main diagonal value being an integer and less than 100, and J expresses a performance index.
CN202111429539.5A 2021-11-29 2021-11-29 Underwater robot soft operation hand position tracking method based on data glove Active CN114111772B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111429539.5A CN114111772B (en) 2021-11-29 2021-11-29 Underwater robot soft operation hand position tracking method based on data glove

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111429539.5A CN114111772B (en) 2021-11-29 2021-11-29 Underwater robot soft operation hand position tracking method based on data glove

Publications (2)

Publication Number Publication Date
CN114111772A true CN114111772A (en) 2022-03-01
CN114111772B CN114111772B (en) 2023-10-03

Family

ID=80370970

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111429539.5A Active CN114111772B (en) 2021-11-29 2021-11-29 Underwater robot soft operation hand position tracking method based on data glove

Country Status (1)

Country Link
CN (1) CN114111772B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114454174A (en) * 2022-03-08 2022-05-10 江南大学 Mechanical arm motion capturing method, medium, electronic equipment and system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201118662A (en) * 2009-11-30 2011-06-01 Yin-Chen Chang Trace-generating systems and methods thereof
CN104764452A (en) * 2015-04-23 2015-07-08 北京理工大学 Hybrid position-posture tracking method based on inertia and optical tracking systems
US20170123487A1 (en) * 2015-10-30 2017-05-04 Ostendo Technologies, Inc. System and methods for on-body gestural interfaces and projection displays
CN106679649A (en) * 2016-12-12 2017-05-17 浙江大学 Hand movement tracking system and tracking method
CN109481226A (en) * 2018-09-27 2019-03-19 南昌大学 A kind of both hands tracking mode multiple degrees of freedom software finger gymnastic robot and application method
WO2020253854A1 (en) * 2019-06-21 2020-12-24 台州知通科技有限公司 Mobile robot posture angle calculation method
CN113332104A (en) * 2021-07-08 2021-09-03 中国科学技术大学 Recovered robot gloves of articulated type software

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201118662A (en) * 2009-11-30 2011-06-01 Yin-Chen Chang Trace-generating systems and methods thereof
CN104764452A (en) * 2015-04-23 2015-07-08 北京理工大学 Hybrid position-posture tracking method based on inertia and optical tracking systems
US20170123487A1 (en) * 2015-10-30 2017-05-04 Ostendo Technologies, Inc. System and methods for on-body gestural interfaces and projection displays
CN106679649A (en) * 2016-12-12 2017-05-17 浙江大学 Hand movement tracking system and tracking method
CN109481226A (en) * 2018-09-27 2019-03-19 南昌大学 A kind of both hands tracking mode multiple degrees of freedom software finger gymnastic robot and application method
WO2020253854A1 (en) * 2019-06-21 2020-12-24 台州知通科技有限公司 Mobile robot posture angle calculation method
CN113332104A (en) * 2021-07-08 2021-09-03 中国科学技术大学 Recovered robot gloves of articulated type software

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG X ET AL.: "Research on optimal grasping planning based on flexible wrist-hand", CHINESE JOURNAL OF ENGINEERING DESIGN, vol. 27, no. 3 *
XIE XIANWU; XIONG HEGEN; TAO YONG; LIU HUI; XU XI; SUN BAISHU: "A visual detection and recognition method for cluttered workpieces oriented to robot sorting", HIGH TECHNOLOGY LETTERS, no. 04 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114454174A (en) * 2022-03-08 2022-05-10 江南大学 Mechanical arm motion capturing method, medium, electronic equipment and system

Also Published As

Publication number Publication date
CN114111772B (en) 2023-10-03

Similar Documents

Publication Publication Date Title
Du et al. Online serial manipulator calibration based on multisensory process via extended Kalman and particle filters
CN107627303B (en) PD-SMC control method of visual servo system based on eye-on-hand structure
Du et al. Online robot kinematic calibration using hybrid filter with multiple sensors
CN110253574B (en) Multi-task mechanical arm pose detection and error compensation method
Jung et al. Upper body motion tracking with inertial sensors
Huster et al. Relative position estimation for AUVs by fusing bearing and inertial rate sensor measurements
Du et al. A novel human–manipulators interface using hybrid sensors with Kalman filter and particle filter
CN115046545A (en) Positioning method combining deep network and filtering
CN114111772B (en) Underwater robot soft operation hand position tracking method based on data glove
Tan et al. New varying-parameter recursive neural networks for model-free kinematic control of redundant manipulators with limited measurements
Qu et al. Dynamic visual tracking for robot manipulator using adaptive fading Kalman filter
CN110967017A (en) Cooperative positioning method for rigid body cooperative transportation of double mobile robots
CN113340324B (en) Visual inertia self-calibration method based on depth certainty strategy gradient
Lan et al. Action synchronization between human and UAV robotic arms for remote operation
Lei et al. Visually guided robotic tracking and grasping of a moving object
Tan et al. Uncalibrated and unmodeled image-based visual servoing of robot manipulators using zeroing neural networks
Luo et al. End-Effector Pose Estimation in Complex Environments Using Complementary Enhancement and Adaptive Fusion of Multisensor
Pankert et al. Learning Contact-Based State Estimation for Assembly Tasks
Dubus et al. Vibration control of a flexible arm for the ITER maintenance using unknown visual features from inside the vessel
Du et al. Human-manipulator interface using particle filter
Anderle et al. Sensor fusion for simple walking robot using low-level implementation of Extended Kalman Filter
CN114417738A (en) Sparse IMU real-time human body motion capture and joint stress prediction method and system
Zuo et al. Robust Visual-Inertial Odometry Based on Deep Learning and Extended Kalman Filter
Kim et al. Robot visual servo through trajectory estimation of a moving object using kalman filter
Mishra et al. Motion and parameter estimation for the robotic capture of a non-cooperative space target considering egomotion uncertainty

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant