CN108051001B - Robot movement control method and system and inertial sensing control device - Google Patents
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
Abstract
The invention relates to a robot movement control method comprising the following steps: acquiring angular velocity data from an inertial sensor and applying filtering preprocessing to the data; establishing a quaternion differential equation from the angular velocity data and solving it with the Runge-Kutta method to obtain an attitude matrix containing a target attitude angle; converting the target attitude angle from the carrier coordinate system into the navigation coordinate system; excluding target attitude angles that are not within a threshold range; and controlling the robot to act according to the target attitude angles within the threshold range. The invention controls robot movement with an inertial sensor and offers high precision, good online recognition performance and strong generality. The invention also provides a robot movement control system and an inertial sensing control device.
Description
Technical Field
The invention relates to the technical field of robot control, in particular to a robot movement control method and system and an inertial sensing control device.
Background
Human-computer interaction plays an important role in many emerging applications, and human-robot interaction is an important subject in robotics, particularly in the field of assistive living robots. People have long searched for more natural and humanized modes of man-machine interaction; controlling a robot by gestures can replace complicated and tedious program operation, allows commands to be issued to the robot simply and conveniently, and has become a research hotspot.
Gesture recognition essentially infers the operation intention of a user from small gesture movements. It belongs to the field of multi-channel interaction and draws on related subjects such as pattern recognition, robotics, image processing and computer vision. The study of gesture recognition not only advances these disciplines to some extent but also has great practical significance because of the inherent advantages of gesture actions.
At present there are two main approaches to gesture recognition. The first is vision-based gesture recognition, which was developed earlier and is relatively mature, but it places strict requirements on equipment and environment and is therefore of limited use. The second is gesture recognition based on inertial sensors, which is unaffected by environment and lighting and recognizes gestures mainly by measuring changes in acceleration and angular velocity; however, inertial sensors suffer from drift errors, so accuracy remains a problem for fine judgments such as recognizing small finger movements.
Disclosure of Invention
The invention aims to address the above problems of the prior art by providing a robot movement control method, a robot movement control system and an inertial sensing control device.
The technical scheme for solving the technical problems is as follows: a robot movement control method comprising:
S1, acquiring angular velocity data from the inertial sensor;
S2, performing online adaptive filtering preprocessing on the angular velocity data;
S3, establishing a quaternion differential equation from the preprocessed angular velocity data, and solving it with the Runge-Kutta method to obtain an attitude matrix containing the target attitude angle;
S4, converting the target attitude angle from the carrier coordinate system into the navigation coordinate system;
S5, excluding, among the target attitude angles in the navigation coordinate system, those not within the threshold range;
S6, controlling the robot to move according to the target attitude angles within the threshold range.
Another technical solution of the present invention for solving the above technical problems is as follows: a robot movement control system comprising:
the acquisition unit is used for acquiring angular velocity data of the inertial sensor;
the preprocessing unit is used for performing online adaptive filtering preprocessing on the angular velocity data;
the processing unit is used for establishing a quaternion differential equation according to the angular velocity data subjected to online adaptive filtering pretreatment, solving the quaternion differential equation by using a Runge-Kutta method and acquiring an attitude matrix comprising a target attitude angle;
a coordinate system conversion unit for converting the target attitude angle from the carrier coordinate system to a navigation coordinate system;
a screening unit, configured to exclude target attitude angles that are not within a threshold range from among the target attitude angles in the navigation coordinate system;
and the control unit is used for controlling the robot to act according to the target attitude angle within the threshold range.
Another technical solution of the present invention for solving the above technical problems is as follows: an inertial sensing control device comprises the robot movement control system in the technical scheme, and the inertial sensing control device is in wireless communication with a robot.
The invention has the following beneficial effects: for the problem of real-time processing of sensor data, the online adaptive filtering method achieves online denoising, reducing the influence of noise on the later updating of the attitude matrix so that the attitude angle is solved more accurately; the attitude matrix is described with the quaternion method and obtained by solving a differential equation, which requires little computation, gives high precision and avoids singular points; finger misoperation is excluded with thresholds, and different small gesture motions are recognized; the invention can control robot movement with an inertial sensor and offers high precision, good online recognition performance, strong generality and good application prospects.
Drawings
Fig. 1 is a schematic flowchart of a robot movement control method according to an embodiment of the present invention;
FIG. 2 is a flow chart of system signal processing according to an embodiment of the present invention;
FIG. 3 illustrates the variation of the pitch angle and the course angle when the finger moves up and down according to an embodiment of the present invention;
FIG. 4 illustrates changes in pitch and course as the finger moves left and right according to an embodiment of the present invention;
fig. 5 is a schematic block diagram of a robot movement control system according to an embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Fig. 1 shows a schematic flowchart of a robot movement control method according to an embodiment of the present invention. As shown in fig. 1, the method includes:
S1, acquiring angular velocity data from an inertial sensor, where the inertial sensor can be worn on a finger of the user;
S2, performing online adaptive filtering preprocessing on the angular velocity data;
S3, establishing a quaternion differential equation from the preprocessed angular velocity data, and solving it with the Runge-Kutta method to obtain an attitude matrix containing the target attitude angle;
S4, converting the target attitude angle from the carrier coordinate system into the navigation coordinate system;
S5, excluding, among the target attitude angles in the navigation coordinate system, those not within the threshold range;
S6, controlling the robot to move according to the target attitude angles within the threshold range.
In this embodiment, online denoising is achieved with the online adaptive filtering method to handle real-time sensor data; the attitude matrix is described with the quaternion method and obtained by solving a differential equation, which requires little computation, gives high precision and avoids singular points; finger misoperation is excluded with thresholds, and different small gesture motions are recognized. The invention can control robot movement with an inertial sensor and offers high precision, good online recognition performance, strong generality and good application prospects.
Optionally, as an embodiment of the present invention, the preprocessing of performing online adaptive filtering on the angular velocity data includes:
and S2.1, initializing state quantity and system self-adaptive parameters. In particular, the amount of the solvent to be used,
2.1.1, setting initial value of stateThe method is generally set as a 3-dimensional all-0 column vector, the dimension is the dimension of a state vector of a state process model, and the angular velocity, a first derivative of the angular velocity and a second derivative of the angular velocity are the state vectors;
2.1.2, initial value of adaptive parameter α ═ α0Andtaking any positive number, e.g. α0Value takingThe value of 3 is taken as the reference value,
S2.2, establishing a state process model with system adaptive parameters. Specifically:
2.2.1, describing the movement characteristics of the target with the following equations.
Similar to the Singer model, the second derivative of the angular velocity is modeled as a time-correlated stochastic process with non-zero mean, ω̈(t) = ā(t) + a(t), where ā(t) is the mean of the angular velocity second derivative and a(t) is a zero-mean exponentially correlated colored noise model with correlation function
R_a(τ) = σ_a² e^(−α|τ|) (1)
where t is any sampling time, τ is the correlation lag and R_a(τ) is the correlation function; σ_a² is the acceleration (second derivative) variance, and α is the maneuvering frequency reflecting the random characteristics of the change of state.
Whitening the colored noise a(t) gives
ȧ(t) = −α a(t) + w(t) (2)
where w(t) is zero-mean white noise with variance σ_w² = 2ασ_a².
Sampling with period T, the state change of the discretized system satisfies the following equation:
x(k+1) = A(k+1,k) x(k) + U(k) ā(k) + w(k) (3)
where x(k) = [ω(k), ω̇(k), ω̈(k)]^T is the 3-dimensional state column vector whose components are the angular velocity, the first derivative of the angular velocity and the second derivative of the angular velocity; x(k+1) is the state vector at time k+1 and k is the sampling time; A(k+1,k) is the state transition matrix; U(k) is the control matrix; ā(k) is the mean of the angular velocity second derivative of the target from time 0 to time k; w(k) is the process noise with mean 0 and variance Q(k). A(k+1,k), U(k) and Q(k) contain the maneuvering frequency α and the variance σ_a² of the angular velocity second derivative, and change as the system adaptive parameters change. The state transition matrix A(k+1,k) is

A(k+1,k) = [ 1   T   (αT − 1 + e^(−αT))/α² ]
           [ 0   1   (1 − e^(−αT))/α       ]
           [ 0   0   e^(−αT)               ]   (4)

The control matrix U(k) is

U(k) = [ (−T + αT²/2 + (1 − e^(−αT))/α)/α ]
       [ T − (1 − e^(−αT))/α              ]
       [ 1 − e^(−αT)                      ]   (5)

The variance Q(k) of the process noise w(k) is

Q(k) = 2ασ_a² [ q₁₁  q₁₂  q₁₃ ]
              [ q₁₂  q₂₂  q₂₃ ]
              [ q₁₃  q₂₃  q₃₃ ]   (6)

where
q₁₁ = (1 − e^(−2αT) + 2αT + 2α³T³/3 − 2α²T² − 4αT e^(−αT)) / (2α⁵)
q₁₂ = (e^(−2αT) + 1 − 2e^(−αT) + 2αT e^(−αT) − 2αT + α²T²) / (2α⁴)
q₁₃ = (1 − e^(−2αT) − 2αT e^(−αT)) / (2α³)
q₂₂ = (4e^(−αT) − 3 − e^(−2αT) + 2αT) / (2α³)
q₂₃ = (e^(−2αT) + 1 − 2e^(−αT)) / (2α²)
q₃₃ = (1 − e^(−2αT)) / (2α)   (7)
2.2.2, the measurement equation is
y(k) = H(k) x(k) + v(k) (8)
where k is the sampling time, y(k) is the measured value at time k, H(k) is the measurement matrix and x(k) is the state vector of the target at time k; v(k) is white Gaussian measurement noise with variance R, independent of the process noise w(k).
And S2.3, predicting the target moving state according to the established state process model with the system self-adaptive parameters, and acquiring a state predicted value and a state covariance predicted value.
2.3.1, completing the one-step state prediction from the established state process model with system adaptive parameters and the initial values; the prediction equation is:
x̂(k|k−1) = A(k,k−1) x̂(k−1|k−1) + U(k−1) ā(k−1) (9)
where x̂(k|k−1) is the state of the target at time k predicted at time k−1, k is the sampling time and A(k,k−1) is the state transition matrix; x̂(k−1|k−1) is the state estimate at time k−1; U(k−1) is the control matrix; ā(k−1) is the mean of the angular velocity second derivative from time 0 to k−1;
2.3.2, completing the one-step prediction of the state covariance:
P(k|k−1) = A(k,k−1) P(k−1|k−1) A^T(k,k−1) + Q(k−1) (10)
where P(k|k−1) is the state covariance at time k predicted at time k−1, k is the sampling time and | denotes conditioning; P(k−1|k−1) is the estimate of the state covariance at time k−1; A(k,k−1) is the state transition matrix; Q(k−1) is the process noise covariance.
And S2.4, updating the target moving state according to the state predicted value, the measured data value and the state covariance predicted value to obtain a state estimated value.
2.4.1, calculating the filter gain from the state covariance prediction, the measurement matrix and the measurement noise variance:
K(k) = P(k|k−1) H^T(k) [H(k) P(k|k−1) H^T(k) + R]^(−1) (11)
where K(k) is the filter gain, k is the sampling time, P(k|k−1) is the state covariance at time k predicted at time k−1, H(k) is the measurement matrix at time k, R is the variance of the Gaussian measurement white noise, and H^T(k) is the transpose of the measurement matrix at time k;
2.4.2, calculating the current state estimate of the target from the state prediction and the observed value:
x̂(k|k) = x̂(k|k−1) + K(k) [y(k) − H(k) x̂(k|k−1)] (12)
where x̂(k|k) is the state estimate at time k, x̂(k|k−1) is the state predicted at time k−1, k is the sampling time, K(k) is the filter gain at time k, y(k) is the observed value at time k and H(k) is the measurement matrix at time k;
2.4.3 calculating the estimated value of the state covariance according to the following formula;
P(k|k)=(I-K(k)H(k))P(k|k-1) (13)
where I is a 3-dimensional identity matrix, P (k | k) represents the estimated value of the state covariance at time k, k is the sampling time, K (k) is the filter gain at time k, H (k) is the measurement matrix at time k, and P (k | k-1) represents the state covariance predicted at time k-1.
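The prediction and update steps 2.3-2.4 can be sketched as a standard Kalman predict/update pair. This is a minimal numpy sketch; the function names and the toy matrices in the demo are illustrative, not from the patent.

```python
import numpy as np

def predict(x_est, P_est, A, U, a_bar, Q):
    """One-step prediction: x(k|k-1) = A x(k-1|k-1) + U * a_bar,
    P(k|k-1) = A P A^T + Q."""
    x_pred = A @ x_est + U * a_bar
    P_pred = A @ P_est @ A.T + Q
    return x_pred, P_pred

def update(x_pred, P_pred, y, H, R):
    """Measurement update: filter gain, state estimate x(k|k) and
    covariance P(k|k) = (I - K H) P(k|k-1)."""
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # filter gain K(k)
    x_est = x_pred + K @ (y - H @ x_pred)
    P_est = (np.eye(x_pred.shape[0]) - K @ H) @ P_pred
    return x_est, P_est, K

# Toy example: only the angular velocity (first state component) is measured.
A = np.array([[1.0, 0.01, 0.0], [0.0, 1.0, 0.01], [0.0, 0.0, 1.0]])
U = np.zeros((3, 1))
Q = 1e-4 * np.eye(3)
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.04]])
x0, P0 = np.zeros((3, 1)), np.eye(3)
x_pred, P_pred = predict(x0, P0, A, U, 0.0, Q)
x_est, P_est, K = update(x_pred, P_pred, np.array([[0.5]]), H, R)
```

After the update, the estimated covariance of the measured component shrinks relative to its prediction, reflecting the information gained from the measurement.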
S2.5, calculating the mean and the estimate of the angular velocity second derivative from the state estimate.
The mean of the angular velocity second derivative is calculated recursively as
ā(k) = [(k − 1) ā(k−1) + â(k)] / k (14)
where ā(k) is the mean of the angular velocity second derivative from time 0 to k, x̂(k|k) is the state estimate at time k and k is the sampling time. The estimates of the angular velocity second derivative of the system at times k−1 and k are obtained as
â(k−1) = x̂₃(k−1|k−1), â(k) = x̂₃(k|k) (15)
where x̂₃(k−1|k−1) is the third component of the state estimate x̂(k−1|k−1) at time k−1 and x̂₃(k|k) is the third component of the state estimate x̂(k|k) at time k.
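The mean and estimate extraction of S2.5 might be sketched as follows. The recursive sample-mean form of the running mean is an assumption consistent with "mean of the angular velocity second derivative from time 0 to k"; the function name is illustrative.

```python
import numpy as np

def second_derivative_stats(a_bar_prev, x_est, x_est_prev, k):
    """Running mean of the angular-velocity second derivative and the
    current/previous estimates taken from the third state component."""
    a_hat_k = float(x_est[2, 0])         # third row of x(k|k)
    a_hat_km1 = float(x_est_prev[2, 0])  # third row of x(k-1|k-1)
    a_bar = ((k - 1) * a_bar_prev + a_hat_k) / k  # mean from time 0 to k
    return a_bar, a_hat_k, a_hat_km1

x_prev = np.array([[0.0], [0.0], [1.0]])
x_curr = np.array([[0.0], [0.0], [2.0]])
a_bar, a_k, a_km1 = second_derivative_stats(1.0, x_curr, x_prev, 2)
```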
S2.6, correcting the system self-adaptive parameters according to the angular velocity second derivative estimated value.
According to the value of the sampling time k, the method of correcting the adaptive parameters α and σ_a² is selected: if k ≤ 4, go to step 2.6.1; if k > 4, go to step 2.6.2.
2.6.1, when the sampling time k ≤ 4, since few samples are available, the system adaptive parameters α and σ_a² are taken by the parameter-valuing method of the current statistical model:
α = α₀, where α₀ is the initial value of the system adaptive parameter α;
σ_a² = (4 − π)/π · (a_M − â(k))², when â(k) ≥ 0;
σ_a² = (4 − π)/π · (a₋M − â(k))², when â(k) < 0 (16)
where â(k) is the estimate of the angular velocity second derivative at time k, π is the circular constant, taken as 3.14, a_M is a positive constant, taken as 3, and a₋M is a negative constant of equal absolute value, taken as −3;
2.6.2, when the sampling time k > 4, the one-step correlation function and the autocorrelation function of the angular velocity second derivative are estimated recursively:
r_k(1) = [(b − 1)/b] r_{k−1}(1) + (1/b) â(k) â(k−1) (17)
r_k(0) = [(b − 1)/b] r_{k−1}(0) + (1/b) â(k)² (18)
where b is a constant greater than 1, r_k(1) and r_{k−1}(1) are the one-step correlation functions of the angular velocity second derivative at times k and k−1, â(k−1) and â(k) are the estimates of the angular velocity second derivative at times k−1 and k, and r_k(0) and r_{k−1}(0) are the autocorrelation functions of the angular velocity second derivative at times k and k−1. For example, b in equations (17) and (18) is taken as 10.
From the system equation ȧ(t) = −α a(t) + w(t), the angular velocity second derivative satisfies the following first-order Markov random sequence:
â(k+1) = β â(k) + w_a(k) (19)
where â(k+1) is the angular velocity second derivative at time k+1, â(k) is that at time k, β is the maneuvering frequency of the discretized sequence, and w_a(k) is a zero-mean white noise discrete sequence with variance σ_a²(1 − e^(−2αT)), σ_w² = 2ασ_a² being the variance of the zero-mean white noise w(t); β and α are related by β = e^(−αT).
The first-order Markov sequence satisfies the parameter relationship
r_k(1) = β r_k(0) (20)
where r_k(1) is the one-step forward autocorrelation function of the angular velocity second derivative at time k and r_k(0) is its autocorrelation function at time k; α and β are the maneuvering frequency and the maneuvering frequency of the discretized sequence, respectively. The adaptive parameters α and σ_a² can therefore be calculated as:
α = −ln(r_k(1)/r_k(0)) / T (21)
σ_a² = r_k(0) (22)
where ln is the logarithm to base e, α and σ_a² are the system adaptive parameters and T is the sampling interval.
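The adaptation step for k > 4 can be sketched as follows. The fading-memory recursions and the inversion of β = e^(−αT) are reconstructions of the standard first-order Markov estimator; the function names are illustrative.

```python
import numpy as np

def update_autocorr(r1_prev, r0_prev, a_hat_k, a_hat_km1, b=10.0):
    """Fading-memory estimates of the one-step correlation r_k(1) and the
    autocorrelation r_k(0) of the second derivative (b > 1; b = 10 here)."""
    r1 = (b - 1.0) / b * r1_prev + a_hat_k * a_hat_km1 / b
    r0 = (b - 1.0) / b * r0_prev + a_hat_k ** 2 / b
    return r1, r0

def adapt_parameters(r1, r0, T):
    """Maneuvering frequency and variance from the first-order Markov
    relation r(1) = beta * r(0) with beta = exp(-alpha * T)."""
    beta = r1 / r0
    alpha = -np.log(beta) / T
    sigma_a2 = r0
    return alpha, sigma_a2

alpha, sigma_a2 = adapt_parameters(0.9, 1.0, 0.01)
```

With r(1)/r(0) = 0.9 and T = 0.01 s, the estimated maneuvering frequency is −ln(0.9)/0.01 ≈ 10.5 s⁻¹, illustrating how strongly correlated estimates map to a small α.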
And S2.7, updating the state process model according to the angular velocity second derivative average value and the corrected system adaptive parameters, and acquiring angular velocity data after online adaptive filtering.
Optionally, as an embodiment of the present invention, establishing a quaternion differential equation according to the angular velocity data and solving it with the Runge-Kutta method to obtain an attitude matrix including the target attitude angle includes:
S3.1, establishing a quaternion differential equation from the filtered angular velocity data and the quaternion;
Specifically, using the filtered angular velocity information of the gyroscope in the inertial sensor, the following differential equation is established with quaternions:
q̇(t) = (1/2) q(t) ∘ ω_b(t)
where the left side is the derivative of the quaternion, q(t) is the quaternion, the symbol ∘ denotes quaternion multiplication, and ω_b(t) = [0, ω_x, ω_y, ω_z]^T is the quaternion expression of the three-axis angular velocity;
S3.2, solving the quaternion differential equation with the fourth-order Runge-Kutta method to obtain the attitude matrix described by the quaternion, updating the attitude matrix by updating the quaternion elements, and updating the target attitude angle.
The initial value of the slope required by the fourth-order Runge-Kutta method is determined by the initial value of the quaternion, and the initial value of the quaternion is determined by the initial value of the target attitude angle. Writing the differential equation as q̇ = f(t, q) = (1/2) Ω(ω(t)) q, where Ω(ω) is the 4×4 matrix expression of the three-axis angular velocity, one update step is:
K₁ = f(t, q_t)
K₂ = f(t + h/2, q_t + (h/2) K₁)
K₃ = f(t + h/2, q_t + (h/2) K₂)
K₄ = f(t + h, q_t + h K₃)
q_{t+h} = q_t + (h/6)(K₁ + 2K₂ + 2K₃ + K₄)
where K is the slope, t is the current time and h is the update step size, generally equal to the sensor sampling period. After the quaternion elements of the next step are calculated, they are substituted into the attitude matrix described by the quaternion, from which the attitude angle information at the current moment is solved.
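The fourth-order Runge-Kutta quaternion update can be sketched as follows. This is a minimal numpy sketch assuming Hamilton scalar-first quaternions and body-frame angular rates held constant over one step; the renormalization at the end is a common practical addition, not stated in the patent.

```python
import numpy as np

def omega_matrix(w):
    """4x4 matrix Omega(w) for w = [wx, wy, wz], so that
    dq/dt = 0.5 * Omega(w) @ q with q = [q0, q1, q2, q3] (scalar first)."""
    wx, wy, wz = w
    return np.array([
        [0.0, -wx, -wy, -wz],
        [wx, 0.0, wz, -wy],
        [wy, -wz, 0.0, wx],
        [wz, wy, -wx, 0.0],
    ])

def rk4_quat_step(q, w, h):
    """One fourth-order Runge-Kutta step of the quaternion ODE; h is the
    step size, typically the sensor sampling period."""
    f = lambda qq: 0.5 * omega_matrix(w) @ qq
    k1 = f(q)
    k2 = f(q + 0.5 * h * k1)
    k3 = f(q + 0.5 * h * k2)
    k4 = f(q + h * k3)
    q_next = q + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return q_next / np.linalg.norm(q_next)  # keep the quaternion unit-norm

# Demo: integrate a half turn about the x-axis (w = pi rad/s for 1 s).
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    q = rk4_quat_step(q, [np.pi, 0.0, 0.0], 0.01)
```

After integrating a rotation of π about x, the quaternion should be close to [0, 1, 0, 0], i.e. cos(π/2) in the scalar part and sin(π/2) on the x-axis.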
Optionally, as an embodiment of the present invention, converting the target attitude angle from the carrier coordinate system to the navigation coordinate system includes: solving the attitude matrix C_b^n with the quaternion method,

C_b^n = [ q₀²+q₁²−q₂²−q₃²   2(q₁q₂−q₀q₃)      2(q₁q₃+q₀q₂)    ]
        [ 2(q₁q₂+q₀q₃)      q₀²−q₁²+q₂²−q₃²   2(q₂q₃−q₀q₁)    ]
        [ 2(q₁q₃−q₀q₂)      2(q₂q₃+q₀q₁)      q₀²−q₁²−q₂²+q₃² ]

and solving the principal values of the attitude angles from the elements C_ij of the attitude matrix:
θ_p = arcsin C₃₂
γ_p = arctan(−C₃₁/C₃₃)
ψ_p = arctan(−C₁₂/C₂₂)
To determine the true values of the attitude angles θ, γ and ψ, their defining domains are used: the pitch angle is defined on (−90°, 90°), the roll angle on (−180°, 180°) and the heading angle on (0°, 360°). Here θ_p, γ_p and ψ_p denote the principal values of the pitch angle, roll angle and course angle, and θ, γ and ψ denote the values converted into the defining domains. Since the range of arcsin matches the pitch domain,
θ = θ_p
while γ and ψ are mapped into their defining domains according to the signs of C₃₃ and C₂₂, respectively.
Because the inertial components of a strapdown inertial navigation system are fixed to the carrier, the sensor outputs are all expressed in the carrier coordinate system. The measured values must be converted into a coordinate system convenient for calculating the required navigation parameters, namely the navigation coordinate system. The attitude matrix is the conversion relation from the carrier coordinate system to the navigation coordinate system for the data measured in the carrier coordinate system; once the attitude matrix of the carrier is determined, the attitude angle of the carrier can be expressed.
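The conversion from quaternion to attitude matrix to attitude angles can be sketched as follows. The element layout and sign conventions here follow one common strapdown convention consistent with θ = arcsin C₃₂; where the patent is silent (e.g. the heading sign), the choices are assumptions. Two-argument arctangents place the roll and heading angles directly in their defining domains.

```python
import numpy as np

def quat_to_dcm(q):
    """Attitude (direction cosine) matrix from a unit quaternion
    q = [q0, q1, q2, q3] (scalar first)."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0**2 + q1**2 - q2**2 - q3**2, 2*(q1*q2 - q0*q3), 2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3), q0**2 - q1**2 + q2**2 - q3**2, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2), 2*(q2*q3 + q0*q1), q0**2 - q1**2 - q2**2 + q3**2],
    ])

def dcm_to_angles(C):
    """Pitch, roll and heading (degrees) from the attitude matrix, using
    pitch = arcsin(C32) and quadrant-aware arctangents for roll/heading."""
    pitch = np.degrees(np.arcsin(np.clip(C[2, 1], -1.0, 1.0)))
    roll = np.degrees(np.arctan2(-C[2, 0], C[2, 2]))
    heading = np.degrees(np.arctan2(-C[0, 1], C[1, 1])) % 360.0
    return pitch, roll, heading

# Demo: a pure 30-degree pitch (rotation about the body x-axis).
q = np.array([np.cos(np.radians(15.0)), np.sin(np.radians(15.0)), 0.0, 0.0])
pitch, roll, heading = dcm_to_angles(quat_to_dcm(q))
```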
Fig. 2 shows the system signal processing flow: the data acquired from the inertial sensing control device are filtered, the quaternion is solved to obtain the attitude angle, and finally the control command for the robot is obtained.
Fig. 3 shows the changes of the Pitch angle and the course angle when the finger moves up and down, wherein Pitch represents the Pitch angle and Yaw represents the course angle.
Fig. 4 shows the changes of the Pitch angle and the course angle when the finger moves left and right, where Pitch represents the Pitch angle and Yaw represents the course angle, and it can be seen from the figure that the course angle shows periodic changes with the left and right movement of the finger, but the Pitch angle does not change much.
Optionally, as an embodiment of the present invention, excluding the target pose angles out of the threshold range from the target pose angles in the navigation coordinate system includes:
Four gestures are defined: the finger moves up, down, right and left. Experiments show that when the finger is raised the pitch angle is positive, and this gesture is defined to control the robot to move forward; when the finger is lowered the pitch angle is negative, and this gesture is defined to control the robot to stop; when the finger moves right the course angle is positive, and this gesture is defined to control the robot to turn right; when the finger moves left the course angle is negative, and this gesture is defined to control the robot to turn left.
In this process, slight or large-amplitude unintended movements of the finger wearing the inertial sensor can make recognition inaccurate. Experiments show that when the finger is raised (Finger_Up), a pitch angle between 20 degrees (threshold 1, denoted T₁) and 50 degrees (threshold 2, denoted T₂) is the normal range of finger movement and is taken as the correct gesture for controlling the robot to move forward; when the finger is lowered (Finger_Down), a pitch angle between −20 and −50 degrees is the normal range and is taken as the correct gesture for controlling the robot to stop. When the finger turns right (Finger_Right), a course angle between 20 degrees (threshold 3, denoted T₃) and 40 degrees (threshold 4, denoted T₄) is the normal range of finger turning and is taken as the correct gesture for controlling the robot to turn right; when the finger turns left (Finger_Left), a course angle between −20 and −40 degrees is the normal range and is taken as the correct gesture for controlling the robot to turn left.
With the above thresholds, the gesture control commands are obtained as:
Command = G, if T₁ ≤ Pitch ≤ T₂
Command = S, if −T₂ ≤ Pitch ≤ −T₁
Command = R, if T₃ ≤ Yaw ≤ T₄
Command = L, if −T₄ ≤ Yaw ≤ −T₃
where Pitch is the pitch angle and Yaw is the heading angle.
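The threshold screening can be sketched as follows. The threshold values follow the ranges stated in the text; the function name and the priority order of the checks are illustrative choices.

```python
# Thresholds in degrees: T1, T2 bound the pitch windows (up/down),
# T3, T4 bound the heading windows (right/left).
T1, T2 = 20.0, 50.0
T3, T4 = 20.0, 40.0

def classify_gesture(pitch, yaw):
    """Map a (pitch, heading) pair in degrees to one of the four commands,
    or None when the angles fall outside every window (misoperation)."""
    if T1 <= pitch <= T2:
        return "G"   # Finger_Up    -> move forward
    if -T2 <= pitch <= -T1:
        return "S"   # Finger_Down  -> stop
    if T3 <= yaw <= T4:
        return "R"   # Finger_Right -> turn right
    if -T4 <= yaw <= -T3:
        return "L"   # Finger_Left  -> turn left
    return None      # outside all windows: excluded
```

Returning None for out-of-window angles implements the exclusion of misoperation described in step S5.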
Optionally, as an embodiment of the present invention, the controlling the robot motion according to the target attitude angle within the threshold range includes:
After the sensor is worn on the finger, it communicates with the robot through a WIFI connection. The inertial sensor collects the finger movement signal, the computer acquires the signal and judges the gesture, and a movement instruction is then transmitted to the robot through the wireless module; the 4 commands, described by G, S, R and L, are listed in Table 1. When the gesture is upward, the sensor captures the finger movement and transmits it to the computer for gesture judgment; when the set threshold range is satisfied, the computer sends a "G" instruction to the robot, and the robot moves forward. When the gesture is downward and the set threshold range is satisfied, the computer sends an "S" instruction, and the robot stops. When the gesture is to the right and the set threshold range is satisfied, the computer sends an "R" instruction, and the robot turns right. When the gesture is to the left and the set threshold range is satisfied, the computer sends an "L" instruction, and the robot turns left.

Table 1. Gesture commands
Gesture        Command    Robot action
Finger_Up      G          move forward
Finger_Down    S          stop
Finger_Right   R          turn right
Finger_Left    L          turn left
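The command forwarding described above might be sketched as follows; the transport callable and all names are illustrative stand-ins for the WIFI link, not from the patent.

```python
# Mapping from command letter to the robot action it triggers (Table 1).
COMMAND_ACTIONS = {
    "G": "move forward",
    "S": "stop",
    "R": "turn right",
    "L": "turn left",
}

def dispatch(command, send):
    """Send a recognized command over the wireless link (here, any callable)
    and return the robot action it triggers; unknown commands are ignored."""
    if command in COMMAND_ACTIONS:
        send(command)
        return COMMAND_ACTIONS[command]
    return None

sent = []
action = dispatch("G", sent.append)
```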
The flow chart of an experiment using finger control of a robot as an example is shown in fig. 1; the aim is to accurately identify finger actions with an inertial sensor. A typical robot platform, the NAO robot, is selected. The robot system is equipped with a complete self-balancing module, so the NAO walks stably once an instruction is input; therefore, the only question considered is whether the NAO can obtain an accurate command based on a gesture.
The gesture recognition method provided by the invention relies mainly on the inertial sensor to complete data acquisition and is realized by a system that controls robot movement. First, a gyroscope and an accelerometer respectively measure the angular and linear motion of the carrier, and online adaptive filtering removes noise. Then, the differential equation described by the quaternion method is solved with a fourth-order Runge-Kutta method, and the pitch angle and heading angle are calculated from the elements of the quaternion by inverse trigonometric functions. Finally, the identification thresholds of the pitch angle and heading angle are determined adaptively according to the movement range of the finger, thereby excluding erroneous finger operations.
The invention provides a method that acquires small-range finger motion data through an inertial sensor, performs online adaptive filtering and denoising on the data, constructs a quaternion attitude matrix, solves the quaternion differential equation with the Runge-Kutta method to update the quaternion, and updates the attitude matrix to obtain a real-time attitude angle; thresholds are set so that recognition of small-range finger motion excludes finger misoperation.
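The quaternion update at the core of this pipeline can be sketched as follows, assuming a constant angular rate over each sampling step. This is our illustration of the standard quaternion kinematic equation and a fourth-order Runge-Kutta step, not the patent's exact implementation; all names are our own.

```python
import numpy as np

def quat_deriv(q, omega):
    """Quaternion kinematic equation dq/dt = 0.5 * q x (0, omega)."""
    w, x, y, z = q
    ox, oy, oz = omega
    return 0.5 * np.array([
        -x * ox - y * oy - z * oz,
         w * ox + y * oz - z * oy,
         w * oy - x * oz + z * ox,
         w * oz + x * oy - y * ox,
    ])

def rk4_step(q, omega, dt):
    """One fourth-order Runge-Kutta update of the attitude quaternion,
    assuming the angular rate is constant over the step."""
    k1 = quat_deriv(q, omega)
    k2 = quat_deriv(q + 0.5 * dt * k1, omega)
    k3 = quat_deriv(q + 0.5 * dt * k2, omega)
    k4 = quat_deriv(q + dt * k3, omega)
    q = q + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return q / np.linalg.norm(q)   # renormalise to unit length

def pitch_yaw(q):
    """Pitch and heading (yaw) angles in degrees, recovered from the
    quaternion elements by inverse trigonometric functions."""
    w, x, y, z = q
    pitch = np.degrees(np.arcsin(2.0 * (w * y - z * x)))
    yaw = np.degrees(np.arctan2(2.0 * (w * z + x * y),
                                1.0 - 2.0 * (y * y + z * z)))
    return pitch, yaw
```

The renormalisation after each step keeps the quaternion on the unit sphere, which is what keeps the attitude matrix orthogonal over long runs.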
The method for providing robot movement control according to the embodiment of the present invention is described in detail above with reference to fig. 1 to 4. The following describes in detail a robot movement control system according to an embodiment of the present invention with reference to fig. 5.
Fig. 5 is a schematic structural block diagram of a robot movement control system according to an embodiment of the present invention. As shown in fig. 5, the system includes: an acquisition unit 510, a preprocessing unit 520, a processing unit 530, a coordinate system conversion unit 540, a screening unit 550, and a control unit 560.
The acquisition unit 510 acquires angular velocity data of an inertial sensor, where the inertial sensor may be worn on a user's finger; the preprocessing unit 520 performs online adaptive filtering preprocessing on the angular velocity data; the processing unit 530 establishes a quaternion differential equation from the filtered angular velocity data and solves it with the Runge-Kutta method to obtain an attitude matrix including a target attitude angle; the coordinate system conversion unit 540 converts the target attitude angle from the carrier coordinate system to the navigation coordinate system; the screening unit 550 excludes target attitude angles that are not within a threshold range; and the control unit 560 controls the robot motion according to the target attitude angles within the threshold range.
In this embodiment, the problem of real-time processing of sensor data is addressed by online adaptive filtering, which realizes online denoising; the attitude matrix is described by the quaternion method and the differential equation is solved, which requires little computation, achieves high precision, and avoids the attitude matrix falling into a singular point; finger misoperation is excluded by means of thresholds, and different micro gesture motions are recognized. The invention can control robot movement with an inertial sensor, and offers high precision, good online recognition performance, strong universality, and good application prospects.
The system is realized mainly by combining attitude angle calculation with adaptive threshold analysis. First, the measured data are denoised by adaptive filtering; then a quaternion-based strapdown inertial navigation attitude algorithm completes the quaternion attitude matrix, navigation parameter extraction and calculation, the setting of initial conditions, and initial data calculation, and heading, attitude, velocity, and position information is continuously computed by updating the attitude matrix. Threshold analysis then provides noise suppression and misoperation judgment, detecting and identifying micro gesture movements in real time, after which the corresponding movement control of the robot is completed.
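The denoising stage can be sketched as a compact Kalman filter whose state holds the angular velocity and its first and second derivatives, as in step S2 of the claims. This is a simplified illustration under the assumption of fixed process and measurement noise (the full method adapts these online via the parameters α and σ²); all names and values here are illustrative:

```python
import numpy as np

def adaptive_filter(measurements, dt, q0=1.0, r=0.05):
    """Kalman-filter sketch of the online denoising step. The state is
    [angular velocity, first derivative, second derivative]; q0 stands in
    for the adaptive process-noise level that the full method re-estimates
    from the second-derivative statistics."""
    F = np.array([[1.0, dt, 0.5 * dt * dt],   # constant-jerk state transition
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])
    H = np.array([[1.0, 0.0, 0.0]])           # only angular velocity is measured
    x = np.zeros(3)
    P = np.eye(3)
    Q = q0 * np.eye(3)
    R = np.array([[r]])
    filtered = []
    for z in measurements:
        x = F @ x                              # state prediction
        P = F @ P @ F.T + Q                    # covariance prediction
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + (K @ (np.array([z]) - H @ x)).ravel()   # state update
        P = (np.eye(3) - K @ H) @ P            # covariance update
        filtered.append(x[0])
    return np.array(filtered)
```

Feeding the filtered angular velocity (rather than the raw samples) into the quaternion differential equation is what makes the attitude angles usable for threshold comparison.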
The embodiment of the invention also provides an inertial sensing control device, which comprises the robot movement control system in the technical scheme, wherein the inertial sensing control device is in wireless communication with the robot.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (8)
1. A robot movement control method, characterized by comprising:
s1, acquiring angular velocity data of the inertial sensor;
s2, carrying out online adaptive filtering pretreatment on the angular velocity data;
the S2 includes:
s2.1, initializing state quantities and system self-adaptive parameters, wherein the state quantities comprise angular velocity, angular velocity first-order derivatives and angular velocity second-order derivatives;
s2.2, establishing a state process model with system self-adaptive parameters;
s2.3, predicting the state quantity according to the established state process model with the system self-adaptive parameters to obtain a state predicted value and a state covariance predicted value;
s2.4, updating the state quantity according to the state predicted value, the measured data value and the state covariance predicted value to obtain a state estimated value;
s2.5, calculating the average value of the angular velocity second derivative and the estimation value of the angular velocity second derivative according to the state estimation value;
calculating the average value of the angular velocity second derivative by the following formula:

$$\bar{\ddot{\omega}}_k = \frac{1}{k+1}\sum_{i=0}^{k}\hat{\ddot{\omega}}_i$$

where $\bar{\ddot{\omega}}_k$ is the average value of the angular velocity second derivative from time 0 to time k, $\hat{x}_k$ is the state estimate at time k, and k is the sampling time; and acquiring the angular velocity second derivative estimates of the system at times k-1 and k according to the following formula:

$$\hat{\ddot{\omega}}_{k-1} = \hat{x}_{k-1}(3), \qquad \hat{\ddot{\omega}}_k = \hat{x}_k(3)$$

where $\hat{x}_{k-1}(3)$ is the third-row value of the state estimate $\hat{x}_{k-1}$ at time k-1, and $\hat{x}_k(3)$ is the third-row value of the state estimate $\hat{x}_k$ at time k;
s2.6, correcting the system self-adaptive parameters according to the angular velocity second derivative estimated value;
according to the value of the sampling time k, selecting and modifying the adaptive parameters α and σ²: if k is less than or equal to 4, go to step 2.6.1; if k is greater than 4, go to step 2.6.2;
2.6.1, when the sampling time k is less than or equal to 4, because there are few sampled data, the system adaptive parameters α and σ² are calculated by the parameter-taking method of the current statistical model as follows:

α = α₀, where α₀ is the initial value of the system adaptation parameter α;

$$\sigma^2 = \begin{cases} \dfrac{4-\pi}{\pi}\left(a_M - \hat{\ddot{\omega}}_k\right)^2, & \hat{\ddot{\omega}}_k \geq 0 \\ \dfrac{4-\pi}{\pi}\left(a_{-M} - \hat{\ddot{\omega}}_k\right)^2, & \hat{\ddot{\omega}}_k < 0 \end{cases}$$

where $\hat{\ddot{\omega}}_k$ is the angular velocity second derivative estimate at time k, π is the circular constant, taken as 3.14, $a_M$ is a positive constant, taken as 3, and $a_{-M}$ is a negative constant whose absolute value equals that of $a_M$, taken as -3;
2.6.2, when the sampling time k is greater than 4, updating the correlation functions of the angular velocity second derivative:

$$r_k(1) = \left(1-\tfrac{1}{b}\right) r_{k-1}(1) + \tfrac{1}{b}\,\hat{\ddot{\omega}}_k\,\hat{\ddot{\omega}}_{k-1}, \qquad r_k(0) = \left(1-\tfrac{1}{b}\right) r_{k-1}(0) + \tfrac{1}{b}\,\hat{\ddot{\omega}}_k^2$$

where b is a constant greater than 1, $r_k(1)$ is the one-step-forward correlation function of the angular velocity second derivative at time k, $r_{k-1}(1)$ is the one-step-forward correlation function of the angular velocity second derivative at time k-1, $\hat{\ddot{\omega}}_{k-1}$ and $\hat{\ddot{\omega}}_k$ are the angular velocity second derivative estimates at times k-1 and k respectively, $r_k(0)$ is the autocorrelation function of the angular velocity second derivative at time k, and $r_{k-1}(0)$ is the autocorrelation function of the angular velocity second derivative at time k-1; the adaptive parameters are then

$$\alpha = \frac{1}{T}\ln\frac{r_k(0)}{r_k(1)}, \qquad \sigma^2 = r_k(0)$$

where $r_k(1)$ is the one-step-forward correlation function of the acceleration at time k, $r_k(0)$ is the autocorrelation function of the acceleration at time k, ln is the logarithm to base e, α and σ² are the system adaptive parameters, and T is the sampling interval;
s2.7, updating the state process model according to the angular velocity second derivative average value and the corrected system adaptive parameters, and acquiring angular velocity data after online adaptive filtering;
s3, establishing a quaternion differential equation according to the angular velocity data subjected to online adaptive filtering pretreatment, and solving the quaternion differential equation by using a Runge-Kutta method to obtain an attitude matrix comprising a target attitude angle;
s4, converting the target attitude angle from the carrier coordinate system into a navigation coordinate system;
s5, excluding the target attitude angles which are not within the threshold range from the target attitude angles in the navigation coordinate system;
and S6, controlling the robot to move according to the target attitude angle within the threshold value range.
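The parameter-adaptation step S2.6 of claim 1 can be sketched as follows. This is our reading of the current-statistical-model rule and the correlation-based maneuver-frequency estimate; the constants $a_M$, $a_{-M}$ and π are taken from the claim, while the function names and the exact form of the relations are illustrative assumptions:

```python
import math

A_MAX, A_MIN = 3.0, -3.0   # a_M and a_{-M} as given in the claim
PI = 3.14                  # value of pi used in the claim

def sigma2_cs(a_hat):
    """Current-statistical-model variance from the angular velocity
    second-derivative estimate a_hat (step 2.6.1, k <= 4)."""
    if a_hat >= 0.0:
        return (4.0 - PI) / PI * (A_MAX - a_hat) ** 2
    return (4.0 - PI) / PI * (A_MIN - a_hat) ** 2

def alpha_from_correlation(r0, r1, T):
    """Maneuver frequency for a first-order Markov process (step 2.6.2,
    k > 4): r(1) = r(0) * exp(-alpha * T)  =>  alpha = ln(r(0)/r(1)) / T."""
    return math.log(r0 / r1) / T
```

The two branches mirror the k ≤ 4 and k > 4 cases: with few samples the variance is taken directly from the current statistical model, and once enough samples exist the correlation functions drive the estimate of α.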
2. The method according to claim 1, wherein the S3 includes:
s3.1, establishing a quaternion differential equation by using the filtered angular velocity data and quaternion;
s3.2, solving the quaternion differential equation by using a fourth-order Runge-Kutta method to obtain an attitude matrix described by the quaternion, updating the attitude matrix by updating the element value of the quaternion, and updating a target attitude angle;
the initial value of the slope required for solving the quaternion differential equation by utilizing a fourth-order Runge-Kutta method is determined by the initial value of the quaternion, and the initial value of the quaternion is determined by the initial value of the target attitude angle.
3. The method of claim 2, wherein solving the quaternion differential equation using the Runge-Kutta method obtains the attitude matrix described by the quaternion as follows:

$$C_b^n = \begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\ 2(q_1 q_2 + q_0 q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2 q_3 - q_0 q_1) \\ 2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \end{bmatrix}$$

where $q_0$, $q_1$, $q_2$ and $q_3$ are the elements of the quaternion.
4. The method of any one of claims 1 to 3, wherein the target attitude angles include a pitch angle and a heading angle; the absolute value of the threshold range of the pitch angle is twenty to fifty degrees, and the threshold range of the course angle is twenty to forty degrees.
5. A robot movement control system, comprising:
the acquisition unit is used for acquiring angular velocity data of the inertial sensor; the preprocessing unit is used for performing online adaptive filtering preprocessing on the angular velocity data;
the preprocessing unit is specifically configured to:
initializing a state quantity and system self-adaptive parameters, wherein the state quantity comprises angular velocity, a first derivative of the angular velocity and a second derivative of the angular velocity;
establishing a state process model with system self-adaptive parameters;
predicting the state quantity according to the established state process model with the system self-adaptive parameters to obtain a state predicted value and a state covariance predicted value;
updating the state quantity according to the state predicted value, the measured data value and the state covariance predicted value to obtain a state estimated value;
calculating the average value of the angular velocity second derivative and the estimation value of the angular velocity second derivative according to the state estimation value;
calculating the average value of the angular velocity second derivative by the following formula:

$$\bar{\ddot{\omega}}_k = \frac{1}{k+1}\sum_{i=0}^{k}\hat{\ddot{\omega}}_i$$

where $\bar{\ddot{\omega}}_k$ is the average value of the angular velocity second derivative from time 0 to time k, $\hat{x}_k$ is the state estimate at time k, and k is the sampling time; and acquiring the angular velocity second derivative estimates of the system at times k-1 and k according to the following formula:

$$\hat{\ddot{\omega}}_{k-1} = \hat{x}_{k-1}(3), \qquad \hat{\ddot{\omega}}_k = \hat{x}_k(3)$$

where $\hat{x}_{k-1}(3)$ is the third-row value of the state estimate $\hat{x}_{k-1}$ at time k-1, and $\hat{x}_k(3)$ is the third-row value of the state estimate $\hat{x}_k$ at time k;
correcting the system self-adaptive parameters according to the angular velocity second derivative estimated value;
according to the value of the sampling time k, selecting and modifying the adaptive parameters α and σ²: if k is less than or equal to 4, go to step 2.6.1; if k is greater than 4, go to step 2.6.2;
2.6.1, when the sampling time k is less than or equal to 4, because there are few sampled data, the system adaptive parameters α and σ² are calculated by the parameter-taking method of the current statistical model as follows:

α = α₀, where α₀ is the initial value of the system adaptation parameter α;

$$\sigma^2 = \begin{cases} \dfrac{4-\pi}{\pi}\left(a_M - \hat{\ddot{\omega}}_k\right)^2, & \hat{\ddot{\omega}}_k \geq 0 \\ \dfrac{4-\pi}{\pi}\left(a_{-M} - \hat{\ddot{\omega}}_k\right)^2, & \hat{\ddot{\omega}}_k < 0 \end{cases}$$

where $\hat{\ddot{\omega}}_k$ is the angular velocity second derivative estimate at time k, π is the circular constant, taken as 3.14, $a_M$ is a positive constant, taken as 3, and $a_{-M}$ is a negative constant whose absolute value equals that of $a_M$, taken as -3;
2.6.2, when the sampling time k is greater than 4, updating the correlation functions of the angular velocity second derivative:

$$r_k(1) = \left(1-\tfrac{1}{b}\right) r_{k-1}(1) + \tfrac{1}{b}\,\hat{\ddot{\omega}}_k\,\hat{\ddot{\omega}}_{k-1}, \qquad r_k(0) = \left(1-\tfrac{1}{b}\right) r_{k-1}(0) + \tfrac{1}{b}\,\hat{\ddot{\omega}}_k^2$$

where b is a constant greater than 1, $r_k(1)$ is the one-step-forward correlation function of the angular velocity second derivative at time k, $r_{k-1}(1)$ is the one-step-forward correlation function of the angular velocity second derivative at time k-1, $\hat{\ddot{\omega}}_{k-1}$ and $\hat{\ddot{\omega}}_k$ are the angular velocity second derivative estimates at times k-1 and k respectively, $r_k(0)$ is the autocorrelation function of the angular velocity second derivative at time k, and $r_{k-1}(0)$ is the autocorrelation function of the angular velocity second derivative at time k-1; the adaptive parameters are then

$$\alpha = \frac{1}{T}\ln\frac{r_k(0)}{r_k(1)}, \qquad \sigma^2 = r_k(0)$$

where $r_k(1)$ is the one-step-forward correlation function of the acceleration at time k, $r_k(0)$ is the autocorrelation function of the acceleration at time k, ln is the logarithm to base e, α and σ² are the system adaptive parameters, and T is the sampling interval;
updating the state process model according to the angular velocity second derivative mean value and the corrected system adaptive parameters, and acquiring angular velocity data after online adaptive filtering;
the processing unit is used for establishing a quaternion differential equation according to the angular velocity data subjected to online adaptive filtering pretreatment, solving the quaternion differential equation by using a Runge-Kutta method and acquiring an attitude matrix comprising a target attitude angle;
a coordinate system conversion unit for converting the target attitude angle from the carrier coordinate system to a navigation coordinate system;
the screening unit is used for excluding target attitude angles which are not in a threshold range from the target attitude angles in the navigation coordinate system;
and the control unit is used for controlling the robot to act according to the target attitude angle within the threshold range.
6. The system of claim 5, wherein the processing unit is specifically configured to:
establishing a quaternion differential equation by using the filtered angular velocity data and quaternion;
solving the quaternion differential equation by using a fourth-order Runge-Kutta method to obtain an attitude matrix described by the quaternion, updating the attitude matrix by updating the element value of the quaternion, and updating a target attitude angle;
the initial value of the slope required for solving the quaternion differential equation by utilizing a fourth-order Runge-Kutta method is determined by the initial value of the quaternion, and the initial value of the quaternion is determined by the initial value of the target attitude angle.
7. The system according to claim 5 or 6, wherein solving the quaternion differential equation using the Runge-Kutta method obtains the attitude matrix described by the quaternion as follows:

$$C_b^n = \begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\ 2(q_1 q_2 + q_0 q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2 q_3 - q_0 q_1) \\ 2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \end{bmatrix}$$

where $q_0$, $q_1$, $q_2$ and $q_3$ are the elements of the quaternion.
8. An inertial sensing control device comprising a robot movement control system according to any one of claims 5 to 7, the inertial sensing control device being in wireless communication with a robot.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711232485.7A CN108051001B (en) | 2017-11-30 | 2017-11-30 | Robot movement control method and system and inertial sensing control device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711232485.7A CN108051001B (en) | 2017-11-30 | 2017-11-30 | Robot movement control method and system and inertial sensing control device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108051001A CN108051001A (en) | 2018-05-18 |
CN108051001B true CN108051001B (en) | 2020-09-04 |
Family
ID=62121365
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711232485.7A Active CN108051001B (en) | 2017-11-30 | 2017-11-30 | Robot movement control method and system and inertial sensing control device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108051001B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110876275A (en) * | 2019-04-30 | 2020-03-10 | 深圳市大疆创新科技有限公司 | Aiming control method, mobile robot and computer readable storage medium |
CN113496165B (en) * | 2020-04-01 | 2024-04-16 | 京东科技信息技术有限公司 | User gesture recognition method and device, hand intelligent wearable device and storage medium |
CN114102600B (en) * | 2021-12-02 | 2023-08-04 | 西安交通大学 | Multi-space fusion human-machine skill migration and parameter compensation method and system |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107122724A (en) * | 2017-04-18 | 2017-09-01 | 北京工商大学 | A kind of method of the online denoising of sensing data based on adaptive-filtering |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9221170B2 (en) * | 2013-06-13 | 2015-12-29 | GM Global Technology Operations LLC | Method and apparatus for controlling a robotic device via wearable sensors |
2017
- 2017-11-30 CN CN201711232485.7A patent/CN108051001B/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107122724A (en) * | 2017-04-18 | 2017-09-01 | 北京工商大学 | A kind of method of the online denoising of sensing data based on adaptive-filtering |
Non-Patent Citations (3)
Title |
---|
Design and Implementation of a Gesture-Controlled Car Motion System; Liu Liang; Digital Technology and Application; 2017-02-28; Sections 3.2 and 4.3 *
Research and Application of an Attitude Angle Estimation Algorithm Based on Quaternions and Kalman Filtering; Chen Wei; China Masters' Theses Full-text Database, Information Science and Technology; China Academic Journals (CD) Electronic Publishing House; 2016-01-15 (No. 1); Section 2.4.3 *
Research on Operator Attitude Calculation Methods for a Teleoperated Nursing Robot System; Zuo Guoyu et al.; Acta Automatica Sinica; 2016-12-30; Vol. 42, No. 12; Sections 1, 2.1, 2.2 and 4.1 *
Also Published As
Publication number | Publication date |
---|---|
CN108051001A (en) | 2018-05-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Wilson et al. | Formulation of a new gradient descent MARG orientation algorithm: Case study on robot teleoperation | |
WO2019192172A1 (en) | Attitude prediction method and apparatus, and electronic device | |
EP3737912B1 (en) | Determining the location of a mobile device | |
Du et al. | Markerless human–manipulator interface using leap motion with interval Kalman filter and improved particle filter | |
CN110986939B (en) | Visual inertia odometer method based on IMU (inertial measurement Unit) pre-integration | |
US9221170B2 (en) | Method and apparatus for controlling a robotic device via wearable sensors | |
CN111666891B (en) | Method and device for estimating movement state of obstacle | |
CN108519090B (en) | Method for realizing double-channel combined attitude determination algorithm based on optimized UKF algorithm | |
Zhang et al. | IMU data processing for inertial aided navigation: A recurrent neural network based approach | |
CN105953796A (en) | Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone | |
CN108051001B (en) | Robot movement control method and system and inertial sensing control device | |
Chen et al. | RNIN-VIO: Robust neural inertial navigation aided visual-inertial odometry in challenging scenes | |
CN103517789A (en) | Device and method for controlling prediction of motion | |
EP3164786B1 (en) | Apparatus and method for determining an intended target | |
KR20180020262A (en) | Technologies for micro-motion-based input gesture control of wearable computing devices | |
CN111723624B (en) | Head movement tracking method and system | |
CN111145251A (en) | Robot, synchronous positioning and mapping method thereof and computer storage device | |
CN109655059B (en) | Vision-inertia fusion navigation system and method based on theta-increment learning | |
CN108592907A (en) | A kind of quasi real time step-by-step movement pedestrian navigation method based on bidirectional filtering smoothing technique | |
CN110572139A (en) | fusion filtering implementation method and device for vehicle state estimation, storage medium and vehicle | |
CN105509748B (en) | The air navigation aid and device of robot | |
CN115741717A (en) | Three-dimensional reconstruction and path planning method, device, equipment and storage medium | |
CN107203271B (en) | Double-hand recognition method based on multi-sensor fusion technology | |
US11782522B1 (en) | Methods and systems for multimodal hand state prediction | |
Golroudbari et al. | End-to-end deep learning framework for real-time inertial attitude estimation using 6dof imu |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||