CN108051001B - A robot movement control method, system and inertial sensing control device - Google Patents
- Publication number
- CN108051001B (application CN201711232485.7A)
- Authority
- CN
- China
- Prior art keywords
- angular velocity
- value
- time
- derivative
- state
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
Abstract
The invention relates to a robot movement control method, comprising: acquiring angular velocity data from an inertial sensor and applying filtering preprocessing to the angular velocity data; establishing a quaternion differential equation from the angular velocity data and solving the quaternion differential equation with the Runge-Kutta method to obtain an attitude matrix containing the target attitude angle; converting the target attitude angle from the carrier coordinate system to the navigation coordinate system; excluding those target attitude angles in the navigation coordinate system that are not within a threshold range; and controlling the robot's actions according to the target attitude angles within the threshold range. The invention can control robot movement with an inertial sensor, offers high accuracy and good online recognition performance, is broadly applicable, and has promising application prospects. The invention also provides a robot movement control system and an inertial sensing control device.
Description
Technical Field
The present invention relates to the technical field of robot control, and in particular to a robot movement control method, system, and inertial sensing control device.
Background Art
In the many extended applications of robots, human-computer interaction plays an important role, and human-robot interaction is an important topic in robotics, especially in the field of life-assistance robots. The search for more natural and more human-friendly ways of interacting with machines has never stopped. Using gestures to control a robot can replace complicated and cumbersome program operations, allowing a user to manipulate the robot, issue commands to it, and interact with it simply and conveniently, and this has become a research hotspot.
The essence of gesture recognition is to infer the user's operating intention from small gesture movements; it belongs to the category of multi-channel interaction, and its research involves a series of related disciplines such as pattern recognition, robotics, image processing, and computer vision. Research on gesture recognition not only promotes the development of these disciplines to a certain extent, but also has great practical significance because of some inherent advantages of gesture-based operation.
At present there are two main approaches to gesture recognition. One is vision-based gesture recognition, which was developed relatively early and is relatively mature, but it imposes strict requirements on equipment and environment, so its use is limited. The other is gesture recognition based on inertial sensors, which is not affected by the environment or by lighting and recognizes gestures mainly by measuring changes in acceleration and angular velocity. However, inertial sensors suffer from drift errors, so recognizing fine movements such as finger motions still suffers from low accuracy and unreliable judgments.
Summary of the Invention
The technical problem to be solved by the present invention is to provide, in view of the deficiencies of the prior art, a robot movement control method, system, and inertial sensing control device.
The technical solution of the present invention to solve the above technical problem is as follows: a robot movement control method, comprising:
S1, acquiring angular velocity data from an inertial sensor;
S2, performing online adaptive filtering preprocessing on the angular velocity data;
S3, establishing a quaternion differential equation from the angular velocity data preprocessed by online adaptive filtering, solving the quaternion differential equation with the Runge-Kutta method, and obtaining an attitude matrix containing the target attitude angle;
S4, converting the target attitude angle from the carrier coordinate system to the navigation coordinate system;
S5, excluding those target attitude angles in the navigation coordinate system that are not within a threshold range;
S6, controlling the robot's actions according to the target attitude angles within the threshold range.
Another technical solution of the present invention to solve the above technical problem is as follows: a robot movement control system, comprising:
an acquisition unit for acquiring angular velocity data from an inertial sensor;
a preprocessing unit for performing online adaptive filtering preprocessing on the angular velocity data;
a processing unit for establishing a quaternion differential equation from the angular velocity data preprocessed by online adaptive filtering, solving the quaternion differential equation with the Runge-Kutta method, and obtaining an attitude matrix containing the target attitude angle;
a coordinate system conversion unit for converting the target attitude angle from the carrier coordinate system to the navigation coordinate system;
a screening unit for excluding those target attitude angles in the navigation coordinate system that are not within a threshold range;
a control unit for controlling the robot's actions according to the target attitude angles within the threshold range.
Another technical solution of the present invention to solve the above technical problem is as follows: an inertial sensing control device, comprising the robot movement control system described in the above technical solution, the inertial sensing control device communicating wirelessly with the robot.
The beneficial effects of the present invention are as follows. To address the real-time processing of sensor data, the invention uses an online adaptive filtering method to denoise the data online, which reduces the influence of noise on the subsequently updated attitude matrix and makes the attitude angles solved from the attitude matrix more accurate. The quaternion method is used to describe the attitude matrix and solve the differential equation, which requires little computation, achieves high accuracy, and avoids falling into the "singularity". Thresholds are used to exclude unintended finger operations, so that different small gestures can be recognized. The invention can control robot movement with an inertial sensor, offers high accuracy and good online recognition performance, is broadly applicable, and has promising application prospects.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a robot movement control method provided by an embodiment of the present invention;
FIG. 2 is a flowchart of the system signal processing provided by an embodiment of the present invention;
FIG. 3 shows the changes in the pitch angle and the heading angle when the finger moves up and down, according to an embodiment of the present invention;
FIG. 4 shows the changes in the pitch angle and the heading angle when the finger moves left and right, according to an embodiment of the present invention;
FIG. 5 is a schematic structural block diagram of a robot movement control system provided by an embodiment of the present invention.
Detailed Description of the Embodiments
The principles and features of the present invention are described below with reference to the accompanying drawings. The examples given are intended only to explain the present invention and not to limit its scope.
FIG. 1 is a schematic flowchart of a robot movement control method provided by an embodiment of the present invention. As shown in FIG. 1, the method includes:
S1, acquiring angular velocity data from an inertial sensor, where the inertial sensor may be worn on the user's finger;
S2, performing online adaptive filtering preprocessing on the angular velocity data;
S3, establishing a quaternion differential equation from the angular velocity data preprocessed by online adaptive filtering, solving the quaternion differential equation with the Runge-Kutta method, and obtaining an attitude matrix containing the target attitude angle;
S4, converting the target attitude angle from the carrier coordinate system to the navigation coordinate system;
S5, excluding those target attitude angles in the navigation coordinate system that are not within a threshold range;
S6, controlling the robot's actions according to the target attitude angles within the threshold range.
In this embodiment, to address the real-time processing of sensor data, online adaptive filtering is used for online denoising; the quaternion method is used to describe the attitude matrix and solve the differential equation, which requires little computation, achieves high accuracy, and avoids falling into the "singularity"; and thresholds are used to exclude unintended finger operations, so that different small gestures can be recognized. The invention can control robot movement with an inertial sensor, offers high accuracy and good online recognition performance, is broadly applicable, and has promising application prospects.
Optionally, as an embodiment of the present invention, performing online adaptive filtering preprocessing on the angular velocity data includes:
S2.1, initializing the state quantities and the system adaptive parameters. Specifically,
2.1.1, setting the initial state value x(0|0), which is generally set to a 3-dimensional all-zero column vector whose dimension equals that of the state vector of the state process model; the angular velocity, the first derivative of the angular velocity, and the second derivative of the angular velocity form the state vector;
2.1.2, setting the initial values of the system adaptive parameters, α = α0 and σa² = σa0², to arbitrary positive numbers, for example α0 is taken as 3;
2.1.3, setting the initial values r0(0) and r0(1) of the autocorrelation function;
2.1.4, setting the initial value of the second derivative of the angular velocity, which is generally taken as 0.
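As a minimal Python sketch of this initialization, the snippet below sets up the quantities of step S2.1; the variable names, the covariance seed P, and the unspecified initial values (for example sigma_a2_0 and the autocorrelation seeds) are illustrative assumptions rather than values taken from the patent.

```python
import numpy as np

# Step S2.1 initialization sketch (names and unspecified seed values are assumed)
x_hat = np.zeros(3)           # 2.1.1: state = [omega, omega_dot, omega_ddot]
P = np.eye(3)                 # covariance seed; not specified in the text, assumed
alpha = alpha_0 = 3.0         # 2.1.2: maneuvering frequency, alpha_0 taken as 3
sigma_a2 = sigma_a2_0 = 1.0   # 2.1.2: second-derivative variance, any positive number
r0_0, r0_1 = sigma_a2_0, sigma_a2_0  # 2.1.3: autocorrelation seeds r0(0), r0(1) (assumed)
a_bar = 0.0                   # 2.1.4: mean of the second derivative, initially 0
T = 0.01                      # sampling period in seconds (assumed 100 Hz IMU)
```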
S2.2, establishing a state process model with system adaptive parameters. Specifically,
2.2.1, describing the movement characteristics of the target by the following formulas.
Similar to the Singer model, the second derivative of the angular velocity is taken to be a time-correlated random process with non-zero mean, ω̈(t) = ā + a(t), where ā is the mean of the second derivative of the angular velocity and a(t) is a zero-mean exponentially correlated colored-noise model whose correlation function is
Ra(τ) = σa²e^(−α|τ|),
where t is any sampling time, τ is the correlation lag, Ra(τ) denotes the correlation function, σa² denotes the acceleration (second-derivative) variance, and α is the maneuvering frequency, which reflects the random character of the state changes.
Whitening the colored noise a(t) gives
da(t)/dt = −αa(t) + w(t),
where w(t) is zero-mean white noise with variance σw² = 2ασa².
Combining ω̈(t) = ā + a(t) with da(t)/dt = −αa(t) + w(t), the continuous-time state equation of the state change is obtained as follows:
dx(t)/dt = [[0, 1, 0], [0, 0, 1], [0, 0, −α]]x(t) + [0, 0, α]ᵀā + [0, 0, 1]ᵀw(t).
Sampling with period T and discretizing, the state change of the system satisfies the following equation:
x(k+1) = A(k+1,k)x(k) + U(k)ā(k) + w(k),
where x(k) = [ω(k), ω̇(k), ω̈(k)]ᵀ is the 3-dimensional state column vector whose components are the angular velocity, the first derivative of the angular velocity, and the second derivative of the angular velocity; x(k+1) is the state vector at time k+1; k is the sampling instant; A(k+1,k) is the state transition matrix; x(k) is the state vector of the target at time k; U(k) is the control matrix; ā(k) is the mean of the second derivative of the angular velocity of the target from time 0 to time k; and w(k) is the process noise, with zero mean and variance Q(k). A(k+1,k), U(k), and Q(k) contain the maneuvering frequency α and the variance σa² of the second derivative of the angular velocity, and therefore change as the system adaptive parameters change. The state transition matrix A(k+1,k) is
A(k+1,k) = [[1, T, (αT − 1 + e^(−αT))/α²], [0, 1, (1 − e^(−αT))/α], [0, 0, e^(−αT)]].
The control matrix U(k) is
U(k) = [(−T + αT²/2 + (1 − e^(−αT))/α)/α, T − (1 − e^(−αT))/α, 1 − e^(−αT)]ᵀ.
The variance Q(k) of the process noise w(k) is
Q(k) = 2ασa²[[q11, q12, q13], [q12, q22, q23], [q13, q23, q33]],
where
q11 = [1 − e^(−2αT) + 2αT + 2α³T³/3 − 2α²T² − 4αTe^(−αT)]/(2α⁵),
q12 = [e^(−2αT) + 1 − 2e^(−αT) + 2αTe^(−αT) − 2αT + α²T²]/(2α⁴),
q13 = [1 − e^(−2αT) − 2αTe^(−αT)]/(2α³),
q22 = [4e^(−αT) − 3 − e^(−2αT) + 2αT]/(2α³),
q23 = [e^(−2αT) + 1 − 2e^(−αT)]/(2α²),
q33 = [1 − e^(−2αT)]/(2α).
2.2.2, the measurement equation is as follows:
y(k) = H(k)x(k) + v(k) (8)
where k is the sampling instant, y(k) is the measurement value at time k, H(k) is the measurement matrix, x(k) is the state vector of the target at time k, and v(k) is Gaussian white measurement noise with variance R, independent of the process noise w(k).
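As a sketch, the model matrices of step S2.2 can be built for a given α, σa², and sampling period T as below. The closed-form elements follow the Singer-type discretization reproduced above, and the measurement matrix H is taken as [1, 0, 0] on the assumption that the gyroscope measures the angular velocity component directly; both choices should be read as a reconstruction, not a quotation of the patent's figures.

```python
import numpy as np

def transition_matrix(alpha, T):
    """A(k+1,k) for the state [omega, omega_dot, omega_ddot]."""
    e = np.exp(-alpha * T)
    return np.array([
        [1.0, T,   (alpha * T - 1.0 + e) / alpha**2],
        [0.0, 1.0, (1.0 - e) / alpha],
        [0.0, 0.0, e],
    ])

def control_matrix(alpha, T):
    """U(k): column multiplying the current mean of the second derivative."""
    e = np.exp(-alpha * T)
    return np.array([
        (-T + alpha * T**2 / 2.0 + (1.0 - e) / alpha) / alpha,
        T - (1.0 - e) / alpha,
        1.0 - e,
    ])

def process_noise(alpha, sigma_a2, T):
    """Q(k) = 2*alpha*sigma_a2*[q_ij], symmetric, standard Singer-model elements."""
    e, e2 = np.exp(-alpha * T), np.exp(-2.0 * alpha * T)
    q11 = (1 - e2 + 2*alpha*T + 2*alpha**3*T**3/3 - 2*alpha**2*T**2 - 4*alpha*T*e) / (2*alpha**5)
    q12 = (e2 + 1 - 2*e + 2*alpha*T*e - 2*alpha*T + alpha**2*T**2) / (2*alpha**4)
    q13 = (1 - e2 - 2*alpha*T*e) / (2*alpha**3)
    q22 = (4*e - 3 - e2 + 2*alpha*T) / (2*alpha**3)
    q23 = (e2 + 1 - 2*e) / (2*alpha**2)
    q33 = (1 - e2) / (2*alpha)
    Q = np.array([[q11, q12, q13], [q12, q22, q23], [q13, q23, q33]])
    return 2.0 * alpha * sigma_a2 * Q

H = np.array([[1.0, 0.0, 0.0]])  # assumed: only the angular velocity itself is measured (eq. (8))
```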
S2.3, predicting the target's moving state according to the established state process model with system adaptive parameters, and obtaining the state prediction and the state covariance prediction.
2.3.1, completing the one-step state prediction according to the established state process model with system adaptive parameters and the initial values; the prediction equation is as follows:
x(k|k−1) = A(k,k−1)x(k−1|k−1) + U(k−1)ā(k−1) (9)
where x(k|k−1) denotes the state of the target at time k predicted at time k−1, k is the sampling instant, A(k,k−1) is the state transition matrix, x(k−1|k−1) denotes the state estimate of the target at time k−1, U(k−1) is the control matrix, and ā(k−1) is the mean of the second derivative of the angular velocity from time 0 to time k−1;
2.3.2, completing the one-step prediction of the state covariance according to the following formula:
P(k|k−1) = A(k,k−1)P(k−1|k−1)Aᵀ(k,k−1) + Q(k−1) (10)
where P(k|k−1) denotes the state covariance of the target at time k predicted at time k−1, k is the sampling instant, and | denotes the conditioning operator; P(k−1|k−1) denotes the estimate of the state covariance of the target at time k−1; A(k,k−1) is the state transition matrix; and Q(k−1) is the process noise covariance.
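In code, the one-step prediction of equations (9) and (10) is a direct transcription; the helper below takes the model matrices as arguments so that it stands alone.

```python
import numpy as np

def predict(x_est, P_est, a_bar, A, U, Q):
    """One-step prediction, eqs. (9)-(10): returns x(k|k-1) and P(k|k-1)."""
    x_pred = A @ x_est + U * a_bar   # eq. (9): state prediction
    P_pred = A @ P_est @ A.T + Q     # eq. (10): covariance prediction
    return x_pred, P_pred
```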
S2.4, updating the target's moving state according to the state prediction, the measured data value, and the state covariance prediction, and obtaining the state estimate.
2.4.1, calculating the filter gain from the state covariance prediction, the measurement matrix, and the measurement noise variance according to the following formula:
K(k) = P(k|k−1)Hᵀ(k)[H(k)P(k|k−1)Hᵀ(k) + R]⁻¹ (11)
where K(k) is the filter gain, k is the sampling instant, P(k|k−1) denotes the state covariance of the target at time k predicted at time k−1, H(k) is the measurement matrix at time k, R is the variance of the Gaussian white measurement noise, and Hᵀ(k) is the transpose of the measurement matrix at time k;
2.4.2, calculating the current state estimate of the target from the state prediction and the observed data value, as follows:
x(k|k) = x(k|k−1) + K(k)[y(k) − H(k)x(k|k−1)] (12)
where x(k|k) denotes the state estimate at time k, x(k|k−1) denotes the state at time k predicted at time k−1, k is the sampling instant, K(k) is the filter gain at time k, y(k) is the observation at time k, and H(k) is the measurement matrix at time k;
2.4.3, calculating the estimate of the state covariance according to the following formula:
P(k|k) = (I − K(k)H(k))P(k|k−1) (13)
where I is the 3-dimensional identity matrix, P(k|k) denotes the estimate of the state covariance at time k, k is the sampling instant, K(k) is the filter gain at time k, H(k) is the measurement matrix at time k, and P(k|k−1) denotes the state covariance at time k predicted at time k−1.
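The measurement update of equations (11) to (13) can likewise be sketched as a standalone function; here the measurement is treated as the scalar gyroscope sample, so R may be passed as a scalar.

```python
import numpy as np

def update(x_pred, P_pred, y, H, R):
    """Measurement update, eqs. (11)-(13); the bracketed term in (11) is inverted."""
    S = np.atleast_2d(H @ P_pred @ H.T + R)               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)                   # eq. (11): filter gain
    x_est = x_pred + K @ (np.atleast_1d(y) - H @ x_pred)  # eq. (12): state estimate
    P_est = (np.eye(len(x_pred)) - K @ H) @ P_pred        # eq. (13): covariance estimate
    return x_est, P_est
```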
S2.5, calculating the mean of the second derivative of the angular velocity and the estimate of the second derivative of the angular velocity from the state estimate.
The mean of the second derivative of the angular velocity is calculated by the following formula:
ā(k) = x3(k|k),
where ā(k) is the mean of the second derivative of the angular velocity from time 0 to time k and x3(k|k) is the third-row value of the state estimate at time k, with k the sampling instant. The estimates of the second derivative of the angular velocity at times k−1 and k, â(k−1) and â(k), are obtained according to the following formula:
â(k−1) = x3(k−1|k−1), â(k) = x3(k|k),
where x3(k−1|k−1) is the third-row value of the state estimate at time k−1 and x3(k|k) is the third-row value of the state estimate at time k.
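In code, step S2.5 amounts to reading the third state component; a minimal sketch:

```python
def second_derivative_stats(x_est, x_est_prev):
    """Step S2.5: mean and estimates of the angular-velocity second derivative."""
    a_bar = x_est[2]            # mean taken as the third row of the current estimate
    a_hat_k = x_est[2]          # estimate at time k
    a_hat_km1 = x_est_prev[2]   # estimate at time k-1
    return a_bar, a_hat_k, a_hat_km1
```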
S2.6, correcting the system adaptive parameters according to the estimate of the second derivative of the angular velocity.
Depending on the value of the sampling instant k, the method for correcting the system adaptive parameters α and σa² is selected: if k is less than or equal to 4, go to step 2.6.1; if k is greater than 4, go to step 2.6.2.
2.6.1, when the sampling instant k is less than or equal to 4, because few samples are available, the parameter-setting method of the current statistical model is used, and the system adaptive parameters α and σa² are calculated as follows:
α = α0, where α0 is the initial value of the system adaptive parameter α;
if â(k) > 0, then σa² = ((4 − π)/π)[aM − â(k)]²;
if â(k) < 0, then σa² = ((4 − π)/π)[a−M + â(k)]²;
otherwise, σa² is taken as an arbitrary number in (0, 10];
where â(k) is the estimate of the second derivative of the angular velocity at time k, π is the circular constant, taken as 3.14, aM is a positive constant, taken as 3, and a−M is a negative constant equal in absolute value to aM, taken as −3;
2.6.2, when the sampling instant k is greater than 4, the system adaptive parameters α and σa² are calculated from the correlation functions of the second-derivative estimates: the autocorrelation function rk(0) and the one-step-forward correlation function rk(1) are updated recursively from rk−1(0), rk−1(1) and the estimates â(k), â(k−1) using a constant b, as in equations (17) and (18), where b is a constant greater than 1, rk(1) is the one-step-forward correlation function of the second derivative of the angular velocity at time k, rk−1(1) is the one-step-forward correlation function of the second derivative of the angular velocity at time k−1, â(k−1) and â(k) are the estimates of the second derivative of the angular velocity at times k−1 and k, respectively, rk(0) is the autocorrelation function of the second derivative of the angular velocity at time k, and rk−1(0) is the autocorrelation function of the second derivative of the angular velocity at time k−1.
For example, b in equations (17) and (18) may be taken as 10.
According to the system equation, the second derivative of the angular velocity satisfies the following first-order Markov random sequence:
ω̈(k+1) = βω̈(k) + wa(k),
where ω̈(k+1) is the second derivative of the angular velocity at time k+1, ω̈(k) is the acceleration (second derivative) at time k, β is the maneuvering frequency of the discretized acceleration random sequence, and wa(k) is a zero-mean discrete white-noise sequence whose variance is determined by σw², the variance of the zero-mean white noise w(t); the relationship between β and α is β = e^(−αT);
The first-order Markov acceleration sequence satisfies the following parameter relations:
rk(0) = σa², rk(1) = σa²e^(−αT) = βrk(0),
where rk(1) is the one-step-forward autocorrelation function of the acceleration at time k, rk(0) is the autocorrelation function of the acceleration at time k, and α and β are, respectively, the maneuvering frequency of the acceleration and the maneuvering frequency of its discretized sequence. The adaptive parameters α and σa² can therefore be calculated according to the following formulas:
α = (1/T)ln[rk(0)/rk(1)], σa² = rk(0),
where rk(1) is the one-step-forward correlation function of the acceleration at time k, rk(0) is the autocorrelation function of the acceleration at time k, ln denotes the natural logarithm (base e), α and σa² are the system adaptive parameters, and T is the sampling interval.
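The parameter adaptation of steps 2.6.1 and 2.6.2 can be sketched as below. The fading-memory recursion used for rk(0) and rk(1) is an assumed stand-in consistent with the quantities listed around equations (17) and (18); the patent's exact recursion is given only in its figures and may differ in form.

```python
import math

def adapt_parameters(k, a_hat_k, a_hat_km1, r0_prev, r1_prev,
                     alpha_0=3.0, a_M=3.0, b=10.0, T=0.01):
    """Return (alpha, sigma_a2, r0, r1) for sampling instant k."""
    if k <= 4:
        # Step 2.6.1: current statistical model rule with a_M = 3, a_-M = -3
        alpha = alpha_0
        if a_hat_k > 0:
            sigma_a2 = (4.0 - math.pi) / math.pi * (a_M - a_hat_k) ** 2
        elif a_hat_k < 0:
            sigma_a2 = (4.0 - math.pi) / math.pi * (-a_M + a_hat_k) ** 2
        else:
            sigma_a2 = 1.0  # any value in (0, 10]
        return alpha, sigma_a2, r0_prev, r1_prev

    # Step 2.6.2: fading-memory autocorrelation estimates (assumed form of eqs. (17)-(18))
    r0 = ((b - 1.0) * r0_prev + a_hat_k * a_hat_k) / b
    r1 = ((b - 1.0) * r1_prev + a_hat_k * a_hat_km1) / b
    # First-order Markov relations: r(0) = sigma_a2, r(1) = sigma_a2 * exp(-alpha*T)
    sigma_a2 = r0
    alpha = math.log(r0 / r1) / T if r0 > r1 > 0 else alpha_0  # guard keeps alpha positive
    return alpha, sigma_a2, r0, r1
```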
S2.7, updating the state process model according to the mean of the second derivative of the angular velocity and the corrected system adaptive parameters, and obtaining the angular velocity data after online adaptive filtering.
Optionally, as an embodiment of the present invention, establishing a quaternion differential equation from the angular velocity data, solving the quaternion differential equation with the Runge-Kutta method, and obtaining an attitude matrix containing the target attitude angle includes:
S3.1, establishing a quaternion differential equation from the filtered angular velocity data and the quaternion;
Specifically, using the filtered angular velocity information from the gyroscope of the inertial sensor, the following differential equation is established with quaternions:
dq(t)/dt = (1/2)q(t) ο ω(t),
where the left-hand side is the derivative of the quaternion, q(t) denotes the quaternion, the symbol ο denotes quaternion multiplication, and ω(t) is the matrix expression of the three-axis angular velocity;
S3.2, solving the quaternion differential equation with the fourth-order Runge-Kutta method to obtain the attitude matrix described by the quaternion, and updating the attitude matrix, and hence the target attitude angle, by updating the element values of the quaternion;
wherein the initial slope values required by the fourth-order Runge-Kutta method to solve the quaternion differential equation are determined by the initial value of the quaternion, and the initial value of the quaternion is determined by the initial value of the target attitude angle.
The above differential equation is solved with the fourth-order Runge-Kutta method. The initial slope values required by the first step of this iterative update algorithm can be determined from the initial value of the quaternion, and the initial value of the quaternion is determined from the initial attitude angles. After the quaternion elements of the next step have been calculated, they can be substituted into the attitude matrix described by the quaternion, and the attitude angle information at the current moment can then be solved.
K1 = f(t, q(t)), K2 = f(t + h/2, q(t) + (h/2)K1), K3 = f(t + h/2, q(t) + (h/2)K2), K4 = f(t + h, q(t) + hK3), q(t + h) = q(t) + (h/6)(K1 + 2K2 + 2K3 + K4), with f(t, q) = (1/2)q ο ω(t), where K denotes the slopes, t is the current time, and h is the update step size, which is usually the same as the sensor sampling period; ω(t) is the matrix expression of the three-axis angular velocity.
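A self-contained sketch of this fourth-order Runge-Kutta quaternion update follows, with the step size h equal to the sampling period; re-normalizing the quaternion after each step is added here as a common numerical safeguard and is not taken from the patent text.

```python
import numpy as np

def quat_mult(p, q):
    """Hamilton product p o q for quaternions stored as [w, x, y, z]."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def q_dot(q, omega):
    """Right-hand side of the kinematic equation: 0.5 * q o [0, omega]."""
    return 0.5 * quat_mult(q, np.concatenate(([0.0], omega)))

def rk4_quaternion_step(q, omega, h):
    """One fourth-order Runge-Kutta step; omega is held constant over the step."""
    k1 = q_dot(q, omega)
    k2 = q_dot(q + 0.5 * h * k1, omega)
    k3 = q_dot(q + 0.5 * h * k2, omega)
    k4 = q_dot(q + h * k3, omega)
    q_next = q + (h / 6.0) * (k1 + 2.0*k2 + 2.0*k3 + k4)
    return q_next / np.linalg.norm(q_next)  # keep unit norm (numerical safeguard)
```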
Optionally, as an embodiment of the present invention, converting the target attitude angle from the carrier coordinate system to the navigation coordinate system includes: solving the attitude matrix C_b^n with the quaternion method,
C_b^n = [[q0² + q1² − q2² − q3², 2(q1q2 − q0q3), 2(q1q3 + q0q2)], [2(q1q2 + q0q3), q0² − q1² + q2² − q3², 2(q2q3 − q0q1)], [2(q1q3 − q0q2), 2(q2q3 + q0q1), q0² − q1² − q2² + q3²]].
The attitude angles are then solved from the attitude matrix as follows:
θ_main = arcsin C32,
with the principal values of the roll angle γ_main and the heading angle ψ_main obtained from arctangent functions of the corresponding elements of the attitude matrix.
In order to determine the true values of the attitude angles ψ, θ, and γ accurately, the domains of the attitude angles are defined as follows: the domain of the pitch angle is (−90°, 90°), the domain of the roll angle is (−180°, 180°), and the domain of the heading angle is (0°, 360°). In the formulas, θ_main, γ_main, and ψ_main are, respectively, the pitch angle, roll angle, and heading angle calculated from the attitude matrix, and θ, γ, and ψ denote the angle values converted into their domains.
θ = θ_main, and γ and ψ are obtained by mapping γ_main and ψ_main into their respective domains according to the signs of the corresponding attitude-matrix elements.
Since the inertial components of a strapdown inertial navigation system are rigidly fixed to the carrier, the output values of the sensors are all expressed in the carrier coordinate system. These measured values must therefore be converted into a coordinate system that is convenient for calculating the required navigation parameters, namely the navigation coordinate system. The attitude matrix is precisely the transformation that takes the data measured in the carrier coordinate system from the carrier coordinate system to the navigation coordinate system; once the attitude matrix of the carrier has been determined, the attitude angles of the carrier can be expressed.
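The sketch below forms the attitude matrix from the quaternion and extracts the attitude angles. Only the pitch relation θ = arcsin C32 is stated explicitly in the text; the roll and heading extractions used here (and the underlying ENU axis convention) are a common strapdown choice and should be read as an assumption rather than as the patent's exact formulas.

```python
import numpy as np

def attitude_matrix(q):
    """C_b^n (body -> navigation) from a unit quaternion q = [q0, q1, q2, q3]."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3),             2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),             q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),             2*(q2*q3 + q0*q1),             q0*q0 - q1*q1 - q2*q2 + q3*q3],
    ])

def attitude_angles(q):
    """Pitch from arcsin(C32) as in the text; roll/heading use an assumed convention."""
    C = attitude_matrix(q)
    pitch = np.degrees(np.arcsin(np.clip(C[2, 1], -1.0, 1.0)))  # theta = arcsin(C32)
    roll  = np.degrees(np.arctan2(-C[2, 0], C[2, 2]))           # gamma (assumed form)
    yaw   = np.degrees(np.arctan2(C[0, 1], C[1, 1]))            # psi (assumed form)
    return pitch, roll, yaw
```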
FIG. 2 shows the signal processing flow of the system. As can be seen from the figure, the data collected by the inertial sensing control device is filtered, the quaternion is solved to obtain the attitude angles, and finally the control command for the robot is obtained.
FIG. 3 shows the changes in the pitch angle and the heading angle when the finger moves up and down; in the figure, Pitch denotes the pitch angle and Yaw denotes the heading angle. As can be seen from the figure, the pitch angle varies periodically with the up-and-down motion of the finger, while the heading angle changes little.
FIG. 4 shows the changes in the pitch angle and the heading angle when the finger moves left and right; in the figure, Pitch denotes the pitch angle and Yaw denotes the heading angle. As can be seen from the figure, the heading angle varies periodically with the left-and-right motion of the finger, while the pitch angle changes little.
Optionally, as an embodiment of the present invention, excluding those target attitude angles in the navigation coordinate system that are not within the threshold range includes the following.
The gestures are defined as four actions: finger up, finger down, finger right, and finger left. Through experiments we found that when the finger is raised the pitch angle is positive, and this gesture is defined to command the robot to move forward; when the finger is lowered the pitch angle is negative, and this gesture is defined to command the robot to stop; when the finger points to the right the heading angle is positive, and this gesture is defined to command the robot to turn right; when the finger points to the left the heading angle is negative, and this gesture commands the robot to turn left.
In this process, when the inertial sensor is worn on the finger, both slight and large finger movements can make finger recognition inaccurate. Experiments show that when the finger is raised (Finger_Up), a pitch angle between 20 degrees (threshold 1, denoted T1) and 50 degrees (threshold 2, denoted T2) is the normal range of finger movement and is taken as the correct gesture for commanding the robot to move forward; when the finger points down (Finger_Down), a pitch angle between −20 degrees and −50 degrees is the normal range of finger movement and is taken as the correct gesture for commanding the robot to stop; when the finger turns right (Finger_Right), a heading angle between 20 degrees (threshold 3, denoted T3) and 40 degrees (threshold 4, denoted T4) is the normal range of finger turning and is taken as the correct gesture for commanding the robot to turn right; and when the finger turns left (Finger_Left), a heading angle between −20 degrees and −40 degrees is the normal range of finger turning and is taken as the correct gesture for commanding the robot to turn left.
Using the above thresholds, the following gesture-control rules can be obtained,
where Pitch denotes the pitch angle and Yaw denotes the heading angle.
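A straightforward encoding of these threshold rules is sketched below; the angles are in degrees, and the returned strings match the "G", "S", "R", and "L" commands used in the following embodiment (the rule table referenced above is given only in the patent's figures).

```python
# Threshold screening of the pitch (Pitch) and heading (Yaw) angles, in degrees.
T1, T2 = 20.0, 50.0   # pitch thresholds for Finger_Up / Finger_Down
T3, T4 = 20.0, 40.0   # heading thresholds for Finger_Right / Finger_Left

def classify_gesture(pitch, yaw):
    """Map attitude angles to a robot command; None means no valid gesture."""
    if T1 <= pitch <= T2:
        return "G"   # Finger_Up    -> go forward
    if -T2 <= pitch <= -T1:
        return "S"   # Finger_Down  -> stop
    if T3 <= yaw <= T4:
        return "R"   # Finger_Right -> turn right
    if -T4 <= yaw <= -T3:
        return "L"   # Finger_Left  -> turn left
    return None      # outside all threshold ranges: treated as unintended motion
```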
Optionally, as an embodiment of the present invention, controlling the robot's actions according to the target attitude angles within the threshold range includes the following.
After the sensor is worn on the finger, the sensor communicates with the robot through a WIFI connection. The inertial sensor collects the movement signals of the finger, the computer acquires the movement signals and determines the attitude of the finger, and the movement command is then transmitted to the robot through the wireless module. Table 1 lists the four commands described by "G", "S", "R", and "L". When the gesture is upward, the sensor captures the finger movement and sends it to the computer for attitude judgment; when the set threshold range is satisfied, the computer sends the "G" command to the robot and the robot moves forward. When the gesture is downward, the sensor captures the finger movement and sends it to the computer for attitude judgment; when the set threshold range is satisfied, the computer sends the "S" command to the robot and the robot stops. When the gesture is to the right, the sensor captures the finger movement and sends it to the computer for attitude judgment; when the set threshold range is satisfied, the computer sends the "R" command to the robot and the robot turns right. When the gesture is to the left, the sensor captures the finger movement and sends it to the computer for attitude judgment; when the set threshold range is satisfied, the computer sends the "L" command to the robot and the robot turns left.
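As an illustration of this wireless command path, the snippet below sends a one-character command over a plain TCP socket; the host address, the port, and the use of a raw TCP text protocol are assumptions made for the sketch and are not details taken from the patent or from the NAO SDK.

```python
import socket

ROBOT_ADDRESS = ("192.168.1.10", 9000)   # assumed robot IP address and port

def send_command(command):
    """Send a single-character command ("G", "S", "R" or "L") to the robot."""
    with socket.create_connection(ROBOT_ADDRESS, timeout=1.0) as conn:
        conn.sendall(command.encode("ascii"))
```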
The present invention is aimed at accurately recognizing finger movements with an inertial sensor. Taking finger control of a robot as an example, the experimental flow is shown in FIG. 1. We selected a typical robot platform, the NAO robot. This robot system has a complete self-balancing module, and when a command is input, NAO can walk smoothly on its own; therefore, we only consider whether NAO can obtain accurate gesture-based commands.
The gesture recognition method provided by the present invention relies mainly on the inertial sensor for data collection and is realized as a system by controlling the movement of the robot. First, the gyroscope and the accelerometer are used to measure the angular and linear movement of the carrier, respectively, and online adaptive filtering is performed to remove noise. Then, the fourth-order Runge-Kutta method is used to solve the differential equation described by the quaternion method, and the pitch angle and the heading angle are calculated from the quaternion elements through inverse trigonometric functions. Finally, the recognition thresholds of the pitch angle and the heading angle are determined adaptively according to the range of finger movement, so that erroneous finger operations are excluded.
The present invention collects small-range finger motion data through an inertial sensor, performs online adaptive filtering and denoising on the data, constructs a quaternion attitude matrix, solves the quaternion differential equation with the Runge-Kutta method, and updates the quaternion so as to update the attitude matrix and obtain real-time attitude angles; and, because small-range finger movements are being recognized, thresholds are set to exclude erroneous finger operations.
The robot movement control method provided by the embodiments of the present invention has been described in detail above with reference to FIGS. 1 to 4. The robot movement control system provided by the embodiments of the present invention is described in detail below with reference to FIG. 5.
FIG. 5 is a schematic structural block diagram of a robot movement control system provided by an embodiment of the present invention. As shown in FIG. 5, the system includes an acquisition unit 510, a preprocessing unit 520, a processing unit 530, a coordinate system conversion unit 540, a screening unit 550, and a control unit 560.
The acquisition unit 510 acquires angular velocity data from the inertial sensor, where the inertial sensor may be worn on the user's finger; the preprocessing unit 520 performs online adaptive filtering preprocessing on the angular velocity data; the processing unit 530 establishes a quaternion differential equation from the angular velocity data preprocessed by online adaptive filtering, solves the quaternion differential equation with the Runge-Kutta method, and obtains an attitude matrix containing the target attitude angle; the coordinate system conversion unit 540 converts the target attitude angle from the carrier coordinate system to the navigation coordinate system; the screening unit 550 excludes those target attitude angles in the navigation coordinate system that are not within the threshold range; and the control unit 560 controls the robot's actions according to the target attitude angles within the threshold range.
In this embodiment, to address the real-time processing of sensor data, online adaptive filtering is used for online denoising; the quaternion method is used to describe the attitude matrix and solve the differential equation, which requires little computation, achieves high accuracy, and avoids falling into the "singularity"; and thresholds are used to exclude unintended finger operations, so that different small gestures can be recognized. The invention can control robot movement with an inertial sensor, offers high accuracy and good online recognition performance, is broadly applicable, and has promising application prospects.
The system is realized mainly by combining attitude angle calculation with adaptive threshold analysis. First, the measured data are denoised by adaptive filtering; then the quaternion-based strapdown inertial navigation attitude algorithm is carried out, including the quaternion attitude matrix, the extraction and calculation of the navigation parameters, the setting of the initial conditions, and the calculation of the initial data, and the heading, attitude, velocity, and position information is continuously computed by updating the attitude matrix. Threshold analysis is then used to suppress noise and to detect erroneous operations, so that small gesture movements are detected and recognized in real time and the corresponding movement control of the robot is completed.
An embodiment of the present invention further provides an inertial sensing control device, comprising the robot movement control system described in the above technical solution, the inertial sensing control device communicating wirelessly with the robot.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the devices and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division into units is only a division by logical function, and there may be other ways of dividing them in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the solutions of the embodiments of the present invention.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711232485.7A CN108051001B (en) | 2017-11-30 | 2017-11-30 | A robot movement control method, system and inertial sensing control device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711232485.7A CN108051001B (en) | 2017-11-30 | 2017-11-30 | A robot movement control method, system and inertial sensing control device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108051001A CN108051001A (en) | 2018-05-18 |
CN108051001B true CN108051001B (en) | 2020-09-04 |
Family
ID=62121365
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711232485.7A Active CN108051001B (en) | 2017-11-30 | 2017-11-30 | A robot movement control method, system and inertial sensing control device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108051001B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020220284A1 (en) * | 2019-04-30 | 2020-11-05 | 深圳市大疆创新科技有限公司 | Aiming control method, mobile robot and computer-readable storage medium |
CN113496165B (en) * | 2020-04-01 | 2024-04-16 | 京东科技信息技术有限公司 | User gesture recognition method and device, hand intelligent wearable device and storage medium |
CN114102600B (en) * | 2021-12-02 | 2023-08-04 | 西安交通大学 | Multi-space fusion human-machine skill migration and parameter compensation method and system |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107122724A (en) * | 2017-04-18 | 2017-09-01 | 北京工商大学 | A kind of method of the online denoising of sensing data based on adaptive-filtering |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9221170B2 (en) * | 2013-06-13 | 2015-12-29 | GM Global Technology Operations LLC | Method and apparatus for controlling a robotic device via wearable sensors |
- 2017-11-30: Application CN201711232485.7A filed in China (CN); granted as patent CN108051001B (en), status Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107122724A (en) * | 2017-04-18 | 2017-09-01 | 北京工商大学 | A kind of method of the online denoising of sensing data based on adaptive-filtering |
Non-Patent Citations (3)
Title |
---|
Design and Implementation of a Gesture-Controlled Car Motion System; Liu Liang; Digital Technology and Application; 2017-02-28; sections 3.2 and 4.3 *
Research and Application of Attitude Angle Estimation Algorithms Based on Quaternions and Kalman Filtering; Chen Wei; China Masters' Theses Full-text Database, Information Science and Technology; China Academic Journals (CD) Electronic Publishing House; 2016-01-15 (No. 1); section 2.4.3 *
Research on Operator Attitude Calculation Methods for a Teleoperated Nursing Robot System; Zuo Guoyu et al.; Acta Automatica Sinica; 2016-12-30; Vol. 42, No. 12; sections 1, 2.1, 2.2, 4.1 *
Also Published As
Publication number | Publication date |
---|---|
CN108051001A (en) | 2018-05-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10649549B2 (en) | Device, method, and system to recognize motion using gripped object | |
Li | Human–robot interaction based on gesture and movement recognition | |
US9221170B2 (en) | Method and apparatus for controlling a robotic device via wearable sensors | |
CN105824420B (en) | A kind of gesture identification method based on acceleration transducer | |
CN110928432B (en) | Finger ring mouse, mouse control device and mouse control system | |
CN111566444A (en) | Determining a location of a mobile device | |
KR100630806B1 (en) | Command input method using gesture recognition device | |
US11079860B2 (en) | Kinematic chain motion predictions using results from multiple approaches combined via an artificial neural network | |
KR20180020262A (en) | Technologies for micro-motion-based input gesture control of wearable computing devices | |
Lu | A motion control method of intelligent wheelchair based on hand gesture recognition | |
CN108051001B (en) | A robot movement control method, system and inertial sensing control device | |
Wang et al. | Immersive human–computer interactive virtual environment using large-scale display system | |
CN104038799A (en) | Three-dimensional television-oriented gesture manipulation method | |
CN106406518A (en) | Gesture control device and gesture recognition method | |
CN103294226B (en) | A virtual input device and method | |
CN107203271B (en) | Hand Recognition Method Based on Multi-sensor Fusion Technology | |
CN110390281B (en) | Sign language recognition system based on sensing equipment and working method thereof | |
WO2023178984A1 (en) | Methods and systems for multimodal hand state prediction | |
CN106547339B (en) | Control method and device of computer equipment | |
Liu et al. | Ultrasonic positioning and IMU data fusion for pen-based 3D hand gesture recognition | |
Kim et al. | Development of a wearable HCI controller through sEMG & IMU sensor fusion | |
US9927917B2 (en) | Model-based touch event location adjustment | |
Jin et al. | Human-robot interaction for assisted object grasping by a wearable robotic object manipulation aid for the blind | |
Sung et al. | Motion quaternion-based motion estimation method of MYO using K-means algorithm and Bayesian probability | |
Lu et al. | I am the UAV: a wearable approach for manipulation of unmanned aerial vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |