WO2015165260A1 - Triaxial feature fusion method for human body movement identification - Google Patents


Info

Publication number: WO2015165260A1
Authority: WO (WIPO/PCT)
Prior art keywords: feature, axis, base, triaxial, fusion
Application number: PCT/CN2014/092630
Other languages: French (fr), Chinese (zh)
Inventors: 薛洋 (Xue Yang), 胡耀全 (Hu Yaoquan), 金连文 (Jin Lianwen)
Original assignee / applicant: 华南理工大学 (South China University of Technology)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing



Abstract

Provided is a triaxial feature fusion method for human body movement identification, comprising the following steps: (1) feature-basis representation of the triaxial features: represent the triaxial features as a linear combination of feature bases and determine the coefficients of the feature basis of each axis; (2) fusion weights: using the coefficients of each axis's feature basis, compute the fusion weight of each axis's features from their variance contribution rate; (3) triaxial feature fusion: fuse the triaxial features according to the contribution of each axis's features to the identification of different movements, thereby raising the movement identification rate. The present invention achieves a high movement identification accuracy.

Description

Three-axis feature fusion method for human motion recognition

Technical field

The invention relates to pattern recognition and artificial intelligence technology, and in particular to a three-axis feature fusion method for human motion recognition.

Background art

In recent years, with the development of personal electronic devices, smartphones have gained more built-in sensors and stronger embedded computing capability. When a smartphone is carried in a trouser pocket or a backpack, its acceleration sensor can detect the state of human motion as the motion frequency changes, which makes recognizing human motion behavior much more convenient; the smartphone acceleration sensor has thus become an ideal platform for classifying human motion patterns. However, classifying human motion patterns from smartphone acceleration sensors still faces great limitations and difficulties. One of these is the poor fusion capability of acceleration-signal features: for many features the recognition rate actually drops after fusion, and the fusion methods that do succeed rest on many special considerations, so their fusion effect is limited. For this reason, we propose a more generally applicable feature fusion algorithm with a high recognition rate.

Summary of the invention

The object of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing a three-axis feature fusion method for human motion recognition, namely an extraction method for fusion features of the three-axis acceleration signal of a smartphone.

The object of the present invention is achieved by the following technical solution: a three-axis feature fusion method for human motion recognition, comprising the following steps:

(1) Feature-basis representation of the three-axis features: represent the three-axis features as a linear combination of feature bases and determine the coefficients of the feature basis of each axis;

(2) Fusion weights: using the coefficients of each axis's feature basis, compute the fusion weight of each axis's features from their variance contribution rate;

(3) Three-axis feature fusion: fuse the three-axis features according to the contribution of each axis's features to the recognition of different actions, thereby raising the action recognition rate.

In step (1), the feature-basis representation expresses the three-axis features as a linear combination of feature bases, thereby determining the coefficients of the feature basis of each axis, as follows:

The time-frequency-domain features of each axis are extracted from a large sample set of three-axis acceleration signals covering multiple action classes, forming the three-axis feature vector space F = [F_x, F_y, F_z], where F_x, F_y and F_z are the feature vectors of the x-, y- and z-axes. The feature vector of each axis has dimension m, so F is a 3×m matrix. The three-axis feature vectors can then be expressed as a linear combination of the feature basis [X_1, X_2, …, X_m]:
F_x = A_x1 X_1 + A_x2 X_2 + … + A_xm X_m + ε_x
F_y = A_y1 X_1 + A_y2 X_2 + … + A_ym X_m + ε_y,    (1)
F_z = A_z1 X_1 + A_z2 X_2 + … + A_zm X_m + ε_z
where the feature basis [X_1, X_2, …, X_m] uniquely represents the feature vector of each axis; A_ij is a 3×m matrix of feature-basis coefficients, with i ∈ {x, y, z} and j = 1, 2, …, m; and ε_i denotes the error balance of each axis, i ∈ {x, y, z}.
The cost function for reconstructing the three-axis feature vector space F from the linear combination of the feature basis X = [X_1, X_2, …, X_m] by sparse coding is, in matrix form:
[Formula (2), rendered as an image in the original: the sparse-coding cost function J(A, X)]
Here the L1 norm imposes a sparsity penalty on the basis vectors X, while the coefficient matrix A of the feature basis is constrained with the L2 norm to keep it from becoming too large. However, the L1 norm of the basis vectors X is not differentiable at 0, so the above cost function cannot be optimized by gradient descent; to make it differentiable at 0, formula (2) is changed to:
[Formula (3), rendered as an image in the original: formula (2) with the L1 term replaced by a smooth approximation that is differentiable at 0]
Applying the following algorithm to formula (3) determines the feature basis X and the coefficient matrix A that minimize J(A, X). The algorithm comprises the following steps:

1) Randomly initialize A;

2) With the A given in step 1) fixed, solve for the X that minimizes J(A, X);

3) With the X obtained in step 2) fixed, solve for the A that minimizes J(A, X);

4) Repeat steps 2) and 3) until AX converges to F.
In step (2), the fusion weight coefficients of each axis's feature vector are extracted as follows:

The variance contribution rate can be computed from the coefficients of each axis's feature basis:
[Formula (4), rendered as an image in the original: the variance contribution rate of the feature-basis coefficients]
where Ā_i denotes the mean of the feature-basis coefficients of each axis;
Because the coefficients of the feature bases of the three axes of each action class should each remain stable near the mean of the three-axis coefficients, the feature space of each action class is stable, the feature spaces of different actions overlap less, and the recognition effect is better. The variance contribution rate of the feature-basis coefficients is therefore recomputed as follows:
[Formula (5), rendered as an image in the original: the variance contribution rate recomputed around the mean of the three-axis coefficients]
Compressing the amplitude of the variance contribution rates of the three axes' feature-basis coefficients yields the fusion weight matrix of the three-axis features, denoted W = [W_x, W_y, W_z], where W_x, W_y and W_z are the feature fusion weights of the x-, y- and z-axes; the fusion weight matrix W is likewise a 3×m matrix. The feature fusion weights after amplitude compression are expressed as:
[Formula (6), rendered as an image in the original: the amplitude-compressed feature fusion weights]
where [W_x, W_y, W_z] denotes the feature fusion weights after amplitude compression.

In the three-axis feature fusion of step (3), the fused feature vector is obtained from the fusion weights of the three-axis features:

EFF = [F_x, F_y, F_z] [W_x, W_y, W_z]^T,    (7)

where EFF denotes the fused feature vector.
The invention is thus a fusion method built from the feature-basis representation of the three-axis features, the determination of the fusion weights, and the three-axis feature fusion; the method can also be described as follows:

1. Feature-basis representation of the three-axis features;

The time-frequency-domain features of each axis are extracted from a large sample set of three-axis acceleration signals covering multiple action classes, forming the three-axis feature vector space F = [F_x, F_y, F_z], where F_x, F_y and F_z are the feature vectors of the x-, y- and z-axes. The feature vector of each axis has dimension m, so F is a 3×m matrix. The three-axis feature vectors can then be expressed as a linear combination of the feature basis [X_1, X_2, …, X_m]:
F_x = A_x1 X_1 + A_x2 X_2 + … + A_xm X_m + ε_x
F_y = A_y1 X_1 + A_y2 X_2 + … + A_ym X_m + ε_y,    (1)
F_z = A_z1 X_1 + A_z2 X_2 + … + A_zm X_m + ε_z
where the feature basis [X_1, X_2, …, X_m] uniquely represents the feature vector of each axis; A_ij is a 3×m matrix of feature-basis coefficients, with i ∈ {x, y, z} and j = 1, 2, …, m; and ε_i denotes the error balance of each axis, i ∈ {x, y, z}.

The cost function for reconstructing the three-axis feature vector space F from the linear combination of the feature basis X = [X_1, X_2, …, X_m] by sparse coding is, in matrix form:
[Formula (2), rendered as an image in the original: the sparse-coding cost function J(A, X)]
Here the L1 norm imposes a sparsity penalty on the basis vectors X, while the coefficient matrix A of the feature basis is constrained with the L2 norm to keep it from becoming too large. However, the L1 norm of the basis vectors X is not differentiable at 0, so the above cost function cannot be optimized by gradient descent; to make it differentiable at 0, formula (2) is changed to:
[Formula (3), rendered as an image in the original: formula (2) with the L1 term replaced by a smooth approximation that is differentiable at 0]
Applying the following algorithm to formula (3) determines the feature basis X and the coefficient matrix A that minimize J(A, X). The algorithm is as follows:

1) Randomly initialize A;

2) With the A given in step 1) fixed, solve for the X that minimizes J(A, X);

3) With the X obtained in step 2) fixed, solve for the A that minimizes J(A, X);

4) Repeat steps 2) and 3) until AX converges to F.

2. Determination of the fusion weights;

The fusion weight of each axis's features is computed from the variance contribution rate of that axis's feature-basis coefficients, as follows:

The variance contribution rate can be computed from the coefficients of each axis's feature basis:
[Formula (4), rendered as an image in the original: the variance contribution rate of the feature-basis coefficients]

where Ā_i denotes the mean of the feature-basis coefficients of each axis.
Because the coefficients of the feature bases of the three axes of each action class should each remain stable near the mean of the three-axis coefficients, the feature space of each action class is stable, the feature spaces of different actions overlap less, and the recognition effect is better. Therefore, based on formula (4), the variance contribution rate of each axis's feature-basis coefficients is recomputed as follows:
[Formula (5), rendered as an image in the original: the variance contribution rate recomputed around the mean of the three-axis coefficients]
Compressing the amplitude of the variance contribution rates of the three axes' feature-basis coefficients yields the fusion weight matrix of the three-axis features, denoted W = [W_x, W_y, W_z], where W_x, W_y and W_z are the feature fusion weights of the x-, y- and z-axes; the fusion weight matrix W is likewise a 3×m matrix. The feature fusion weights after amplitude compression are expressed as:
[Formula (6), rendered as an image in the original: the amplitude-compressed feature fusion weights]
3. Three-axis feature fusion;

The fused feature vector is obtained from the fusion weights of the three-axis features given by formula (6):

EFF = [F_x, F_y, F_z] [W_x, W_y, W_z]^T,    (7)

where EFF denotes the fused feature vector.
Compared with the prior art, the present invention has the following advantages and effects:

High action recognition accuracy: the invention fuses the three-axis features according to the contribution of each axis's features to the recognition of different actions. For the three-axis features extracted from the three-axis acceleration signal of a smartphone acceleration sensor, the feature-basis representation, the fusion weight determination and the three-axis feature fusion together achieve the goal of improving the accuracy of action recognition.
Brief description of the drawings

Figure 1 is a flow chart of the feature fusion method of the present invention.

Detailed description

The present invention is further described in detail below in combination with the embodiment and the drawings, but embodiments of the present invention are not limited thereto.

Embodiment

The input device used in this embodiment is a smartphone with a built-in three-axis acceleration sensor; the invention is implemented by processing the collected signals on a computer.

The system flow chart of the three-axis feature fusion algorithm for human motion recognition is shown in Figure 1; the specific steps are as follows:

1. Preprocessing;

The collected three-axis acceleration signals of different people are normalized into the range [−1, 1] by ratio-based linear normalization.
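The ratio-based linear normalization above can be sketched as follows. This is a minimal reading that assumes each signal is scaled by its maximum absolute value; the patent does not spell out the exact ratio formula, so `ratio_normalize` is an assumed form:

```python
import numpy as np

def ratio_normalize(signal):
    """Scale a 1-D acceleration signal into [-1, 1] by dividing every
    sample by the largest absolute value in the signal (assumed form of
    the 'ratio linear normalization' named in the text)."""
    peak = np.max(np.abs(signal))
    if peak == 0:
        return signal.astype(float)  # an all-zero signal stays all-zero
    return signal / peak

# One axis of a raw accelerometer recording (values in m/s^2)
raw = np.array([-9.8, 4.2, 19.6, -2.0])
norm = ratio_normalize(raw)  # every value now lies in [-1, 1]
```

Normalizing per person in this way removes amplitude differences between subjects before features are extracted.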
2. Feature-basis representation of the three-axis features, determining the coefficients of the feature basis of each axis;

The time-frequency-domain features of each axis are extracted from a large sample set of three-axis acceleration signals covering multiple action classes, forming the three-axis feature vector space F = [F_x, F_y, F_z], where F_x, F_y and F_z are the feature vectors of the x-, y- and z-axes. The feature vector of each axis has dimension m, so F is a 3×m matrix. The three-axis feature vectors can then be expressed as a linear combination of the feature basis [X_1, X_2, …, X_m]:
F_x = A_x1 X_1 + A_x2 X_2 + … + A_xm X_m + ε_x
F_y = A_y1 X_1 + A_y2 X_2 + … + A_ym X_m + ε_y,    (1)
F_z = A_z1 X_1 + A_z2 X_2 + … + A_zm X_m + ε_z
where the feature basis [X_1, X_2, …, X_m] uniquely represents the feature vector of each axis; A_ij is a 3×m matrix of feature-basis coefficients, with i ∈ {x, y, z} and j = 1, 2, …, m; and ε_i denotes the error balance of each axis, i ∈ {x, y, z}.

The cost function for reconstructing the three-axis feature vector space F from the linear combination of the feature basis X = [X_1, X_2, …, X_m] by sparse coding is, in matrix form:
[Formula (2), rendered as an image in the original: the sparse-coding cost function J(A, X)]
Here the L1 norm imposes a sparsity penalty on the basis vectors X, while the coefficient matrix A of the feature basis is constrained with the L2 norm to keep it from becoming too large. However, the L1 norm of the basis vectors X is not differentiable at 0, so the above cost function cannot be optimized by gradient descent; to make it differentiable at 0, formula (2) is changed to:
[Formula (3), rendered as an image in the original: formula (2) with the L1 term replaced by a smooth approximation that is differentiable at 0]
Applying the following algorithm to formula (3) determines the feature basis X and the coefficient matrix A that minimize J(A, X). The algorithm is as follows:

1) Randomly initialize A;

2) With the A given in step 1) fixed, solve for the X that minimizes J(A, X);

3) With the X obtained in step 2) fixed, solve for the A that minimizes J(A, X);

4) Repeat steps 2) and 3) until AX converges to F.
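The four steps above can be sketched as an alternating minimization. Because the published equations survive only as images, the cost used here, J(A, X) = ‖F − AX‖² + λ·Σ√(X² + ε) + γ‖A‖², is an assumed reconstruction from the penalties the text describes (smoothed L1 on X, L2 on A), and the update rules are one plausible choice, not the patent's exact procedure:

```python
import numpy as np

def sparse_code(F, lam=0.1, gamma=0.1, eps=1e-6, outer=200, inner=5, lr=0.01):
    """Alternating minimisation of an assumed smoothed-L1 sparse-coding
    cost J(A, X) = ||F - A X||^2 + lam*sum(sqrt(X^2 + eps)) + gamma*||A||^2.
    F is the 3 x m tri-axial feature matrix; returns (A, X) with
    A of shape 3 x m and X of shape m x m."""
    m = F.shape[1]
    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, m))      # step 1: random initialisation of A
    X = rng.standard_normal((m, m))
    for _ in range(outer):
        # step 2: with A fixed, take gradient steps on X (cost is smooth at 0)
        for _ in range(inner):
            grad_X = -2.0 * A.T @ (F - A @ X) + lam * X / np.sqrt(X**2 + eps)
            X -= lr * grad_X
        # step 3: with X fixed, the L2-regularised A has a closed form
        A = F @ X.T @ np.linalg.inv(X @ X.T + gamma * np.eye(m))
        # step 4: the loop repeats until A X approximates F
    return A, X

# Toy 3 x 4 tri-axial feature matrix standing in for real extracted features
F = np.array([[0.3, -0.8, 0.5, 0.1],
              [0.9, 0.2, -0.4, 0.6],
              [-0.2, 0.7, 0.3, -0.5]])
A, X = sparse_code(F)
residual = np.linalg.norm(F - A @ X)  # shrinks as AX converges toward F
```

The closed-form A update is an exact minimizer given X, so the reconstruction error cannot exceed that of A = 0; the X update relies on the smoothed L1 term being differentiable everywhere, which is the point of formula (3).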
3. Determining the fusion weights of the three-axis features from the variance contribution rates of the feature-basis coefficients;

The fusion weight of each axis's features is computed from the variance contribution rate of that axis's feature-basis coefficients, as follows:

The variance contribution rate can be computed from the coefficients of each axis's feature basis:
[Formula (4), rendered as an image in the original: the variance contribution rate of the feature-basis coefficients]
where Ā_i denotes the mean of the feature-basis coefficients of each axis.
Because the coefficients of the feature bases of the three axes of each action class should each remain stable near the mean of the three-axis coefficients, the feature space of each action class is stable, the feature spaces of different actions overlap less, and the recognition effect is better. Therefore, based on formula (4), the variance contribution rate of each axis's feature-basis coefficients is recomputed as follows:
[Formula (5), rendered as an image in the original: the variance contribution rate recomputed around the mean of the three-axis coefficients]
Compressing the amplitude of the variance contribution rates of the three axes' feature-basis coefficients yields the fusion weight matrix of the three-axis features, denoted W = [W_x, W_y, W_z], where W_x, W_y and W_z are the feature fusion weights of the x-, y- and z-axes; the fusion weight matrix W is likewise a 3×m matrix. The feature fusion weights after amplitude compression are expressed as:
[Formula (6), rendered as an image in the original: the amplitude-compressed feature fusion weights]
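Formulas (4) through (6) survive only as images, so the sketch below is an assumed reading: the weight of each axis for each basis is taken as the squared deviation of that axis's coefficient from the tri-axial mean, normalised per basis so the three axis weights sum to one; the exact amplitude-compression of formula (6) is likewise an assumption here:

```python
import numpy as np

def fusion_weights(A):
    """Assumed reading of formulas (4)-(6): per-basis variance
    contribution rates of the coefficient matrix A (3 x m, rows x/y/z),
    normalised column-wise as a stand-in for amplitude compression.
    Returns the 3 x m fusion weight matrix W = [W_x, W_y, W_z]."""
    tri_mean = A.mean(axis=0, keepdims=True)   # mean of the tri-axial coefficients
    dev = (A - tri_mean) ** 2                  # squared deviation per axis
    W = dev / dev.sum(axis=0, keepdims=True)   # per-basis normalisation
    return W

# Toy 3 x 3 coefficient matrix standing in for the learned A
A = np.array([[0.9, 0.1, 0.4],
              [0.2, 0.8, 0.4],
              [0.1, 0.3, 1.0]])
W = fusion_weights(A)  # each column sums to 1
```

Under this reading, an axis whose coefficient deviates more from the tri-axial mean for a given basis receives a larger share of that basis's weight.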
4. Three-axis feature fusion;

The three-axis feature space F = [F_x, F_y, F_z] is compressed into the single-axis feature EFF using the fusion weights obtained from formula (6):

EFF = [F_x, F_y, F_z] [W_x, W_y, W_z]^T,    (7)
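Since formula (7) compresses the 3×m feature space into a single m-dimensional vector, one natural reading of the bracket product is element-wise: weight each feature by its axis weight and sum over the three axes. That element-wise reading is an assumption, as the bracket notation alone is ambiguous:

```python
import numpy as np

# F: 3 x m tri-axial feature space (rows x, y, z); W: 3 x m fusion weights.
F = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
W = np.array([[0.2, 0.5, 0.3],
              [0.3, 0.2, 0.4],
              [0.5, 0.3, 0.3]])

# Formula (7) read element-wise: per-feature weighted sum over the three axes
EFF = (F * W).sum(axis=0)  # fused single-axis feature vector, length m
```

The fused vector EFF then replaces the three per-axis feature vectors as the input to the classifier.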
5. Bayesian network classification;

A Bayesian network classifier is trained on 80% of the sample feature set generated above; the trained classifier is then used to identify the action class of each test sample in the remaining 20%.
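The patent does not specify the Bayesian network structure, so a minimal Gaussian naive-Bayes stand-in with the same 80/20 train-test split might look like the following; the synthetic two-class data here takes the place of the real fused features:

```python
import numpy as np

def train_nb(X, y):
    """Fit per-class Gaussian parameters (mean, variance, prior)."""
    model = {}
    for c in np.unique(y):
        Xc = X[y == c]
        model[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-6, len(Xc) / len(X))
    return model

def predict_nb(model, x):
    """Pick the class with the highest Gaussian log-posterior."""
    def log_post(c):
        mu, var, prior = model[c]
        return np.log(prior) - 0.5 * np.sum(np.log(2 * np.pi * var)
                                            + (x - mu) ** 2 / var)
    return max(model, key=log_post)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)),   # class 0 samples
               rng.normal(5.0, 1.0, (50, 4))])  # class 1 samples
y = np.array([0] * 50 + [1] * 50)

idx = rng.permutation(len(X))
split = int(0.8 * len(X))                       # 80% train / 20% test
train_idx, test_idx = idx[:split], idx[split:]
model = train_nb(X[train_idx], y[train_idx])
preds = np.array([predict_nb(model, x) for x in X[test_idx]])
accuracy = np.mean(preds == y[test_idx])
```

A full Bayesian network would model dependencies between features rather than assuming independence; this sketch only illustrates the train/test protocol.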
The superior performance of the present invention has been confirmed by experiments on large sample sets. The results of experiments applying the feature fusion method of the present invention to a large number of acceleration signal samples of human motion are described below.
Since there is currently no public database for human motion recognition based on smartphone acceleration sensors, this embodiment collected data for five actions (walking, running, jumping, going upstairs, and going downstairs) from 87 subjects, yielding 87 sample sets per action class. For each class, 70 sets (80% of the samples of that class) were randomly selected for training, giving 350 training samples in total; the remaining 17 sets per class were used for testing, giving 85 test samples in total.
In the experiments, the action recognition rate obtained with the proposed triaxial feature fusion method was compared with the recognition rates of the five actions before feature fusion. Table 1 gives the recognition rates of the five actions before and after feature fusion.
Figure PCTCN2014092630-appb-000019
Table 1
As can be seen from Table 1, the recognition rate of the features obtained with the proposed triaxial feature fusion extraction method is significantly higher than that of the unfused time-frequency-domain features of the triaxial acceleration signals. The experimental results thus show that the proposed triaxial-feature-based fusion feature extraction method clearly outperforms the traditional unfused time-frequency-domain features.
The above embodiment is a preferred embodiment of the present invention, but embodiments of the present invention are not limited to it; any change, modification, substitution, combination, or simplification made without departing from the spirit and principles of the present invention shall be an equivalent replacement and falls within the scope of protection of the present invention.

Claims (4)

  1. A triaxial feature fusion method for human motion recognition, comprising the following steps:
    (1) feature-base-based triaxial feature representation: expressing the triaxial features as a linear combination of feature bases and determining the feature-base coefficients of each axis;
    (2) fusion weights: using the feature-base coefficients of each axis, computing the fusion weight of the features of each axis based on the variance contribution rate;
    (3) triaxial feature fusion: fusing the triaxial features according to the contribution of the features of each axis to the recognition of different actions, thereby improving the action recognition rate.
  2. The triaxial feature fusion method for human motion recognition according to claim 1, wherein the feature-base-based triaxial feature representation in step (1) expresses the triaxial features as a linear combination of feature bases and thereby determines the feature-base coefficients of each axis, as follows:
    the time-frequency-domain features of each axis are extracted from a large set of triaxial acceleration signal samples covering multiple action classes, forming the triaxial feature vector space F=[Fx, Fy, Fz], where Fx, Fy, Fz denote the feature vectors of the x-axis, y-axis, and z-axis, respectively, and the feature vector dimension of each axis is m; that is, the triaxial feature vector space F is a 3×m matrix; the triaxial feature vectors can then be expressed as a linear combination of the feature bases [X1, X2, …, Xm], namely:
    Fx = Ax1X1 + Ax2X2 + … + AxmXm + εx
    Fy = Ay1X1 + Ay2X2 + … + AymXm + εy,    (1)
    Fz = Az1X1 + Az2X2 + … + AzmXm + εz
    where the feature bases [X1, X2, …, Xm] uniquely represent the feature vector of each axis; Aij denotes the entries of a 3×m matrix of feature-base coefficients, i∈{x,y,z}, j=1,2,…,m; and εi denotes the error term of each axis, i∈{x,y,z};
    the cost function for reconstructing the triaxial feature vector space F by sparse coding from the linear combination of the feature bases X=[X1, X2, …, Xm] is expressed in matrix form as:
    Figure PCTCN2014092630-appb-100001
    a sparsity penalty is imposed on the base vectors X with the L1 norm, while the coefficient matrix A of the feature bases is constrained with the L2 norm; to make the objective differentiable at 0, formula (2) becomes:
    Figure PCTCN2014092630-appb-100002
    the feature bases X and the feature-base coefficient matrix A that minimize J(A, x) are determined by applying the following algorithm to formula (3), the algorithm comprising the steps of:
    1) randomly initializing A;
    2) with the A given in step 1), solving for the X that minimizes J(A, x);
    3) with the X obtained in step 2), solving for the A that minimizes J(A, x);
    4) repeating steps 2) and 3) until AX converges to F.
  3. The triaxial feature fusion method for human motion recognition according to claim 1, wherein in step (2) the fusion weight coefficients of the feature vector of each axis are extracted as follows:
    the variance contribution rate is computed from the feature-base coefficients of each axis:
    Figure PCTCN2014092630-appb-100003
    where
    Figure PCTCN2014092630-appb-100004
    denotes the mean of the feature-base coefficients of each axis;
    the variance contribution rate of the feature-base coefficients is then recalculated as follows:
    Figure PCTCN2014092630-appb-100005
    the variance contribution rates of the feature-base coefficients of the three axes are amplitude-compressed to obtain the fusion weight matrix of the triaxial features, denoted W=[Wx, Wy, Wz], where Wx, Wy, Wz denote the feature fusion weights of the x-axis, y-axis, and z-axis, respectively; the fusion weight matrix W of the triaxial features is also a 3×m matrix; the amplitude-compressed feature fusion weights are expressed as:
    Figure PCTCN2014092630-appb-100006
    where [Wx, Wy, Wz] denotes the amplitude-compressed feature fusion weights.
  4. The triaxial feature fusion method for human motion recognition according to claim 1, wherein the triaxial feature fusion in step (3) uses the fusion weights of the triaxial features to obtain the fused feature vector:
    EFF = [Fx, Fy, Fz][Wx, Wy, Wz]^T,  (7)
    where EFF denotes the fused feature vector.
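The alternating procedure recited in steps 1)-4) of claim 2 can be sketched as follows. Objective (3) appears above only as an equation image, so this version substitutes small quadratic penalties for the smoothed L1/L2 terms so that each half-step has a closed-form ridge solution; the function name, penalty strengths, and iteration count are illustrative assumptions, not the patent's exact algorithm.

```python
import numpy as np

def alternate_minimize(F, m, n_iter=30, lam=1e-3, gamma=1e-3, seed=0):
    """Alternating updates of coefficient matrix A and feature bases X so that
    A @ X converges toward the triaxial feature space F (3 x feature_dim).

    Simplified stand-in for claim 2: ridge penalties replace the penalties of
    formula (3) to keep each sub-problem closed form.
    """
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(F.shape[0], m))           # step 1: random init of A
    I = np.eye(m)
    for _ in range(n_iter):
        # step 2: with A fixed, X minimising the penalised reconstruction
        X = np.linalg.solve(A.T @ A + lam * I, A.T @ F)
        # step 3: with X fixed, A minimising the penalised reconstruction
        A = np.linalg.solve(X @ X.T + gamma * I, X @ F.T).T
    return A, X                                    # step 4: stop once A @ X ≈ F
```

Each loop iteration is one pass through steps 2) and 3); in practice the loop would terminate on a convergence test of the residual A @ X − F rather than a fixed iteration count.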
PCT/CN2014/092630 2014-04-29 2014-12-01 Triaxial feature fusion method for human body movement identification WO2015165260A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410179116.6 2014-04-29
CN201410179116.6A CN103984921B (en) 2014-04-29 2014-04-29 Triaxial feature fusion method for human action recognition

Publications (1)

Publication Number Publication Date
WO2015165260A1 true WO2015165260A1 (en) 2015-11-05

Family

ID=51276883

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/092630 WO2015165260A1 (en) 2014-04-29 2014-12-01 Triaxial feature fusion method for human body movement identification

Country Status (2)

Country Link
CN (1) CN103984921B (en)
WO (1) WO2015165260A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108245172A (en) * 2018-01-10 2018-07-06 Shandong University Human posture recognition method not constrained by position
CN114404214A (en) * 2020-10-28 2022-04-29 Beijing Institute of Mechanical Equipment Exoskeleton gait identification method and device

Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
CN103984921B (en) 2014-04-29 2017-06-06 South China University of Technology Triaxial feature fusion method for human action recognition
CN104899564B (en) 2015-05-29 2019-01-25 Shanghai Advanced Research Institute, Chinese Academy of Sciences Real-time human behavior recognition method
CN105868779B (en) 2016-03-28 2018-12-18 Zhejiang University of Technology Behavior recognition method based on feature enhancement and decision fusion
CN107145834B (en) 2017-04-12 2020-06-30 Zhejiang University of Technology Adaptive behavior recognition method based on physical attributes

Citations (5)

Publication number Priority date Publication date Assignee Title
WO2009090584A2 (en) * 2008-01-18 2009-07-23 Koninklijke Philips Electronics N.V. Method and system for activity recognition and its application in fall detection
CN101833667A (en) * 2010-04-21 2010-09-15 Institute of Semiconductors, Chinese Academy of Sciences Pattern recognition classification method based on group sparse representation
CN102651072A (en) * 2012-04-06 2012-08-29 Zhejiang University Classification method for three-dimensional human motion data
CN103500342A (en) * 2013-09-18 2014-01-08 South China University of Technology Human behavior recognition method based on accelerometer
CN103984921A (en) * 2014-04-29 2014-08-13 South China University of Technology Triaxial feature fusion method for human movement recognition

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN101650869B (en) * 2009-09-23 2012-01-04 Hefei Institutes of Physical Science, Chinese Academy of Sciences Human body fall automatic detection and alarm device and its information processing method
CN201829026U (en) * 2010-09-17 2011-05-11 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences System for monitoring and alarming fall
JP2013003815A (en) * 2011-06-16 2013-01-07 Aomori Prefectural Industrial Technology Research Center Fall detection device, fall detection unit, fall detection system, and fall detection method

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
WO2009090584A2 (en) * 2008-01-18 2009-07-23 Koninklijke Philips Electronics N.V. Method and system for activity recognition and its application in fall detection
CN101833667A (en) * 2010-04-21 2010-09-15 Institute of Semiconductors, Chinese Academy of Sciences Pattern recognition classification method based on group sparse representation
CN102651072A (en) * 2012-04-06 2012-08-29 Zhejiang University Classification method for three-dimensional human motion data
CN103500342A (en) * 2013-09-18 2014-01-08 South China University of Technology Human behavior recognition method based on accelerometer
CN103984921A (en) * 2014-04-29 2014-08-13 South China University of Technology Triaxial feature fusion method for human movement recognition

Non-Patent Citations (2)

Title
HE, ZHENGYU ET AL.: "Activity Recognition from Acceleration Data Based on Discrete Cosine Transform and SVM", PROCEEDINGS OF THE 2009 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS, 14 October 2009 (2009-10-14), XP031574682 *
XU, CHUANLONG ET AL.: "Activity Recognition Method Based on Three-Dimensional Accelerometer", COMPUTER SYSTEMS & APPLICATIONS, vol. 22, no. 6, 2013, XP055232263 *

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN108245172A (en) * 2018-01-10 2018-07-06 Shandong University Human posture recognition method not constrained by position
CN114404214A (en) * 2020-10-28 2022-04-29 Beijing Institute of Mechanical Equipment Exoskeleton gait identification method and device
CN114404214B (en) * 2020-10-28 2024-02-13 Beijing Institute of Mechanical Equipment Exoskeleton gait recognition device

Also Published As

Publication number Publication date
CN103984921A (en) 2014-08-13
CN103984921B (en) 2017-06-06


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application. Ref document number: 14890916; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase. Ref country code: DE
122 Ep: pct application non-entry in european phase. Ref document number: 14890916; Country of ref document: EP; Kind code of ref document: A1