CN106228200B - Action identification method independent of action information acquisition equipment

Action identification method independent of action information acquisition equipment

Info

Publication number
CN106228200B
CN106228200B (application CN201610903076.4A)
Authority
CN
China
Prior art keywords
motion information
motion
different
action
recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610903076.4A
Other languages
Chinese (zh)
Other versions
CN106228200A (en)
Inventor
李墅娜
陈媛媛
常晓丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North University of China
Original Assignee
North University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North University of China filed Critical North University of China
Priority to CN201610903076.4A
Publication of CN106228200A
Application granted
Publication of CN106228200B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/23: Clustering techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to an action recognition method that does not depend on the motion information acquisition device and is therefore applicable to different acquisition devices. The method comprises two stages: a model training stage, in which a mapping between motion information and actions is established, and a model prediction stage, in which the action category is computed from the collected motion information. The invention addresses the compatibility problem of deploying a single action recognition method on different motion information acquisition terminals, specifically accounting for the influence of different sampling frequencies, different wearing positions, and differences in sensor accuracy and sensitivity on the recognition result. The invention can be applied to terminal devices with embedded inertial sensor units (accelerometers, gyroscopes, or magnetometers), such as smartphones, tablet computers, wristbands, and wristwatches.

Description

A motion recognition method independent of the motion information acquisition device

Technical field

The invention relates to an action recognition method that does not depend on the motion information acquisition device and is applicable to different acquisition devices. It can be applied to terminal devices with embedded inertial sensor units (accelerometers, gyroscopes, or magnetometers), such as smartphones, tablet computers, wristbands, and wristwatches.

Background

In recent years, with the development of MEMS technology, more and more terminal devices (smartphones, tablet computers, wristbands, wristwatches, etc.) embed various types of sensors (accelerometers, gyroscopes, magnetometers, infrared cameras, etc.). Correspondingly, applications built on these sensors keep emerging, for example in healthcare: body action recognition, fall detection and alarm, heart rate monitoring, abnormal gait analysis and quantitative assessment, and so on.

However, most applications currently on the market work only with specific models (brands) of terminal devices and cannot be made compatible with all device types, essentially because the information collected by different devices differs. The main reasons include the following:

(1) Different terminal devices embed different sensor models, whose sensitivity, accuracy, detection limits, and other specifications also differ;

(2) the sensor sampling frequencies configured in different terminal devices also differ;

(3) if a terminal device runs other applications while collecting sensor data, the sampling frequency of the sensor fluctuates;

(4) after a terminal device falls or suffers another abnormal event, its embedded sensors drift.

For action recognition specifically, the literature has clearly shown that when existing action recognition algorithms are deployed on different terminal devices, the recognition accuracy drops. How to design an action recognition method that does not depend on the information acquisition device and is applicable to terminal devices of different models is therefore a problem that urgently needs to be solved.

Summary of the invention

Aiming at the dependence of existing action recognition methods on the information acquisition device, the invention proposes an action recognition method that does not depend on the motion information acquisition device. The method first constructs a standard sampler that normalizes motion information collected at different sampling rates from different terminal devices to one standard sampling frequency; it then constructs a clusterer that distinguishes the different wearing positions of the terminal devices; finally, it constructs an ensemble action recognition framework composed of multiple weak classifiers, which removes the influence of differences in accuracy, sensitivity, and other specifications of the built-in sensors.

To solve the above technical problem, the invention adopts the following technical scheme:

An action recognition method independent of the motion information acquisition device comprises two stages, a model training stage and a model prediction stage. The model training stage establishes the mapping between motion information and actions; the model prediction stage computes the corresponding action category from the collected motion information.

Preferably, the model training stage comprises the following steps:

1) Motion information acquisition: wear different motion information acquisition devices at different positions on the human body, then record the motion information while the subject performs different actions;

2) sampling frequency standardization: apply down-sampling to normalize the frequency of the raw motion information coming from the different acquisition devices;

3) feature extraction and feature selection: extract features from the raw motion information with time-domain, frequency-domain, or nonlinear analysis methods, and screen the extracted features with mutual-information correlation, genetic algorithms, sparse optimization, or principal component analysis, so as to select the features that best characterize the motion information;

4) wearing-position recognition clustering: from the features extracted and selected in step 3), build a wearing-position recognition clusterer using a supervised or unsupervised learning method;

5) random forest action recognition model: for each wearing position, build a corresponding action recognition model with the random forest method.

Preferably, in step 1) the motion information acquisition devices include, but are not limited to, smartphones, tablet computers, wristwatches, and wristbands; the wearing positions include, but are not limited to, the wrist, forearm, upper arm, waist, thigh, and calf; the actions performed include, but are not limited to, sitting still, lying down, standing, walking slowly, going upstairs, going downstairs, and running; the collected motion information includes, but is not limited to, acceleration, angular velocity, and magnetic field strength along the X, Y, and Z axes.
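The acquisition step above pairs each recording with its device, wearing position, action label, and native sampling rate. A minimal sketch of such a record is shown below; the class and field names are illustrative, not from the patent:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class MotionRecord:
    device: str        # e.g. "smartphone", "wristband"
    position: str      # wearing position, e.g. "wrist", "waist"
    action: str        # action label, e.g. "running"
    fs: float          # native sampling frequency in Hz
    accel: np.ndarray  # shape (n_samples, 3): X, Y, Z acceleration


# one hypothetical 4-second recording from a 100 Hz wristband
rec = MotionRecord("wristband", "wrist", "running", 100.0, np.zeros((400, 3)))
```

Keeping the native sampling frequency in the record is what makes the standardization step below possible, since each device must be resampled by its own factor.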

Preferably, the frequency normalization in step 2) resamples motion information whose sampling frequency is above 25 Hz down to a new sampling frequency of 25 Hz.

Preferably, in step 3) the features extracted by the time-domain method include, but are not limited to, motion amplitude, angle, and speed; the features extracted by the frequency-domain method include, but are not limited to, motion frequency and energy; and the features extracted by nonlinear analysis include, but are not limited to, approximate entropy and multi-scale entropy.

Preferably, the supervised learning methods in step 4) include, but are not limited to, neural networks, support vector machines, and decision trees.

Preferably, the unsupervised learning methods in step 4) include, but are not limited to, self-organizing map neural networks and distance discrimination.

Preferably, the clusterer in step 4) first identifies the wearing position of the terminal device before action recognition, and an action recognition model is then built separately for each wearing position.

Preferably, the model prediction stage is as follows: wear the motion information acquisition device on a part of the human body; collect the motion information while the subject performs the action to be recognized; pass the raw information in sequence through the standard sampler, feature extraction and feature selection, the wearing-position clusterer, and the action recognition model; finally, output the recognition result.

Compared with the prior art, the invention has the following beneficial effects:

The proposed method focuses on the compatibility of a single action recognition method deployed across different motion information acquisition terminals (including but not limited to smartphones, tablet computers, wristwatches, and wristbands), specifically accounting for the influence of different sampling frequencies, different wearing positions, and differences in sensor accuracy and sensitivity on the recognition result. The method offers strong compatibility and high accuracy, and can therefore greatly improve the applicability of action recognition technology across concrete application domains.

Brief description of the drawings

Fig. 1 is a system block diagram of the invention;

Fig. 2 is a table of built-in accelerometer sampling frequencies of typical motion information acquisition devices;

Fig. 3 shows acceleration signals of the same action collected with different terminal devices;

Fig. 4 shows acceleration signals of different actions collected with the same terminal device;

Fig. 5 is a table of wearing-position recognition results;

Fig. 6 is a comparison table of action recognition accuracy.

Detailed description

The technical solutions in the embodiments of the invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only a part of the embodiments of the invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the invention.

As shown in Fig. 1, the action recognition method independent of the motion information acquisition device is a supervised learning method comprising two stages: model building and model prediction.

The model building stage mainly comprises the following steps:

(1) Motion information acquisition. Wear different motion information acquisition devices (e.g. smartphones, tablet computers, wristwatches, wristbands) at different positions on the human body (e.g. wrist, forearm, upper arm, waist, thigh, calf), then record the motion information while the subject performs different actions (e.g. sitting still, lying down, standing, walking slowly, going upstairs, going downstairs, running). The built-in sensors also differ across terminal devices, e.g. accelerometers, gyroscopes, and magnetometers.

(2) Sampling frequency standardization. The built-in sensor sampling frequency differs across terminal devices. Fig. 2 lists the maximum accelerometer sampling frequencies supported by several typical terminal devices; ranging from 25 to 200 Hz, the differences between models are large. By the Nyquist-Shannon sampling theorem, motion information collected at different sampling frequencies contains different frequency components, so the information from different terminal devices must first be standardized to a common sampling frequency. The two common approaches are interpolation (up-sampling) and down-sampling; however, interpolation artificially introduces new error, and for action recognition the human body normally produces no motion components above 10 Hz. The invention therefore adopts down-sampling and unifies the sampling frequency at 25 Hz: for terminal devices sampling above 25 Hz, the collected motion information is resampled.
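The standardization step can be sketched as follows. This minimal version uses block averaging as a crude anti-alias filter before decimation and assumes an integer rate ratio; a production system would more likely use a proper low-pass filter, and the function name is illustrative:

```python
import numpy as np


def normalize_rate(signal, fs, target_fs=25.0):
    """Down-sample a 1-D signal from fs to target_fs by block averaging.

    Averaging each block of q samples low-pass filters the signal before
    decimation, avoiding the aliasing that naive sample dropping would cause.
    An integer ratio fs / target_fs is assumed for simplicity.
    """
    q = int(round(fs / target_fs))
    if q <= 1:
        return np.asarray(signal, dtype=float)  # already at or below 25 Hz
    n = (len(signal) // q) * q                  # drop the ragged tail
    return np.asarray(signal[:n], dtype=float).reshape(-1, q).mean(axis=1)


x = np.sin(2 * np.pi * 2.0 * np.arange(400) / 100.0)  # 2 Hz tone at 100 Hz
y = normalize_rate(x, fs=100.0)                       # 100 samples at 25 Hz
```

Because a 2 Hz body-motion component sits far below the 12.5 Hz Nyquist limit of the 25 Hz target rate, it survives the decimation intact, which is exactly the property the 10 Hz argument above relies on.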

(3) Feature extraction and feature selection. The motion information collected over one complete action cycle usually contains many data points, so analyzing the raw data directly is difficult, and features must be extracted first. Common feature extraction methods include, but are not limited to, time-domain methods (motion amplitude, angle, speed, etc.), frequency-domain methods (motion frequency, energy, etc.), and nonlinear analysis (approximate entropy, multi-scale entropy, etc.). At the same time, many terminal devices embed several sensor types (accelerometer, gyroscope, magnetometer, etc.), each with multiple axes (two or three), so the extracted feature dimension is usually high. In practice, when it is unclear which features best characterize each action, feature selection (dimensionality reduction) is often needed. Common feature selection methods include, but are not limited to, mutual-information correlation, genetic algorithms, sparse optimization, and principal component analysis.
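A few of the time- and frequency-domain features named above can be sketched over one windowed signal as follows; the feature names and the particular selection are illustrative, not a fixed set prescribed by the patent:

```python
import numpy as np


def extract_features(window, fs=25.0):
    """Compute a small time- and frequency-domain feature set for one window."""
    window = np.asarray(window, dtype=float)
    spec = np.abs(np.fft.rfft(window - window.mean()))  # drop the DC offset
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return {
        "amplitude": float(np.ptp(window)),               # time domain: range
        "std": float(window.std()),                       # time domain: spread
        "dominant_freq": float(freqs[spec.argmax()]),     # frequency domain
        "energy": float((spec ** 2).sum() / len(window)),  # spectral energy
    }


w = np.sin(2 * np.pi * 2.0 * np.arange(100) / 25.0)  # 2 Hz motion at 25 Hz
f = extract_features(w)
```

On a periodic signal like the 2 Hz test tone, `dominant_freq` recovers the motion frequency; on real multi-axis data such a dictionary would be computed per axis and per sensor, which is what drives the feature dimension up and motivates the selection step.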

(4) Wearing-position recognition clustering. Most traditional action recognition methods assume the terminal device is always worn at the same position, so accuracy drops sharply when the device is worn elsewhere. To make the action recognition method compatible with different wearing positions, the invention constructs a clusterer that identifies the wearing position of the terminal device before action recognition, and then builds a separate action recognition model for each wearing position (wrist, forearm, upper arm, waist, thigh, calf, etc.). Common clusterer construction methods include, but are not limited to, supervised learning (neural networks, support vector machines, decision trees, etc.) and unsupervised learning (self-organizing map neural networks, distance discrimination, etc.).

(5) Random forest action recognition model. A corresponding action recognition model is built for each wearing position. To remove the influence of differences in the accuracy, sensitivity, and other performance characteristics of the built-in sensors across terminal devices, the invention constructs a random forest action recognition model that integrates multiple weak classifiers.

In the model prediction stage, the motion information acquisition device is first worn on a part of the human body; the motion information is collected while the subject performs the action to be recognized; the raw information is then passed in sequence through the standard sampler, feature extraction and feature selection, the wearing-position clusterer, and the action recognition model; finally, the recognition result is output.
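The prediction chain just described can be sketched as plain function composition; every component name below is a placeholder wired with a stub implementation, only the ordering of the stages comes from the text:

```python
def recognize(raw, fs, sampler, featurize, locate, models):
    """Prediction stage: standard sampler -> features -> position -> model."""
    signal = sampler(raw, fs)      # standard sampler (normalize to 25 Hz)
    feats = featurize(signal)      # feature extraction / selection
    position = locate(feats)       # wearing-position clusterer
    return models[position](feats)  # position-specific recognition model


# stub components, for illustration only
result = recognize(
    raw=[0.0] * 50,
    fs=25.0,
    sampler=lambda s, fs: s,
    featurize=lambda s: {"amplitude": max(s) - min(s)},
    locate=lambda f: "wrist",
    models={"wrist": lambda f: "sitting" if f["amplitude"] < 0.5 else "running"},
)
```

The `models` dictionary keyed by position reflects the design choice above: the position clusterer routes the features to one of several per-position recognition models rather than a single global one.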

The invention is analyzed further below through an embodiment, with reference to Figs. 3 to 6.

This embodiment uses three different motion information acquisition devices: a Samsung Galaxy Gear, an HTC Desire, and an Xsens inertial sensor unit, whose built-in accelerometers support maximum sampling frequencies of 100 Hz, 50 Hz, and 200 Hz, respectively. It covers five different actions: sitting still, standing, lying down, going upstairs, and going downstairs. The wearing positions of the three devices also differ: the Samsung Galaxy Gear is worn at the wrist, the HTC Desire at the upper arm, and the Xsens inertial sensor unit at the lower back.

First, the subjects wore each device at its corresponding position and then performed the five actions in turn, repeating each action ten times per device. Part of the motion information recorded during the experiment is shown in Figs. 3 and 4: the motion information of the same action differs considerably across devices worn at different positions, and the motion information of different actions also differs clearly on the same device.

Second, the collected motion information is resampled by down-sampling so that the acquisition frequency for all terminal devices becomes 25 Hz.

Then, the features of each motion signal are extracted with the time-domain method, specifically the motion amplitude, speed, and angle along the X, Y, and Z directions.

Next, a wearing-position recognition clusterer is built with a support vector machine. In this embodiment, 50 motion signals are collected at each wearing position; 40 samples are randomly selected for training and the remaining 10 for testing. The whole data set thus contains 150 samples, with 120 in the training set and 30 in the test set. The recognition results on the test set are shown in Fig. 5: the constructed clusterer identifies the wearing position of the terminal device well.
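The embodiment uses a support vector machine for this step; as a dependency-free stand-in, the distance-discrimination variant also named in the patent can be sketched as a nearest-centroid classifier. The synthetic feature vectors and the 4-dimensional feature size below are illustrative only:

```python
import numpy as np


class NearestCentroid:
    """Distance-discrimination clusterer: predict the closest class centroid."""

    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = np.array(
            [X[np.array(y) == lab].mean(axis=0) for lab in self.labels_])
        return self

    def predict_one(self, x):
        d = np.linalg.norm(self.centroids_ - np.asarray(x, dtype=float), axis=1)
        return self.labels_[int(d.argmin())]


rng = np.random.default_rng(0)
# synthetic 4-D feature vectors for three wearing positions (illustrative)
X = np.vstack([rng.normal(c, 0.3, size=(40, 4)) for c in (0.0, 2.0, 4.0)])
y = ["wrist"] * 40 + ["upper_arm"] * 40 + ["waist"] * 40
clf = NearestCentroid().fit(X, y)
```

Any of the supervised or unsupervised methods listed above could replace this classifier; the pipeline only requires that it map a feature vector to a wearing-position label.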

Finally, for each wearing position, an action recognition model is built with the random forest method; each forest contains 50 to 100 decision trees, and the final result is obtained by majority voting. Fig. 6 compares the recognition accuracy of this embodiment with that of an action recognition method built on the data of a single acquisition device only. A model built from a single device's data works only for that same terminal device; applied to other terminal devices, its accuracy drops markedly. In contrast, the action recognition model built with the method of the invention is compatible with the different terminal devices. The reason is that the modeling process integrates the sensor information from all terminal devices and adds, on top of traditional action recognition, the standard sampler, wearing-position recognition clustering, and the random forest ensemble of weak classifiers, which effectively removes the influence of the different sampling frequencies and the different accuracy and sensitivity of the built-in sensors.
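The majority-voting aggregation at the end of the pipeline can be sketched independently of how the trees are trained; each "tree" below is a stub weak classifier standing in for one of the 50 to 100 trained decision trees:

```python
from collections import Counter


def forest_predict(trees, features):
    """Aggregate per-tree predictions by majority vote, as in a random forest."""
    votes = [tree(features) for tree in trees]
    return Counter(votes).most_common(1)[0][0]


# three stub weak classifiers; two vote "running", one votes "sitting"
trees = [
    lambda f: "running",
    lambda f: "running",
    lambda f: "sitting",
]
label = forest_predict(trees, features={})
```

Voting is what gives the ensemble its robustness argument: a minority of trees misled by sensor-specific noise is outvoted by the rest, so device-level accuracy and sensitivity differences matter less than they would for a single classifier.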

The preferred embodiments of the invention have been described in detail above, but the invention is not limited to them; within the knowledge of a person of ordinary skill in the art, various changes can be made without departing from the purpose of the invention, and all such changes fall within the protection scope of the invention.

Claims (6)

1. A motion recognition method independent of the motion information acquisition device, characterized in that the method comprises two stages, a model training stage and a model prediction stage, wherein the model training stage establishes a mapping between motion information and actions, and the model prediction stage computes the corresponding action category from the collected motion information;
the model training stage specifically comprises the following steps:
1) motion information acquisition: wearing different motion information acquisition devices at different positions of a human body, and then recording the motion information of the human body while different actions are performed;
2) sampling frequency standardization: normalizing the frequency of the raw motion information from the different motion information acquisition devices by down-sampling, wherein the frequency normalization resamples motion information whose sampling frequency is above 25 Hz down to a new sampling frequency of 25 Hz;
3) feature extraction and feature selection: extracting features from the raw motion information by a time-domain method, a frequency-domain method, or a nonlinear analysis method, and screening the extracted features by mutual-information correlation, a genetic algorithm, sparse optimization, or principal component analysis, so as to select the features that best characterize the motion information;
4) wearing-position recognition clustering: constructing a wearing-position recognition clusterer from the features extracted and screened in step 3) by a supervised or unsupervised learning method, wherein the clusterer first identifies the wearing position of the terminal device before action recognition, and action recognition models are then established separately for the different wearing positions;
5) random forest action recognition model: for each wearing position, establishing a corresponding action recognition model by the random forest method.
2. The motion recognition method independent of the motion information acquisition device according to claim 1, characterized in that in step 1) the motion information acquisition devices comprise a smartphone, a tablet computer, a wristwatch, and a wristband; the wearing positions comprise the wrist, forearm, upper arm, waist, thigh, and calf; the actions performed by the human body comprise sitting still, lying down, standing, walking slowly, going upstairs, going downstairs, and running; and the collected motion information comprises the acceleration, angular velocity, and magnetic field strength along the X, Y, and Z axes.
3. The motion recognition method independent of the motion information acquisition device according to claim 1, characterized in that in step 3) the features extracted by the time-domain method comprise motion amplitude, angle, and speed; the features extracted by the frequency-domain method comprise motion frequency and energy; and the features extracted by the nonlinear analysis method comprise approximate entropy and multi-scale entropy.
4. The motion recognition method independent of the motion information acquisition device according to claim 1, characterized in that the supervised learning methods in step 4) comprise a neural network, a support vector machine, and a decision tree.
5. The motion recognition method independent of the motion information acquisition device according to claim 1, characterized in that the unsupervised learning methods in step 4) comprise a self-organizing map neural network and distance discrimination.
6. The motion recognition method independent of the motion information acquisition device according to claim 1, characterized in that the model prediction stage is specifically: first wearing the motion information acquisition device on a part of the human body, then collecting the motion information of the human body while the action to be recognized is performed, then passing the motion information in sequence through a standard sampler, feature extraction and feature selection, a wearing-position clusterer, and an action recognition model, and finally outputting the final recognition result.
CN201610903076.4A 2016-10-17 2016-10-17 Action identification method independent of action information acquisition equipment Active CN106228200B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610903076.4A CN106228200B (en) 2016-10-17 2016-10-17 Action identification method independent of action information acquisition equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610903076.4A CN106228200B (en) 2016-10-17 2016-10-17 Action identification method independent of action information acquisition equipment

Publications (2)

Publication Number Publication Date
CN106228200A CN106228200A (en) 2016-12-14
CN106228200B true CN106228200B (en) 2020-01-14

Family

ID=58077158

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610903076.4A Active CN106228200B (en) 2016-10-17 2016-10-17 Action identification method independent of action information acquisition equipment

Country Status (1)

Country Link
CN (1) CN106228200B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106874874A (en) * 2017-02-16 2017-06-20 南方科技大学 Motion state identification method and device
CN107016686A (en) * 2017-04-05 2017-08-04 江苏德长医疗科技有限公司 Three-dimensional gait and motion analysis system
CN108734055B (en) * 2017-04-17 2021-03-26 杭州海康威视数字技术股份有限公司 Abnormal person detection method, device and system
CN107316052A (en) * 2017-05-24 2017-11-03 中国科学院计算技术研究所 A kind of robust Activity recognition method and system based on inexpensive sensor
CN107742070B (en) * 2017-06-23 2020-11-24 中南大学 A method and system for motion recognition and privacy protection based on acceleration data
CN107358210B (en) * 2017-07-17 2020-05-15 广州中医药大学 Human body action recognition method and device
CN108710822B (en) * 2018-04-04 2022-05-13 燕山大学 Personnel falling detection system based on infrared array sensor
CN108550385B (en) * 2018-04-13 2021-03-09 北京健康有益科技有限公司 Exercise scheme recommendation method and device and storage medium
CN108968918A (en) * 2018-06-28 2018-12-11 北京航空航天大学 The wearable auxiliary screening equipment of early stage Parkinson
CN109100537B (en) * 2018-07-19 2021-04-20 百度在线网络技术(北京)有限公司 Motion detection method, apparatus, device, and medium
CN109190762B (en) * 2018-07-26 2022-06-07 北京工业大学 Mobile terminal information acquisition system
CN109635638B (en) * 2018-10-31 2021-03-09 中国科学院计算技术研究所 Feature extraction method and system, identification method and system for human motion
CN110689041A (en) * 2019-08-20 2020-01-14 陈羽旻 Multi-target behavior action recognition and prediction method, electronic equipment and storage medium
CN111221419A (en) * 2020-01-13 2020-06-02 武汉大学 An Array-Type Flexible Capacitive Electronic Skin for Human Motion Intention Sensing

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104434119A (en) * 2013-09-20 2015-03-25 卡西欧计算机株式会社 Body information obtaining device and body information obtaining method
CN105046215A (en) * 2015-07-07 2015-11-11 中国科学院上海高等研究院 Posture and behavior identification method without influences of individual wearing positions and wearing modes

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on User Activity Recognition Based on Mobile Phones and Wearable Devices; Sun Zehao; China Doctoral Dissertations Full-text Database, Information Science and Technology; 20160915; full text *
Wearing Position Recognition Method for Mobile Devices Based on Rotation Patterns; Shi Yue et al.; Journal of Software; 20130815; pp. 1898-1907 *

Also Published As

Publication number Publication date
CN106228200A (en) 2016-12-14

Similar Documents

Publication Publication Date Title
CN106228200B (en) Action identification method independent of action information acquisition equipment
Johnston et al. Smartwatch-based biometric gait recognition
Kwon et al. Unsupervised learning for human activity recognition using smartphone sensors
CN103970271B (en) The daily routines recognition methods of fusional movement and physiology sensing data
Bennett et al. Inertial measurement unit-based wearable computers for assisted living applications: A signal processing perspective
Wang et al. Detecting user activities with the accelerometer on android smartphones
CN105310696B (en) A kind of fall detection model building method and corresponding fall detection method and device
Nandy et al. Detailed human activity recognition using wearable sensor and smartphones
CN112464738B (en) Improved naive Bayes algorithm user behavior identification method based on mobile phone sensor
CN108958482B (en) Similarity action recognition device and method based on convolutional neural network
Figueira et al. Body location independent activity monitoring
Rasheed et al. Evaluation of human activity recognition and fall detection using android phone
CN106874874A (en) Motion state identification method and device
CN113095379A (en) Human motion state identification method based on wearable six-axis sensing data
CN113642432A (en) Convolutional Neural Network Based on Covariance Matrix Transformation for Human Pose Recognition
Sheng et al. An adaptive time window method for human activity recognition
CN110532898A (en) A kind of physical activity recognition methods based on smart phone Multi-sensor Fusion
Minh et al. Evaluation of smartphone and smartwatch accelerometer data in activity classification
CN202600156U (en) Tumbling detection location system
Fu et al. Ping pong motion recognition based on smart watch
Nguyen et al. The internet-of-things based fall detection using fusion feature
Kongsil et al. Physical activity recognition using streaming data from wrist-worn sensors
Saha et al. Designing device independent two-phase activity recognition framework for smartphones
Jitpattanakul Wearable fall detection based on motion signals using hybrid deep residual neural network
Puvanendran et al. Improved Feature Extraction for Time Series Data Using Sliding Window: A Case Study of Carnatic Tala

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant