CN106228200A - Motion recognition method independent of motion information acquisition equipment - Google Patents

Motion recognition method independent of motion information acquisition equipment

Info

Publication number
CN106228200A
CN106228200A (application CN201610903076.4A)
Authority
CN
China
Prior art keywords
action
information acquisition
motion information
feature
acquisition equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610903076.4A
Other languages
Chinese (zh)
Other versions
CN106228200B (en)
Inventor
李墅娜
陈媛媛
常晓丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North University of China
Original Assignee
North University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North University of China filed Critical North University of China
Priority to CN201610903076.4A priority Critical patent/CN106228200B/en
Publication of CN106228200A publication Critical patent/CN106228200A/en
Application granted granted Critical
Publication of CN106228200B publication Critical patent/CN106228200B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The present invention relates to a motion recognition method that does not depend on the motion information acquisition device and is therefore applicable to different acquisition devices. The method comprises two stages, a model training stage and a model prediction stage: the model training stage establishes the mapping between motion information and actions, and the model prediction stage computes the corresponding action class from the collected motion information. The invention addresses the compatibility problem of deploying one motion recognition method on different motion information acquisition terminals, specifically taking into account the influence of different sampling frequencies, different wearing positions, and differences in sensor accuracy and sensitivity on the recognition result. The invention can be applied to terminal devices with embedded inertial sensor units such as accelerometers, gyroscopes, or magnetometers, for example smartphones, tablet computers, wristbands, and watches.

Description

Motion recognition method independent of motion information acquisition equipment
Technical field
The present invention relates to a motion recognition method that does not depend on the motion information acquisition device and is applicable to different acquisition devices. It can be applied to terminal devices with embedded inertial sensor units such as accelerometers, gyroscopes, or magnetometers, for example smartphones, tablet computers, wristbands, and watches.
Background art
In recent years, with the development of MEMS technology, more and more terminal devices (such as smartphones, tablet computers, wristbands, and watches) embed various types of sensors (such as accelerometers, gyroscopes, magnetometers, and infrared cameras). Correspondingly, applications built around these sensors keep emerging, for example in the health care field: limb motion recognition, fall detection and alarm, heart rate monitoring, abnormal gait analysis and quantitative evaluation, and so on.
However, most applications currently on the market are only applicable to terminal devices of a specific model (brand) and cannot be effectively compatible with all types of terminal devices, essentially because the information collected from different terminal devices differs considerably. The main reasons are the following:
(1) different terminal devices embed different sensor models, and specifications such as sensitivity, accuracy, and measurement range also differ;
(2) the sensor sampling frequencies configured in different terminal devices differ;
(3) if a terminal device runs other applications while sensor data are being collected, the sensor sampling frequency may fluctuate;
(4) after abnormal events such as being dropped, the embedded sensors of a terminal device may drift.
Regarding motion recognition in particular, the existing literature explicitly points out that when current motion recognition algorithms are deployed on different terminal devices, the recognition accuracy decreases in every case. Therefore, how to design a motion recognition method that does not depend on the information acquisition device and is applicable to terminal devices of various models is a problem that urgently needs to be solved.
Summary of the invention
To address the dependence on the information acquisition device that is common to existing motion recognition methods, the present invention proposes a motion recognition method that does not rely on the motion information acquisition device. The method first constructs a standard sampler, which normalizes the motion information collected at different sampling rates from different terminal devices to a single standard sampling frequency; it then constructs a clusterer, which distinguishes the different wearing positions of the terminal devices; finally, it constructs an integrated motion recognition framework composed of multiple weak classifiers, which eliminates the influence of differences in accuracy, sensitivity, and other specifications of the sensors built into different terminal devices.
In order to solve the above technical problem, the technical solution adopted by the present invention is as follows:
A motion recognition method that does not rely on the motion information acquisition device, comprising two stages, a model training stage and a model prediction stage, wherein the model training stage establishes the mapping between motion information and actions, and the model prediction stage computes the corresponding action class from the collected motion information.
Preferably, the model training stage specifically comprises the following steps:
1) a motion information acquisition step: different motion information acquisition devices are worn at different positions on the human body, and the motion information of the body while performing different actions is recorded;
2) a sampling-frequency standardization step: the raw motion information from the different motion information acquisition devices is normalized in frequency using down-sampling;
3) a feature extraction and feature selection step: features are extracted from the raw motion information using time-domain, frequency-domain, or nonlinear analysis methods, and the extracted features are screened using mutual-information correlation analysis, a genetic algorithm, sparse optimization, principal component analysis, or a similar method, so as to select the features that characterize the motion information;
4) a terminal-device wearing-position recognition and clustering step: from the features extracted and selected in step 3), a terminal-device wearing-position recognition clusterer is built using a supervised or unsupervised learning method;
5) a random-forest motion recognition step: for each wearing position, a corresponding motion recognition model is built using the random forest method.
Preferably, in step 1), the motion information acquisition devices include but are not limited to smartphones, tablet computers, watches, and wristbands; the wearing positions of the motion information acquisition devices include but are not limited to the wrist, forearm, upper arm, waist, thigh, and shank; the actions performed by the human body include but are not limited to sitting quietly, lying down, standing, walking slowly, going upstairs, going downstairs, and running; and the collected motion information includes but is not limited to acceleration, angular velocity, and magnetic field intensity along the three spatial axes X, Y, and Z.
Preferably, the frequency normalization in step 2) consists in resampling motion information whose sampling frequency is above 25 Hz to a lower sampling frequency, so that the sampling frequency of the new motion information is 25 Hz.
Preferably, in step 3), the features extracted by time-domain methods include but are not limited to motion amplitude, angle, and velocity; the features extracted by frequency-domain methods include but are not limited to motion frequency and energy; and the features extracted by nonlinear analysis methods include but are not limited to approximate entropy and multi-scale entropy.
Preferably, the supervised learning methods in step 4) include but are not limited to neural networks, support vector machines, and decision trees.
Preferably, the unsupervised learning methods in step 4) include but are not limited to self-organizing map neural networks and distance-based discriminant analysis.
Preferably, the clusterer in step 4) means that the wearing position of the terminal device is first recognized before motion recognition, and a motion recognition model is then built separately for each wearing position.
Preferably, the model prediction stage is specifically as follows: the motion information acquisition device is first worn at a certain position on the human body; the motion information of the body while completing the action to be recognized is collected; this raw information is then passed in sequence through the standard sampler, the feature extraction and feature selection module, the wearing-position clusterer, and the motion recognition model; and the final motion recognition result is output.
Compared with the prior art, the present invention has the following beneficial effects:
The proposed method focuses on the problem of deploying one motion recognition method compatibly across different motion information acquisition terminals (including but not limited to smartphones, tablet computers, watches, and wristbands), specifically taking into account the influence of different sampling frequencies, different wearing positions, and differences in sensor accuracy and sensitivity on the recognition result. The method offers strong compatibility and high accuracy, and can therefore greatly improve the applicability of motion recognition technology in a wide range of concrete applications.
Brief description of the drawings
Fig. 1 is the system block diagram of the present invention;
Fig. 2 is a table of the built-in accelerometer sampling frequencies of typical motion information acquisition devices;
Fig. 3 shows acceleration signals of the same action collected with different terminal devices;
Fig. 4 shows acceleration signals of different actions collected with the same terminal device;
Fig. 5 is a table of terminal-device wearing-position recognition results;
Fig. 6 is a table comparing motion recognition accuracies.
Detailed description of the invention
The technical solution in the embodiments of the present invention is described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
As shown in Fig. 1, the motion recognition method that does not rely on the motion information acquisition device belongs to the supervised learning methods and comprises two stages: model building and model prediction.
The model building stage mainly includes the following steps:
(1) Motion information acquisition. Different motion information acquisition devices (e.g. smartphones, tablet computers, watches, wristbands) are worn at different positions on the human body (e.g. wrist, forearm, upper arm, waist, thigh, shank), and the motion information of the body while performing different actions (e.g. sitting quietly, lying down, standing, walking slowly, going upstairs, going downstairs, running) is recorded; the sensors built into different terminal devices also differ, e.g. accelerometer, gyroscope, and magnetometer.
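For concreteness, one possible layout of a single recorded trial is sketched below in Python. Every field name and example value is an assumption introduced for illustration (and reused in the later sketches), not something specified by the patent.

```python
# Hypothetical layout of one recorded trial: which device was used, where it was worn,
# which action was performed, the device's native sampling rate, and the raw samples.
raw_record = {
    "device": "smartwatch",          # smartphone, tablet, watch, wristband, ...
    "position": "wrist",             # wrist, forearm, upper arm, waist, thigh, shank
    "action": "going_upstairs",      # sitting, lying, standing, walking, stairs, running, ...
    "src_hz": 100,                   # native accelerometer sampling rate of this device
    "signal": [                      # one row per sample: X, Y, Z acceleration
        [0.02, -0.98, 0.11],
        [0.03, -0.97, 0.10],
    ],
}
```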
(2) Sampling-frequency standardization. The sampling frequencies of the sensors built into different terminal devices differ. Fig. 2 lists the maximum accelerometer sampling frequencies supported by several typical terminal devices; they range from 25 to 200 Hz, with large differences between models. According to the Nyquist-Shannon sampling theorem, motion information collected at different sampling frequencies carries different amounts of information, so the information coming from different terminal devices must first be standardized in sampling frequency. Common methods are interpolation (up-sampling) and down-sampling; considering that interpolation artificially introduces new errors, and that for motion recognition the human body normally does not produce motion information above 10 Hz, the present invention uses down-sampling and unifies the sampling frequency to 25 Hz. That is, for terminal devices whose sampling frequency is above 25 Hz, the collected motion information needs to be resampled.
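A minimal sketch of this standardization step is given below, assuming SciPy is available; the helper name and the 25 Hz target follow the description above, but the code itself is only an illustration, not the patent's implementation.

```python
import numpy as np
from scipy.signal import resample_poly

TARGET_HZ = 25  # unified sampling frequency described above

def standardize_rate(signal, src_hz, target_hz=TARGET_HZ):
    """Down-sample a (n_samples, n_axes) inertial signal to the unified rate.

    Signals already at or below the target rate are returned unchanged, matching
    the choice above to avoid interpolation (up-sampling).
    """
    signal = np.asarray(signal, dtype=float)
    if src_hz <= target_hz:
        return signal
    # resample_poly applies an anti-aliasing filter before decimating,
    # e.g. 100 Hz -> 25 Hz or 200 Hz -> 25 Hz.
    return resample_poly(signal, up=target_hz, down=src_hz, axis=0)

# Example: a 2-second, 3-axis recording at 100 Hz becomes 50 samples at 25 Hz.
acc_25hz = standardize_rate(np.random.randn(200, 3), src_hz=100)
```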
(3) Feature extraction and feature selection. The motion information collected over one complete action cycle usually contains many data points, and analyzing these raw data directly is difficult. Features therefore need to be extracted from the raw information; common feature extraction methods include but are not limited to time-domain methods (motion amplitude, angle, velocity, etc.), frequency-domain methods (motion frequency, energy, etc.), and nonlinear analysis methods (approximate entropy, multi-scale entropy, etc.). At the same time, because most terminal devices contain several types of built-in sensors (accelerometer, gyroscope, magnetometer, etc.), each with two or three axes, the dimensionality of the extractable features is high. In practical applications, if it cannot be determined in advance which features characterize each action, feature selection (dimensionality reduction) is usually needed; common feature selection methods include but are not limited to mutual-information correlation analysis, genetic algorithms, sparse optimization, and principal component analysis.
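As an illustration only, the sketch below computes a few of the feature families named above (time-domain statistics and amplitude; dominant frequency and spectral energy) and screens them with a mutual-information selector from scikit-learn. The function names, the particular statistics, and the number of retained features k are assumptions, not the patent's feature set.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

def extract_features(sig, fs=25):
    """Per-axis time- and frequency-domain features for one action window.

    sig: (n_samples, n_axes) array at the unified sampling rate.
    """
    feats = []
    for axis in range(sig.shape[1]):
        x = sig[:, axis]
        spectrum = np.abs(np.fft.rfft(x - x.mean()))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        feats += [
            x.mean(), x.std(), np.ptp(x),       # time domain: level, spread, amplitude
            freqs[np.argmax(spectrum)],         # frequency domain: dominant frequency
            np.sum(spectrum ** 2) / len(x),     # frequency domain: spectral energy
        ]
    return np.asarray(feats)

def select_features(X, y, k=10):
    """Keep the k features most informative about the labels (one of the listed options)."""
    selector = SelectKBest(mutual_info_classif, k=k).fit(X, y)
    return selector.transform(X), selector
```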
(4) Terminal-device wearing-position recognition and clustering. Traditional motion recognition methods mostly target the case where the terminal device is always worn at the same position, so when the device is worn elsewhere, the recognition accuracy drops significantly. To make the motion recognition method compatible with different wearing positions, the present invention constructs a clusterer: before motion recognition, the wearing position of the terminal device is first recognized, and a motion recognition model is then built separately for each wearing position (wrist, forearm, upper arm, waist, thigh, shank, etc.). Common ways to build the clusterer include but are not limited to two classes of methods: supervised learning (neural networks, support vector machines, decision trees, etc.) and unsupervised learning (self-organizing map neural networks, distance-based discriminant analysis, etc.).
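As one concrete realization of such a clusterer, a minimal scikit-learn sketch using the support vector machine option named above follows; the RBF kernel, the train/test split, and the function name are assumptions rather than the patent's reference implementation.

```python
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def train_position_classifier(X, positions, test_size=0.2, seed=0):
    """Fit an SVM mapping feature vectors to wearing positions (wrist, upper arm, ...)."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, positions, test_size=test_size, stratify=positions, random_state=seed)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    print("wearing-position recognition accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    return clf
```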
(5) Random-forest motion recognition model. For each wearing position, a corresponding motion recognition model is built. To eliminate the influence of differences in accuracy, sensitivity, and other performance characteristics of the sensors built into different terminal devices, the present invention constructs a random-forest motion recognition model that integrates multiple weak classifiers.
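A minimal sketch of this per-position stage, assuming scikit-learn and a NumPy feature matrix X, is shown below; one RandomForestClassifier (itself an ensemble of weak decision-tree classifiers combined by voting) is fitted per wearing position, and the tree count and function name are assumptions.

```python
from collections import defaultdict
from sklearn.ensemble import RandomForestClassifier

def train_action_models(X, positions, actions, n_trees=100):
    """Fit one random forest per wearing position; X is an (n_samples, n_features) ndarray."""
    by_position = defaultdict(list)
    for i, pos in enumerate(positions):
        by_position[pos].append(i)
    models = {}
    for pos, idx in by_position.items():
        models[pos] = RandomForestClassifier(n_estimators=n_trees).fit(
            X[idx], [actions[i] for i in idx])
    return models
```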
In the model prediction stage, the motion information acquisition device is first worn at a certain position on the human body, the motion information of the body while completing the action to be recognized is collected, and this raw information is then passed in sequence through the standard sampler, the feature extraction and feature selection module, the wearing-position clusterer, and the motion recognition model; the final motion recognition result is output at the end.
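Chaining the hypothetical helpers from the earlier sketches gives a prediction-stage sketch of the flow just described (standard sampler, feature extraction, wearing-position recognition, position-specific model); again this is an illustration under the earlier assumptions, not the patent's implementation.

```python
import numpy as np

def recognize_action(raw_signal, src_hz, position_clf, action_models):
    """Prediction stage: raw signal -> unified 25 Hz -> features -> position -> action."""
    sig = standardize_rate(np.asarray(raw_signal, dtype=float), src_hz)   # standard sampler
    feats = extract_features(sig).reshape(1, -1)                          # feature extraction
    position = position_clf.predict(feats)[0]                             # wearing-position recognition
    action = action_models[position].predict(feats)[0]                    # position-specific random forest
    return position, action
```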
The present invention is further analyzed below through an embodiment, with reference to Fig. 3 to Fig. 6.
This embodiment uses three different motion information acquisition devices: a Samsung Galaxy Gear, an HTC Desire, and an Xsens inertial sensor unit, whose built-in accelerometers support maximum sampling frequencies of 100 Hz, 50 Hz, and 200 Hz, respectively. The embodiment includes five different actions: sitting quietly, standing, lying down, going upstairs, and going downstairs. The wearing positions of the three devices also differ: the Samsung Galaxy Gear is worn at the wrist, the HTC Desire at the upper arm, and the Xsens inertial sensor unit at the lower back.
First, the subjects wear each device at the corresponding position and then perform the five actions in sequence, repeating each action 10 times with each device. Part of the motion information recorded during the experiment is shown in Fig. 3 and Fig. 4; it can be seen that the motion information corresponding to the same action differs considerably when different devices are worn at different positions, and that the motion information of different actions collected with the same device also differs noticeably.
Second, the collected motion information is resampled by down-sampling, so that the acquisition frequency corresponding to all terminal devices is 25 Hz.
Then, time-domain methods are used to extract the features corresponding to each motion signal, specifically the motion amplitude, velocity, and angle along the three spatial directions X, Y, and Z.
Next, a support vector machine is used to build a terminal-device wearing-position recognition clusterer. In this embodiment, 50 motion signals are collected at each wearing position, of which 40 randomly selected samples are used for training and the remaining 10 for testing; for the whole data set this gives 150 samples in total, with 120 in the training set and 30 in the test set. The recognition results on the test set are shown in Fig. 5; the constructed recognition clusterer identifies the position at which the terminal device is worn well.
Finally, for each wearing position, a motion recognition model is built using the random forest method; each random forest contains 50 to 100 decision trees, and the final recognition result is obtained by majority voting. Fig. 6 compares the recognition accuracy of this embodiment with that of motion recognition methods built from the data of a single motion information acquisition device. It can be seen that a motion recognition model built only from the data collected with a single device is applicable only to that same terminal device; if the model is applied to another terminal device, the recognition accuracy drops markedly. In contrast, the motion recognition model built with the method of the present invention is compatible with each of the different terminal devices. The reason is that, during modeling, the method combines the sensor information from all terminal devices and, on top of conventional motion recognition methods, adds modules such as the standard sampler, the terminal-device wearing-position recognition clusterer, and the random-forest integration of multiple weak classifiers, which effectively eliminate the influence of differences in sampling frequency and in the accuracy and sensitivity of the built-in sensors of different terminal devices.
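As a usage illustration that ties the embodiment together under the same assumptions (a list `records` of trial dictionaries shaped like the raw_record sketch above, plus the hypothetical helpers defined earlier), one might write:

```python
import numpy as np

# Build the feature matrix and labels from rate-standardized recordings.
X = np.array([extract_features(standardize_rate(r["signal"], r["src_hz"])) for r in records])
positions = [r["position"] for r in records]
actions = [r["action"] for r in records]

position_clf = train_position_classifier(X, positions)                   # SVM position recognizer
action_models = train_action_models(X, positions, actions, n_trees=80)   # 50-100 trees per forest

# Recognize one newly collected trial from any of the supported devices.
new = records[-1]
print(recognize_action(new["signal"], new["src_hz"], position_clf, action_models))
```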
The present invention has been described in detail above with reference to its preferred embodiments, but the present invention is not limited to these embodiments. Within the knowledge of a person of ordinary skill in the art, various changes can be made without departing from the concept of the present invention, and all such changes fall within the scope of protection of the present invention.

Claims (9)

1. A motion recognition method that does not rely on the motion information acquisition device, characterized in that it comprises two stages, a model training stage and a model prediction stage, wherein the model training stage establishes the mapping between motion information and actions, and the model prediction stage computes the corresponding action class from the collected motion information.
2. The motion recognition method that does not rely on the motion information acquisition device according to claim 1, characterized in that the model training stage specifically comprises the following steps:
1) a motion information acquisition step: different motion information acquisition devices are worn at different positions on the human body, and the motion information of the body while performing different actions is recorded;
2) a sampling-frequency standardization step: the raw motion information from the different motion information acquisition devices is normalized in frequency using down-sampling;
3) a feature extraction and feature selection step: features are extracted from the raw motion information using time-domain, frequency-domain, or nonlinear analysis methods, and the extracted features are screened using mutual-information correlation analysis, a genetic algorithm, sparse optimization, principal component analysis, or a similar method, so as to select the features that characterize the motion information;
4) a terminal-device wearing-position recognition and clustering step: from the features extracted and selected in step 3), a terminal-device wearing-position recognition clusterer is built using a supervised or unsupervised learning method;
5) a random-forest motion recognition step: for each wearing position, a corresponding motion recognition model is built using the random forest method.
3. The motion recognition method that does not rely on the motion information acquisition device according to claim 2, characterized in that in step 1), the motion information acquisition devices include but are not limited to smartphones, tablet computers, watches, and wristbands; the wearing positions of the motion information acquisition devices include but are not limited to the wrist, forearm, upper arm, waist, thigh, and shank; the actions performed by the human body include but are not limited to sitting quietly, lying down, standing, walking slowly, going upstairs, going downstairs, and running; and the collected motion information includes but is not limited to acceleration, angular velocity, and magnetic field intensity along the three spatial axes X, Y, and Z.
4. The motion recognition method that does not rely on the motion information acquisition device according to claim 2, characterized in that the frequency normalization in step 2) consists in resampling motion information whose sampling frequency is above 25 Hz to a lower sampling frequency, so that the sampling frequency of the new motion information is 25 Hz.
5. The motion recognition method that does not rely on the motion information acquisition device according to claim 2, characterized in that in step 3), the features extracted by time-domain methods include but are not limited to motion amplitude, angle, and velocity; the features extracted by frequency-domain methods include but are not limited to motion frequency and energy; and the features extracted by nonlinear analysis methods include but are not limited to approximate entropy and multi-scale entropy.
6. The motion recognition method that does not rely on the motion information acquisition device according to claim 2, characterized in that the supervised learning methods in step 4) include but are not limited to neural networks, support vector machines, and decision trees.
7. The motion recognition method that does not rely on the motion information acquisition device according to claim 2, characterized in that the unsupervised learning methods in step 4) include but are not limited to self-organizing map neural networks and distance-based discriminant analysis.
8. The motion recognition method that does not rely on the motion information acquisition device according to claim 2, characterized in that the clusterer in step 4) means that the wearing position of the terminal device is first recognized before motion recognition, and a motion recognition model is then built separately for each wearing position.
9. The motion recognition method that does not rely on the motion information acquisition device according to claim 1, characterized in that the model prediction stage is specifically as follows: the motion information acquisition device is first worn at a certain position on the human body; the motion information of the body while completing the action to be recognized is collected; this raw information is then passed in sequence through the standard sampler, the feature extraction and feature selection module, the wearing-position clusterer, and the motion recognition model; and the final motion recognition result is output.
CN201610903076.4A 2016-10-17 2016-10-17 Action identification method independent of action information acquisition equipment Active CN106228200B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610903076.4A CN106228200B (en) 2016-10-17 2016-10-17 Action identification method independent of action information acquisition equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610903076.4A CN106228200B (en) 2016-10-17 2016-10-17 Action identification method independent of action information acquisition equipment

Publications (2)

Publication Number Publication Date
CN106228200A true CN106228200A (en) 2016-12-14
CN106228200B CN106228200B (en) 2020-01-14

Family

ID=58077158

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610903076.4A Active CN106228200B (en) 2016-10-17 2016-10-17 Action identification method independent of action information acquisition equipment

Country Status (1)

Country Link
CN (1) CN106228200B (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106874874A (en) * 2017-02-16 2017-06-20 南方科技大学 Motion state identification method and device
CN107016686A (en) * 2017-04-05 2017-08-04 江苏德长医疗科技有限公司 Three-dimensional gait and motion analysis system
CN107316052A (en) * 2017-05-24 2017-11-03 中国科学院计算技术研究所 A kind of robust Activity recognition method and system based on inexpensive sensor
CN107358210A (en) * 2017-07-17 2017-11-17 广州中医药大学 Human motion recognition method and device
CN107742070A (en) * 2017-06-23 2018-02-27 中南大学 A kind of method and system of action recognition and secret protection based on acceleration information
CN108550385A (en) * 2018-04-13 2018-09-18 北京健康有益科技有限公司 A kind of motion scheme recommends method, apparatus and storage medium
CN108710822A (en) * 2018-04-04 2018-10-26 燕山大学 Personnel falling detection system based on infrared array sensor
CN108734055A (en) * 2017-04-17 2018-11-02 杭州海康威视数字技术股份有限公司 A kind of exception personnel detection method, apparatus and system
CN108968918A (en) * 2018-06-28 2018-12-11 北京航空航天大学 The wearable auxiliary screening equipment of early stage Parkinson
CN109100537A (en) * 2018-07-19 2018-12-28 百度在线网络技术(北京)有限公司 Method for testing motion, device, equipment and medium
CN109190762A (en) * 2018-07-26 2019-01-11 北京工业大学 Upper limb gesture recognition algorithms based on genetic algorithm encoding
CN109635638A (en) * 2018-10-31 2019-04-16 中国科学院计算技术研究所 For the feature extracting method and system of human motion, recognition methods and system
CN110689041A (en) * 2019-08-20 2020-01-14 陈羽旻 Multi-target behavior action recognition and prediction method, electronic equipment and storage medium
CN111221419A (en) * 2020-01-13 2020-06-02 武汉大学 Array type flexible capacitor electronic skin for sensing human motion intention

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104434119A (en) * 2013-09-20 2015-03-25 卡西欧计算机株式会社 Body information obtaining device and body information obtaining method
CN105046215A (en) * 2015-07-07 2015-11-11 中国科学院上海高等研究院 Posture and behavior identification method without influences of individual wearing positions and wearing modes

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104434119A (en) * 2013-09-20 2015-03-25 卡西欧计算机株式会社 Body information obtaining device and body information obtaining method
CN105046215A (en) * 2015-07-07 2015-11-11 中国科学院上海高等研究院 Posture and behavior identification method without influences of individual wearing positions and wearing modes

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
侯祖贵: "Human body motion analysis and recognition based on inertial sensors", China Master's Theses Full-text Database, Information Science and Technology *
孙泽浩: "Research on user activity recognition based on mobile phones and wearable devices", China Doctoral Dissertations Full-text Database, Information Science and Technology *
时岳 et al.: "Wearing position recognition method for mobile devices based on rotation patterns", Journal of Software *
王海宁: "Research on emotion recognition technology based on multi-channel physiological signals", 31 August 2016 *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106874874A (en) * 2017-02-16 2017-06-20 南方科技大学 Motion state identification method and device
CN107016686A (en) * 2017-04-05 2017-08-04 江苏德长医疗科技有限公司 Three-dimensional gait and motion analysis system
CN108734055A (en) * 2017-04-17 2018-11-02 杭州海康威视数字技术股份有限公司 A kind of exception personnel detection method, apparatus and system
CN108734055B (en) * 2017-04-17 2021-03-26 杭州海康威视数字技术股份有限公司 Abnormal person detection method, device and system
CN107316052A (en) * 2017-05-24 2017-11-03 中国科学院计算技术研究所 A kind of robust Activity recognition method and system based on inexpensive sensor
CN107742070A (en) * 2017-06-23 2018-02-27 中南大学 A kind of method and system of action recognition and secret protection based on acceleration information
CN107742070B (en) * 2017-06-23 2020-11-24 中南大学 Method and system for motion recognition and privacy protection based on acceleration data
CN107358210B (en) * 2017-07-17 2020-05-15 广州中医药大学 Human body action recognition method and device
CN107358210A (en) * 2017-07-17 2017-11-17 广州中医药大学 Human motion recognition method and device
CN108710822A (en) * 2018-04-04 2018-10-26 燕山大学 Personnel falling detection system based on infrared array sensor
CN108710822B (en) * 2018-04-04 2022-05-13 燕山大学 Personnel falling detection system based on infrared array sensor
CN108550385A (en) * 2018-04-13 2018-09-18 北京健康有益科技有限公司 A kind of motion scheme recommends method, apparatus and storage medium
CN108550385B (en) * 2018-04-13 2021-03-09 北京健康有益科技有限公司 Exercise scheme recommendation method and device and storage medium
CN108968918A (en) * 2018-06-28 2018-12-11 北京航空航天大学 The wearable auxiliary screening equipment of early stage Parkinson
CN109100537A (en) * 2018-07-19 2018-12-28 百度在线网络技术(北京)有限公司 Method for testing motion, device, equipment and medium
CN109100537B (en) * 2018-07-19 2021-04-20 百度在线网络技术(北京)有限公司 Motion detection method, apparatus, device, and medium
US10993079B2 (en) 2018-07-19 2021-04-27 Baidu Online Network Technology (Beijing) Co., Ltd. Motion detection method, device, and medium
CN109190762A (en) * 2018-07-26 2019-01-11 北京工业大学 Upper limb gesture recognition algorithms based on genetic algorithm encoding
CN109190762B (en) * 2018-07-26 2022-06-07 北京工业大学 Mobile terminal information acquisition system
CN109635638A (en) * 2018-10-31 2019-04-16 中国科学院计算技术研究所 For the feature extracting method and system of human motion, recognition methods and system
CN109635638B (en) * 2018-10-31 2021-03-09 中国科学院计算技术研究所 Feature extraction method and system and recognition method and system for human body motion
CN110689041A (en) * 2019-08-20 2020-01-14 陈羽旻 Multi-target behavior action recognition and prediction method, electronic equipment and storage medium
CN111221419A (en) * 2020-01-13 2020-06-02 武汉大学 Array type flexible capacitor electronic skin for sensing human motion intention

Also Published As

Publication number Publication date
CN106228200B (en) 2020-01-14

Similar Documents

Publication Publication Date Title
CN106228200A (en) A kind of action identification method not relying on action message collecting device
CN105678222B (en) A kind of mobile device-based Human bodys' response method
CN106971059A (en) A kind of wearable device based on the adaptive health monitoring of neutral net
CN108245172B (en) Human body posture recognition method free of position constraint
CN108958482B (en) Similarity action recognition device and method based on convolutional neural network
CN106096662A (en) Human motion state identification based on acceleration transducer
CN103970271A (en) Daily activity identifying method with exercising and physiology sensing data fused
CN103400123A (en) Gait type identification method based on three-axis acceleration sensor and neural network
CN104586402B (en) A kind of feature extracting method of physical activity
CN110532898A (en) A kind of physical activity recognition methods based on smart phone Multi-sensor Fusion
CN112464738B (en) Improved naive Bayes algorithm user behavior identification method based on mobile phone sensor
CN111401435B (en) Human body motion mode identification method based on motion bracelet
CN106910314A (en) A kind of personalized fall detection method based on the bodily form
CN108958474A (en) A kind of action recognition multi-sensor data fusion method based on Error weight
Sheng et al. An adaptive time window method for human activity recognition
CN108008151A (en) A kind of moving state identification method and system based on 3-axis acceleration sensor
CN109805935A (en) A kind of intelligent waistband based on artificial intelligence hierarchical layered motion recognition method
CN106643722A (en) Method for pet movement identification based on triaxial accelerometer
CN103785157A (en) Human body motion type identification accuracy improving method
WO2022100187A1 (en) Mobile terminal-based method for identifying and monitoring emotions of user
CN103267652A (en) Intelligent online diagnosis method for early failures of equipment
CN106503667B (en) A kind of fall detection method based on WISP and pattern-recognition
CN109271889A (en) A kind of action identification method based on the double-deck LSTM neural network
Cao et al. ActiRecognizer: Design and implementation of a real-time human activity recognition system
Fu et al. Ping pong motion recognition based on smart watch

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant