CN109993037A - Action identification method, device, wearable device and computer readable storage medium - Google Patents


Info

Publication number
CN109993037A
CN109993037A (application CN201810000873.0A; granted as CN109993037B)
Authority
CN
China
Prior art keywords
data
action
human body
body posture
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810000873.0A
Other languages
Chinese (zh)
Other versions
CN109993037B (en)
Inventor
李杨
马丽秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Communications Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Communications Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN201810000873.0A priority Critical patent/CN109993037B/en
Publication of CN109993037A publication Critical patent/CN109993037A/en
Application granted granted Critical
Publication of CN109993037B publication Critical patent/CN109993037B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Health & Medical Sciences (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides an action recognition method, an action recognition apparatus, a wearable device, and a computer-readable storage medium. The action recognition method includes: acquiring, at intervals of a preset time, human body posture data of the user collected by the wearable device; extracting action features from the human body posture data; identifying, according to first preset action knowledge, action recognition parameters and the action features, whether the action corresponding to the human body posture data is a static action or a dynamic action; if the action is static, identifying, according to second preset action knowledge and the action features, whether it is static standing or static sitting; if the action is dynamic, identifying, according to third preset action knowledge and the action features, whether it is walking or running. The action recognition parameters are parameters initialized from static data collected while the user stands still. The scheme achieves action recognition at low cost, with low complexity and high generality.

Description

Action recognition method and device, wearable device and computer-readable storage medium
Technical Field
The invention relates to the technical field of internet of things, in particular to a motion recognition method, a motion recognition device, wearable equipment and a computer-readable storage medium.
Background
Human action recognition is an important component of many human-centred intelligent systems: human posture data is monitored in real time by a portable posture sensor, posture features are extracted, and the human action state is recognized, thereby triggering follow-up control actions in an intelligent control system and making devices and environments more intelligent.
However, existing wearable devices for action recognition can only perform data acquisition and data transmission on the wearing side; whether the data is transmitted to a console or to the cloud for computation, this imposes strict requirements and restrictions on the usage scenario and increases the computation load and latency of the control decision system.
Secondly, action recognition algorithms based on data training need sufficient user data to continuously improve model accuracy. A recognition model trained on data from some wearers is difficult to apply universally to general users, and, owing to user privacy constraints and inter-user differences, enough labelled samples are hard to collect for training. In addition, the user experience of current wearable devices for action recognition is poor: most algorithms compensate for the information lost by data discretization by adding sensors, which leads to high sensor redundancy, large computation and storage requirements, and a greatly increased complexity of the action recognition system.
That is, existing action recognition algorithms require enough labelled data for training to construct the algorithm model; in particular, to cope with differences between users' data, a large number of differing samples must be added to the training set, which raises labour costs and still cannot guarantee that the recognition suits all wearers.
In addition, data-driven training models require considerable storage space for data and algorithms, and occupy a large amount of computing resources and computing time to complete the algorithm process.
Moreover, existing wearable devices are poor in comfort and intelligence. Because of algorithm complexity, the wearable device can only acquire data on the wearing side and must transmit it to other computing devices for processing, which increases the computing and storage cost at the control end, raises the requirements on data transmission, and runs counter to the current trend of edge computing in the internet of things.
Furthermore, current algorithms fragment the data and perform only simple feature extraction, losing a large amount of information; the information content is usually increased by adding sensors, causing serious sensor redundancy. The wearer must wear many sensors, whose wearing positions are strictly constrained, which greatly reduces the comfort of the wearable device.
In conclusion, existing action recognition algorithms and wearable devices are ill-suited to real internet-of-things scenarios.
Disclosure of Invention
The invention aims to provide a motion recognition method, a motion recognition device, wearable equipment and a computer readable storage medium, and solves the problems of high cost, high complexity and poor universality of a motion recognition scheme in the prior art.
In order to solve the above technical problem, an embodiment of the present invention provides a motion recognition method, including:
acquiring, at intervals of a preset time, human body posture data of the user collected by the wearable device;
extracting action features from the human body posture data;
recognizing the action corresponding to the human body posture data as a static action or a dynamic action according to first preset action knowledge, action recognition parameters and the action characteristics;
if the action corresponding to the human body posture data is a static action, identifying that the action corresponding to the human body posture data is a static standing or a static sitting according to second preset action knowledge and the action characteristics;
if the action corresponding to the human body posture data is a dynamic action, identifying the action corresponding to the human body posture data as walking or running according to third preset action knowledge and the action characteristics;
the motion identification parameters are initialized according to static data of a user standing still.
Optionally, the body posture data includes three-axis acceleration data of a body posture of the user;
the step of extracting motion features from the human body posture data comprises:
and compressing the triaxial acceleration data into one-dimensional feature vector data to obtain action features.
Optionally, before obtaining the human body posture data of the user collected by the wearable device at preset intervals, the motion recognition method further includes:
acquiring static data of a user when the user stands still and collected by the wearable device;
adjusting the static data according to the static acceleration data;
initializing the action identification parameters according to the adjusted static data;
wherein the static acceleration data is used to eliminate data generated by shaking when the user is standing still.
Optionally, the static data includes triaxial acceleration data when the user is stationary;
the step of initializing the motion recognition parameters according to the adjusted static data comprises:
the triaxial acceleration in the adjusted static data is set as $Ax_1 = [acc_{11}, \ldots, acc_{1n}]$, $Ax_2 = [acc_{21}, \ldots, acc_{2n}]$, $Ax_3 = [acc_{31}, \ldots, acc_{3n}]$, where $acc_{1n}$ denotes the n-th acceleration value of the first axis, $acc_{2n}$ the n-th acceleration value of the second axis, and $acc_{3n}$ the n-th acceleration value of the third axis; the following formula is utilized:
$a_i = \frac{\beta}{n}\sum_{j=1}^{n}\left|acc_{ij} - \overline{acc_i}\right|$
where $a_i$ denotes the initial acceleration of the i-th axis; $acc_{ij}$ denotes the j-th axial acceleration sample of the i-th axis in the static data; $\overline{acc_i}$ denotes the mean axial acceleration of the i-th axis in the static data; n denotes the total number of axial acceleration samples; and β denotes a preset empirical parameter generated from the corresponding physical knowledge;
the triaxial acceleration in the adjusted static data is thereby compressed into one-dimensional feature vector data to obtain the initialized action recognition parameter $a_{DS} = [a_1, a_2, a_3]$.
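As an illustration of this initialization, the sketch below derives $a_{DS}$ from a still-standing window. Since the patent's formula images are not reproduced in the text, the per-axis computation (a β-scaled mean absolute deviation) and the function name are assumptions for illustration only:

```python
def init_recognition_params(static_xyz, beta=1.5):
    """Initialize a_DS = [a1, a2, a3] from still-standing triaxial data.

    static_xyz: three lists of acceleration samples, one per axis.
    beta: preset empirical parameter (assumed here to scale the per-axis
    mean absolute deviation; the patent's exact formula may differ).
    """
    a_ds = []
    for axis in static_xyz:
        mean = sum(axis) / len(axis)
        # spread of the static signal on this axis, scaled by beta
        a_i = beta * sum(abs(s - mean) for s in axis) / len(axis)
        a_ds.append(a_i)
    return a_ds
```

Because $a_{DS}$ is computed from each wearer's own static window, the thresholds adapt to the individual without any labelled training data.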
Optionally, the step of compressing the triaxial acceleration data into one-dimensional feature vector data to obtain motion features includes:
setting the triaxial acceleration in the human body posture data as $Bx_1 = [acc^{b}_{11}, \ldots, acc^{b}_{1m}]$, $Bx_2 = [acc^{b}_{21}, \ldots, acc^{b}_{2m}]$, $Bx_3 = [acc^{b}_{31}, \ldots, acc^{b}_{3m}]$, where $acc^{b}_{1m}$ denotes the m-th acceleration value of the first axis, $acc^{b}_{2m}$ the m-th acceleration value of the second axis, and $acc^{b}_{3m}$ the m-th acceleration value of the third axis; the following formula is utilized:
$b_i = \frac{1}{m}\sum_{j=1}^{m}\left|acc^{b}_{ij} - \overline{acc^{b}_i}\right|$
where $b_i$ denotes the target acceleration of the i-th axis; $acc^{b}_{ij}$ denotes the j-th axial acceleration sample of the i-th axis in the human body posture data; $\overline{acc^{b}_i}$ denotes the mean axial acceleration of the i-th axis in the human body posture data; and m denotes the total number of axial acceleration samples;
the triaxial acceleration in the human body posture data is thereby compressed into one-dimensional feature vector data to obtain a first target vector $[b_1, b_2, b_3]$ as the first action feature;
the step of recognizing the motion corresponding to the human body posture data as a static motion or a dynamic motion according to the first preset motion knowledge, the motion recognition parameters and the motion characteristics comprises:
determining, according to the first preset action knowledge, that if at least one $b_i$ satisfies $b_i < a_i$, the action corresponding to the human body posture data is a static action, and otherwise a dynamic action.
Optionally, the step of compressing the triaxial acceleration data into one-dimensional feature vector data to obtain motion features includes:
setting the triaxial acceleration in the human body posture data by its x-, y- and z-axis samples, where $acc^{b}_{xm}$, $acc^{b}_{ym}$ and $acc^{b}_{zm}$ denote the m-th acceleration value of the x axis, the y axis and the z axis, respectively;
using the formula $\alpha = \arctan\left(\overline{acc_y} \,/\, \sqrt{\overline{acc_x}^{\,2} + \overline{acc_z}^{\,2}}\right)$, compressing the triaxial acceleration in the human body posture data into one-dimensional feature vector data and determining a first limb angle α of the user as the second action feature;
using the formula $\gamma = \arctan\left(\overline{acc_x} \,/\, \sqrt{\overline{acc_y}^{\,2} + \overline{acc_z}^{\,2}}\right)$, compressing the triaxial acceleration in the human body posture data into one-dimensional feature vector data and determining a second limb angle γ of the user as the third action feature;
wherein $\overline{acc_x}$, $\overline{acc_y}$ and $\overline{acc_z}$ denote the mean axial acceleration of the x-axis, y-axis and z-axis directions in the human body posture data, respectively;
the step of identifying that the action corresponding to the human body posture data is static standing or static sitting according to the second preset action knowledge and the action features comprises:
determining, according to the second preset action knowledge, that if α is greater than a set first threshold the action corresponding to the human body posture data is static standing, and that if γ is greater than a set second threshold the action is static sitting.
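A minimal sketch of this threshold test follows, assuming the limb angles and thresholds are given in degrees; the function name and the threshold values are illustrative, not the patent's literal settings:

```python
def classify_static_action(alpha, gamma, alpha_thresh=60.0, gamma_thresh=45.0):
    """Second preset action knowledge: limb-angle thresholds separate
    static standing from static sitting. Threshold values depend on the
    device and its wearing position and are assumed here."""
    if alpha > alpha_thresh:
        return "static standing"
    if gamma > gamma_thresh:
        return "static sitting"
    return "undetermined"
```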
Optionally, the human body posture data includes spindle angular velocity data of a human body posture of the user;
the step of extracting motion features from the human body posture data comprises:
obtaining main shaft angle data according to the main shaft angular velocity data;
and obtaining action characteristics according to the main shaft angle data.
Optionally, the step of obtaining the motion characteristics according to the spindle angle data includes:
acquiring the number of wave crests and wave troughs present in the main-axis angle data, recording this number as θ, and taking θ as the fourth action feature;
the step of identifying the motion corresponding to the human body posture data as walking or running according to the third preset motion knowledge and the motion characteristics comprises the following steps:
determining, according to the third preset action knowledge, that if θ equals 1 the action corresponding to the human body posture data is walking, and if θ is greater than or equal to 2 the action is running.
An embodiment of the present invention further provides a motion recognition apparatus, including:
the first acquisition module is used for acquiring human body posture data of a user, which is acquired by the wearable device, at intervals of preset time;
the first extraction module is used for extracting action characteristics from the human body posture data;
the first identification module is used for identifying the action corresponding to the human body posture data as a static action or a dynamic action according to first preset action knowledge, action identification parameters and the action characteristics;
the second identification module is used for identifying that the action corresponding to the human body posture data is static standing or static sitting according to second preset action knowledge and the action characteristics if the action corresponding to the human body posture data is static action;
the third identification module is used for identifying the action corresponding to the human body posture data as walking or running according to third preset action knowledge and the action characteristics if the action corresponding to the human body posture data is a dynamic action;
the motion identification parameters are initialized according to static data of a user standing still.
Optionally, the body posture data includes three-axis acceleration data of a body posture of the user;
the first extraction module comprises:
and the first processing submodule is used for compressing the triaxial acceleration data into one-dimensional feature vector data to obtain action features.
Optionally, the motion recognition apparatus further includes:
the second acquisition module is used for acquiring static data of the user when the user stands still, which is acquired by the wearable device, before acquiring the human body posture data of the user, which is acquired by the wearable device, at preset intervals;
the first processing module is used for adjusting the static data according to the static acceleration data;
the second processing module is used for initializing the action identification parameters according to the adjusted static data;
wherein the static acceleration data is used to eliminate data generated by shaking when the user is standing still.
Optionally, the static data includes triaxial acceleration data when the user is stationary;
the second processing module comprises:
a second processing submodule, configured to set the triaxial acceleration in the adjusted static data as $Ax_1 = [acc_{11}, \ldots, acc_{1n}]$, $Ax_2 = [acc_{21}, \ldots, acc_{2n}]$, $Ax_3 = [acc_{31}, \ldots, acc_{3n}]$, where $acc_{1n}$ denotes the n-th acceleration value of the first axis, $acc_{2n}$ the n-th acceleration value of the second axis, and $acc_{3n}$ the n-th acceleration value of the third axis, and to utilize the following formula:
$a_i = \frac{\beta}{n}\sum_{j=1}^{n}\left|acc_{ij} - \overline{acc_i}\right|$
where $a_i$ denotes the initial acceleration of the i-th axis; $acc_{ij}$ denotes the j-th axial acceleration sample of the i-th axis in the static data; $\overline{acc_i}$ denotes the mean axial acceleration of the i-th axis in the static data; n denotes the total number of axial acceleration samples; and β denotes a preset empirical parameter generated from the corresponding physical knowledge;
the triaxial acceleration in the adjusted static data is thereby compressed into one-dimensional feature vector data to obtain the initialized action recognition parameter $a_{DS} = [a_1, a_2, a_3]$.
Optionally, the first processing sub-module includes:
a first processing unit, configured to set the triaxial acceleration in the human body posture data as $Bx_1 = [acc^{b}_{11}, \ldots, acc^{b}_{1m}]$, $Bx_2 = [acc^{b}_{21}, \ldots, acc^{b}_{2m}]$, $Bx_3 = [acc^{b}_{31}, \ldots, acc^{b}_{3m}]$, where $acc^{b}_{1m}$ denotes the m-th acceleration value of the first axis, $acc^{b}_{2m}$ the m-th acceleration value of the second axis, and $acc^{b}_{3m}$ the m-th acceleration value of the third axis, and to utilize the following formula:
$b_i = \frac{1}{m}\sum_{j=1}^{m}\left|acc^{b}_{ij} - \overline{acc^{b}_i}\right|$
where $b_i$ denotes the target acceleration of the i-th axis; $acc^{b}_{ij}$ denotes the j-th axial acceleration sample of the i-th axis in the human body posture data; $\overline{acc^{b}_i}$ denotes the mean axial acceleration of the i-th axis in the human body posture data; and m denotes the total number of axial acceleration samples;
the triaxial acceleration in the human body posture data is thereby compressed into one-dimensional feature vector data to obtain a first target vector $[b_1, b_2, b_3]$ as the first action feature;
the first identification module comprises:
a first identification submodule, configured to determine, according to the first preset action knowledge, that if at least one $b_i$ satisfies $b_i < a_i$ the action corresponding to the human body posture data is a static action, and otherwise a dynamic action.
Optionally, the first processing sub-module includes:
a second processing unit, configured to set the triaxial acceleration in the human body posture data by its x-, y- and z-axis samples, where $acc^{b}_{xm}$, $acc^{b}_{ym}$ and $acc^{b}_{zm}$ denote the m-th acceleration value of the x axis, the y axis and the z axis, respectively;
using the formula $\alpha = \arctan\left(\overline{acc_y} \,/\, \sqrt{\overline{acc_x}^{\,2} + \overline{acc_z}^{\,2}}\right)$, compressing the triaxial acceleration in the human body posture data into one-dimensional feature vector data and determining a first limb angle α of the user as the second action feature;
using the formula $\gamma = \arctan\left(\overline{acc_x} \,/\, \sqrt{\overline{acc_y}^{\,2} + \overline{acc_z}^{\,2}}\right)$, compressing the triaxial acceleration in the human body posture data into one-dimensional feature vector data and determining a second limb angle γ of the user as the third action feature;
wherein $\overline{acc_x}$, $\overline{acc_y}$ and $\overline{acc_z}$ denote the mean axial acceleration of the x-axis, y-axis and z-axis directions in the human body posture data, respectively;
the second identification module comprises:
and the second identification submodule is used for determining, according to the second preset action knowledge, that if α is greater than a set first threshold the action corresponding to the human body posture data is static standing, and that if γ is greater than a set second threshold the action is static sitting.
Optionally, the human body posture data includes spindle angular velocity data of a human body posture of the user;
the first extraction module comprises:
the third processing submodule is used for obtaining main shaft angle data according to the main shaft angular speed data;
and the fourth processing submodule is used for obtaining action characteristics according to the main shaft angle data.
Optionally, the fourth processing sub-module includes:
the first obtaining unit is used for obtaining the number of wave crests and wave troughs in the main-axis angle data, recording this number as θ, and taking θ as the fourth action feature;
the third identification module comprises:
the third recognition submodule is used for determining, according to the third preset action knowledge, that if θ equals 1 the action corresponding to the human body posture data is walking, and if θ is greater than or equal to 2 the action is running.
The embodiment of the invention also provides the wearable device, which comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor; the processor implements the above-described motion recognition method when executing the program.
Embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps in the above-mentioned motion recognition method.
The technical scheme of the invention has the following beneficial effects:
in the scheme, the motion recognition method acquires the human body posture data of the user, which is acquired by the wearable device, through the interval preset time; extracting action features from the human body posture data; recognizing the action corresponding to the human body posture data as a static action or a dynamic action according to first preset action knowledge, action recognition parameters and the action characteristics; if the action corresponding to the human body posture data is a static action, identifying that the action corresponding to the human body posture data is a static standing or a static sitting according to second preset action knowledge and the action characteristics; if the action corresponding to the human body posture data is a dynamic action, identifying the action corresponding to the human body posture data as walking or running according to third preset action knowledge and the action characteristics; the motion identification parameters are initialized according to static data of a user standing still;
action knowledge is built into the recognition algorithm, which effectively reduces the required number of sensors and the constraints on their wearing positions, and the collected raw data is used directly without complex pre-processing; once simplified, the algorithm can run on a low-cost computing chip, so the wearable device becomes intelligent at the wearing end and produces recognition results directly, eliminating the instability and latency of data transmission and meeting the edge-computing needs of the internet of things; accurate action recognition is achieved with limited computing resources and without relying on interaction with the cloud, while the algorithm can still be updated and improved over the network; implementation cost and complexity are reduced, and the scheme applies to the general population, improving universality.
Drawings
FIG. 1 is a schematic flow chart illustrating a method for recognizing actions according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of static data collection according to an embodiment of the present invention;
FIG. 3 is a static data diagram according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a decision tree model according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of gyroscope main-axis angle data when a user walks according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of gyroscope main-axis angle data when a user stands according to an embodiment of the present invention;
FIG. 7 is an exploded view of a user walking gesture according to an embodiment of the present invention;
FIG. 8 is a schematic view of a wearable device frame according to an embodiment of the invention;
FIG. 9 is a flowchart illustrating a specific application of the motion recognition method according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of an embodiment of a motion recognition apparatus;
fig. 11 is a schematic structural diagram of a wearable device according to an embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
Aiming at the problems of high cost, high complexity and poor universality of an action recognition scheme in the prior art, the invention provides an action recognition method, as shown in figure 1, which comprises the following steps:
step 11: acquiring human body posture data of a user, which is acquired by wearing equipment, at intervals of preset time;
step 12: extracting action features from the human body posture data;
step 13: recognizing the action corresponding to the human body posture data as a static action or a dynamic action according to first preset action knowledge, action recognition parameters and the action characteristics;
step 14: if the action corresponding to the human body posture data is a static action, identifying that the action corresponding to the human body posture data is a static standing or a static sitting according to second preset action knowledge and the action characteristics;
step 15: if the action corresponding to the human body posture data is a dynamic action, identifying the action corresponding to the human body posture data as walking or running according to third preset action knowledge and the action characteristics;
the motion identification parameters are initialized according to static data of a user standing still.
The initialized action recognition parameters are different from person to person, so that personalized setting for different users is realized, and the recognition result is more accurate. The preset time may preferably be 1 s.
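Steps 11 to 15 form a small knowledge-driven decision tree, which can be sketched end to end as follows; the features, thresholds and function names are illustrative assumptions rather than the patent's literal implementation:

```python
def recognize_action(b, a_ds, alpha, gamma, theta,
                     alpha_thresh=60.0, gamma_thresh=45.0):
    """One pass of the decision tree (steps 13-15).

    b: per-axis features of the current one-second window;
    a_ds: action recognition parameters initialized from static data;
    alpha, gamma: limb angles; theta: number of crests and troughs in
    the main-axis angle data. Threshold values are assumed.
    """
    if any(b_i < a_i for b_i, a_i in zip(b, a_ds)):   # step 13: static?
        if alpha > alpha_thresh:                       # step 14
            return "static standing"
        if gamma > gamma_thresh:
            return "static sitting"
        return "static (undetermined)"
    if theta == 1:                                     # step 15: dynamic
        return "walking"
    if theta >= 2:
        return "running"
    return "dynamic (undetermined)"
```

Because every branch is a simple comparison against personalized parameters, the whole tree fits comfortably on a low-cost chip at the wearing end, which is the point of the scheme.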
The action recognition method provided by the embodiment of the invention obtains the human body posture data of the user collected by the wearable device at intervals of preset time; extracting action features from the human body posture data; recognizing the action corresponding to the human body posture data as a static action or a dynamic action according to first preset action knowledge, action recognition parameters and the action characteristics; if the action corresponding to the human body posture data is a static action, identifying that the action corresponding to the human body posture data is a static standing or a static sitting according to second preset action knowledge and the action characteristics; if the action corresponding to the human body posture data is a dynamic action, identifying the action corresponding to the human body posture data as walking or running according to third preset action knowledge and the action characteristics; the motion identification parameters are initialized according to static data of a user standing still;
action knowledge is built into the recognition algorithm, which effectively reduces the required number of sensors and the constraints on their wearing positions, and the collected raw data is used directly without complex pre-processing; once simplified, the algorithm can run on a low-cost computing chip, so the wearable device becomes intelligent at the wearing end and produces recognition results directly, eliminating the instability and latency of data transmission and meeting the edge-computing needs of the internet of things; accurate action recognition is achieved with limited computing resources and without relying on interaction with the cloud, while the algorithm can still be updated and improved over the network; implementation cost and complexity are reduced, and the scheme applies to the general population, improving universality.
Wherein the body pose data comprises three-axis acceleration data of a body pose of a user;
correspondingly, the step of extracting the motion characteristics from the human body posture data comprises the following steps: and compressing the triaxial acceleration data into one-dimensional feature vector data to obtain action features.
Further, before acquiring the human body posture data of the user collected by the wearable device at preset intervals, the motion recognition method further includes: acquiring static data of a user when the user stands still and collected by the wearable device; adjusting the static data according to the static acceleration data; initializing the action identification parameters according to the adjusted static data; wherein the static acceleration data is used to eliminate data generated by shaking when the user is standing still.
In consideration of practical application, in this embodiment, the time of the static data acquired by the wearable device when the user stands still may be 3 s.
Specifically, the static data includes triaxial acceleration data when the user is stationary; the step of initializing the motion recognition parameters according to the adjusted static data comprises:
Let the triaxial acceleration in the adjusted static data be Ax_1 = [acc_11, ..., acc_1n], Ax_2 = [acc_21, ..., acc_2n], Ax_3 = [acc_31, ..., acc_3n], where acc_1n denotes the n-th acceleration value on the first axis, acc_2n the n-th acceleration value on the second axis, and acc_3n the n-th acceleration value on the third axis. The following formula is utilized:
where a_i denotes the initial acceleration of the i-th axis; acc_ij denotes the j-th axial acceleration sample of the i-th axis in the static data; the mean term denotes the average axial acceleration of the i-th axis in the static data; n denotes the total number of axial acceleration samples; and β denotes a preset empirical parameter generated by learning the corresponding physical knowledge;
compressing the triaxial acceleration in the adjusted static data into one-dimensional feature vector data to obtain the initialized action identification parameter a_DS = [a_1, a_2, a_3].
Ax_1 denotes the set of first-axis accelerations in the static data; Ax_2 the set of second-axis accelerations; Ax_3 the set of third-axis accelerations.
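As a concrete illustration, the parameter initialization above can be sketched in Python. The patent does not reproduce its formula image, so the per-axis computation below (mean absolute deviation scaled by the empirical parameter β) is a hypothetical reconstruction consistent with the surrounding definitions (a_i, acc_ij, the per-axis mean, n and β), not the patented formula itself; the default beta value is likewise illustrative.

```python
def init_action_params(ax1, ax2, ax3, beta=1.5):
    """Compress three axes of static (standing-still) acceleration into the
    initialized action recognition parameter a_DS = [a1, a2, a3].

    Hypothetical reconstruction: a_i is taken as the mean absolute
    deviation of axis i scaled by the empirical parameter beta
    (beta=1.5 is an illustrative default, not a value from the patent)."""
    a_ds = []
    for axis in (ax1, ax2, ax3):
        n = len(axis)
        mean = sum(axis) / n                        # per-axis mean acceleration
        mad = sum(abs(v - mean) for v in axis) / n  # mean absolute deviation
        a_ds.append(beta * mad)
    return a_ds
```

Note that perfectly constant static data would make every deviation vanish and a_DS degenerate to zeros, which is one reason the adjustment by static acceleration data (removing residual body shake) precedes initialization.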
Compressing the triaxial acceleration data into one-dimensional feature vector data to obtain the action features comprises: letting the triaxial acceleration in the human body posture data be given per axis, each axis containing m samples, the m-th values being the m-th acceleration values on the first, second and third axes respectively; the following formula is utilized:
where b_i denotes the target acceleration of the i-th axis, computed from the j-th axial acceleration samples of the i-th axis in the human body posture data and their per-axis mean, with m denoting the total number of axial acceleration samples;
compressing the triaxial acceleration in the human body posture data into one-dimensional feature vector data to obtain a first target vector [b_1, b_2, b_3] as the first action characteristic;
correspondingly, the step of identifying, according to the first preset action knowledge, whether the motion corresponding to the human body posture data is a static motion or a dynamic motion comprises: determining, according to the first preset action knowledge, that the motion is a static motion if at least one b_i satisfies b_i < a_i, and a dynamic motion otherwise.
Each b_i is compared with the a_i of the corresponding axis to judge whether the motion corresponding to the human body posture data is a static or a dynamic motion. Bx_1 denotes the set of first-axis accelerations in the body pose data; Bx_2 the set of second-axis accelerations; Bx_3 the set of third-axis accelerations.
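A minimal sketch of knowledge 1 follows. The compression of each axis into b_i is again a hypothetical stand-in (mean absolute deviation over the 1-second window), since the formula image is not reproduced; the comparison rule itself — static if at least one b_i < a_i — is taken directly from the text.

```python
def compress_axis(samples):
    """Compress one axis of the 1-second window into a scalar b_i.
    Hypothetical stand-in for the patent's (unreproduced) formula:
    mean absolute deviation of the samples."""
    m = len(samples)
    mean = sum(samples) / m
    return sum(abs(v - mean) for v in samples) / m

def is_static(bx1, bx2, bx3, a_ds):
    """Knowledge 1, as stated in the text: the window is a static motion
    if at least one b_i satisfies b_i < a_i, otherwise dynamic."""
    b = [compress_axis(bx1), compress_axis(bx2), compress_axis(bx3)]
    return any(bi < ai for bi, ai in zip(b, a_ds))
```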
Compressing the triaxial acceleration data into one-dimensional feature vector data to obtain the action features comprises: letting the triaxial acceleration in the human body posture data be given per axis, the m-th samples being the m-th acceleration values on the x, y and z axes respectively;
using a formula that compresses the triaxial acceleration in the human body posture data into one-dimensional feature data, a first limb angle α of the user is determined as the second action characteristic;
using a corresponding formula, the triaxial acceleration in the human body posture data is compressed into one-dimensional feature data and a second limb angle γ of the user is determined as the third action characteristic;
where the mean terms denote the average axial accelerations in the x-, y- and z-axis directions of the human body posture data, each calculated from the acceleration data of its own axis;
correspondingly, the step of identifying, according to the second preset motion knowledge and the motion characteristics, whether the motion corresponding to the human body posture data is static standing or static sitting comprises: determining, according to the second preset motion knowledge, that the motion is static standing if α is greater than a set first threshold, or static sitting if γ is greater than a set second threshold.
According to the cosine of the included angle between the y-axis acceleration and the gravitational acceleration, both the first threshold and the second threshold may be set to 0.8, but are not limited thereto. Bx_1 denotes the set of x-axis accelerations in the human body posture data; Bx_2 the set of y-axis accelerations; Bx_3 the set of z-axis accelerations.
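Assuming, as the passage suggests, that α and γ are the cosines of the angles between the mean acceleration vector and the y and z axes respectively (the formula images themselves are not reproduced in the text), knowledge 2 can be sketched as:

```python
import math

def limb_angles(bx, by, bz):
    """Hypothetical reconstruction of the limb-angle features: alpha as the
    cosine of the angle between the mean acceleration vector and the y axis,
    gamma as the cosine with the z axis."""
    mx, my, mz = (sum(a) / len(a) for a in (bx, by, bz))
    norm = math.sqrt(mx * mx + my * my + mz * mz)
    return abs(my) / norm, abs(mz) / norm

def classify_static(bx, by, bz, thresh_alpha=0.8, thresh_gamma=0.8):
    """Knowledge 2: standing if alpha exceeds the first threshold,
    sitting if gamma exceeds the second (0.8 per the example above)."""
    alpha, gamma = limb_angles(bx, by, bz)
    if alpha > thresh_alpha:
        return "standing"
    if gamma > thresh_gamma:
        return "sitting"
    return "unknown"
```

With gravity on the y axis (standing) the mean vector points along y and α approaches 1; in a sitting posture gravity shifts to the z axis and γ dominates, matching the wearing-direction requirement described below.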
Wherein the human body posture data comprises spindle angular velocity data of a human body posture of a user; the step of extracting motion features from the human body posture data comprises: obtaining main shaft angle data according to the main shaft angular velocity data; and obtaining action characteristics according to the main shaft angle data.
Specifically, the step of obtaining the motion characteristics from the spindle angle data comprises: counting the peaks and troughs present in the spindle angle data, recording the count as θ, and taking θ as the fourth action characteristic;
correspondingly, the step of identifying, according to the third preset motion knowledge and the motion characteristics, whether the motion corresponding to the human body posture data is walking or running comprises: determining, according to the third preset motion knowledge, that the motion is walking if θ equals 1, and running if θ is greater than or equal to 2.
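Knowledge 3 reduces to counting local extrema in one second of spindle-angle samples. A simple sketch follows (using strict local-extremum tests; the real device may additionally filter sensor noise):

```python
def count_extrema(angles):
    """Count peaks and troughs (theta) in one second of principal-axis
    angle samples."""
    theta = 0
    for i in range(1, len(angles) - 1):
        if angles[i] > angles[i - 1] and angles[i] > angles[i + 1]:
            theta += 1  # peak
        elif angles[i] < angles[i - 1] and angles[i] < angles[i + 1]:
            theta += 1  # trough
    return theta

def classify_dynamic(angles):
    """Knowledge 3: theta == 1 -> walking, theta >= 2 -> running."""
    theta = count_extrema(angles)
    if theta == 1:
        return "walking"
    if theta >= 2:
        return "running"
    return "unknown"
```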
The above description covers only the static (sitting, standing) and dynamic (running, walking) examples; the scheme can equally identify static actions of the user such as lying, kneeling and drinking, and dynamic actions such as riding and climbing stairs, simply by acquiring the corresponding sensing data according to the related action knowledge (physical knowledge), which is not repeated herein.
The following further describes the motion recognition method provided by the embodiment of the present invention.
In view of the above technical problems, embodiments of the present invention provide an action recognition method which, by adding action knowledge to the action recognition algorithm, effectively reduces the number of sensors and the constraints on sensor wearing positions; once the algorithm is simplified, it can be implemented on a low-cost computing chip, so that the wearable device is made intelligent at the wearing end and directly generates the recognition result, eliminating the instability and latency of data transmission and meeting the edge-computing requirement of the Internet of things.
The embodiment of the invention acquires data in real time from a motion sensor worn on the body, including acceleration data, gyroscope data and the like, and performs motion feature extraction and motion recognition in 1-second periods on an ATmega328P chip (an 8-bit microcontroller with a 16 MHz clock, 2 KB of random access memory and 32 KB of flash memory) connected to the sensor, finally realizing a motion recognition algorithm and device under limited computing resources. The motion recognition algorithm in the embodiment of the invention needs no training data to train a model and recognizes motions fully automatically by combining the related motion postures; the method and the wearable device suit the general population, and to use them a user need only wear the device at the specified position on the body and stand still for 3 seconds to initialize it.
Firstly, the inertial measurement unit IMU can acquire motion data such as acceleration and angular velocity in real time, the data acquisition frequency can be about 20Hz, and the algorithm in the embodiment of the invention can be directly used without performing complex data processing on the acquired original data; secondly, extracting the characteristics of the data in each period according to the motion rule and by combining the relevant action characteristics, and identifying the action by using the characteristics; and finally, constructing a decision tree algorithm by combining the characteristics of each action and the action characteristics acquired through the sensor data, and writing the algorithm into the chip.
The method and the device apply to almost all users; accurate action recognition can be performed with limited computing resources and without relying on a cloud for information interaction, and the device can update and refine the algorithm over the network.
Specifically, the invention provides a motion-feature-based adaptive motion recognition method and a wearable device with limited computing resources: motion features are extracted from acquired human body posture sensing data, and the motion posture performed by the user wearing the device is recognized under limited computing resources through the following operations:
1. before identification, a user wears the equipment as required and stands still for 3 seconds to initialize the model, and relevant parameters (action identification parameters) in the classification model are adjusted according to collected static data of the user, so that individuation of action identification is realized;
2. A sensor in the device acquires posture sensing data of the human body at a frequency of about 20 Hz, including the triaxial acceleration and the deflection angle of the main movable axis. The human body posture data is segmented, and the human body action data sequence within 1 second is stored;
3. Action characteristics are extracted from the action sequence, a decision tree model is constructed from the identified human body action knowledge, and the action characteristics acquired within the 1-second action period are classified.
Correspondingly, a set of action recognition devices under limited resources is provided, wherein the modules related to the action recognition algorithm comprise:
and the sensor data acquisition module is used for acquiring human body posture data, including three-axis acceleration and a main operating axis angle.
And the action recognition module is used for extracting the posture data characteristics, initializing a decision tree model according to the body characteristics of different users and classifying the action data.
In operation 1, because body data differ greatly between users, the data fed back strongly affects the model parameters. To improve the user experience, the wearer is required to place the sensor at a given position in a given posture (for example, strapping the sensor just above the knee, as shown in fig. 2), and the initialization effort required of the user is kept as small as possible: only 3 s of static data need be collected when the device starts. The sensor orientation data (static data) is adjusted by the static acceleration data to eliminate the user's ordinary body shake, and the parameters in the action recognition model (the action recognition parameters) are updated so that the classification model adapts to the user and the user's environment.
In operation 2, user motion data including the triaxial acceleration and the main-axis angle is collected at a frequency of 20 Hz, and the human motion data sequence within a 1-second period is stored; actually collected motion data is shown in fig. 3 (ACC_X denotes the X-axis data, ACC_Y the Y-axis data, ACC_Z the Z-axis data, and ANGLE the gyroscope main-axis data). Under limited resources, the acquisition frequency and segment length are chosen so as to record the change process of a single action completely while respecting the storage-space limit. Repeated experiments show that the repetition period of basic human actions is shorter than 1 second, and a 20 Hz sampling frequency can present the action rules and characteristics of essentially all basic actions. Posture data generally includes triaxial acceleration, triaxial angular velocity and triaxial magnetometer data; analysis of a large number of basic motions shows that the triaxial acceleration can distinguish whether a user behavior is dynamic or static, the acceleration axis carrying gravity can distinguish the specific static behavior, and the angle characteristic can distinguish the specific dynamic behavior. Because the wearer's action is independent of orientation, most basic actions have only one driving axis, so the driving-axis angle data, assisted by the triaxial acceleration data, simplifies the data and lowers both the sensor performance requirement and the storage requirement of the processing system. The RAM need only reserve about 32 KB of storage space for a single sensor, storing the current 1 second of raw data.
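The segment-and-store step can be sketched as a fixed-size window that always holds the most recent second of samples at ~20 Hz; the class name and layout below are illustrative, not taken from the patent:

```python
from collections import deque

class SampleWindow:
    """One-second buffer of IMU samples at ~20 Hz, mirroring the
    segment-and-store step of operation 2 (illustrative sketch)."""

    def __init__(self, rate_hz=20):
        # a deque with maxlen silently drops the oldest sample, so the
        # buffer always holds at most the current second of data
        self.buf = deque(maxlen=rate_hz)

    def push(self, acc_x, acc_y, acc_z, spindle_angle):
        self.buf.append((acc_x, acc_y, acc_z, spindle_angle))

    def full(self):
        return len(self.buf) == self.buf.maxlen

    def axes(self):
        """Split the window into per-channel lists (x, y, z, angle)
        for feature extraction."""
        return [list(channel) for channel in zip(*self.buf)]
```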
In operation 3, the human body action rules and the action characteristics are analyzed through corresponding algorithms, and a series of basic actions such as standing, sitting, walking, running and the like are considered in the embodiment of the invention. Because the action data of different users have large difference and are difficult to be used for action recognition of new users through data migration or model migration, the embodiment of the invention raises the action data to an action knowledge level, does not need to collect a large number of samples to train the model, and can also overcome the problem of difference presented on the user data. The summary and summarization of motion knowledge can simplify the calculation process by constructing features with higher resolution, and the motion knowledge has higher universality than motion data, and is suitable for users with larger difference. In addition, a simplified decision tree model can be constructed through action knowledge, the judgment condition of each branch point is action distinguishing knowledge between two types of actions with the largest difference under the same large class, and action classification can be better realized by replacing data comparison with knowledge, which is specifically shown in fig. 4.
Taking the four actions of standing, sitting still, walking and running that can be recognized with a single sensor as an example, the processing flow of the action knowledge algorithm is briefly described.
Here, the algorithm generates the recognition vector (action recognition parameter) a_DS = [a_1, a_2, a_3] required by knowledge 1 from the user static data obtained in the initialization stage of operation 1. Let Ax_1 = [acc_11, ..., acc_1n], Ax_2 = [acc_21, ..., acc_2n] and Ax_3 = [acc_31, ..., acc_3n] be the triaxial acceleration data recorded in the initialization stage; a_DS is then calculated by the following formula:
where β is an empirical parameter generated by learning the corresponding physical knowledge; the meaning of the other parameters is as described above and is not repeated here.
First, based on knowledge 1 (the first preset motion knowledge), the algorithm separates dynamic motions (walking, running) from static motions (standing, sitting) using the triaxial acceleration data. During action recognition, the triaxial acceleration data recorded in the unit period is therefore processed by the following calculation:
obtain the vector [ b1,b2,b3]For the meaning of the parameters, refer to the above description, and are not repeated herein.
If at least one b_i satisfies b_i < a_i, the algorithm recognizes the unit-period motion as a static motion, and otherwise as a dynamic motion.
For discriminating static motions, the algorithm uses knowledge 2 (the second preset motion knowledge) and further divides static motions by the differences in the angles the acceleration makes with the three coordinate axes. In operation 1, the sensor is worn so that its y axis coincides with the gravitational acceleration. For the triaxial acceleration data in a unit period, the algorithm computes the wearer's first limb angle α from the y-axis acceleration data; if α is greater than a threshold set by the knowledge, the static motion is considered to be standing. In a sitting posture the gravitational acceleration coincides with the z axis, so the algorithm computes the second limb angle γ from the z-axis acceleration data; if γ is greater than a threshold set by the knowledge, the static motion is considered to be sitting. The meaning of the parameters in the formulas is as described above and is not repeated here.
For discriminating dynamic motions, the algorithm uses knowledge 3 (the third preset motion knowledge) and further examines the gyroscope angle data on the driving axis during dynamic motion; figs. 5 and 6 show the gyroscope driving-axis angle data during walking and standing, respectively. When walking, the driving-axis angle data has obvious characteristics: its shape resembles a sine wave, with peaks and troughs. With reference to fig. 7, walking can be finely decomposed into several postures, whose corresponding angle data are, respectively, a trough, a middle, a peak, a middle and a trough. The algorithm identifies walking or running by determining whether the gyroscope angle data contains peaks or troughs within a unit period and counting them. Combined with knowledge 3, the number of peaks and troughs in the gyroscope driving-axis angle within a unit period is first counted and recorded as θ; if θ equals 1, the dynamic motion is identified as walking, and if θ is greater than or equal to 2, it can further be inferred that the dynamic motion is running. In terms of the human walking law, one peak or trough can represent one step, so knowledge 3 can be understood simply as judging walking versus running by counting the person's steps in one second.
In the device, the sensor data acquisition module acquires human body posture data, including the triaxial acceleration and the main operating-axis angle (the x-axis gyroscope main-shaft angular velocity), and can be placed at key parts of the human multi-rigid-body model such as the neck, trunk and limbs. A typical human multi-rigid-body model has 11 key positions; after repeated experiments, for the four actions addressed by the invention, action recognition can be performed on the data collected by a single sensor placed at the thigh, without the user wearing a large number of sensors.
While keeping the number of sensors as small as possible, the invention can also identify further static actions such as lying, kneeling and drinking, and further dynamic actions such as riding and climbing stairs, reducing the need for training data through motion knowledge and reducing sensor redundancy. Taken together, the above shows that the algorithm fuses motion knowledge into the motion recognition system, effectively shortening the data-driven model development cycle and lowering the data and computing-resource requirements of the motion recognition model.
In the action recognition module, according to the above analysis of the data and the algorithm, the action recognition method in the embodiment of the invention needs no pre-collected labeled action data; the classification algorithm is simplified, and an ATmega328P microcontroller (16 MHz clock, 2 KB RAM and 32 KB flash) suffices for the processing. The algorithm adapts itself to the action recognition of different users without an additional cloud or controller to correct the algorithm and data; the whole action recognition module can be integrated on the wearable device, the recognition result is obtained at the device end, and it can be transmitted to other devices for subsequent processing as required.
Fig. 8 shows a block diagram of a wearable device according to the present invention, which includes: the system comprises a three-axis accelerometer, a driving shaft gyroscope, a battery, a reset circuit, an ATmega328p, a control module, a display module, a WiFi module and a decision-making system (which can be used for an adult to make a decision on a child).
The execution flow of the scheme provided by the embodiment of the present invention may be specifically shown in fig. 9, and includes:
step 91: collecting data of 3s of standing of a wearer, calculating motion characteristic related parameters, and constructing an individualized motion recognition model;
and step 92: acquiring posture data of a wearer at a frequency of about 20Hz, and updating and storing sensor data within 1 s;
step 93: constructing motion recognition characteristics based on motion knowledge, and compressing partial sensor data in 1s into one-dimensional characteristic vectors;
step 94: and classifying the obtained action characteristic vectors based on the decision tree model, and identifying the current action state.
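Putting steps 91-94 together, a compact end-to-end sketch of the decision-tree flow might look as follows. The per-axis compression (mean absolute deviation) and the limb-angle computation (cosines with the y/z axes) are hypothetical reconstructions, since the patent's formula images are not reproduced; the 0.8 thresholds and the illustrative beta follow the examples given above.

```python
import math

def _mad(samples):
    """Mean absolute deviation — hypothetical stand-in for the
    per-axis compression formula."""
    mean = sum(samples) / len(samples)
    return sum(abs(v - mean) for v in samples) / len(samples)

def initialize(static_xyz, beta=1.5):
    """Step 91: build a_DS from ~3 s of standing-still data."""
    return [beta * _mad(axis) for axis in static_xyz]

def recognize(window_xyz, spindle_angles, a_ds):
    """Steps 93-94: classify one 1-second window of sensor data."""
    b = [_mad(axis) for axis in window_xyz]
    if any(bi < ai for bi, ai in zip(b, a_ds)):       # knowledge 1: static
        mx, my, mz = (sum(a) / len(a) for a in window_xyz)
        norm = math.sqrt(mx * mx + my * my + mz * mz)
        if abs(my) / norm > 0.8:                      # knowledge 2: alpha
            return "standing"
        if abs(mz) / norm > 0.8:                      # knowledge 2: gamma
            return "sitting"
        return "static-unknown"
    theta = sum(                                      # knowledge 3: extrema
        1 for i in range(1, len(spindle_angles) - 1)
        if (spindle_angles[i] - spindle_angles[i - 1]) *
           (spindle_angles[i + 1] - spindle_angles[i]) < 0)
    if theta == 1:
        return "walking"
    if theta >= 2:
        return "running"
    return "dynamic-unknown"
```

The branch order mirrors the simplified decision tree of fig. 4: the cheap static/dynamic test comes first, and the more specific knowledge is applied only on the matching subtree.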
As can be seen from the above, the scheme provided by the embodiment of the present invention acquires data in real time from the motion sensor worn on the body, including acceleration data, gyroscope data and the like, and performs motion feature extraction and motion recognition in 1-second periods on the ATmega328P chip (an 8-bit microcontroller with a 16 MHz clock, 2 KB of random access memory and 32 KB of flash memory) connected to the sensor, finally realizing a motion recognition algorithm and device under limited computing resources.
Firstly, the embodiment of the invention excavates each action physical law of human body and integrates the action physical law into a decision tree model, thus omitting the process of using training data to learn the model; secondly, the wearable equipment is very simplified, the whole algorithm can complete calculation processing in a single chip with low cost and return a result in real time, and the whole equipment can be normally used under the condition that the calculation resources are limited or data interaction with a cloud end cannot be carried out; thirdly, compared with the existing scheme, the algorithm and the data processing process in the invention are greatly different, and the selected action characteristics are also different; finally, the invention is directed to the general user population, and can be used immediately for new users using the device without cumbersome data entry or initialization steps. The method and the equipment can almost aim at all users, accurate action recognition can be carried out under the conditions that computing resources are limited and information interaction is carried out without depending on a cloud, and the equipment can update and perfect an algorithm through networking.
In summary, the adaptive motion recognition method based on motion characteristics and the wearable device provided by the embodiment of the invention can overcome the defect that the existing motion recognition method cannot be realized on a KB-level single chip microcomputer (when the computing capability is limited, the existing method is greatly influenced), and the method provided by the invention has the advantages that the motion recognition algorithm constructed by combining the motion law does not need to input training data in advance for algorithm learning, and does not need too many sensors and image data, so that the algorithm complexity is greatly reduced; the scheme provided by the embodiment of the invention is suitable for general people, complex data correction is not needed when the user uses the scheme, and the user experience is natural.
An embodiment of the present invention further provides a motion recognition apparatus, as shown in fig. 10, including:
the first acquisition module 101 is used for acquiring human body posture data of a user, which is acquired by the wearable device, at intervals of preset time;
a first extraction module 102, configured to extract motion features from the human body posture data;
the first identification module 103 is configured to identify, according to a first preset action knowledge, an action identification parameter and the action feature, that an action corresponding to the human body posture data is a static action or a dynamic action;
the second identification module 104 is configured to identify that the motion corresponding to the human body posture data is static standing or static sitting according to second preset motion knowledge and the motion characteristics if the motion corresponding to the human body posture data is static motion;
a third identification module 105, configured to identify, according to third preset motion knowledge and the motion characteristics, that the motion corresponding to the human body posture data is walking or running if the motion corresponding to the human body posture data is a dynamic motion;
the motion identification parameters are initialized according to static data of a user standing still.
The action recognition device provided by the embodiment of the invention acquires the human body posture data of the user, which is acquired by the wearable device, at intervals of preset time; extracting action features from the human body posture data; recognizing the action corresponding to the human body posture data as a static action or a dynamic action according to first preset action knowledge, action recognition parameters and the action characteristics; if the action corresponding to the human body posture data is a static action, identifying that the action corresponding to the human body posture data is a static standing or a static sitting according to second preset action knowledge and the action characteristics; if the action corresponding to the human body posture data is a dynamic action, identifying the action corresponding to the human body posture data as walking or running according to third preset action knowledge and the action characteristics; the motion identification parameters are initialized according to static data of a user standing still;
Motion knowledge can be incorporated into the motion recognition algorithm, which effectively reduces both the number of sensors required and the constraints on their wearing positions, and allows the collected raw data to be used directly without complex data processing. Once the algorithm is simplified, the apparatus can be implemented on a low-cost computing chip, so that the wearable device achieves intelligence at the wearing end and directly generates the recognition result, eliminating the instability and latency of data transmission and meeting the edge-computing requirement of the Internet of things. Accurate action recognition is performed even when computing resources are limited and no information interaction with a cloud is available, while the algorithm can still be updated and refined over the network; implementation cost and complexity are reduced, and the scheme applies to the general population, improving universality.
Wherein the body pose data comprises three-axis acceleration data of a body pose of a user;
correspondingly, the first extraction module comprises:
and the first processing submodule is used for compressing the triaxial acceleration data into one-dimensional feature vector data to obtain action features.
Further, the motion recognition apparatus further includes: the second acquisition module is used for acquiring static data of the user when the user stands still, which is acquired by the wearable device, before acquiring the human body posture data of the user, which is acquired by the wearable device, at preset intervals; the first processing module is used for adjusting the static data according to the static acceleration data; the second processing module is used for initializing the action identification parameters according to the adjusted static data; wherein the static acceleration data is used to eliminate data generated by shaking when the user is standing still.
Specifically, the static data includes triaxial acceleration data when the user is stationary; the second processing module comprises: a second processing submodule, configured to let the triaxial acceleration in the adjusted static data be Ax_1 = [acc_11, ..., acc_1n], Ax_2 = [acc_21, ..., acc_2n], Ax_3 = [acc_31, ..., acc_3n], where acc_1n denotes the n-th acceleration value on the first axis, acc_2n the n-th acceleration value on the second axis, and acc_3n the n-th acceleration value on the third axis, and to utilize the following formula:
where a_i denotes the initial acceleration of the i-th axis; acc_ij denotes the j-th axial acceleration sample of the i-th axis in the static data; the mean term denotes the average axial acceleration of the i-th axis in the static data; n denotes the total number of axial acceleration samples; and β denotes a preset empirical parameter generated by learning the corresponding physical knowledge;
compressing the triaxial acceleration in the adjusted static data into one-dimensional feature vector data to obtain the initialized action identification parameter a_DS = [a_1, a_2, a_3].
The first processing sub-module comprises: a first processing unit, configured to set the triaxial acceleration in the human body posture data per axis, the m-th samples being the m-th acceleration values on the first, second and third axes respectively, and to utilize the following formula:
birepresenting the target acceleration of the ith axial direction;axial acceleration data of the ith axial jth group in the human body posture data,representing the average value of the axial acceleration of the ith axial direction in the human body posture data, wherein m represents the total number of the axial acceleration data;
compressing the three-axis acceleration in the human body posture data into one-dimensional characteristic vector data to obtain a first target vector [ b1,b2,b3]As a first action characteristic;
correspondingly, the first identification module comprises: a first identification submodule for determining, based on the knowledge of the first predetermined action, if there is at least one biSatisfy bi<aiAnd if not, the motion corresponding to the human body posture data is a dynamic motion.
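With the same assumed per-axis statistic, the static-versus-dynamic decision of the first identification submodule reduces to comparing the target vector [b_1, b_2, b_3] against a_DS. A minimal sketch — the b_i statistic mirrors the assumed initialization statistic and, like the function name, is an illustration rather than the patent's formula:

```python
def is_static(posture_xyz, a_ds):
    """Return True if a window of posture data is a static action.

    Per the first preset action knowledge, the action is static if at
    least one b_i satisfies b_i < a_i, and dynamic otherwise.
    posture_xyz: three per-axis lists of m samples; a_ds: [a1, a2, a3].
    """
    b = []
    for axis in posture_xyz:
        m = len(axis)
        mean = sum(axis) / m
        # Assumed b_i: mean absolute deviation of this axis's samples.
        b.append(sum(abs(v - mean) for v in axis) / m)
    return any(b_i < a_i for b_i, a_i in zip(b, a_ds))
```

A perfectly still window yields b_i near zero on every axis, so it falls below the calibrated thresholds and is classified as static.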
Wherein the first processing sub-module comprises: a second processing unit, configured to denote the triaxial acceleration in the human body posture data as three axial sequences whose elements represent the mth acceleration value on the x axis, the y axis, and the z axis respectively;
using the following formula, the triaxial acceleration in the human body posture data is compressed into one-dimensional feature vector data and a first limb angle α of the user is determined as a second action feature:
using the following formula, the triaxial acceleration in the human body posture data is compressed into one-dimensional feature vector data and a second limb angle γ of the user is determined as a third action feature:
wherein the three mean values represent the average axial acceleration in the x-axis, y-axis, and z-axis directions in the human body posture data, respectively;
correspondingly, the second identification module comprises a second identification submodule, configured to determine, according to the second preset action knowledge, that the action corresponding to the human body posture data is static standing if α is greater than a set first threshold, and static sitting if γ is greater than a set second threshold.
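The limb-angle formulas themselves are not reproduced in this text; a common choice, assumed here purely for illustration, derives each angle from the mean gravity components via an arctangent. The angle definitions, threshold values, and function name below are all hypothetical, not the patent's.

```python
import math

def classify_static(posture_xyz, alpha_thresh=60.0, gamma_thresh=60.0):
    """Classify a static window as 'standing' or 'sitting'.

    posture_xyz: three per-axis sample lists (x, y, z). The angle
    formulas (tilt of one axis against the plane of the other two)
    and the degree thresholds are assumptions for illustration.
    """
    mx, my, mz = (sum(axis) / len(axis) for axis in posture_xyz)
    # Assumed first limb angle alpha: tilt of the x axis vs. the y-z plane.
    alpha = math.degrees(math.atan2(abs(mx), math.hypot(my, mz)))
    # Assumed second limb angle gamma: tilt of the z axis vs. the x-y plane.
    gamma = math.degrees(math.atan2(abs(mz), math.hypot(mx, my)))
    if alpha > alpha_thresh:
        return "standing"
    if gamma > gamma_thresh:
        return "sitting"
    return "unknown"
```

Which physical axis dominates gravity while standing versus sitting depends on where the device is worn, which is why the patent leaves both thresholds as settable parameters.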
Wherein the human body posture data comprises principal-axis angular velocity data of the user's body posture; the first extraction module comprises: a third processing submodule, configured to obtain principal-axis angle data from the principal-axis angular velocity data; and a fourth processing submodule, configured to obtain action features from the principal-axis angle data.
Specifically, the fourth processing sub-module comprises: a first obtaining unit, configured to count the wave crests and wave troughs in the principal-axis angle data, denote the count as θ, and take θ as a fourth action feature;
correspondingly, the third identification module comprises: a third identification submodule, configured to determine, according to the third preset action knowledge, that the action corresponding to the human body posture data is walking if θ equals 1, and running if θ is greater than or equal to 2.
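The walking/running decision needs the principal-axis angle sequence (obtained from the angular velocity, e.g. by cumulative integration) and its crest-and-trough count θ. A minimal sketch, in which the sampling period, the simple sign-change extremum detector, and the function names are assumptions:

```python
def spindle_angles(angular_velocity, dt=0.02):
    """Integrate principal-axis angular velocity into angle data.

    dt is the assumed sampling period in seconds; simple cumulative
    (rectangle-rule) integration stands in for the patent's method.
    """
    angle, angles = 0.0, []
    for w in angular_velocity:
        angle += w * dt
        angles.append(angle)
    return angles

def count_peaks_troughs(angles):
    """theta: number of local maxima plus local minima in the angle data."""
    theta = 0
    for prev, cur, nxt in zip(angles, angles[1:], angles[2:]):
        if (cur > prev and cur > nxt) or (cur < prev and cur < nxt):
            theta += 1
    return theta

def classify_dynamic(theta):
    # Third preset action knowledge: theta == 1 -> walking; theta >= 2 -> running.
    if theta == 1:
        return "walking"
    return "running" if theta >= 2 else "unknown"
```

Intuitively, within one analysis window a walking limb swings through one extremum while a running limb oscillates faster and produces two or more.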
The implementations described for the motion recognition method embodiments are all applicable to this motion recognition apparatus embodiment, and the same technical effects can be achieved.
An embodiment of the present invention further provides a wearable device, as shown in fig. 11, including a memory 111, a processor 112, and a computer program 113 stored on the memory 111 and executable on the processor 112; the processor 112 implements the above-described motion recognition method when executing the program.
Specifically, the processor implements the following steps when executing the program:
acquiring, at preset time intervals, human body posture data of the user collected by the wearable device;
extracting action features from the human body posture data;
recognizing the action corresponding to the human body posture data as a static action or a dynamic action according to first preset action knowledge, action recognition parameters and the action characteristics;
if the action corresponding to the human body posture data is a static action, identifying, according to second preset action knowledge and the action characteristics, whether it is static standing or static sitting;
if the action corresponding to the human body posture data is a dynamic action, identifying the action corresponding to the human body posture data as walking or running according to third preset action knowledge and the action characteristics;
the motion identification parameters are initialized according to static data of a user standing still.
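Taken together, the steps implemented by the processor form a small decision cascade. The sketch below wires hypothetical helpers (a static-versus-dynamic test, a static-pose classifier, and a dynamic-gait classifier) into that flow; every helper name and the window format are illustrative, not from the patent.

```python
def recognize_action(posture, params, is_static_fn, static_fn, dynamic_fn):
    """Decision cascade over one window of human body posture data.

    posture: dict with 'accel' (three per-axis sample lists) and 'theta'
    (crest/trough count of the principal-axis angle data).
    is_static_fn applies the first preset action knowledge using the
    action identification parameters; static_fn the second; dynamic_fn
    the third. All three are supplied by the caller.
    """
    if is_static_fn(posture["accel"], params):
        # Static branch: distinguish static standing from static sitting.
        return static_fn(posture["accel"])
    # Dynamic branch: distinguish walking from running.
    return dynamic_fn(posture["theta"])
```

Keeping the three knowledge rules as injected callables mirrors the patent's claim that the knowledge base can be updated over the network without changing the on-chip control flow.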
The wearable device provided by the embodiment of the invention acquires, at preset time intervals, the human body posture data of the user collected by the wearable device; extracts action features from the human body posture data; recognizes the action corresponding to the human body posture data as a static action or a dynamic action according to first preset action knowledge, action identification parameters and the action features; if the action is a static action, identifies it as static standing or static sitting according to second preset action knowledge and the action features; if the action is a dynamic action, identifies it as walking or running according to third preset action knowledge and the action features; and the action identification parameters are initialized according to static data collected while the user stands still;
By incorporating motion knowledge into the recognition algorithm, the required number of sensors and the constraints on their wearing positions are effectively reduced, and the collected raw data can be used directly without complex preprocessing. Once simplified, the algorithm can run on a low-cost computing chip, so the wearable device becomes intelligent at the wearing end and generates recognition results directly, eliminating the instability and latency of data transmission and meeting the edge-computing requirements of the Internet of Things. Accurate action recognition is thus performed with limited computing resources and without relying on cloud interaction, while the algorithm can still be updated and refined over the network; implementation cost and complexity are reduced, and the scheme applies to the general population, improving universality.
Wherein the human body posture data comprises three-axis acceleration data of the user's body posture;
correspondingly, the step of extracting action features from the human body posture data comprises: compressing the triaxial acceleration data into one-dimensional feature vector data to obtain action features.
Further, before the human body posture data of the user collected by the wearable device is acquired at preset time intervals, the motion recognition method further includes: acquiring static data collected by the wearable device while the user stands still; adjusting the static data according to static acceleration data; and initializing the action identification parameters according to the adjusted static data; wherein the static acceleration data is used to eliminate data generated by shaking while the user stands still.
Specifically, the static data includes triaxial acceleration data recorded while the user is stationary; the step of initializing the action identification parameters according to the adjusted static data comprises:
the triaxial acceleration in the adjusted static data is denoted as Ax1 = [acc_11, ..., acc_1n], Ax2 = [acc_21, ..., acc_2n], Ax3 = [acc_31, ..., acc_3n]; acc_1n represents the nth acceleration value in the first axial direction, acc_2n represents the nth acceleration value in the second axial direction, and acc_3n represents the nth acceleration value in the third axial direction; the following formula is utilized:
where i = 1, 2, 3; a_i represents the initial acceleration in the ith axial direction; acc_ij represents the jth acceleration value in the ith axial direction in the static data; the per-axis mean represents the average axial acceleration in the ith axial direction in the static data; n represents the total number of axial acceleration values; and β represents a preset empirical parameter obtained through learning from the corresponding physical knowledge;
the triaxial acceleration in the adjusted static data is compressed into one-dimensional feature vector data to obtain an initialized action identification parameter a_DS = [a_1, a_2, a_3].
The step of compressing the triaxial acceleration data into one-dimensional feature vector data to obtain action features comprises: the triaxial acceleration in the human body posture data is denoted as three axial sequences of m values each, whose elements represent the mth acceleration value in the first, second, and third axial directions respectively; the following formula is utilized:
where i = 1, 2, 3; b_i represents the target acceleration in the ith axial direction; the listed samples are the jth acceleration values in the ith axial direction in the human body posture data, the per-axis mean represents the average axial acceleration in the ith axial direction in the human body posture data, and m represents the total number of axial acceleration values;
the triaxial acceleration in the human body posture data is compressed into one-dimensional feature vector data to obtain a first target vector [b_1, b_2, b_3] as a first action feature;
correspondingly, the step of recognizing the action corresponding to the human body posture data as a static action or a dynamic action according to the first preset action knowledge, the action identification parameters and the action features comprises: determining, according to the first preset action knowledge, that the action is a static action if at least one b_i satisfies b_i < a_i, and a dynamic action otherwise.
The step of compressing the triaxial acceleration data into one-dimensional feature vector data to obtain action features comprises: the triaxial acceleration in the human body posture data is denoted as three axial sequences whose elements represent the mth acceleration value on the x axis, the y axis, and the z axis respectively;
using the following formula, the triaxial acceleration in the human body posture data is compressed into one-dimensional feature vector data and a first limb angle α of the user is determined as a second action feature:
using the following formula, the triaxial acceleration in the human body posture data is compressed into one-dimensional feature vector data and a second limb angle γ of the user is determined as a third action feature:
wherein the three mean values represent the average axial acceleration in the x-axis, y-axis, and z-axis directions in the human body posture data, respectively;
correspondingly, the step of identifying, according to the second preset action knowledge and the action features, whether the action corresponding to the human body posture data is static standing or static sitting comprises: determining, according to the second preset action knowledge, that the action is static standing if α is greater than a set first threshold, and static sitting if γ is greater than a set second threshold.
Wherein the human body posture data comprises principal-axis angular velocity data of the user's body posture; the step of extracting action features from the human body posture data comprises: obtaining principal-axis angle data according to the principal-axis angular velocity data; and obtaining action features according to the principal-axis angle data.
Specifically, the step of obtaining action features according to the principal-axis angle data comprises: counting the wave crests and wave troughs in the principal-axis angle data, denoting the count as θ, and taking θ as a fourth action feature;
correspondingly, the step of identifying the action corresponding to the human body posture data as walking or running according to the third preset action knowledge and the action features comprises: determining, according to the third preset action knowledge, that the action is walking if θ equals 1, and running if θ is greater than or equal to 2.
The implementations described for the motion recognition method embodiments are all applicable to this wearable device embodiment, and the same technical effects can be achieved.
Embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps in the above-mentioned motion recognition method.
Specifically, the program realizes the following steps when being executed by a processor:
acquiring, at preset time intervals, human body posture data of the user collected by the wearable device;
extracting action features from the human body posture data;
recognizing the action corresponding to the human body posture data as a static action or a dynamic action according to first preset action knowledge, action recognition parameters and the action characteristics;
if the action corresponding to the human body posture data is a static action, identifying, according to second preset action knowledge and the action characteristics, whether it is static standing or static sitting;
if the action corresponding to the human body posture data is a dynamic action, identifying the action corresponding to the human body posture data as walking or running according to third preset action knowledge and the action characteristics;
the motion identification parameters are initialized according to static data of a user standing still.
The computer program stored on the computer-readable storage medium provided by the embodiment of the invention, when executed, acquires, at preset time intervals, the human body posture data of the user collected by the wearable device; extracts action features from the human body posture data; recognizes the action corresponding to the human body posture data as a static action or a dynamic action according to first preset action knowledge, action identification parameters and the action features; if the action is a static action, identifies it as static standing or static sitting according to second preset action knowledge and the action features; if the action is a dynamic action, identifies it as walking or running according to third preset action knowledge and the action features; and the action identification parameters are initialized according to static data collected while the user stands still;
By incorporating motion knowledge into the recognition algorithm, the required number of sensors and the constraints on their wearing positions are effectively reduced, and the collected raw data can be used directly without complex preprocessing. Once simplified, the algorithm can run on a low-cost computing chip, so the wearable device becomes intelligent at the wearing end and generates recognition results directly, eliminating the instability and latency of data transmission and meeting the edge-computing requirements of the Internet of Things. Accurate action recognition is thus performed with limited computing resources and without relying on cloud interaction, while the algorithm can still be updated and refined over the network; implementation cost and complexity are reduced, and the scheme applies to the general population, improving universality.
Wherein the human body posture data comprises three-axis acceleration data of the user's body posture;
correspondingly, the step of extracting action features from the human body posture data comprises: compressing the triaxial acceleration data into one-dimensional feature vector data to obtain action features.
Further, before the human body posture data of the user collected by the wearable device is acquired at preset time intervals, the motion recognition method further includes: acquiring static data collected by the wearable device while the user stands still; adjusting the static data according to static acceleration data; and initializing the action identification parameters according to the adjusted static data; wherein the static acceleration data is used to eliminate data generated by shaking while the user stands still.
Specifically, the static data includes triaxial acceleration data recorded while the user is stationary; the step of initializing the action identification parameters according to the adjusted static data comprises:
the triaxial acceleration in the adjusted static data is denoted as Ax1 = [acc_11, ..., acc_1n], Ax2 = [acc_21, ..., acc_2n], Ax3 = [acc_31, ..., acc_3n]; acc_1n represents the nth acceleration value in the first axial direction, acc_2n represents the nth acceleration value in the second axial direction, and acc_3n represents the nth acceleration value in the third axial direction; the following formula is utilized:
where i = 1, 2, 3; a_i represents the initial acceleration in the ith axial direction; acc_ij represents the jth acceleration value in the ith axial direction in the static data; the per-axis mean represents the average axial acceleration in the ith axial direction in the static data; n represents the total number of axial acceleration values; and β represents a preset empirical parameter obtained through learning from the corresponding physical knowledge;
the triaxial acceleration in the adjusted static data is compressed into one-dimensional feature vector data to obtain an initialized action identification parameter a_DS = [a_1, a_2, a_3].
The step of compressing the triaxial acceleration data into one-dimensional feature vector data to obtain action features comprises: the triaxial acceleration in the human body posture data is denoted as three axial sequences of m values each, whose elements represent the mth acceleration value in the first, second, and third axial directions respectively; the following formula is utilized:
where i = 1, 2, 3; b_i represents the target acceleration in the ith axial direction; the listed samples are the jth acceleration values in the ith axial direction in the human body posture data, the per-axis mean represents the average axial acceleration in the ith axial direction in the human body posture data, and m represents the total number of axial acceleration values;
the triaxial acceleration in the human body posture data is compressed into one-dimensional feature vector data to obtain a first target vector [b_1, b_2, b_3] as a first action feature;
correspondingly, the step of recognizing the action corresponding to the human body posture data as a static action or a dynamic action according to the first preset action knowledge, the action identification parameters and the action features comprises: determining, according to the first preset action knowledge, that the action is a static action if at least one b_i satisfies b_i < a_i, and a dynamic action otherwise.
The step of compressing the triaxial acceleration data into one-dimensional feature vector data to obtain action features comprises: the triaxial acceleration in the human body posture data is denoted as three axial sequences whose elements represent the mth acceleration value on the x axis, the y axis, and the z axis respectively;
using the following formula, the triaxial acceleration in the human body posture data is compressed into one-dimensional feature vector data and a first limb angle α of the user is determined as a second action feature:
using the following formula, the triaxial acceleration in the human body posture data is compressed into one-dimensional feature vector data and a second limb angle γ of the user is determined as a third action feature:
wherein the three mean values represent the average axial acceleration in the x-axis, y-axis, and z-axis directions in the human body posture data, respectively;
correspondingly, the step of identifying, according to the second preset action knowledge and the action features, whether the action corresponding to the human body posture data is static standing or static sitting comprises: determining, according to the second preset action knowledge, that the action is static standing if α is greater than a set first threshold, and static sitting if γ is greater than a set second threshold.
Wherein the human body posture data comprises principal-axis angular velocity data of the user's body posture; the step of extracting action features from the human body posture data comprises: obtaining principal-axis angle data according to the principal-axis angular velocity data; and obtaining action features according to the principal-axis angle data.
Specifically, the step of obtaining action features according to the principal-axis angle data comprises: counting the wave crests and wave troughs in the principal-axis angle data, denoting the count as θ, and taking θ as a fourth action feature;
correspondingly, the step of identifying the action corresponding to the human body posture data as walking or running according to the third preset action knowledge and the action features comprises: determining, according to the third preset action knowledge, that the action is walking if θ equals 1, and running if θ is greater than or equal to 2.
The implementations described for the motion recognition method embodiments are all applicable to this computer-readable storage medium embodiment, and the same technical effects can be achieved.
It should be noted that many of the functional components described in this specification are referred to as modules/sub-modules/units/sub-units in order to more particularly emphasize their implementation independence.
In embodiments of the present invention, the modules/sub-modules/units/sub-units may be implemented in software for execution by various types of processors. An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module.
Indeed, a module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Likewise, operational data may be identified within the modules and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network.
Where a module can be implemented in software, a corresponding hardware circuit can also be built to implement the same function, cost permitting and given the current state of hardware technology; such a circuit may include conventional Very Large Scale Integration (VLSI) circuits, gate arrays, and existing semiconductors such as logic chips, transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field-programmable gate arrays, programmable array logic, or programmable logic devices.
While the preferred embodiments of the present invention have been described, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (11)

1. A motion recognition method, comprising:
acquiring, at preset time intervals, human body posture data of the user collected by the wearable device;
extracting action features from the human body posture data;
recognizing the action corresponding to the human body posture data as a static action or a dynamic action according to first preset action knowledge, action recognition parameters and the action characteristics;
if the action corresponding to the human body posture data is a static action, identifying, according to second preset action knowledge and the action characteristics, whether it is static standing or static sitting;
if the action corresponding to the human body posture data is a dynamic action, identifying the action corresponding to the human body posture data as walking or running according to third preset action knowledge and the action characteristics;
the motion identification parameters are initialized according to static data of a user standing still.
2. The motion recognition method of claim 1, wherein the body posture data comprises three-axis acceleration data of a body posture of the user;
the step of extracting motion features from the human body posture data comprises:
and compressing the triaxial acceleration data into one-dimensional feature vector data to obtain action features.
3. The motion recognition method according to claim 2, wherein before acquiring the human body posture data of the user collected by the wearable device at intervals of a preset time, the motion recognition method further comprises:
acquiring static data collected by the wearable device while the user stands still;
adjusting the static data according to the static acceleration data;
initializing the action identification parameters according to the adjusted static data;
wherein the static acceleration data is used to eliminate data generated by shaking when the user is standing still.
4. The motion recognition method according to claim 3, wherein the static data includes triaxial acceleration data when the user is stationary;
the step of initializing the motion recognition parameters according to the adjusted static data comprises:
the triaxial acceleration in the adjusted static data is denoted as Ax1 = [acc_11, ..., acc_1n], Ax2 = [acc_21, ..., acc_2n], Ax3 = [acc_31, ..., acc_3n]; acc_1n represents the nth acceleration value in the first axial direction, acc_2n represents the nth acceleration value in the second axial direction, and acc_3n represents the nth acceleration value in the third axial direction; the following formula is utilized:
where i = 1, 2, 3; a_i represents the initial acceleration in the ith axial direction; acc_ij represents the jth acceleration value in the ith axial direction in the static data; the per-axis mean represents the average axial acceleration in the ith axial direction in the static data; n represents the total number of axial acceleration values; and β represents a preset empirical parameter obtained through learning from the corresponding physical knowledge;
the triaxial acceleration in the adjusted static data is compressed into one-dimensional feature vector data to obtain an initialized action identification parameter a_DS = [a_1, a_2, a_3].
5. The motion recognition method of claim 4, wherein the step of compressing the three-axis acceleration data into one-dimensional feature vector data to obtain motion features comprises:
the triaxial acceleration in the human body posture data is denoted as three axial sequences of m values each, whose elements represent the mth acceleration value in the first, second, and third axial directions respectively; the following formula is utilized:
where i = 1, 2, 3; b_i represents the target acceleration in the ith axial direction; the listed samples are the jth acceleration values in the ith axial direction in the human body posture data, the per-axis mean represents the average axial acceleration in the ith axial direction in the human body posture data, and m represents the total number of axial acceleration values;
the triaxial acceleration in the human body posture data is compressed into one-dimensional feature vector data to obtain a first target vector [b_1, b_2, b_3] as a first action feature;
the step of recognizing the motion corresponding to the human body posture data as a static motion or a dynamic motion according to the first preset motion knowledge, the motion recognition parameters and the motion characteristics comprises:
determining, according to the first preset action knowledge, that the action corresponding to the human body posture data is a static action if at least one b_i satisfies b_i < a_i, and a dynamic action otherwise.
6. The motion recognition method of claim 2, wherein the step of compressing the three-axis acceleration data into one-dimensional feature vector data to obtain motion features comprises:
the triaxial acceleration in the human body posture data is denoted as three axial sequences whose elements represent the mth acceleration value on the x axis, the y axis, and the z axis respectively;
using the following formula, the triaxial acceleration in the human body posture data is compressed into one-dimensional feature vector data and a first limb angle α of the user is determined as a second action feature:
using the following formula, the triaxial acceleration in the human body posture data is compressed into one-dimensional feature vector data and a second limb angle γ of the user is determined as a third action feature:
wherein the three mean values represent the average axial acceleration in the x-axis, y-axis, and z-axis directions in the human body posture data, respectively;
the step of identifying that the motion corresponding to the human body posture data is static standing or static sitting according to the second preset motion knowledge and the motion characteristics comprises:
determining, according to the second preset action knowledge, that the motion corresponding to the human body posture data is static standing if α is larger than a set first threshold value, and is static sitting if γ is larger than a set second threshold value.
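Claim 6 can be sketched as below. The patent's exact angle formulas are not reproduced in this text, so a common tilt-angle form is assumed: each limb angle is taken as the inclination of one body axis against the gravity vector, computed from the window's mean accelerations. The threshold defaults are also illustrative assumptions:

```python
import math

def limb_angles(ax_mean, ay_mean, az_mean):
    """Assumed instantiation of the first and second limb angles (degrees).

    The limb angle is modeled as the angle between one sensor axis and
    the measured gravity direction, derived from mean accelerations.
    """
    g = math.sqrt(ax_mean**2 + ay_mean**2 + az_mean**2) or 1.0
    alpha = math.degrees(math.acos(max(-1.0, min(1.0, ay_mean / g))))  # first limb angle
    gamma = math.degrees(math.acos(max(-1.0, min(1.0, az_mean / g))))  # second limb angle
    return alpha, gamma

def classify_static(alpha, gamma, first_threshold=45.0, second_threshold=45.0):
    # Claim 6: alpha above the first threshold -> static standing;
    # gamma above the second threshold -> static sitting.
    if alpha > first_threshold:
        return "standing"
    if gamma > second_threshold:
        return "sitting"
    return "unknown"
```

With gravity fully on the z axis, α is 90° and γ is 0°, which the assumed thresholds map to standing; gravity on the y axis gives the sitting branch.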
7. The motion recognition method according to claim 1, wherein the human body posture data includes principal axis angular velocity data of a human body posture of a user;
the step of extracting motion features from the human body posture data comprises:
obtaining main shaft angle data according to the main shaft angular velocity data;
and obtaining action characteristics according to the main shaft angle data.
8. The motion recognition method of claim 7, wherein the step of obtaining motion characteristics from the spindle angle data comprises:
acquiring the number of wave crests and wave troughs in the main shaft angle data, recording the number as θ, and taking θ as a fourth action characteristic;
the step of identifying the motion corresponding to the human body posture data as walking or running according to the third preset motion knowledge and the motion characteristics comprises the following steps:
determining, according to the third preset action knowledge: if θ is equal to 1, the action corresponding to the human body posture data is walking; and if θ is greater than or equal to 2, the action corresponding to the human body posture data is running.
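The walking/running test of claims 7–8 can be sketched as below. The claim only requires counting crests and troughs in the principal-axis angle series (obtained by integrating the principal-axis angular velocity); the prominence guard against sensor noise is an assumption added here, not part of the claim:

```python
def count_extrema(angle_series, min_prominence=5.0):
    """Count wave crests and troughs (theta) in one window of
    principal-axis angle data. min_prominence is an assumed noise guard."""
    theta = 0
    for prev, cur, nxt in zip(angle_series, angle_series[1:], angle_series[2:]):
        is_crest = cur > prev and cur > nxt
        is_trough = cur < prev and cur < nxt
        if (is_crest or is_trough) and min(abs(cur - prev), abs(cur - nxt)) >= min_prominence:
            theta += 1
    return theta

def classify_dynamic(theta):
    # Claim 8: theta == 1 -> walking, theta >= 2 -> running.
    if theta == 1:
        return "walking"
    if theta >= 2:
        return "running"
    return "unknown"
```

One swing of the limb per window produces a single extremum (walking); a faster cadence packs two or more extrema into the same window (running).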
9. An action recognition device, comprising:
the first acquisition module is used for acquiring human body posture data of a user, which is acquired by the wearable device, at intervals of preset time;
the first extraction module is used for extracting action characteristics from the human body posture data;
the first identification module is used for identifying the action corresponding to the human body posture data as a static action or a dynamic action according to first preset action knowledge, action identification parameters and the action characteristics;
the second identification module is used for identifying that the action corresponding to the human body posture data is static standing or static sitting according to second preset action knowledge and the action characteristics if the action corresponding to the human body posture data is static action;
the third identification module is used for identifying the action corresponding to the human body posture data as walking or running according to third preset action knowledge and the action characteristics if the action corresponding to the human body posture data is a dynamic action;
the motion identification parameters are initialized according to static data of a user standing still.
10. A wearable device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor; characterized in that the processor implements the action recognition method according to any one of claims 1 to 8 when executing the program.
11. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the steps of the motion recognition method according to any one of claims 1 to 8.
CN201810000873.0A 2018-01-02 2018-01-02 Action recognition method and device, wearable device and computer-readable storage medium Active CN109993037B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810000873.0A CN109993037B (en) 2018-01-02 2018-01-02 Action recognition method and device, wearable device and computer-readable storage medium


Publications (2)

Publication Number Publication Date
CN109993037A true CN109993037A (en) 2019-07-09
CN109993037B CN109993037B (en) 2021-08-06

Family

ID=67128817

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810000873.0A Active CN109993037B (en) 2018-01-02 2018-01-02 Action recognition method and device, wearable device and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN109993037B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110866450A (en) * 2019-10-21 2020-03-06 桂林医学院附属医院 Parkinson disease monitoring method and device and storage medium
CN111166340A (en) * 2019-12-31 2020-05-19 石家庄学院 Human body posture real-time identification method based on self-adaptive acceleration signal segmentation

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104713568A (en) * 2015-03-31 2015-06-17 上海帝仪科技有限公司 Gait recognition method and corresponding pedometer
CN105242779A (en) * 2015-09-23 2016-01-13 歌尔声学股份有限公司 Method for identifying user action and intelligent mobile terminal
CN105718857A (en) * 2016-01-13 2016-06-29 兴唐通信科技有限公司 Human body abnormal behavior detection method and system
US9479730B1 (en) * 2014-02-13 2016-10-25 Steelcase, Inc. Inferred activity based conference enhancement method and system
CN106203484A (en) * 2016-06-29 2016-12-07 北京工业大学 A kind of human motion state sorting technique based on classification layering
US20170161380A1 (en) * 2015-12-04 2017-06-08 Chiun Mai Communication Systems, Inc. Server and music service providing system and method
CN106951852A (en) * 2017-03-15 2017-07-14 深圳汇创联合自动化控制有限公司 A kind of effective Human bodys' response system
CN107085246A (en) * 2017-05-11 2017-08-22 深圳合优科技有限公司 A kind of human motion recognition method and device based on MEMS
CN107220617A (en) * 2017-05-25 2017-09-29 哈尔滨工业大学 Human body attitude identifying system and method


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LEVI MALOTT: "Detecting Self-harming Activities with Wearable Devices", 2015 IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops) *
MUHAMMAD ARIF ET AL.: "Physical Activities Monitoring Using Wearable Acceleration Sensors Attached to the Body", PLOS ONE *
WANG ZHUANG: "Human Body Posture Recognition Methods in Wearable Devices", China Master's Theses Full-text Database, Information Science and Technology *
CAI JING ET AL.: "Design and Implementation of a Motion Monitoring System Based on Human Body Sensors and Android Technology", Microelectronics Technology *


Also Published As

Publication number Publication date
CN109993037B (en) 2021-08-06

Similar Documents

Publication Publication Date Title
Ehatisham-Ul-Haq et al. Robust human activity recognition using multimodal feature-level fusion
Paul et al. An effective approach for human activity recognition on smartphone
CN112906604B (en) Behavior recognition method, device and system based on skeleton and RGB frame fusion
CN109685037B (en) Real-time action recognition method and device and electronic equipment
WO2020221307A1 (en) Method and device for tracking moving object
Thiemjarus et al. A study on instance-based learning with reduced training prototypes for device-context-independent activity recognition on a mobile phone
Banjarey et al. A survey on human activity recognition using sensors and deep learning methods
WO2021258333A1 (en) Gait abnormality early identification and risk early-warning method and apparatus
Hasan et al. Robust pose-based human fall detection using recurrent neural network
CN113066001A (en) Image processing method and related equipment
CN110327050B (en) Embedded intelligent detection method for falling state of person for wearable equipment
CN112200074A (en) Attitude comparison method and terminal
CN109993037B (en) Action recognition method and device, wearable device and computer-readable storage medium
Ponce et al. Sensor location analysis and minimal deployment for fall detection system
Jebali et al. Vision-based continuous sign language recognition using multimodal sensor fusion
CN112115790A (en) Face recognition method and device, readable storage medium and electronic equipment
CN114241597A (en) Posture recognition method and related equipment thereof
Hajjej et al. Deep human motion detection and multi-features analysis for smart healthcare learning tools
CN115862130B (en) Behavior recognition method based on human body posture and trunk sports field thereof
Khartheesvar et al. Automatic Indian sign language recognition using MediaPipe holistic and LSTM network
CN116246343A (en) Light human body behavior recognition method and device
CN115761885A (en) Behavior identification method for synchronous and cross-domain asynchronous fusion drive
CN114495272A (en) Motion recognition method, motion recognition device, storage medium, and computer apparatus
Alhersh et al. Action recognition using local visual descriptors and inertial data
KR20210091033A (en) Electronic device for estimating object information and generating virtual object and method for operating the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant