CN117407778A - Human body action recognition and prediction method - Google Patents


Info

Publication number
CN117407778A
Authority
CN
China
Prior art keywords
phase
action
motion data
human body
foot motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311343162.0A
Other languages
Chinese (zh)
Inventor
方方
秦吉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Donghua University
Original Assignee
Donghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Donghua University
Priority to CN202311343162.0A
Publication of CN117407778A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods

Abstract

The invention relates to a human body action recognition and prediction method, which comprises the following steps: collecting foot motion data in real time; performing step phase division based on the foot motion data, and constructing a step phase feature vector from the foot motion data of each step phase; feeding the step phase feature vector into a step phase classification model for classification and recognition to obtain a step phase classification result; and predicting, based on the step phase classification result, whether the next action is a target action. The invention can accurately predict the target action in advance and so give the response device an earlier start.

Description

Human body action recognition and prediction method
Technical Field
The invention relates to the field of human body action recognition, and in particular to a human body action recognition and prediction method.
Background
Most current motion prediction or recognition methods are based on image information. For example, document CN115393964A discloses a BlazePose-based fitness motion recognition method and device: human body posture is estimated from an image with the BlazePose lightweight convolutional neural network to obtain the joint point positions of the user during exercise; the joint point positions are converted into feature vectors, and the current motion category is extracted by KNN classification; whether the user's posture within that motion category is accurate is then judged through angle analysis and a distance threshold, and suggestions for improving the action are fed back accordingly. Document CN115273244A discloses a human motion recognition method and system based on a graph neural network: an input video is processed by a pre-trained human motion recognition network comprising a 2D joint recognition network, a 3D joint recognition network and a fully connected classification layer, and an action classification is output. 2D feature extraction and joint point recognition use a downsampling layer with short (skip) connections and a corresponding upsampling layer; the output serves as the skeleton graph input of the graph neural network, which outputs accurate 3D actions and recognizes the action classification corresponding to the action sequence.
Document CN108284444A discloses a multi-modal human body action prediction method under human-robot collaboration based on the Tc-ProMPs algorithm. The prediction model consists of two modules, offline training and online prediction: the offline module trains on human action samples to obtain a feature weight vector w and a probability distribution characterizing a given class of action skill; the online module observes human actions through visual information and performs online recognition and rolling prediction of the human action.
Machine-learning-based action recognition algorithms first classify the data and then transmit the classification result to a terminal device, which performs a response action. In sport, however, human actions sometimes happen very quickly. If recognition waits until the data of an action have been fully collected and analysed, the window for responding to that action is already missed; if recognition and analysis begin at the start of the action instead, the terminal device must start or even complete its response before the action finishes, which is still very difficult in the time available. Image-based motion prediction additionally requires external equipment such as cameras to acquire image data, and is therefore infeasible in many scenarios.
Disclosure of Invention
The invention aims to provide a human body action recognition and prediction method that can accurately predict the next action of a human body.
The technical solution adopted by the invention to solve this technical problem is as follows: a human body action recognition and prediction method comprising the following steps:
collecting foot motion data in real time;
performing step phase division based on the foot motion data, and constructing a step phase feature vector according to the foot motion data of each step phase;
putting the step feature vector into a step classification model for classification and identification to obtain a step classification result;
and predicting whether the next action is a target action or not based on the step classification result.
Further, the foot motion data includes foot acceleration data including a lateral axis acceleration, a longitudinal axis acceleration, and a vertical axis acceleration, the lateral axis acceleration being an acceleration in a lateral movement direction of the human body, the longitudinal axis acceleration being an acceleration in a forward direction of the human body, and the vertical axis acceleration being an acceleration in a vertical direction of the human body.
Further, the step phase feature vector comprises a lateral axis acceleration average value, a longitudinal axis acceleration average value and a vertical axis acceleration average value.
Further, the step phase division based on the foot motion data, and constructing a step phase feature vector according to the foot motion data of each step phase, includes:
setting a step phase reference period, the step phase reference period being the estimated time required for the human body to complete one step phase;
judging, within the step phase reference period, whether the current step phase is finished based on the periodic variation law of the foot motion data; if it is finished, dividing the foot motion data obtained in the current step phase reference period into one step phase, constructing the step phase feature vector, and starting the next step phase reference period; otherwise, continuing to collect the foot motion data;
and if the current step phase is never judged to be finished within the step phase reference period, dividing the foot motion data obtained in the whole step phase reference period into one step phase and constructing the step phase feature vector.
Further, when the human body is travelling, the step of judging whether the current step is finished based on the periodic change rule of the foot motion data comprises the following steps:
setting a positive reference peak value and a negative reference peak value of the longitudinal axis acceleration;
and when the longitudinal axis acceleration reaches a positive reference peak value and a negative reference peak value and returns to the initial acceleration, the human body is considered to finish one step phase.
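A minimal sketch of this completion check, assuming a buffer of longitudinal-axis samples collected during the current reference period; the tolerance band around the initial acceleration is an illustrative assumption, not a value given in the text:

```python
def phase_complete(ay_history, pos_peak, neg_peak, a0, tol=0.05):
    """Return True once the longitudinal (y-axis) acceleration has reached
    both the positive and negative reference peaks and the latest sample
    has come back near the initial acceleration a0.
    pos_peak / neg_peak are the preset reference peaks; tol is an assumed
    tolerance band around a0."""
    hit_pos = any(a >= pos_peak for a in ay_history)
    hit_neg = any(a <= neg_peak for a in ay_history)
    returned = abs(ay_history[-1] - a0) <= tol
    return hit_pos and hit_neg and returned
```

The check would be evaluated on each new sample; once it returns True, the buffered samples form one step phase and the buffer is cleared for the next reference period.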
Further, the foot motion data may also include foot pressure data.
Further, when the human body is travelling, the step of judging whether the current step is finished based on the periodic change rule of the foot motion data comprises the following steps:
setting a positive reference peak value and a negative reference peak value of the longitudinal axis acceleration;
and when the longitudinal axis acceleration has reached the positive reference peak value and the negative reference peak value and returned to the initial acceleration, and the foot pressure data have first increased and then decreased, the human body is considered to have finished one step phase.
Further, the step phase classification result includes a target action, a preparation action and a non-target action, the preparation action being the previous step phase of the target action or a combination of its previous step phases.
Further, the predicting whether the next action is the target action based on the step phase classification result includes:
if the current step phase is a non-preparation action, returning to the step of collecting foot motion data in real time;
if the current step phase is the preparation action and the preparation action comprises one step phase, predicting that the next step phase is the target action;
if the current step phase belongs to a preparation action comprising a plurality of step phases, judging whether the current step phase is the last step phase of the preparation action; if so, predicting that the next step phase is the target action; otherwise, recording the current step phase and returning to the step of collecting foot motion data in real time.
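The three branches above amount to a small state machine over classified step phases. A sketch, assuming the preparation action is given as an ordered list of phase labels; all names here are illustrative, not taken from the invention:

```python
def predict_next(phase_label, prep_sequence, recorded):
    """Decide whether the NEXT step phase is predicted to be the target action.
    phase_label:   classifier output for the current step phase.
    prep_sequence: ordered list of phase labels that make up the preparation
                   action (one or several phases).
    recorded:      phases of the preparation action matched so far.
    Returns (is_target_next, updated_recorded)."""
    if phase_label not in prep_sequence:
        return False, []              # non-preparation action: reset, keep sampling
    recorded = recorded + [phase_label]
    if recorded == prep_sequence:
        return True, []               # full preparation seen: next phase is the target
    if prep_sequence[:len(recorded)] == recorded:
        return False, recorded        # partial match: record and keep sampling
    return False, []                  # order broken: reset
```

For a single-phase preparation action the list has length one, so the first matching phase already triggers the prediction.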
Further, the step-phase classification model is constructed based on a support vector machine.
Further, the method also comprises the step of starting a response action based on the step phase classification result.
Advantageous effects
Due to the adoption of the above technical solution, the invention has the following advantages and positive effects compared with the prior art: according to the characteristics of human motion, a movement is divided into individual step phases for analysis; the step phase (or phases) preceding the target action is treated as a preparation action, and the classification model is used to recognize the preparation action and thereby predict the target action, rather than recognizing a target action that is already in progress or completed. When the preparation action ends, its recognition also ends, so the response can start just as the target action begins; the response device therefore anticipates the target action earlier and has more time to prepare the subsequent response.
Drawings
FIG. 1 is an exploded view of a motion phase of a human body in an embodiment of the present invention;
FIG. 2 is a flow chart of an embodiment of the present invention;
FIG. 3 is a flow chart of motion prediction and response in an embodiment of the invention;
fig. 4 is a schematic diagram of foot data of a sudden stop motion of a human body in an embodiment of the present invention.
Detailed Description
The invention will be further illustrated with reference to specific examples. It is to be understood that these examples are illustrative of the present invention and are not intended to limit the scope of the present invention. Further, it is understood that various changes and modifications may be made by those skilled in the art after reading the teachings of the present invention, and such equivalents are intended to fall within the scope of the claims appended hereto.
The embodiment of the invention relates to a human body action recognition and prediction method. As shown in fig. 2, sensors collect foot acceleration data and plantar pressure data, and a controller reads and processes the collected data. The data are first smoothed by filtering to obtain preprocessed data. Because each step of human locomotion can be regarded approximately as one cycle of a periodic motion, the movement can be divided according to the periodic characteristics of the data, and the feature vector of each step phase can be fed into a trained classification model for classification and recognition. As shown in fig. 1, each step phase is labelled, according to requirements, as a target action, a non-target action or a preparation action; the preparation action may be one step phase or a combination of step phases, and the target action follows immediately after the preparation action is completed.
The prediction and response process is shown in fig. 3. If the classification result is a non-preparation action, i.e. a target action or a non-target action, the action prediction process ends, no response is started, and the next incoming step phase data are again classified. If the classification result is a preparation action or part of one, recording of step phases begins; once a single step phase, or a combination of several step phases, is recognized as the complete preparation action, the next step phase is predicted to be the target action and a start-response signal is output to drive the response. The response action can be any action aimed at the target action: a recording or feedback on a wearable device, or switching a function of external equipment on or off. The target action can be any action in any sport; the invention can be applied to various sports simply by collecting the corresponding data and building a model.
Take the sudden stop in basketball as an example: the stopping action is the target action, the step immediately before stopping is the preparation action, and the running action is the non-target action.
First, foot pressure and foot acceleration data of the corresponding actions must be collected for data processing and modeling. In the data processing stage, the sensor data are preprocessed with a moving-average filter, after which step phase division is carried out according to the periodicity of human motion.
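The moving-average pre-filter can be sketched as follows; the window length is an assumption, not a value given in the text:

```python
import numpy as np

def moving_average(signal, window=5):
    """Boxcar (sliding-average) smoothing of one raw sensor channel.
    Returns an array of the same length as the input."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")
```

`mode="same"` keeps the output aligned with the input; the first and last few samples are attenuated by the implicit zero padding at the edges.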
The preprocessed data are shown in fig. 4: in the upper graph the abscissa is the sampling point and the ordinate is acceleration; in the lower graph the abscissa is the sampling point and the ordinate is the pressure value. As the subject's foot lands, supports, kicks off and swings, the pressure on the inner forefoot first increases and then decreases, and the accelerations in the x, y and z directions also vary. The data show good periodicity. Taking the y direction as the direction of travel when running: as the foot kicks off, the runner pushes against the ground and receives the ground reaction force that powers the push-off, so plantar pressure begins to increase and the y-direction acceleration increases as well. Once the foot leaves the ground, plantar pressure settles to a relatively stable level, coming mainly from the shoe holding the foot against the sole, and the y-direction acceleration begins to decrease. When the foot lands again, plantar pressure and y-direction acceleration both begin to increase, entering the next cycle. During running, from landing until the foot leaves the ground again, the y-direction acceleration fluctuates within a small range. The accelerations in the other directions are also periodic: the z-direction acceleration decreases as the foot lifts off, increases as the foot lands, and fluctuates within a certain range during stance and flight.
According to these characteristics, after the step phases are divided, the means of the accelerations in the three directions (x.mean(), y.mean(), z.mean()) are used as the feature vector, which is fed into a step phase classification model built on a support vector machine (SVM) for classification.
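A sketch of the feature construction and SVM classification described above. The training rows, labels and kernel choice below are illustrative placeholders, not the data of the invention:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical training set: each row is (x.mean(), y.mean(), z.mean())
# for one segmented step phase; labels 0 = non-target (running),
# 1 = preparation (step before the sudden stop).
X_train = np.array([[0.1, 1.2, 0.3], [0.0, 1.1, 0.2],    # running phases
                    [0.3, -0.4, 0.8], [0.2, -0.5, 0.9]])  # pre-stop phases
y_train = np.array([0, 0, 1, 1])

clf = SVC(kernel="rbf")   # the text names an SVM; the kernel choice is ours
clf.fit(X_train, y_train)

def phase_feature(ax, ay, az):
    """Build the (x.mean, y.mean, z.mean) feature vector of one step phase."""
    return np.array([np.mean(ax), np.mean(ay), np.mean(az)]).reshape(1, -1)

# Classify one new step phase from its per-axis acceleration samples.
label = clf.predict(phase_feature([0.25, 0.3], [-0.45, -0.5], [0.85, 0.9]))
```

In practice the model would be trained offline on the collected samples and only `predict` would run on the device during movement.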
The data acquired in real time during movement are divided into step phases according to their periodicity. Taking forward travel as an example, a step phase reference period and positive and negative reference peaks of the y-direction acceleration are first preset empirically. Within the current period, if the acceleration reaches the reference peak once in the positive direction and once in the negative direction and then returns to near its initial value, the human body is considered to have completed one step phase. For a more accurate result, plantar pressure data can assist the judgment: if, in the current period, the acceleration reaches the reference peaks in both directions and returns to near the initial value, and the foot pressure has first increased and then decreased, one step phase is considered complete.
If the system never judges the step phase to be complete within the period, the data of the whole step phase reference period are divided as one step phase. A feature vector is constructed for each divided step phase and passed to the model for recognition. If it is recognized as the preparation action, i.e. the step before the sudden stop, the next step phase is taken to be the sudden stop and the response activity is started; the human body enters the sudden stop at the same moment, so the specific activity can be performed at the very start of the sudden stop rather than being missed.
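Combining the peak test, the pressure-assisted test and the whole-period fallback, the segmentation of the incoming stream can be sketched as follows; the thresholds and the rise-then-fall pressure check are illustrative assumptions:

```python
def segment_phase(samples, pos_peak, neg_peak, a0, ref_period, tol=0.05):
    """Scan a stream of (y-acceleration, plantar pressure) samples and return
    the index at which the current step phase ends. If completion is never
    detected within ref_period samples, the whole reference period is taken
    as one phase (the fallback described in the text)."""
    hit_pos = hit_neg = False
    pressure_rose = pressure_fell = False
    prev_p = samples[0][1]
    for i, (ay, p) in enumerate(samples[:ref_period]):
        hit_pos |= ay >= pos_peak
        hit_neg |= ay <= neg_peak
        if p > prev_p:
            pressure_rose = True            # pressure has started increasing
        elif pressure_rose and p < prev_p:
            pressure_fell = True            # ...and then decreased
        prev_p = p
        if (hit_pos and hit_neg and abs(ay - a0) <= tol
                and pressure_rose and pressure_fell):
            return i + 1                    # phase ends here
    return min(ref_period, len(samples))    # fallback: whole period is one phase
```

The returned index marks where the next reference period would start.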
A concrete prediction result follows: 30% of all currently available samples were used as the recognition (test) set, and the remaining 70% were used for modeling and algorithm design. The overall recognition accuracy of the algorithm is 88.9%, and the recognition accuracy of the pre-sudden-stop (preparation) action is 73.1%.
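The figures above come from a 70%/30% modeling/recognition split. A sketch of that evaluation protocol on synthetic stand-in data (the arrays below are random placeholders, not the real measurements, so the resulting accuracy will not match the reported 88.9%):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the collected step phase features:
# two well-separated Gaussian clusters in (x.mean, y.mean, z.mean) space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (60, 3)),    # "running" phases
               rng.normal(1.0, 0.3, (60, 3))])   # "preparation" phases
y = np.array([0] * 60 + [1] * 60)

# 70% for modeling, 30% held out for recognition, as in the text.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = SVC(kernel="rbf").fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
```

Stratifying the split keeps the class balance of the held-out set the same as the full data set.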
For another example, in a running shot in football, the shot is the target action, the step immediately before the shot is the preparation action, and the run-up in the earlier part of the movement is the non-target action. According to the method of the invention, once the step before the shot completes, the preparation action is recognized, the next step is predicted to be the shot, and a response signal is sent so that other equipment can be controlled to react to the shooting action.
In more regular movements, such as the triple jump, the regular steps before the jump can together be taken as the preparation action, i.e. the preparation action is a combination of several step phases; once it is recognized, the system responds to the jump action.
In specific scenarios such as daily walking, a normal walking step phase can be treated as the non-target action and the step before an emergency, such as a fall, as the preparation action; after the preparation action is recognized, the coming fall is predicted, and external equipment can be activated for protection as the fall begins.

Claims (10)

1. The human body action recognition and prediction method is characterized by comprising the following steps of:
collecting foot motion data in real time;
performing step phase division based on the foot motion data, and constructing a step phase feature vector according to the foot motion data of each step phase;
putting the step feature vector into a step classification model for classification and identification to obtain a step classification result;
and predicting whether the next action is a target action or not based on the step classification result.
2. The method of claim 1, wherein the foot motion data comprises foot acceleration data comprising a lateral axis acceleration, a longitudinal axis acceleration, and a vertical axis acceleration, the lateral axis acceleration being an acceleration in a lateral direction of the human body, the longitudinal axis acceleration being an acceleration in a forward direction of the human body, the vertical axis acceleration being an acceleration in a vertical direction of the human body, the step feature vector comprising a lateral axis acceleration average, a longitudinal axis acceleration average, and a vertical axis acceleration average.
3. The method of claim 2, wherein the step phase partitioning based on the foot motion data and constructing step phase feature vectors from the foot motion data for each step phase comprises:
setting a step phase reference period, the step phase reference period being the estimated time required for the human body to complete one step phase;
judging, within the step phase reference period, whether the current step phase is finished based on the periodic variation law of the foot motion data; if so, dividing the foot motion data obtained in the current step phase reference period into one step phase, constructing the step phase feature vector, and starting the next step phase reference period; otherwise, continuing to acquire the foot motion data;
and if the current step phase is never judged to be finished within the step phase reference period, dividing the foot motion data obtained in the whole step phase reference period into one step phase and constructing the step phase feature vector.
4. A method according to claim 3, wherein said determining whether the current phase is completed based on the periodic variation law of the foot motion data while the human body is traveling comprises:
setting a positive reference peak value and a negative reference peak value of the longitudinal axis acceleration;
and when the longitudinal axis acceleration reaches a positive reference peak value and a negative reference peak value and returns to the initial acceleration, the human body is considered to finish one step phase.
5. A method according to claim 3, wherein the foot motion data further comprises foot pressure data.
6. The method of claim 5, wherein determining whether the current phase is completed based on the periodic variation law of the foot motion data while the human body is traveling, comprises:
setting a positive reference peak value and a negative reference peak value of the longitudinal axis acceleration;
and when the longitudinal axis acceleration has reached the positive reference peak value and the negative reference peak value and returned to the initial acceleration, and the foot pressure data have first increased and then decreased, the human body is considered to have finished one step phase.
7. The method of claim 1, wherein the step phase classification result comprises a target action, a preparation action and a non-target action, the preparation action being the previous step phase of the target action or a combination of its previous step phases.
8. The method of claim 7, wherein predicting whether a next action is a target action based on the step-phase classification result comprises:
if the current step phase is a non-preparation action, returning to the step of collecting foot motion data in real time;
if the current step phase is the preparation action and the preparation action comprises one step phase, predicting that the next step phase is the target action;
if the current step phase belongs to a preparation action comprising a plurality of step phases, judging whether the current step phase is the last step phase of the preparation action; if so, predicting that the next step phase is the target action; otherwise, recording the current step phase and returning to the step of collecting foot motion data in real time.
9. The method of claim 1, wherein the step-phase classification model is constructed based on a support vector machine.
10. The method of claim 1, further comprising the step of initiating a responsive action based on the step phase classification result.
CN202311343162.0A | 2023-10-17 (priority) | 2023-10-17 (filing) | Human body action recognition and prediction method | Pending | CN117407778A

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202311343162.0A | 2023-10-17 | 2023-10-17 | Human body action recognition and prediction method


Publications (1)

Publication Number | Publication Date
CN117407778A | 2024-01-16

Family

ID=89495553

Family Applications (1)

Application Number | Title | Priority Date | Filing Date | Status
CN202311343162.0A | Human body action recognition and prediction method | 2023-10-17 | 2023-10-17 | Pending

Country Status (1)

Country Link
CN (1) CN117407778A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination