CN109886123A - Method and terminal for identifying human body actions - Google Patents

Method and terminal for identifying human body actions

Info

Publication number
CN109886123A
CN109886123A
Authority
CN
China
Prior art keywords
human action
target body
data
body movement
human
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910063605.8A
Other languages
Chinese (zh)
Other versions
CN109886123B (en)
Inventor
王健宗
彭俊清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910063605.8A priority Critical patent/CN109886123B/en
Publication of CN109886123A publication Critical patent/CN109886123A/en
Application granted granted Critical
Publication of CN109886123B publication Critical patent/CN109886123B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The present invention is applicable to the field of computer technology and provides a method and a terminal for identifying human body actions. The method comprises: obtaining human action data of a target human action to be classified, wherein the human action data includes the rotation attitude data of each monitored human bone collected while the target human action is performed; importing the human action data into a preset human action classification model for processing to obtain a classification result of the target human action; and outputting, based on the classification result, the class label to which the target human action belongs. Based on the rotation attitude data of every monitored human bone collected while the target human action is performed, embodiments of the present invention analyze the target human action as a whole, so the category to which a human action belongs can be identified accurately and the accuracy of the classification result is improved.

Description

Method and terminal for identifying human body actions
Technical field
The present invention belongs to the field of computer technology, and more particularly relates to a method and a terminal for identifying human body actions.
Background
With the development of human motion data acquisition technology, research on human motion has found broad application in medical rehabilitation, sports training, virtual reality, human-computer interaction, film and games, and has therefore attracted growing attention. Human motion can be expressed as movements of the various parts of the human body in three-dimensional space, and a human action can be regarded as a complete, independent motion segment within a human motion; for example, a "jump" consists of a series of human actions such as bending the knees, taking off and landing.
In the prior art, a number of classification methods have been proposed to classify human motion, but the existing methods analyze each motion segment of a human motion in isolation, which makes the classification results inaccurate.
Summary of the invention
In view of this, embodiments of the present invention provide a method and a terminal for identifying human body actions, to solve the problem in the prior art that classification results are inaccurate because each motion segment of a human motion is analyzed in isolation.
A first aspect of the embodiments of the present invention provides a method for identifying human body actions, comprising:
obtaining human action data of a target human action to be classified, wherein the human action data includes the rotation attitude data of each monitored human bone collected while the target human action is performed;
importing the human action data into a preset human action classification model for processing to obtain a classification result of the target human action, wherein the human action classification model is obtained by training on a sample human action training set using a machine learning algorithm; during training, the input of the human action classification model is the human action data corresponding to each human action in the sample human action training set, and the output of the human action classification model is the classification result corresponding to that human action; and
outputting, based on the classification result, the class label to which the target human action belongs.
A second aspect of the embodiments of the present invention provides a terminal, comprising:
an acquiring unit, configured to obtain human action data of a target human action to be classified, wherein the human action data includes the rotation attitude data of each monitored human bone collected while the target human action is performed;
a recognition unit, configured to import the human action data into a preset human action classification model for processing to obtain a classification result of the target human action, wherein the human action classification model is obtained by training on a sample human action training set using a machine learning algorithm; during training, the input of the human action classification model is the human action data corresponding to each human action in the sample human action training set, and the output of the human action classification model is the classification result corresponding to that human action; and
an output unit, configured to output, based on the classification result, the class label to which the target human action belongs.
A third aspect of the embodiments of the present invention provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the following steps:
obtaining human action data of a target human action to be classified, wherein the human action data includes the rotation attitude data of each monitored human bone collected while the target human action is performed;
importing the human action data into a preset human action classification model for processing to obtain a classification result of the target human action, wherein the human action classification model is obtained by training on a sample human action training set using a machine learning algorithm; during training, the input of the human action classification model is the human action data corresponding to each human action in the sample human action training set, and the output of the human action classification model is the classification result corresponding to that human action; and
outputting, based on the classification result, the class label to which the target human action belongs.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the following steps:
obtaining human action data of a target human action to be classified, wherein the human action data includes the rotation attitude data of each monitored human bone collected while the target human action is performed;
importing the human action data into a preset human action classification model for processing to obtain a classification result of the target human action, wherein the human action classification model is obtained by training on a sample human action training set using a machine learning algorithm; during training, the input of the human action classification model is the human action data corresponding to each human action in the sample human action training set, and the output of the human action classification model is the classification result corresponding to that human action; and
outputting, based on the classification result, the class label to which the target human action belongs.
The method and terminal for identifying human body actions provided by the embodiments of the present invention have the following beneficial effects:
In the embodiments of the present invention, the human action data of a target human action to be classified is processed by a preset human action classification model to obtain the classification result of the target human action. Because the human action data includes the rotation attitude data of every monitored human bone collected while the target human action is performed, the target human action can be analyzed as a whole on the basis of those data. Compared with the prior-art approach of determining the classification result of a human action by analyzing each action segment in isolation, the category to which a human action belongs can be identified accurately and the classification accuracy is improved.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is an implementation flowchart of a method for identifying human body actions provided by an embodiment of the present invention;
Fig. 2 is an implementation flowchart of a method for identifying human body actions provided by another embodiment of the present invention;
Fig. 3 is a schematic diagram of a terminal provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of a terminal provided by another embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are merely intended to explain the present invention and are not intended to limit it.
Referring to Fig. 1, Fig. 1 is an implementation flowchart of a method for identifying human body actions provided by an embodiment of the present invention. The method in this embodiment is executed by a terminal. The terminal includes, but is not limited to, a mobile terminal such as a smartphone, a tablet computer or a wearable device, and may also be a desktop computer or the like. The method for identifying human body actions shown in the figure may include the following steps:
S101: obtain human action data of a target human action to be classified, wherein the human action data includes the rotation attitude data of each monitored human bone collected while the target human action is performed.
The terminal may obtain the human action data of the target human action to be classified in real time, or may obtain human action data of the target human action that was detected in advance. The terminal may obtain the human action data of the target human action from the data monitored by sensors or wearable devices placed at the body parts corresponding to the monitored human bones.
A human action may consist of one sub-action or of at least two consecutive sub-actions. Human actions include, but are not limited to, jumping and running; a jump, for example, may include sub-actions such as bending the knees, taking off and landing.
The human action data of a human action includes the rotation attitude data of the monitored human bones collected while all the sub-actions of the human action are completed. The monitored bone sites may specifically include the left hand, left upper arm, left lower arm, right hand, right upper arm, right lower arm, left foot, left upper leg, left lower leg, right foot, right upper leg, right lower leg, head, neck, waist, hip and throat, i.e. 17 bone sites in total; the number of bone sites may be increased as required and is not limited here.
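For use in the sketches that follow, the 17 monitored bone sites can be written down as a constant; the identifier names below are illustrative romanizations of the list above, not terms from the description:

```python
# The 17 monitored bone sites listed above; the ordering is an
# illustrative assumption, not fixed by the description.
BONE_SITES = [
    "left_hand", "left_upper_arm", "left_lower_arm",
    "right_hand", "right_upper_arm", "right_lower_arm",
    "left_foot", "left_upper_leg", "left_lower_leg",
    "right_foot", "right_upper_leg", "right_lower_leg",
    "head", "neck", "waist", "hip", "throat",
]
M = len(BONE_SITES)  # M = 17 bone sites per sub-action
```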
When a human action consists of at least two consecutive sub-actions, the human action data of the human action includes the human action data corresponding to each of the sub-actions, arranged in the chronological order in which the sub-actions were completed. The human action data corresponding to each sub-action can be represented as a vector, so the human action data of the whole action is formed by at least two vectors arranged in chronological order. For example, the human action data can be expressed as X = (X1, X2, ..., Xt, ..., XT), where Xt represents the rotation attitude data of all the monitored bone sites collected while the t-th sub-action of the human action is completed, and t and T are positive integers.
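For illustration, this layout can be sketched as follows; NumPy and the concrete shapes are assumptions, since the description only fixes the chronological ordering and the one-vector-per-sub-action structure:

```python
import numpy as np

T = 5    # number of consecutive sub-actions, e.g. bend knees, take off, land
M = 17   # number of monitored bone sites (see the list above)

# X has one row per sub-action, in the chronological order of completion;
# row X[t] holds the rotation attitude data of all M monitored bone sites
# for the (t+1)-th sub-action (4 quaternion components per site, see S1012).
X = np.zeros((T, 4 * M))
```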
Since the human bones that perform a human action move while the action is being completed, a sensor or wearable device can be bound at the position corresponding to each monitored human bone so as to monitor the rotation attitude data of every monitored bone, from which the human action data of the target human action to be classified is then obtained.
Further, the target human action consists of at least two consecutive sub-actions, and the human action data of the target human action includes the rotation attitude data corresponding to each of the at least two sub-actions.
Further, S101 may include S1011 to S1012, as follows:
S1011: obtain the feature data corresponding to each human bone collected by the sensors while the target human action is completed, the feature data including a translation amount and an angular rotation amount.
Specifically, the terminal obtains information such as the real-time position and direction of motion of each sensor or wearable device, determines the corresponding translation amount from the real-time position of each sensor or wearable device, and determines the corresponding angular rotation amount from its direction of motion.
S1012: determine the human action data of the target human action based on the feature data corresponding to all the human bones.
Based on the translation amount and the angular rotation amount of each sensor or wearable device, the terminal generates the rotation attitude data corresponding to that sensor or wearable device. The rotation attitude data can specifically be expressed as a quaternion. For example, assume the quaternion of the rotation attitude data is qt; then qt = (w, v) = w + xi + yj + zk, where w is the real part and v is the imaginary part, whose components x, y and z lie along the imaginary units i, j and k, with i² = j² = k² = −1. The rotation attitude data corresponding to each sub-action is Xt = (qt,1, qt,2, ..., qt,M), where M denotes the number of bone sites and can be set as required.
Since the rotation attitude data corresponding to the target human action is expressed as X = (X1, X2, ..., Xt, ..., XT) and each sub-action corresponds to the rotation attitude data Xt = (qt,1, qt,2, ..., qt,M), the terminal arranges the rotation attitude data corresponding to each sub-action in the chronological order in which the sub-actions were completed, thereby obtaining the human action data of the target human action.
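A minimal sketch of S1011 to S1012 under stated assumptions: the angular rotation amount is treated as an axis-angle pair and converted into a unit quaternion, and all helper names are illustrative rather than taken from the description:

```python
import numpy as np

def axis_angle_to_quaternion(axis, angle):
    """Convert an axis-angle rotation into a unit quaternion
    q = (w, x, y, z) = w + xi + yj + zk."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    w = np.cos(angle / 2.0)
    x, y, z = np.sin(angle / 2.0) * axis
    return np.array([w, x, y, z])

def frame_vector(bone_rotations):
    """Build X_t for one sub-action: the quaternions of all M monitored
    bones, concatenated in a fixed bone order."""
    return np.concatenate([axis_angle_to_quaternion(axis, angle)
                           for axis, angle in bone_rotations])

def action_data(frames):
    """The full action data X = (X_1, ..., X_T): one frame vector per
    sub-action, stacked in the chronological order of completion."""
    return np.stack([frame_vector(f) for f in frames])
```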
S102: import the human action data into a preset human action classification model for processing to obtain a classification result of the target human action, wherein the human action classification model is obtained by training on a sample human action training set using a machine learning algorithm; during training, the input of the human action classification model is the human action data corresponding to each human action in the sample human action training set, and the output of the human action classification model is the classification result corresponding to that human action.
The terminal imports the human action data corresponding to the target human action into the pre-trained human action classification model, which analyzes and processes the data, extracts the feature vector of the human action data, predicts the category of the target human action based on that feature vector, and outputs the classification result of the target human action. The feature vector of the human action data is obtained by performing deep feature extraction on the human action data corresponding to the target human action.
The classification result of the target human action identifies the category to which the target human action belongs. The human action classification model is obtained by training on a sample human action training set using a machine learning algorithm; during training, the input of the model is the human action data corresponding to each human action in the sample human action training set, and its output is the classification result corresponding to that human action. The sample human action training set includes a preset number of human actions, each of which is assigned a preset class label.
The human action classification model is trained as follows: a sample human action training set for training the model is obtained, and the sample data in it are divided into a training set and a test set. The human action data corresponding to one human action constitutes one sample, and each sample includes the human action data corresponding to a human action and the class label of that human action, such as "jump" or "run". The human action data and the corresponding class label contained in each sample of the training set are used as the input of the human action classification model, the predicted classification result corresponding to the human action data is used as the output of the model, and the model is trained accordingly.
After training, the terminal also needs to verify the human action classification model. The verification may specifically proceed as follows: the human action data contained in each sample of the test set is input into the trained model to obtain a predicted classification result; the predicted classification result corresponding to the human action data is compared against the preset class label corresponding to that human action data; when the accuracy of the predicted classification results on the test set reaches a preset value, the verification passes, training of the human action classification model is complete, and the model can then be used to classify human actions; otherwise, the model needs to be further trained and verified.
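A minimal sketch of this train-then-verify procedure; the scikit-learn-style fit/predict interface, the 80/20 split and the 0.95 accuracy threshold are illustrative assumptions, not values from the description:

```python
import numpy as np
from sklearn.model_selection import train_test_split

def train_and_verify(model, samples, labels, threshold=0.95):
    """Train on one split of the sample set and verify on the held-out
    test set; returns the model and whether verification passed."""
    X_train, X_test, y_train, y_test = train_test_split(
        samples, labels, test_size=0.2)
    model.fit(X_train, y_train)                 # training phase
    predictions = model.predict(X_test)         # predict on the test set
    accuracy = np.mean(predictions == y_test)   # compare with preset labels
    return model, accuracy >= threshold         # preset-value check
```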
The human action classification model in this embodiment may include an input layer, hidden layers, a logistic regression layer and an output layer. The input layer includes at least one input node and receives the externally input human action data of the target human action to be classified. The hidden layers include more than two hidden nodes and process the human action data of the target human action to extract the feature information of that data; the number of hidden layers may be 2. The logistic regression layer analyzes and processes the feature information of the human action data of the target human action, and the output layer includes one output node and outputs the classification result.
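As a sketch of one possible realization of this layout — PyTorch, GRU hidden layers and the layer sizes are assumptions; the description fixes only the overall structure (an input layer, two hidden layers fed the forward and time-reversed sequences as detailed in S1021 to S1023 below, a logistic regression layer and an output layer):

```python
import torch
import torch.nn as nn

class ActionClassifier(nn.Module):
    """Two hidden layers process the forward and the time-reversed
    sequence; their mean-pooled outputs are combined and passed to a
    logistic-regression (softmax) layer that yields the class scores."""
    def __init__(self, frame_dim, hidden_dim, num_classes):
        super().__init__()
        self.forward_rnn = nn.GRU(frame_dim, hidden_dim, batch_first=True)
        self.backward_rnn = nn.GRU(frame_dim, hidden_dim, batch_first=True)
        self.logistic = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, x):                        # x: (batch, T, frame_dim)
        h1_seq, _ = self.forward_rnn(x)                   # H1, forward order
        h2_seq, _ = self.backward_rnn(x.flip(dims=[1]))   # H2, reversed order
        h1 = h1_seq.mean(dim=1)                  # average over the T steps
        h2 = h2_seq.mean(dim=1)
        combined = torch.cat([h1, h2], dim=1)    # target feature vector
        return torch.softmax(self.logistic(combined), dim=1)
```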
Further, when the target human action consists of at least two consecutive sub-actions, S102 may specifically include S1021 to S1023 in order to improve the accuracy of the classification result, as follows:
S1021: import the human action data into the preset human action classification model for processing to obtain a first feature vector corresponding to the human action data.
The terminal imports the human action data of the target human action into the preset human action classification model through the model's input layer and passes it to the first hidden layer and the second hidden layer. The first hidden layer processes the human action data of the target human action and extracts the first feature vector h1 corresponding to that data.
Specifically, the first hidden layer extracts the first feature vector h1 as follows: the first hidden layer processes the human action data of the target human action to obtain a first analysis vector H1 corresponding to that data, and H1 is then averaged over the T time steps, h1 = (1/T) Σ_{t=1}^{T} H1(t), to obtain the first feature vector h1 corresponding to the human action data of the target human action.
S1022: based on the moments at which the human action data corresponding to each sub-action was obtained, sort the human action data corresponding to all the sub-actions from latest to earliest to obtain reversed human action data, and determine a second feature vector corresponding to the reversed human action data; the first feature vector and the second feature vector have the same dimension.
After the terminal imports the human action data of the target human action into the preset human action classification model, the human action data corresponding to all the sub-actions of the target human action is sorted from latest to earliest, based on the moments at which the data corresponding to each sub-action was obtained, to obtain the reversed human action data. For example, when the human action data corresponding to the target human action is X = (X1, X2, ..., Xt, ..., XT), the reversed human action data corresponding to the target human action is X' = (XT, ..., Xt, ..., X2, X1).
The terminal inputs the reversed human action data into the second hidden layer for processing and extracts the second feature vector h2 corresponding to the reversed human action data.
The first feature vector h1 and the second feature vector h2 have the same dimension; specifically, h1 may be the average of the first analysis vector H1 over the T time steps, and h2 the corresponding average of the second analysis vector H2.
Specifically, it is handled by the human action data that the second hidden layer acts target body, extracts target person The corresponding second feature vector h of human action data of body movement2Realization process it is as follows:
It is handled by the human action data that the second hidden layer acts target body, obtains target body movement The corresponding second analysis vector H of human action data2, it is based on formulaTo the second analysis vector H2It averages Processing obtains the corresponding second feature vector h of human action data of target body movement2.Wherein, the first analysis vector H1With And second analysis vector H2Vector dimension it is identical as the dimension of time series data.
S1023: determine the classification result of the target human action based on the first feature vector and the second feature vector.
The terminal inputs both the first feature vector h1 output by the first hidden layer and the second feature vector h2 output by the second hidden layer into the logistic regression layer, which analyzes h1 and h2 and determines the classification result of the target human action.
Specifically, once training of the human action classification model is complete, the terminal may prestore the classification results of all human actions and the feature matching vector corresponding to each classification result.
When the first feature vector h1 and the second feature vector h2 are input into the logistic regression layer, the logistic regression layer combines h1 and h2 into a target feature vector, performs distance matching between the target feature vector and all the prestored feature matching vectors, selects the feature matching vector most similar to the target feature vector, and outputs the classification result corresponding to that feature matching vector to the output layer. The distance matching may specifically compute the distance value according to the cosine formula cos(θ) = (h'1 · h'2) / (‖h'1‖ ‖h'2‖), where h'1 · h'2 is the inner product of the target feature vector and a candidate feature matching vector. Whether the two vectors are similar is determined by the value of cos(θ): −1 ≤ cos(θ) ≤ 1, and the larger cos(θ) is, the more similar the two vectors are, so the feature matching vector most similar to the target feature vector can be selected.
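A minimal sketch of this matching step, assuming the target feature vector is the concatenation of h1 and h2 and that the prestored feature matching vectors are kept in a dictionary keyed by class label (both are illustrative assumptions):

```python
import numpy as np

def cosine(u, v):
    """cos(theta) between two vectors; the larger, the more similar."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def match_class(h1, h2, prototypes):
    """Combine h1 and h2 into the target feature vector and return the
    label whose prestored matching vector is most similar to it."""
    target = np.concatenate([h1, h2])
    return max(prototypes, key=lambda label: cosine(target, prototypes[label]))

# Usage: prototypes maps class labels to prestored feature matching vectors,
# e.g. prototypes = {"jump": np.array([...]), "run": np.array([...])}.
```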
In this embodiment, the human action classification model needs to process the rotation attitude data corresponding to the multiple sub-actions of the target human action, so the chronological order of the sub-actions must be taken into account. The human action data of all the sub-actions input to the first hidden layer is arranged in forward time order (from the earliest completed sub-action to the latest), while the human action data of all the sub-actions input to the second hidden layer is arranged in reverse time order (from the latest completed sub-action to the earliest). In this way, the classification result determined based on the first feature vector h1 output by the first hidden layer and the second feature vector h2 output by the second hidden layer can eliminate the influence of the time factor on the classification result and improve its accuracy.
S103: output, based on the classification result, the class label to which the target human action belongs.
Since the classification result of the target human action identifies the category to which the target human action belongs, the class label to which the target human action belongs can be obtained from the classification result, and that class label is output.
In the embodiments of the present invention, the human action data of a target human action to be classified is processed by a preset human action classification model to obtain the classification result of the target human action. Because the human action data includes the rotation attitude data of every monitored human bone collected while the target human action is performed, the target human action can be analyzed as a whole on the basis of those data. Compared with the prior-art approach of determining the classification result of a human action by analyzing each action segment in isolation, the category to which a human action belongs can be identified accurately and the classification accuracy is improved.
The terminal can analyze the target human action as a whole based on the rotation attitude data of every sub-action it contains and, because the chronological order of the multiple sub-actions is taken into account during processing, the influence of the time factor on the classification result can be eliminated, further improving the accuracy of the classification result.
Refer to Fig. 2, which is an implementation flowchart of a method for identifying human body actions provided by another embodiment of the present invention. S201 to S203 in this embodiment are identical to S101 to S103 in the previous embodiment; refer to the related description of S101 to S103 in the previous embodiment, which is not repeated here. The difference between this embodiment and the embodiment corresponding to Fig. 1 is that the method for identifying human body actions in this embodiment further includes S204 after S203: adjust a human motion training plan based on the class label.
For example, when the target human action comes from a user undergoing rehabilitation training, the terminal predicts, based on the classification result corresponding to the target human action, whether the user's current physical condition meets the expected result of the current stage of rehabilitation training, where the expected result of the current stage identifies the types of human action the user should be able to complete. When the user's current physical condition does not meet the expected result of the current stage of rehabilitation training, the human motion training plan is adjusted based on the classification result corresponding to the target human action and the expected result of the current stage of rehabilitation training.
When the completion difficulty of the human action corresponding to the class label identified by the classification result of the target human action is lower than the expected result, the human actions in the training plan can be adjusted to reduce the difficulty of the training actions or to strengthen the training intensity; when the completion difficulty of the human action corresponding to the class label identified by the classification result is higher than the expected result, the user can be trained to complete human actions of higher difficulty.
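A minimal sketch of this adjustment rule, assuming each class label carries a numeric completion-difficulty score; the labels, scores and return strings below are illustrative:

```python
# Illustrative completion-difficulty scores per class label (assumed values).
DIFFICULTY = {"walk": 1, "run": 2, "jump": 3}

def adjust_plan(predicted_label, expected_label):
    """Compare the difficulty of the recognized action with the expected
    result of the current rehabilitation-training stage."""
    achieved = DIFFICULTY[predicted_label]
    expected = DIFFICULTY[expected_label]
    if achieved < expected:
        return "reduce the difficulty of the training actions or strengthen training intensity"
    if achieved > expected:
        return "train the user to complete higher-difficulty actions"
    return "keep the current training plan"
```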
By means of the classification results of human actions, the embodiments of the present invention can assist in adjusting human motion training plans and help the relevant personnel formulate more reasonable and effective training plans.
It should be understood that the serial numbers of the steps in the above embodiments do not imply any execution order; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present invention.
Referring to Fig. 3, Fig. 3 is a schematic diagram of a terminal provided by an embodiment of the present invention. Each unit included in the terminal is configured to execute the steps in the embodiments corresponding to Fig. 1 to Fig. 2; refer to the related description in the embodiments corresponding to Fig. 1 to Fig. 2 for details. For ease of description, only the parts related to this embodiment are shown. Referring to Fig. 3, the terminal 3 includes:
an acquiring unit 310, configured to obtain human action data of a target human action to be classified, wherein the human action data includes the rotation attitude data of each monitored human bone collected while the target human action is performed;
a recognition unit 320, configured to import the human action data into a preset human action classification model for processing to obtain a classification result of the target human action, wherein the human action classification model is obtained by training on a sample human action training set using a machine learning algorithm; during training, the input of the human action classification model is the human action data corresponding to each human action in the sample human action training set, and the output of the human action classification model is the classification result corresponding to that human action; and
an output unit 330, configured to output, based on the classification result, the class label to which the target human action belongs.
Further, the acquiring unit 310 is specifically configured to:
obtain the feature data corresponding to each human bone collected by the sensors while the target human action is completed, the feature data including a translation amount and an angular rotation amount; and
determine the human action data of the target human action based on the feature data corresponding to all the human bones.
Further, the target human action consists of at least two consecutive sub-actions, and the human action data includes the rotation attitude data corresponding to each of the at least two sub-actions.
Further, when the target human action consists of at least two consecutive sub-actions, the recognition unit 320 includes:
a first processing unit, configured to import the human action data into the preset human action classification model for processing to obtain a first feature vector corresponding to the human action data;
a second processing unit, configured to sort, based on the moments at which the human action data corresponding to each sub-action was obtained, the human action data corresponding to all the sub-actions from latest to earliest to obtain reversed human action data, and to determine a second feature vector corresponding to the reversed human action data, the first feature vector and the second feature vector having the same dimension; and
a determination unit, configured to determine the classification result of the target human action based on the first feature vector and the second feature vector.
Optionally, the terminal may further include:
an adjustment unit, configured to adjust a human motion training plan based on the class label.
Fig. 4 is a schematic diagram of a terminal provided by another embodiment of the present invention. As shown in Fig. 4, the terminal 4 of this embodiment includes a processor 40, a memory 41 and a computer program 42 stored in the memory 41 and executable on the processor 40. When executing the computer program 42, the processor 40 implements the steps in the above embodiments of the method for identifying human body actions, such as S101 to S103 shown in Fig. 1. Alternatively, when executing the computer program 42, the processor 40 implements the functions of the units in the above device embodiments, such as the functions of units 310 to 330 shown in Fig. 3.
Illustratively, the computer program 42 may be divided into one or more units, and the one or more units are stored in the memory 41 and executed by the processor 40 to implement the present invention. The one or more units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments describe the execution process of the computer program 42 in the terminal 4. For example, the computer program 42 may be divided into an acquiring unit, a recognition unit and an output unit, whose specific functions are as described above.
The terminal may include, but is not limited to, the processor 40 and the memory 41. Those skilled in the art will understand that Fig. 4 is only an example of the terminal 4 and does not limit the terminal 4, which may include more or fewer components than shown, a combination of certain components, or different components; for example, the terminal may also include input and output devices, network access devices, buses and the like.
The processor 40 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 41 may be an internal storage unit of the terminal 4, such as a hard disk or memory of the terminal 4. The memory 41 may also be an external storage device of the terminal 4, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the terminal 4. Further, the memory 41 may include both an internal storage unit and an external storage device of the terminal 4. The memory 41 stores the computer program and other programs and data required by the terminal, and may also be used to temporarily store data that has been output or is to be output.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be equivalently replaced; such modifications or replacements do not take the essence of the corresponding technical solutions out of the spirit and scope of the technical solutions of the embodiments of the present invention, and should all be included within the protection scope of the present invention.

Claims (10)

1. A method for identifying human body actions, characterized by comprising:
obtaining human action data of a target human action to be classified, wherein the human action data includes the rotation attitude data of each monitored human bone collected while the target human action is performed;
importing the human action data into a preset human action classification model for processing to obtain a classification result of the target human action, wherein the human action classification model is obtained by training on a sample human action training set using a machine learning algorithm; during training, the input of the human action classification model is the human action data corresponding to each human action in the sample human action training set, and the output of the human action classification model is the classification result corresponding to that human action; and
outputting, based on the classification result, the class label to which the target human action belongs.
2. The method according to claim 1, characterized in that the obtaining of human action data of a target human action to be classified comprises:
obtaining the feature data corresponding to each human bone collected by sensors while the target human action is completed, the feature data including a translation amount and an angular rotation amount; and
determining the human action data of the target human action based on the feature data corresponding to all the human bones.
3. The method according to claim 1 or 2, characterized in that the target human action consists of at least two consecutive sub-actions, and the human action data includes the rotation attitude data corresponding to each of the at least two sub-actions.
4. The method according to claim 3, characterized in that the importing of the human action data into a preset human action classification model for processing to obtain a classification result of the target human action comprises:
importing the human action data into the preset human action classification model for processing to obtain a first feature vector corresponding to the human action data;
based on the moments at which the human action data corresponding to each sub-action was obtained, sorting the human action data corresponding to all the sub-actions from latest to earliest to obtain reversed human action data, and determining a second feature vector corresponding to the reversed human action data, the first feature vector and the second feature vector having the same dimension; and
determining the classification result of the target human action based on the first feature vector and the second feature vector.
5. The method according to any one of claims 1, 2 and 4, characterized in that, after the outputting, based on the classification result, of the class label to which the target human action belongs, the method further comprises:
adjusting a human motion training plan based on the class label.
6. A terminal, characterized by comprising:
an acquiring unit, configured to obtain human action data of a target human action to be classified, wherein the human action data includes the rotation attitude data of each monitored human bone collected while the target human action is performed;
a recognition unit, configured to import the human action data into a preset human action classification model for processing to obtain a classification result of the target human action, wherein the human action classification model is obtained by training on a sample human action training set using a machine learning algorithm; during training, the input of the human action classification model is the human action data corresponding to each human action in the sample human action training set, and the output of the human action classification model is the classification result corresponding to that human action; and
an output unit, configured to output, based on the classification result, the class label to which the target human action belongs.
7. The terminal according to claim 6, characterized in that the acquiring unit is specifically configured to:
obtain the feature data corresponding to each human bone collected by sensors while the target human action is completed, the feature data including a translation amount and an angular rotation amount; and
determine the human action data of the target human action based on the feature data corresponding to all the human bones.
8. The terminal according to claim 6 or 7, characterized in that the target human action consists of at least two consecutive sub-actions, and the human action data includes the rotation attitude data corresponding to each of the at least two sub-actions;
the recognition unit comprising:
a first processing unit, configured to import the human action data into the preset human action classification model for processing to obtain a first feature vector corresponding to the human action data;
a second processing unit, configured to sort, based on the moments at which the human action data corresponding to each sub-action was obtained, the human action data corresponding to all the sub-actions from latest to earliest to obtain reversed human action data, and to determine a second feature vector corresponding to the reversed human action data, the first feature vector and the second feature vector having the same dimension; and
a determination unit, configured to determine the classification result of the target human action based on the first feature vector and the second feature vector.
9. A terminal, characterized by comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the following steps:
obtaining human action data of a target human action to be classified, wherein the human action data includes the rotation attitude data of each monitored human bone collected while the target human action is performed;
importing the human action data into a preset human action classification model for processing to obtain a classification result of the target human action, wherein the human action classification model is obtained by training on a sample human action training set using a machine learning algorithm; during training, the input of the human action classification model is the human action data corresponding to each human action in the sample human action training set, and the output of the human action classification model is the classification result corresponding to that human action; and
outputting, based on the classification result, the class label to which the target human action belongs.
10. A computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the steps of the method according to any one of claims 1 to 5 are implemented.
CN201910063605.8A 2019-01-23 2019-01-23 Method and terminal for identifying human body actions Active CN109886123B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910063605.8A CN109886123B (en) 2019-01-23 2019-01-23 Method and terminal for identifying human body actions

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910063605.8A CN109886123B (en) 2019-01-23 2019-01-23 Method and terminal for identifying human body actions

Publications (2)

Publication Number Publication Date
CN109886123A (en) 2019-06-14
CN109886123B CN109886123B (en) 2023-08-29

Family

ID=66926566

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910063605.8A Active CN109886123B (en) 2019-01-23 2019-01-23 Method and terminal for identifying human body actions

Country Status (1)

Country Link
CN (1) CN109886123B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021051579A1 (en) * 2019-09-17 2021-03-25 平安科技(深圳)有限公司 Body pose recognition method, system, and apparatus, and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104729507A (en) * 2015-04-13 2015-06-24 大连理工大学 Gait recognition method based on inertial sensor
CN108647644A (en) * 2018-05-11 2018-10-12 山东科技大学 Coal mine based on GMM characterizations blows out unsafe act identification and determination method
CN109101876A (en) * 2018-06-28 2018-12-28 东北电力大学 Human bodys' response method based on long memory network in short-term

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104729507A (en) * 2015-04-13 2015-06-24 大连理工大学 Gait recognition method based on inertial sensor
CN108647644A (en) * 2018-05-11 2018-10-12 山东科技大学 Coal mine based on GMM characterizations blows out unsafe act identification and determination method
CN109101876A (en) * 2018-06-28 2018-12-28 东北电力大学 Human bodys' response method based on long memory network in short-term

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU Congwen et al., "Human pose recognition based on skeleton features and hand-associated object features", Industrial Control Computer, no. 04
YANG Yu et al., "Human action classification based on an LSTM neural network on the TensorFlow platform", Intelligent Computer and Applications, no. 05

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021051579A1 (en) * 2019-09-17 2021-03-25 平安科技(深圳)有限公司 Body pose recognition method, system, and apparatus, and storage medium

Also Published As

Publication number Publication date
CN109886123B (en) 2023-08-29

Similar Documents

Publication Publication Date Title
Yadav et al. Real-time Yoga recognition using deep learning
Anand Thoutam et al. Yoga pose estimation and feedback generation using deep learning
Nie et al. Human pose estimation with parsing induced learner
CN110020633A (en) Training method, image-recognizing method and the device of gesture recognition model
Li et al. Abnormal sitting posture recognition based on multi-scale spatiotemporal features of skeleton graph
CN104899561A (en) Parallelized human body behavior identification method
CN109815776A (en) Action prompt method and apparatus, storage medium and electronic device
CN111274998A (en) Parkinson's disease finger knocking action identification method and system, storage medium and terminal
CN109753868A (en) Appraisal procedure and device, the Intelligent bracelet of athletic performance
CN110490109A (en) A kind of online human body recovery action identification method based on monocular vision
Frangoudes et al. Assessing human motion during exercise using machine learning: A literature review
CN107851113A (en) Be configured as based on derived from performance sensor unit user perform attribute and realize the framework of automatic classification and/or search to media data, apparatus and method
Liu et al. Trampoline motion decomposition method based on deep learning image recognition
Wengefeld et al. Real-time person orientation estimation using colored pointclouds
CN108985839A (en) Shopping guide method and device in unmanned supermarket based on recognition of face
Yadikar et al. A Review of Knowledge Distillation in Object Detection
Chu et al. [Retracted] Image Recognition of Badminton Swing Motion Based on Single Inertial Sensor
CN109886123A (en) A kind of method and terminal identifying human action
Ekambaram et al. Real-time AI-assisted visual exercise pose correctness during rehabilitation training for musculoskeletal disorder
Lei et al. Multi-skeleton structures graph convolutional network for action quality assessment in long videos
Wei et al. Multiple-branches faster RCNN for human parts detection and pose estimation
He et al. An Expert-Knowledge-Based Graph Convolutional Network for Skeleton-Based Physical Rehabilitation Exercises Assessment
Li et al. Partially occluded skeleton action recognition based on multi-stream fusion graph convolutional networks
Dhanyal et al. Yoga pose annotation and classification by using time-distributed convolutional neural network
CN110458647A (en) Product method for pushing, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant