CN109344692A - A kind of motion quality evaluation method and system - Google Patents


Info

Publication number
CN109344692A
CN109344692A
Authority
CN
China
Prior art keywords
motion
feature
video
classification
joint points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810909854.XA
Other languages
Chinese (zh)
Other versions
CN109344692B (en)
Inventor
雷庆
杜吉祥
张洪博
余思哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaqiao University
Original Assignee
Huaqiao University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaqiao University
Priority to CN201810909854.XA
Publication of CN109344692A
Application granted
Publication of CN109344692B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 — Movements or behaviour, e.g. gesture recognition
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/24 — Classification techniques
    • G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 — Classification techniques based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a motion quality evaluation method and system. The method includes: extracting local motion patterns of body parts from human joint-point motion trajectories, and building a behaviour classifier that discriminates among those motion patterns; then, guided by the built classifier, building a joint-displacement-association posture feature representation and a quality evaluation model based on human joint points; and finally, using the built representation and model, scoring the quality of the human motion in video captured by a camera. In this way, human motion data need not be annotated manually, and accurate feedback on human motion quality can be provided.

Description

A kind of motion quality evaluation method and system
Technical field
The present invention relates to the technical field of motion quality evaluation, and in particular to a motion quality evaluation method and system.
Background art
In the sports industry, motion quality evaluation technology allows athletes training in front of a camera to receive movement-quality scores in real time, and the visual feedback helps them improve performance. A motion-assistance system built on this technology can incorporate physiological constraints on human movement to detect incorrect postures and prevent injury during exercise, so it has broad applications in competitive sports training, physical education, athletic rehabilitation and related fields. In content-based video retrieval, a search engine can not only return videos relevant to a query according to similarity measures, but can also rank the retrieved videos with the help of a motion quality evaluation algorithm, giving users a more accurate and effective retrieval experience.
Against the background of decades of rapid progress in computer vision, machine-learning-based human behaviour recognition and video processing have advanced considerably, and how to evaluate the human motion in a video effectively has attracted growing attention. Current motion quality evaluation technology mainly borrows feature-representation schemes proposed for behaviour recognition, video segmentation and image/video quality assessment to extract human motion features, and builds corresponding regression models to evaluate the quality of human motion in video.
The main idea of existing motion quality evaluation schemes is first to extract salient feature points or interest points from the video with a local feature detector to represent the human motion; then to describe the spatial, temporal or spatio-temporal characteristics of the interest points with hand-crafted feature descriptors; and finally to build a linear regression model for the motion-quality score in the feature space.
However, existing schemes integrate the feature representation and regression prediction of all sport categories into a single evaluation function that is optimised jointly, so validation is limited to motion data of one or two categories, and adding a new sport category requires domain experts to annotate the data. Moreover, when the motion patterns of different categories differ substantially, evaluation with a single function is easily affected by the sample-data distribution and incurs a large approximation error. Similar postures from different behaviour categories may also be confused, causing large deviations in the joint-position feedback or even unreasonable feedback information.
Summary of the invention
In view of this, an object of the present invention is to provide a motion quality evaluation method and system that do not require manual annotation of human motion data and can feed back accurate information on human motion quality.
According to one aspect of the present invention, a motion quality evaluation method is provided, comprising:
extracting local motion patterns of body parts from human joint-point motion trajectories, and building a behaviour classifier that discriminates among the motion patterns;
according to the built behaviour classifier, building a joint-displacement-association posture feature representation and a quality evaluation model based on human joint points; and
scoring the quality of the human motion in video captured by a camera according to the built posture feature representation and quality evaluation model.
Before extracting the local motion patterns of body parts from the human joint-point motion trajectories and building the behaviour classifier that discriminates among the motion patterns, the method may further include:
extracting the human joint-point motion trajectories from the video using OpenPose pose estimation.
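As a concrete sketch of this extraction step: OpenPose's JSON writer emits, per frame, a flat `pose_keypoints_2d` list of (x, y, confidence) triples for the 18 joints of its COCO body model. The minimal Python helper below stacks such per-frame lists into per-joint trajectories; the in-memory `frames` list is a hypothetical stand-in for the parsed JSON files.

```python
import numpy as np

def joint_trajectories(frames):
    """frames: list of flat [x0, y0, c0, x1, y1, c1, ...] keypoint lists,
    one per video frame, as found in OpenPose's 'pose_keypoints_2d' field
    for a single tracked person. Returns (T, 18, 2) joint positions and
    (T, 18) detection confidences."""
    kp = np.asarray(frames, dtype=float).reshape(len(frames), 18, 3)
    return kp[..., :2], kp[..., 2]
```

The (T, 18, 2) array is the motion-trajectory representation assumed by the feature-extraction steps that follow.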
Extracting the local motion patterns of body parts from the human joint-point motion trajectories and building the behaviour classifier that discriminates among the motion patterns comprises:
extracting the local motion patterns of body parts from the human joint-point motion trajectories: given the motion trajectory associated with a motion pattern over a video of a given number of frames, an image block of a preset size is extracted in every frame, centred on the joint's position in that frame, yielding along the time axis a spatio-temporal cube of the preset size times the number of frames; each image block in the cube is Gaussian-smoothed in the spatial domain to remove the influence of noise; the central-moment feature of each pixel of the cube along the time axis is computed, and the central-moment features of all pixels are concatenated to obtain the local motion feature vector of the joint; the central-moment feature vectors of all joints are concatenated to obtain the motion feature descriptor of the video; and, using bag-of-words feature coding and one-vs-rest support vector classification from computer vision, one behaviour classifier is trained for each behaviour category in the behaviour category set, thereby building the behaviour classification that discriminates among the motion patterns.
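The central-moment computation over a spatio-temporal cube can be sketched as follows; this is a minimal illustration assuming grayscale patches and moment orders 2–4 (the patent does not fix these parameters):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cube_central_moments(cube, orders=(2, 3, 4), sigma=1.0):
    """cube: (T, n, n) stack of image blocks centred on one joint.
    Gaussian-smooth each frame spatially, then take central moments
    of every pixel's intensity profile along the time axis and
    concatenate them into one local motion feature vector."""
    smoothed = np.stack([gaussian_filter(frame, sigma) for frame in cube])
    mean = smoothed.mean(axis=0)
    feats = [((smoothed - mean) ** k).mean(axis=0).ravel() for k in orders]
    return np.concatenate(feats)
```

Concatenating the output of this function over all joints gives the video's motion feature descriptor described above.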
Building the joint-displacement-association posture feature representation and quality evaluation model based on human joint points, according to the built behaviour classifier, comprises:
according to the built behaviour classifier, computing the posture feature representation that encodes associations among human joint-point displacements, and training a quality evaluation model based on support vector regression to estimate a score for the extracted human posture features, thereby building the joint-displacement-association posture feature representation and quality evaluation model based on human joint points.
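A toy sketch of the support-vector-regression scoring model (the features and scores here are synthetic; in the patent's setting the targets would be expert-annotated quality scores):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 6))   # posture feature vectors of 40 training videos
w = rng.normal(size=6)
y = X @ w                      # stand-in for expert quality scores

model = SVR(kernel='linear', C=10.0).fit(X, y)  # quality evaluation model
scores = model.predict(X)                       # estimated quality scores
```

One such regressor would be trained per behaviour category, as the later embodiment describes.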
Scoring the quality of the human motion in the video captured by the camera, according to the built posture feature representation and quality evaluation model, comprises:
performing feature extraction and representation on the video captured by the camera; denoting the resulting motion feature for classification as the first classification motion feature and the motion feature for quality evaluation as the first quality-evaluation motion feature; inputting the first classification motion feature into the built behaviour classifier and outputting its category label; selecting the quality evaluation function of the corresponding category for estimation; choosing the highest-scoring video of that category and recovering, by dictionary mapping from its joint-displacement-association feature vector, the optimal joint-point motion vector of the category; computing the difference between the joint positions in the current video and the joint positions on the optimal motion trajectory to obtain joint-error position feedback; and thereby producing a quality score and error feedback for the human motion in the video captured by the camera.
According to another aspect of the present invention, a motion quality evaluation system is provided, comprising:
a behaviour-classification establishing unit, a quality evaluation model and a quality scoring unit;
the establishing unit is configured to extract the local motion patterns of body parts from the human joint-point motion trajectories and to build the behaviour classifier that discriminates among the motion patterns;
the quality evaluation model is configured to build, according to the built behaviour classifier, the joint-displacement-association posture feature representation and quality evaluation model based on human joint points;
the quality scoring unit is configured to score the quality of the human motion in the video captured by the camera according to the built posture feature representation and quality evaluation model.
The motion quality evaluation system may further include:
a motion-trajectory extraction unit, configured to extract the human joint-point motion trajectories from the video using OpenPose pose estimation.
The establishing unit is specifically configured to:
extract the local motion patterns of body parts from the human joint-point motion trajectories: given the motion trajectory associated with a motion pattern over a video of a given number of frames, an image block of a preset size is extracted in every frame, centred on the joint's position in that frame, yielding along the time axis a spatio-temporal cube of the preset size times the number of frames; each image block in the cube is Gaussian-smoothed in the spatial domain to remove the influence of noise; the central-moment feature of each pixel of the cube along the time axis is computed, and the central-moment features of all pixels are concatenated to obtain the local motion feature vector of the joint; the central-moment feature vectors of all joints are concatenated to obtain the motion feature descriptor of the video; and, using bag-of-words feature coding and one-vs-rest support vector classification from computer vision, one behaviour classifier is trained for each behaviour category in the behaviour category set, thereby building the behaviour classification that discriminates among the motion patterns.
The quality evaluation model is specifically configured to:
compute, according to the built behaviour classifier, the posture feature representation that encodes associations among human joint-point displacements, and train a quality evaluation model based on support vector regression to estimate a score for the extracted human posture features, thereby building the joint-displacement-association posture feature representation and quality evaluation model based on human joint points.
The quality scoring unit is specifically configured to:
perform feature extraction and representation on the video captured by the camera; denote the resulting motion feature for classification as the first classification motion feature and the motion feature for quality evaluation as the first quality-evaluation motion feature; input the first classification motion feature into the built behaviour classifier and output its category label; select the quality evaluation function of the corresponding category for estimation; choose the highest-scoring video of that category and recover, by dictionary mapping from its joint-displacement-association feature vector, the optimal joint-point motion vector of the category; compute the difference between the joint positions in the current video and the joint positions on the optimal motion trajectory to obtain joint-error position feedback; and thereby produce a quality score and error feedback for the human motion in the video captured by the camera.
It can be seen that the above scheme can score the quality of the human motion in camera-captured video according to the built joint-displacement-association posture feature representation and quality evaluation model, so that human motion data need not be annotated manually and accurate feedback on human motion quality can be provided.
Further, the above scheme can compute the difference between the joint positions in the current video and those on the optimal motion trajectory to obtain joint-error position feedback, feeding back the erroneous joint positions of the human motion along with accurate quality evaluation information.
Further, the above scheme proposes a local motion feature description based on image central moments, which copes with the differences in scale, appearance and complex background encountered in behaviour classification.
Further, the above scheme proposes joint-displacement association features, which solve the problem that quality evaluation based merely on joint-position differences is affected by viewpoint changes and therefore yields inaccurate estimates with large errors.
Further, the above scheme can extract the human joint-point motion trajectories from the video using OpenPose pose estimation.
Brief description of the drawings
Fig. 1 is a flow diagram of one embodiment of the motion quality evaluation method of the present invention;
Fig. 2 is a flow diagram of another embodiment of the motion quality evaluation method of the present invention;
Fig. 3 is a structural diagram of one embodiment of the motion quality evaluation system of the present invention;
Fig. 4 is a structural diagram of another embodiment of the motion quality evaluation system of the present invention;
Fig. 5 is a structural diagram of a further embodiment of the motion quality evaluation system of the present invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It is emphasised that the following embodiments merely illustrate the present invention and do not limit its scope. Likewise, the following embodiments are only some, not all, of the embodiments of the invention; all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
The present invention provides a motion quality evaluation method that does not require manual annotation of human motion data and can feed back accurate information on human motion quality.
Referring to Fig. 1, Fig. 1 is a flow diagram of one embodiment of the motion quality evaluation method of the present invention. It should be noted that, provided substantially the same results are obtained, the method of the invention is not limited to the process sequence shown in Fig. 1. As shown in Fig. 1, the method includes the following steps:
S101: extracting local motion patterns of body parts from human joint-point motion trajectories, and building a behaviour classifier that discriminates among the motion patterns.
Before extracting the local motion patterns of body parts from the human joint-point motion trajectories and building the behaviour classifier, the method may further include:
extracting the human joint-point motion trajectories from the video using OpenPose pose estimation.
Extracting the local motion patterns of body parts from the human joint-point motion trajectories and building the behaviour classifier that discriminates among the motion patterns may include:
extracting the local motion patterns of body parts from the human joint-point motion trajectories: let T be the number of frames of the video associated with a motion trajectory; in every frame, an image block of a preset size, e.g. n × n, is extracted centred on the joint's position in that frame, so that along the time axis a spatio-temporal cube of size n × n × T is obtained; each image block in the cube is Gaussian-smoothed in the spatial domain to remove the influence of noise; the central-moment feature of each pixel of the cube along the time axis is computed, and the central-moment features of all pixels are concatenated to obtain the local motion feature vector of the joint; the central-moment feature vectors of all joints are concatenated to obtain the motion feature descriptor of the video; and, using BOW (bag-of-words) feature coding and one-vs-rest support vector classification from computer vision, one behaviour classifier is trained for each behaviour category in the behaviour category set, thereby building the behaviour classification that discriminates among the motion patterns.
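The bag-of-words coding and one-vs-rest classification can be sketched with scikit-learn; the toy local features, cluster count and class structure below are illustrative, not taken from the patent:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def bow_encode(local_feats, codebook):
    """Quantise a video's local motion features against the codebook and
    return a normalised visual-word histogram."""
    words = codebook.predict(local_feats)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

rng = np.random.default_rng(0)
# six toy videos, 30 local features each, drawn around three class centres
videos = [rng.normal(c, 0.1, size=(30, 8)) for c in (0.0, 1.0, 2.0) * 2]
labels = [0, 1, 2, 0, 1, 2]
codebook = KMeans(n_clusters=4, n_init=10, random_state=0).fit(np.vstack(videos))
X = np.stack([bow_encode(v, codebook) for v in videos])
clf = LinearSVC().fit(X, labels)  # one-vs-rest by default: one classifier per category
```

The histogram plays the role of the video's classification feature, and `LinearSVC` trains one binary discriminant per behaviour category.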
S102: according to the built behaviour classifier, building a joint-displacement-association posture feature representation and a quality evaluation model based on human joint points.
This step may include:
according to the built behaviour classifier, computing the joint-displacement-association posture feature representation, and training a quality evaluation model based on SVM (support vector machine) regression to estimate a score for the extracted human posture features, thereby building the joint-displacement-association posture feature representation and quality evaluation model based on human joint points.
Computing the representation and training the model may in turn include:
using OpenPose pose estimation to output the human joint-point motion trajectories in the video; taking the centre of the human hip as the reference point, computing the positional offset of every other joint relative to the reference point and normalising by the body scale to obtain the motion vector of each joint; then encoding the joint motion vectors with a sparse representation over an over-complete dictionary: a discrete cosine transform is first applied to the feature vector of preset dimension, a preset number of low-frequency transform coefficients are kept as the representation, and the frequency-domain joint-displacement vectors are concatenated; an over-complete dictionary is then learned from the set of joint motion vectors, and the sparse-representation coefficients of the feature vectors over this dictionary describe the displacement associations between joints, thereby building the joint-displacement-association posture feature representation and quality evaluation model based on human joint points.
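A condensed sketch of this feature pipeline (hip-relative offsets, temporal DCT truncation, then sparse coding over a learned dictionary); the joint count, coefficient count, hip index and dictionary size here are illustrative assumptions:

```python
import numpy as np
from scipy.fft import dct
from sklearn.decomposition import DictionaryLearning

def displacement_feature(traj, hip_idx=8, body_scale=1.0, n_low=8):
    """traj: (T, J, 2) joint trajectories. Take offsets from the hip
    reference joint, normalise by body scale, DCT along time, keep the
    n_low low-frequency coefficients per joint/axis and concatenate."""
    disp = (traj - traj[:, hip_idx:hip_idx + 1, :]) / body_scale
    low = dct(disp, axis=0, norm='ortho')[:n_low]
    return low.transpose(1, 2, 0).reshape(-1)

rng = np.random.default_rng(2)
feats = np.stack([displacement_feature(rng.normal(size=(50, 14, 2)))
                  for _ in range(20)])
dico = DictionaryLearning(n_components=12, alpha=1.0, max_iter=50,
                          random_state=0).fit(feats)
codes = dico.transform(feats)  # sparse displacement-association descriptors
```

The sparse codes, rather than raw joint positions, serve as the posture features fed to the regressor, which is what gives the representation some robustness to viewpoint change.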
S103: scoring the quality of the human motion in the video captured by the camera according to the built posture feature representation and quality evaluation model.
This step may include:
performing feature extraction and representation on the video captured by the camera; denoting the resulting motion feature for classification as the first classification motion feature and the motion feature for quality evaluation as the first quality-evaluation motion feature; inputting the first classification motion feature into the built behaviour classifier and outputting its category label; selecting the quality evaluation function of the corresponding category for estimation; choosing the highest-scoring video of that category and recovering, by dictionary mapping from its joint-displacement-association feature vector, the optimal joint-point motion vector of the category; computing the difference between the joint positions in the current video and the joint positions on the optimal motion trajectory to obtain joint-error position feedback; and thereby producing a quality score and error feedback for the human motion in the video captured by the camera.
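The joint-error feedback step reduces to a per-joint distance between the test trajectory and the recovered optimal trajectory; a minimal sketch, where the error threshold is an assumed tuning parameter:

```python
import numpy as np

def joint_error_feedback(test_traj, best_traj, threshold=0.05):
    """Both trajectories are (T, J, 2), time-aligned and scale-normalised.
    Returns the per-joint mean positional error and the indices of joints
    whose error exceeds the threshold (the positions to correct)."""
    err = np.linalg.norm(test_traj - best_traj, axis=2).mean(axis=0)
    return err, np.flatnonzero(err > threshold)
```

The flagged joint indices are what a visualisation layer would highlight for the athlete.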
In this embodiment, regression models are trained for the different behaviour categories, and the category label output by the classifier guides the corresponding regressor to perform quality evaluation and output the feedback result. Behaviour classification uses local motion features based on image central moments together with BOW feature coding, from which a posture classification model is built. For quality evaluation, displacement-relation vectors are extracted from the joint motion trajectories to describe the kinematic characteristics of the joints; a sparse representation method with over-complete dictionary learning provides the codebook representation and the sparse codes of the different behaviour categories, and finally an SVM regression model for scoring motion quality is trained. The classifier's output guides the selection of the regressor used to evaluate the motion quality of a test sample. Furthermore, by computing the difference between high-quality motion-pattern features and the test sample, effective feedback is provided for the test sample; the method can further extend motion quality evaluation to a weakly supervised learning problem, addressing the collection of training data that lacks expert annotation.
In this embodiment, the proposed human motion quality evaluation approach comprises pose-estimation-based feature extraction, behaviour classification, quality evaluation and feedback.
In this embodiment, the training process may include: first performing pose estimation on the collected labelled videos to obtain the human joint-point motion trajectories; for the two different tasks of behaviour classification and quality evaluation, extracting from the joint motion trajectories local motion features for classification and joint-displacement-relation features for quality evaluation, respectively; behaviour recognition estimates the behaviour class label of a sample with a maximum-margin SVM classifier; quality evaluation estimates a quality score for a video feature vector by linear regression; and the feedback model builds high-quality motion-feature templates for each category and computes, from the difference between a test sample and the corresponding class template, the joint positions that need adjustment.
In this embodiment, the trained models are used to evaluate human motion data in real scenes: classification features and evaluation features are extracted from the camera-captured human motion video, the trained behaviour classifier determines the behaviour class label of the video, the evaluation function of the corresponding category, guided by that label, outputs the motion-quality score of the video, and the feedback function computes the joint-position differences and visualises the result.
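The "label guides the regressor" dispatch can be sketched generically; the model objects here are hypothetical stand-ins exposing a scikit-learn-style `predict`:

```python
import numpy as np

def score_video(cls_feat, eval_feat, classifier, regressors):
    """Classify the video's classification feature, then route its
    evaluation feature to the per-class quality regressor selected by
    the predicted behaviour label."""
    label = classifier.predict(cls_feat.reshape(1, -1))[0]
    score = regressors[label].predict(eval_feat.reshape(1, -1))[0]
    return label, score
```

Keeping one regressor per category is what avoids the single-evaluation-function approximation error criticised in the background section.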
In this embodiment, the human joint-point motion trajectories can be extracted from the video using OpenPose pose estimation, which may use a human body model of 18 joints, numbered for example: 0 nose, 1 neck, 2 right shoulder, 3 right elbow, 4 right hand, 5 left shoulder, 6 left elbow, 7 left hand, 8 right hip, 9 right knee, 10 right foot, 11 left hip, 12 left knee, 13 left foot, 14 right eye, 15 left eye, 16 right ear, 17 left ear. Motion quality evaluation depends mainly on the positional changes of the limb joints over time, so, for example, the motion trajectories of joints 1 to 13 can be selected for analysis.
It can be seen that, in this embodiment, the quality of the human motion in camera-captured video can be scored according to the built joint-displacement-association posture feature representation and quality evaluation model, so that human motion data need not be annotated manually and accurate feedback on human motion quality can be provided.
Further, in this embodiment, the difference between the joint positions in the current video and those on the optimal motion trajectory can be computed to obtain joint-error position feedback, feeding back the erroneous joint positions of the human motion along with accurate quality evaluation information.
Further, this embodiment proposes a local motion feature description based on image central moments, which copes with the differences in scale, appearance and complex background encountered in behaviour classification.
Further, this embodiment proposes joint-displacement association features, which solve the problem that quality evaluation based merely on joint-position differences is affected by viewpoint changes and therefore yields inaccurate estimates with large errors.
Referring to Fig. 2, Fig. 2 is a flow diagram of another embodiment of the motion quality evaluation method of the present invention. In this embodiment, the method includes the following steps:
S201: extracting the human joint-point motion trajectories from the video using OpenPose pose estimation.
S202: extracting local motion patterns of body parts from the human joint-point motion trajectories, and building a behaviour classifier that discriminates among the motion patterns.
This may be as described in S101 above and is not repeated here.
S203: according to the built behaviour classifier, building a joint-displacement-association posture feature representation and a quality evaluation model based on human joint points.
This may be as described in S102 above and is not repeated here.
S204: scoring the quality of the human motion in the video captured by the camera according to the built posture feature representation and quality evaluation model.
This may be as described in S103 above and is not repeated here.
It can be seen that, in this embodiment, the human joint-point motion trajectories can be extracted from the video using OpenPose pose estimation.
The present invention also provides a motion quality evaluation system, which requires no manual annotation of human motion data and can accurately feed back evaluation information on human motion quality.
Referring to Fig. 3, Fig. 3 is a structural diagram of one embodiment of the motion quality evaluation system of the present invention. In the present embodiment, the motion quality evaluation system includes a behavior classification establishing unit 31, a quality evaluation unit 32, and a quality scoring unit 33.
The establishing unit 31 is configured to extract the local motion patterns of the human body parts from the human joint-point motion trajectories, and to establish a behavior classification that discriminates the motion patterns.
The quality evaluation unit 32 is configured to establish, according to the established behavior classification that discriminates the motion patterns, a posture feature representation based on human joint-point displacement association and a quality evaluation model.
The quality scoring unit 33 is configured to perform quality scoring on the human motion in the video captured by the camera according to the established posture feature representation based on human joint-point displacement association and the quality evaluation model.
Optionally, the establishing unit 31 may be specifically configured to:
extract the local motion patterns of the human body parts from the human joint-point motion trajectories. For the motion trajectory associated with a motion pattern, let T denote the frame length of the video. An image block of a preset size, e.g., n × n pixels, is extracted centered on the joint point's position in each frame, so that along the time axis of the T-frame video a space-time cube of the preset size multiplied by the frame length, i.e., n × n × T, is obtained. Each image block of the space-time cube is Gaussian-smoothed in the spatial domain to remove the influence of noise. The central-moment feature of each super-pixel of the space-time cube along the time axis is then computed, and the central-moment features of all super-pixels of the cube are concatenated to obtain the local motion feature vector of that joint point. The central-moment feature vectors of the joint points are concatenated to obtain the motion feature descriptor of the video. Finally, using bag-of-words (BOW) feature coding from computer vision and one-versus-rest support vector machine classification, one behavior classifier is trained for each behavior category in the behavior category set, thereby establishing the behavior classification that discriminates the motion patterns.
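A minimal numpy sketch of the space-time-cube central-moment feature described above follows. This is an illustration, not the patented implementation: the kernel size, the choice of second and third temporal moments, and treating each pixel as a "super-pixel" are assumptions.

```python
# Sketch of the central-moment local motion feature over a space-time cube.
# Assumptions: 5-tap separable Gaussian smoothing, temporal central moments of
# orders 2 and 3, and one "super-pixel" per pixel location.
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    ax = np.arange(size) - size // 2
    k = np.exp(-ax ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def smooth_frame(frame, sigma=1.0):
    """Separable Gaussian blur of one n x n image block (spatial domain)."""
    k = gaussian_kernel(5, sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, frame)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out

def local_motion_feature(cube, orders=(2, 3)):
    """cube: (T, n, n) stack of image blocks centered on one joint point.
    Returns the temporal central moments of each pixel, concatenated."""
    cube = np.stack([smooth_frame(f) for f in cube])
    mean = cube.mean(axis=0)                      # temporal mean per pixel
    feats = [((cube - mean) ** k).mean(axis=0).ravel() for k in orders]
    return np.concatenate(feats)
```

Concatenating one such vector per joint point gives the video-level motion feature descriptor that the BOW coding and one-versus-rest SVMs would then consume.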
Optionally, the quality evaluation unit 32 may be specifically configured to:
according to the established behavior classification that discriminates the motion patterns, compute the posture feature representation based on human joint-point displacement association, and train a quality evaluation model based on support vector regression that estimates a score value from the extracted human posture features, thereby establishing the posture feature representation based on human joint-point displacement association and the quality evaluation model.
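The support-vector-regression training in this step might be set up as below. This is a sketch assuming scikit-learn's SVR stands in for the "support vector regression" of the text; the hyperparameter values are placeholders, and the feature matrix and expert scores are assumed to come from the training videos.

```python
# Sketch of the SVR-based quality evaluation model. Assumes scikit-learn's SVR;
# hyperparameter values are placeholders, not values from the patent.
import numpy as np
from sklearn.svm import SVR

def train_quality_model(features, scores):
    """features: (num_videos, dim) posture feature vectors;
    scores: expert-assigned quality scores, one per training video.
    Returns a fitted regressor mapping a feature vector to a score value."""
    return SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(features, scores)

def estimate_score(model, feature_vector):
    """Estimate the quality score of one video's posture feature vector."""
    return float(model.predict(np.asarray(feature_vector).reshape(1, -1))[0])
```

In use, one such regressor would be trained per behavior category, matching the per-category evaluation functions described later.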
Optionally, the quality evaluation unit 32 may further be specifically configured to:
output the human joint-point motion trajectories in the video using the OpenPose pose estimation method; take the center point of the human hip as the reference point, compute the positional offset of each other joint point relative to the reference point, and normalize it by the body size to obtain the joint-point motion vectors; and encode the joint-point motion vectors with a sparse representation based on an over-complete dictionary. Specifically, a discrete cosine transform is first applied to each feature vector of preset dimension, a preset number of low-frequency transform coefficients are retained as the representation, and the frequency-domain joint displacement vectors are concatenated. An over-complete dictionary is then learned from the set of joint-point motion vectors, and the sparse representation coefficients of the feature vectors on this dictionary are used to describe the displacement associations between joint points, thereby establishing the posture feature representation based on human joint-point displacement association and the quality evaluation model.
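The displacement-feature part of this step (hip-centered offsets, body-size normalization, DCT, low-frequency truncation) might look like the numpy sketch below. The orthonormal DCT-II variant and the number of retained coefficients are assumptions, and the subsequent over-complete-dictionary learning (e.g., via K-SVD or scikit-learn's DictionaryLearning) is omitted here.

```python
# Sketch of the joint-displacement association feature: offsets relative to
# the hip center, normalized by body size, DCT-transformed, low frequencies
# kept. The DCT variant (orthonormal DCT-II) and n_coeff are assumptions.
import numpy as np

def dct2(x):
    """Orthonormal DCT-II of a 1-D signal, implemented directly."""
    n = len(x)
    idx = np.arange(n)
    basis = np.cos(np.pi * (idx[None, :] + 0.5) * idx[:, None] / n)
    c = basis @ x
    c[0] *= np.sqrt(1.0 / n)
    c[1:] *= np.sqrt(2.0 / n)
    return c

def displacement_feature(joint_traj, hip_traj, body_scale, n_coeff=8):
    """joint_traj, hip_traj: (T, 2) positions of one joint and of the hip
    center; body_scale: size used for normalization. Returns the n_coeff
    low-frequency DCT coefficients of each displacement component."""
    disp = (np.asarray(joint_traj) - np.asarray(hip_traj)) / body_scale
    return np.concatenate([dct2(disp[:, d])[:n_coeff] for d in (0, 1)])
```

Keeping only low-frequency coefficients discards jitter in the trajectory while retaining the overall displacement pattern, which is what the dictionary then encodes sparsely.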
Optionally, the quality scoring unit 33 may be specifically configured to:
according to the established posture feature representation based on human joint-point displacement association and the quality evaluation model, perform feature extraction and representation on the video captured by the camera. Suppose the resulting motion feature for classification is the first classification motion feature, and the motion feature for quality evaluation is the first quality evaluation motion feature. The first classification motion feature is input into the established behavior classification that discriminates the motion patterns, which outputs the class label of the first classification motion feature. The quality evaluation function of the corresponding class is then selected for estimation, and the highest-scoring video of that class is chosen. According to the joint-point displacement association feature vector, the optimal joint-point motion vectors of that class are recovered by dictionary mapping, and the difference between the joint-point positions in the current video and those of the optimal motion trajectory is computed to obtain joint-point error-position feedback, so that both a quality score and error feedback are provided for the human motion in the video captured by the camera.
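The error-feedback computation at the end of this step, comparing the current joint trajectories against the optimal ones recovered from the dictionary, can be sketched as below; the mean-Euclidean error measure and the threshold value are illustrative assumptions.

```python
# Sketch of joint error-position feedback: the mean Euclidean distance between
# each joint's trajectory in the current video and the optimal trajectory of
# its class. The error measure and threshold are illustrative assumptions.
import numpy as np

def joint_error_feedback(current, optimal, threshold=0.1):
    """current, optimal: dicts joint_index -> (T, 2) normalized positions.
    Returns (per-joint mean error, sorted list of joints whose error exceeds
    the threshold, i.e. the positions to feed back as erroneous)."""
    errors = {}
    for j in current:
        diff = np.asarray(current[j]) - np.asarray(optimal[j])
        errors[j] = float(np.linalg.norm(diff, axis=1).mean())
    flagged = sorted(j for j, e in errors.items() if e > threshold)
    return errors, flagged
```

The flagged joint indices correspond directly to the erroneous joint-point positions that the method feeds back alongside the quality score.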
Referring to Fig. 4, Fig. 4 is a structural diagram of another embodiment of the motion quality evaluation system of the present invention. Unlike the previous embodiment, the motion quality evaluation system 40 of the present embodiment further includes a motion trajectory extraction unit 41.
The motion trajectory extraction unit 41 extracts the human joint-point motion trajectories from video using the OpenPose pose estimation method.
Each unit module of the motion quality evaluation system 30/40 can execute the corresponding steps of the above method embodiments; the unit modules are therefore not described again here, and reference is made to the explanation of the corresponding steps above.
Referring to Fig. 5, Fig. 5 is a structural diagram of yet another embodiment of the motion quality evaluation system of the present invention. Each unit module of this motion quality evaluation system can execute the corresponding steps of the above method embodiments; for details, refer to the description of the method above, which is not repeated here.
In the present embodiment, the motion quality evaluation system includes a processor 51, a memory 52 coupled to the processor 51, a modeler 53, and a scorer 54.
The processor 51 is configured to extract human joint-point motion trajectories from video using the OpenPose pose estimation method.
The processor 51 is also configured to extract the local motion patterns of the human body parts from the human joint-point motion trajectories, and to establish a behavior classification that discriminates the motion patterns.
The memory 52 is configured to store the operating system, the instructions executed by the processor 51, and the like.
The modeler 53 is configured to establish, according to the established behavior classification that discriminates the motion patterns, a posture feature representation based on human joint-point displacement association and a quality evaluation model.
The scorer 54 is configured to perform quality scoring on the human motion in the video captured by the camera according to the established posture feature representation based on human joint-point displacement association and the quality evaluation model.
Optionally, the processor 51 may be specifically configured to:
extract the local motion patterns of the human body parts from the human joint-point motion trajectories. For the motion trajectory associated with a motion pattern, let T denote the frame length of the video. An image block of a preset size, e.g., n × n pixels, is extracted centered on the joint point's position in each frame, so that along the time axis of the T-frame video a space-time cube of the preset size multiplied by the frame length, i.e., n × n × T, is obtained. Each image block of the space-time cube is Gaussian-smoothed in the spatial domain to remove the influence of noise. The central-moment feature of each super-pixel of the space-time cube along the time axis is then computed, and the central-moment features of all super-pixels of the cube are concatenated to obtain the local motion feature vector of that joint point. The central-moment feature vectors of the joint points are concatenated to obtain the motion feature descriptor of the video. Finally, using bag-of-words (BOW) feature coding from computer vision and one-versus-rest support vector machine classification, one behavior classifier is trained for each behavior category in the behavior category set, thereby establishing the behavior classification that discriminates the motion patterns.
Optionally, the modeler 53 may be specifically configured to:
according to the established behavior classification that discriminates the motion patterns, compute the posture feature representation based on human joint-point displacement association, and train a quality evaluation model based on support vector regression that estimates a score value from the extracted human posture features, thereby establishing the posture feature representation based on human joint-point displacement association and the quality evaluation model.
Optionally, the modeler 53 may further be specifically configured to:
output the human joint-point motion trajectories in the video using the OpenPose pose estimation method; take the center point of the human hip as the reference point, compute the positional offset of each other joint point relative to the reference point, and normalize it by the body size to obtain the joint-point motion vectors; and encode the joint-point motion vectors with a sparse representation based on an over-complete dictionary. Specifically, a discrete cosine transform is first applied to each feature vector of preset dimension, a preset number of low-frequency transform coefficients are retained as the representation, and the frequency-domain joint displacement vectors are concatenated. An over-complete dictionary is then learned from the set of joint-point motion vectors, and the sparse representation coefficients of the feature vectors on this dictionary are used to describe the displacement associations between joint points, thereby establishing the posture feature representation based on human joint-point displacement association and the quality evaluation model.
Optionally, the scorer 54 may be specifically configured to:
according to the established posture feature representation based on human joint-point displacement association and the quality evaluation model, perform feature extraction and representation on the video captured by the camera. Suppose the resulting motion feature for classification is the first classification motion feature, and the motion feature for quality evaluation is the first quality evaluation motion feature. The first classification motion feature is input into the established behavior classification that discriminates the motion patterns, which outputs the class label of the first classification motion feature. The quality evaluation function of the corresponding class is then selected for estimation, and the highest-scoring video of that class is chosen. According to the joint-point displacement association feature vector, the optimal joint-point motion vectors of that class are recovered by dictionary mapping, and the difference between the joint-point positions in the current video and those of the optimal motion trajectory is computed to obtain joint-point error-position feedback, so that both a quality score and error feedback are provided for the human motion in the video captured by the camera.
It can be seen that, with the above scheme, quality scoring of the human motion in the video captured by the camera can be performed according to the established posture feature representation based on human joint-point displacement association and the quality evaluation model; manual annotation of human motion data is not needed, and evaluation information on human motion quality can be fed back accurately.
Further, the above scheme can compute the difference between the joint-point positions in the current video and those of the optimal motion trajectory to obtain joint-point error-position feedback, so that the erroneous joint-point positions of the human motion are fed back while the evaluation information on human motion quality is accurately fed back.
Further, the above scheme proposes a local motion feature description based on image central moments, which copes with the differences in scale, appearance and complex background that arise in behavior classification.
Further, the above scheme proposes a joint-point displacement association feature, which avoids the inaccurate estimation and large errors caused by viewpoint changes when quality evaluation relies on joint-point position differences alone.
Further, the above scheme can use the OpenPose pose estimation method to extract human joint-point motion trajectories directly from video.
In the several embodiments provided by the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the present embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing is merely a selection of embodiments of the present invention and is not intended to limit the scope of protection of the present invention. Any equivalent device or equivalent process transformation made using the contents of the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

1. A motion quality evaluation method, comprising:
extracting local motion patterns of human body parts from human joint-point motion trajectories, and establishing a behavior classification that discriminates the motion patterns;
according to the established behavior classification that discriminates the motion patterns, establishing a posture feature representation based on human joint-point displacement association and a quality evaluation model; and
according to the established posture feature representation based on human joint-point displacement association and the quality evaluation model, performing quality scoring on human motion in video captured by a camera.
2. The motion quality evaluation method according to claim 1, wherein before the extracting of the local motion patterns of the human body parts from the human joint-point motion trajectories and the establishing of the behavior classification that discriminates the motion patterns, the method further comprises:
extracting the human joint-point motion trajectories from video using the OpenPose pose estimation method.
3. The motion quality evaluation method according to claim 1 or 2, wherein the extracting of the local motion patterns of the human body parts from the human joint-point motion trajectories and the establishing of the behavior classification that discriminates the motion patterns comprise:
extracting the local motion patterns of the human body parts from the human joint-point motion trajectories; for the motion trajectory associated with a motion pattern, with the frame length of the video as T, extracting an image block of a preset size centered on the joint point's position in each frame, so that along the time axis of the video a space-time cube of the preset size multiplied by the frame length of the video is obtained; performing Gaussian smoothing on each image block of the space-time cube in the spatial domain to remove the influence of noise; computing the central-moment feature of each super-pixel of the space-time cube along the time axis, and concatenating the central-moment features of all super-pixels of the space-time cube to obtain the local motion feature vector of the joint point; concatenating the central-moment feature vectors of the joint points to obtain the motion feature descriptor of the video; and, using bag-of-words feature coding from computer vision and one-versus-rest support vector machine classification, training one behavior classifier for each behavior category in the behavior category set, thereby establishing the behavior classification that discriminates the motion patterns.
4. The motion quality evaluation method according to claim 1 or 2, wherein the establishing, according to the established behavior classification that discriminates the motion patterns, of the posture feature representation based on human joint-point displacement association and the quality evaluation model comprises:
according to the established behavior classification that discriminates the motion patterns, computing the posture feature representation based on human joint-point displacement association, and training a quality evaluation model based on support vector regression that estimates a score value from the extracted human posture features, thereby establishing the posture feature representation based on human joint-point displacement association and the quality evaluation model.
5. The motion quality evaluation method according to claim 1 or 2, wherein the performing, according to the established posture feature representation based on human joint-point displacement association and the quality evaluation model, of quality scoring on the human motion in the video captured by the camera comprises:
according to the established posture feature representation based on human joint-point displacement association and the quality evaluation model, performing feature extraction and representation on the video captured by the camera; supposing the resulting motion feature for classification is a first classification motion feature and the motion feature for quality evaluation is a first quality evaluation motion feature, inputting the first classification motion feature into the established behavior classification that discriminates the motion patterns, and outputting the class label of the first classification motion feature; selecting the quality evaluation function of the corresponding class for estimation, and choosing the highest-scoring video of that class; according to the joint-point displacement association feature vector, recovering the optimal joint-point motion vectors of that class by dictionary mapping; and computing the difference between the joint-point positions in the current video and those of the optimal motion trajectory to obtain joint-point error-position feedback, so as to perform quality scoring and error feedback on the human motion in the video captured by the camera.
6. A motion quality evaluation system, comprising:
a behavior classification establishing unit, a quality evaluation unit, and a quality scoring unit;
the behavior classification establishing unit being configured to extract local motion patterns of human body parts from human joint-point motion trajectories, and to establish a behavior classification that discriminates the motion patterns;
the quality evaluation unit being configured to establish, according to the established behavior classification that discriminates the motion patterns, a posture feature representation based on human joint-point displacement association and a quality evaluation model; and
the quality scoring unit being configured to perform quality scoring on human motion in video captured by a camera according to the established posture feature representation based on human joint-point displacement association and the quality evaluation model.
7. The motion quality evaluation system according to claim 6, further comprising:
a motion trajectory extraction unit configured to extract the human joint-point motion trajectories from video using the OpenPose pose estimation method.
8. The motion quality evaluation system according to claim 6 or 7, wherein the behavior classification establishing unit is specifically configured to:
extract the local motion patterns of the human body parts from the human joint-point motion trajectories; for the motion trajectory associated with a motion pattern, with the frame length of the video as T, extract an image block of a preset size centered on the joint point's position in each frame, so that along the time axis of the video a space-time cube of the preset size multiplied by the frame length of the video is obtained; perform Gaussian smoothing on each image block of the space-time cube in the spatial domain to remove the influence of noise; compute the central-moment feature of each super-pixel of the space-time cube along the time axis, and concatenate the central-moment features of all super-pixels of the space-time cube to obtain the local motion feature vector of the joint point; concatenate the central-moment feature vectors of the joint points to obtain the motion feature descriptor of the video; and, using bag-of-words feature coding from computer vision and one-versus-rest support vector machine classification, train one behavior classifier for each behavior category in the behavior category set, thereby establishing the behavior classification that discriminates the motion patterns.
9. The motion quality evaluation system according to claim 6 or 7, wherein the quality evaluation unit is specifically configured to:
according to the established behavior classification that discriminates the motion patterns, compute the posture feature representation based on human joint-point displacement association, and train a quality evaluation model based on support vector regression that estimates a score value from the extracted human posture features, thereby establishing the posture feature representation based on human joint-point displacement association and the quality evaluation model.
10. The motion quality evaluation system according to claim 6 or 7, wherein the quality scoring unit is specifically configured to:
according to the established posture feature representation based on human joint-point displacement association and the quality evaluation model, perform feature extraction and representation on the video captured by the camera; supposing the resulting motion feature for classification is a first classification motion feature and the motion feature for quality evaluation is a first quality evaluation motion feature, input the first classification motion feature into the established behavior classification that discriminates the motion patterns, and output the class label of the first classification motion feature; select the quality evaluation function of the corresponding class for estimation, and choose the highest-scoring video of that class; according to the joint-point displacement association feature vector, recover the optimal joint-point motion vectors of that class by dictionary mapping; and compute the difference between the joint-point positions in the current video and those of the optimal motion trajectory to obtain joint-point error-position feedback, so as to perform quality scoring and error feedback on the human motion in the video captured by the camera.
CN201810909854.XA 2018-08-10 2018-08-10 Motion quality evaluation method and system Active CN109344692B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810909854.XA CN109344692B (en) 2018-08-10 2018-08-10 Motion quality evaluation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810909854.XA CN109344692B (en) 2018-08-10 2018-08-10 Motion quality evaluation method and system

Publications (2)

Publication Number Publication Date
CN109344692A true CN109344692A (en) 2019-02-15
CN109344692B CN109344692B (en) 2020-10-30

Family

ID=65291434

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810909854.XA Active CN109344692B (en) 2018-08-10 2018-08-10 Motion quality evaluation method and system

Country Status (1)

Country Link
CN (1) CN109344692B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110298279A (en) * 2019-06-20 2019-10-01 暨南大学 A kind of limb rehabilitation training householder method and system, medium, equipment
CN110414453A (en) * 2019-07-31 2019-11-05 电子科技大学成都学院 Human body action state monitoring method under a kind of multiple perspective based on machine vision
CN110705418A (en) * 2019-09-25 2020-01-17 西南大学 Taekwondo kicking motion video capture and scoring system based on deep LabCut
CN111507301A (en) * 2020-04-26 2020-08-07 腾讯科技(深圳)有限公司 Video processing method, video processing device, computer equipment and storage medium
CN111860157A (en) * 2020-06-15 2020-10-30 北京体育大学 Motion analysis method, device, equipment and storage medium
CN112329571A (en) * 2020-10-27 2021-02-05 同济大学 Self-adaptive human body posture optimization method based on posture quality evaluation
CN113221627A (en) * 2021-03-08 2021-08-06 广州大学 Method, system, device and medium for constructing human face genetic feature classification data set
WO2021203667A1 (en) * 2020-04-06 2021-10-14 Huawei Technologies Co., Ltd. Method, system and medium for identifying human behavior in a digital video using convolutional neural networks
CN113611387A (en) * 2021-07-30 2021-11-05 清华大学深圳国际研究生院 Motion quality assessment method based on human body pose estimation and terminal equipment
CN115019240A (en) * 2022-08-04 2022-09-06 成都西交智汇大数据科技有限公司 Grading method, device and equipment for chemical experiment operation and readable storage medium
CN115393964A (en) * 2022-10-26 2022-11-25 天津科技大学 Body-building action recognition method and device based on BlazePose
CN116453693A (en) * 2023-04-20 2023-07-18 深圳前海运动保网络科技有限公司 Exercise risk protection method and device based on artificial intelligence and computing equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622603A (en) * 2011-01-31 2012-08-01 索尼公司 Method and apparatus for evaluating human pose recognition technology
US20120219186A1 (en) * 2011-02-28 2012-08-30 Jinjun Wang Continuous Linear Dynamic Systems
CN102693413A (en) * 2011-02-18 2012-09-26 微软公司 Motion recognition
CN105825240A (en) * 2016-04-07 2016-08-03 浙江工业大学 Behavior identification method based on AP cluster bag of words modeling
CN105957103A (en) * 2016-04-20 2016-09-21 国网福建省电力有限公司 Vision-based motion feature extraction method
CN106446847A (en) * 2016-09-30 2017-02-22 深圳市唯特视科技有限公司 Human body movement analysis method based on video data
CN108256433A (en) * 2017-12-22 2018-07-06 银河水滴科技(北京)有限公司 A kind of athletic posture appraisal procedure and system


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Du Jixiang, Guo Yilan, Zhai Chuanmin: "Event recognition based on bags of local spatio-temporal interest point features", Journal of Nanjing University (Natural Science) *
Lei Qing, Li Shaozi, Chen Duansheng: "A human action classification method for images combining pose and scene", Journal of Chinese Computer Systems *
Lei Qing, Chen Duansheng, Li Shaozi: "Recent advances in human action recognition in complex scenes", Computer Science *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110298279A (en) * 2019-06-20 2019-10-01 暨南大学 A kind of limb rehabilitation training assistance method and system, medium, equipment
CN110414453A (en) * 2019-07-31 2019-11-05 电子科技大学成都学院 A machine-vision-based multi-view human body action state monitoring method
CN110705418B (en) * 2019-09-25 2021-11-30 西南大学 Taekwondo kicking motion video capture and scoring system based on deep LabCut
CN110705418A (en) * 2019-09-25 2020-01-17 西南大学 Taekwondo kicking motion video capture and scoring system based on deep LabCut
WO2021203667A1 (en) * 2020-04-06 2021-10-14 Huawei Technologies Co., Ltd. Method, system and medium for identifying human behavior in a digital video using convolutional neural networks
US11625646B2 (en) 2020-04-06 2023-04-11 Huawei Cloud Computing Technologies Co., Ltd. Method, system, and medium for identifying human behavior in a digital video using convolutional neural networks
CN111507301A (en) * 2020-04-26 2020-08-07 腾讯科技(深圳)有限公司 Video processing method, video processing device, computer equipment and storage medium
CN111507301B (en) * 2020-04-26 2021-06-08 腾讯科技(深圳)有限公司 Video processing method, video processing device, computer equipment and storage medium
CN111860157A (en) * 2020-06-15 2020-10-30 北京体育大学 Motion analysis method, device, equipment and storage medium
CN111860157B (en) * 2020-06-15 2023-12-26 北京体育大学 Motion analysis method, device, equipment and storage medium
CN112329571A (en) * 2020-10-27 2021-02-05 同济大学 Self-adaptive human body posture optimization method based on posture quality evaluation
CN112329571B (en) * 2020-10-27 2022-12-16 同济大学 Self-adaptive human body posture optimization method based on posture quality evaluation
CN113221627B (en) * 2021-03-08 2022-05-10 广州大学 Method, system, device and medium for constructing face genetic feature classification data set
CN113221627A (en) * 2021-03-08 2021-08-06 广州大学 Method, system, device and medium for constructing human face genetic feature classification data set
CN113611387A (en) * 2021-07-30 2021-11-05 清华大学深圳国际研究生院 Motion quality assessment method based on human body pose estimation and terminal equipment
CN115019240A (en) * 2022-08-04 2022-09-06 成都西交智汇大数据科技有限公司 Grading method, device and equipment for chemical experiment operation and readable storage medium
CN115019240B (en) * 2022-08-04 2022-11-11 成都西交智汇大数据科技有限公司 Grading method, device and equipment for chemical experiment operation and readable storage medium
CN115393964A (en) * 2022-10-26 2022-11-25 天津科技大学 Body-building action recognition method and device based on BlazePose
CN116453693A (en) * 2023-04-20 2023-07-18 深圳前海运动保网络科技有限公司 Exercise risk protection method and device based on artificial intelligence and computing equipment
CN116453693B (en) * 2023-04-20 2023-11-14 深圳前海运动保网络科技有限公司 Exercise risk protection method and device based on artificial intelligence and computing equipment

Also Published As

Publication number Publication date
CN109344692B (en) 2020-10-30

Similar Documents

Publication Publication Date Title
CN109344692A (en) A kind of motion quality evaluation method and system
CN104850825B (en) A kind of facial image face value calculating method based on convolutional neural networks
CN108256433B (en) Motion attitude assessment method and system
Bhattacharya et al. Step: Spatial temporal graph convolutional networks for emotion perception from gaits
Presti et al. 3D skeleton-based human action classification: A survey
CN103942577B (en) Based on the personal identification method for establishing sample database and composite character certainly in video monitoring
CN104463100B (en) Intelligent wheel chair man-machine interactive system and method based on human facial expression recognition pattern
Jalal et al. Human daily activity recognition with joints plus body features representation using Kinect sensor
CN107808143A (en) Dynamic gesture identification method based on computer vision
CN104008564B (en) A kind of human face expression cloning process
CN109815826A (en) The generation method and device of face character model
CN103473801A (en) Facial expression editing method based on single camera and motion capturing data
CN105139004A (en) Face expression identification method based on video sequences
CN104517097A (en) Kinect-based moving human body posture recognition method
Avola et al. Deep temporal analysis for non-acted body affect recognition
CN104200203B (en) A kind of human action detection method based on action dictionary learning
CN104915658B (en) A kind of emotion component analyzing method and its system based on emotion Distributed learning
CN112183198A (en) Gesture recognition method for fusing body skeleton and head and hand part profiles
CN113255522B (en) Personalized motion attitude estimation and analysis method and system based on time consistency
CN111723687A (en) Human body action recognition method and device based on neural network
CN107103311A (en) A kind of recognition methods of continuous sign language and its device
Ravi et al. Sign language recognition with multi feature fusion and ANN classifier
Danelakis et al. A spatio-temporal wavelet-based descriptor for dynamic 3D facial expression retrieval and recognition
Bouchaffra Mapping Dynamic Bayesian Networks to $\alpha $-Shapes: Application to Human Faces Identification Across Ages
Hachaj et al. Human actions recognition on multimedia hardware using angle-based and coordinate-based features and multivariate continuous hidden Markov model classifier

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant