CN108509924A - Scoring method and device for human body posture - Google Patents
Scoring method and device for human body posture
- Publication number: CN108509924A (application CN201810301255.XA)
- Authority: CN (China)
- Prior art keywords: human body, body posture, data, audio, product
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
Abstract
This disclosure relates to the field of artificial intelligence. To solve the problems that existing human-computer interaction in virtual/augmented display technology cannot score each interactive session and that its scores are inaccurate, embodiments of the present disclosure provide a scoring method and device for human body posture. The device includes a receiving module for receiving human body posture data; a parsing and reading module for parsing and reading audio-video scoring parameters; and a scoring module for scoring the posture data according to the audio-video scoring parameters.
Description
Technical field
This disclosure relates to the field of artificial intelligence, and in particular to a scoring method and device for human body posture.
Background art
Virtual/augmented display technology and human-computer interaction are being combined ever more closely and used in more and more settings. However, most existing scoring follows fixed rules and judges only by the final result. It cannot reflect the details within each interactive session, and therefore cannot accurately and sufficiently express, through the score, the completion status and interaction effect of human body posture based on virtual/augmented display technology.
Summary of the invention
Embodiments of the present disclosure provide a scoring method and device for human body posture.
In a first aspect, an embodiment of the present disclosure provides a scoring method for human body posture, including the following steps: receiving human body posture data; parsing and reading audio-video scoring parameters; and scoring the human body posture data according to the audio-video scoring parameters; wherein the application program that parses and reads the audio-video scoring parameters uses virtual/augmented display technology.
In a second aspect, an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the above method.
In a third aspect, an embodiment of the present disclosure provides a computer device including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the program, it implements the steps of the above method.
In a fourth aspect, an embodiment of the present disclosure provides a scoring device for human body posture, including: a receiving module for receiving human body posture data; a parsing and reading module for parsing and reading audio-video scoring parameters; and a scoring module for scoring the human body posture data according to the audio-video scoring parameters; wherein the application program through which the parsing and reading module parses and reads the audio-video scoring parameters uses virtual/augmented display technology.
It is to be understood that both the foregoing general description and the following detailed description are illustrative only and are intended to provide further explanation of the claimed technology.
Description of the drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings needed in the description of the embodiments are briefly introduced below:
Fig. 1 is a hardware architecture diagram of the terminal device of an embodiment of the disclosure;
Fig. 2 is a structural schematic diagram of the scoring device for human body posture of Embodiment one;
Fig. 3 is the workflow diagram of the scoring device shown in Fig. 2;
Fig. 4 is a structural schematic diagram of the scoring device for human body posture of Embodiment two;
Fig. 5 is the workflow diagram of the scoring device shown in Fig. 4;
Fig. 6 is a structural schematic diagram of the scoring device for human body posture of Embodiment three;
Fig. 7 is the workflow diagram of the scoring device shown in Fig. 6;
Fig. 8 is a hardware block diagram of the scoring device for human body posture of an embodiment of the disclosure;
Fig. 9 is a schematic diagram of the computer-readable storage medium of an embodiment of the disclosure.
Detailed description
The application is discussed in further detail below with reference to the accompanying drawings and embodiments.
In the following description, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance. The description below introduces multiple embodiments of the disclosure; different embodiments may be substituted for or combined with one another, so the application should also be regarded as containing all possible combinations of the same and/or different embodiments described. Thus, if one embodiment includes features A, B, and C while another includes features B and D, the application should also be regarded as including embodiments containing every other possible combination of one or more of A, B, C, and D, even if that combination is not explicitly described in the following content.
As shown in Fig. 1, the terminal device may be implemented in various forms. The terminal device in this disclosure may include, but is not limited to, mobile terminal devices such as mobile phones, smartphones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players), navigation devices, vehicle-mounted terminal devices, vehicle-mounted display terminals, and vehicle-mounted electronic rear-view mirrors, as well as fixed terminal devices such as digital TVs and desktop computers.
In one embodiment of the disclosure, the terminal device may include a wireless communication unit 1, an A/V (audio/video) input unit 2, a user input unit 3, a sensing unit 4, an output unit 5, a memory 6, an interface unit 7, a controller 8, a power supply unit 9, and the like. The A/V (audio/video) input unit 2 includes, but is not limited to, cameras such as front and rear cameras and all kinds of audio-video input equipment. Those skilled in the art will appreciate that the terminal device of the above embodiment may include fewer or more components than the types listed above.
Those skilled in the art will appreciate that the various embodiments described herein can be implemented through a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such an embodiment may be implemented in the controller. For a software implementation, an embodiment such as a process or a function may be implemented with a separate software module that performs at least one function or operation. The software code can be implemented by a software application (or program) written in any appropriate programming language, stored in the memory, and executed by the controller.
Specifically, an embodiment of the present disclosure provides a scoring device for human body posture that: receives human body posture data; parses and reads audio-video scoring parameters; and scores the posture data according to the audio-video scoring parameters; wherein the application program that parses and reads the audio-video scoring parameters uses virtual/augmented display technology. It can be understood that the scoring device for human body posture involved in this disclosure is specifically a scoring device based on a terminal device, where the terminal device includes, but is not limited to, a mobile terminal, a fixed terminal, a vehicle-mounted terminal, or the like.
It should be noted that the human body posture data may be received through an optical camera device, such as the camera of a mobile terminal or a Kinect, which captures the posture data of a detection zone. It may also, but not exclusively, be acquired through a motion-information acquisition module comprising: a displacement sensor, such as a gyroscope, which acquires the angular velocity and displacement on the three axes of three-dimensional space so that distance can be calculated; an acceleration sensor, such as a gravity accelerometer, which acquires the acceleration on the three axes so that motion acceleration and motion direction can be calculated from multiple acceleration samples; an optical tracking sensor; and a position definition module. The position definition module sets the position of each sensor, and can also serve at the information-data processing end as the master source for the actuating signals of the sensors at the different positions.
The scoring device disclosed in the embodiments of the present disclosure receives human body posture data through the receiving module, parses and reads the audio-video scoring parameters through the parsing and reading module, and finally scores the posture data according to those parameters through the scoring module; the application program through which the parsing and reading module parses and reads the audio-video scoring parameters uses virtual/augmented display technology. By acquiring the posture data and parsing and reading the audio-video scoring parameters, the device completes human-computer interaction in the virtual/augmented display state and achieves the advantageous effect of accurate scoring, while realizing strong interactivity and a strong sense of experience in the human-computer interaction.
Embodiment one
As shown in Fig. 2, the scoring device for human body posture of this embodiment includes: a receiving module 200 for receiving human body posture data; a parsing and reading module 400 for parsing and reading audio-video scoring parameters, where the application program that parses and reads the parameters uses virtual/augmented display technology; and a scoring module 600 for scoring the posture data according to the audio-video scoring parameters.
In this embodiment, the receiving module 200 is further configured to receive front-leg and back-leg angle data, arm and torso angle data, and arm and head angle data of the human body. This improves the variety of the data the receiving module receives.
Furthermore, as noted above, the posture data may be captured over the detection zone by an optical camera device (e.g. a mobile terminal's camera or a Kinect), or acquired through the motion-information acquisition module's displacement sensor, acceleration sensor, optical tracking sensor, and position definition module, with the position definition module setting the sensor positions and serving at the processing end as the master source for the sensors' actuating signals.
Further, the scoring module 600 scores the posture data according to the audio-video scoring parameters. To improve scoring accuracy, in addition to the algorithm referred to in this disclosure, a human posture template can be pre-established: the optical camera device (e.g. a mobile terminal's camera or a Kinect) captures the posture of the detection zone, the captured posture is compared with the established template, and a posture that compares successfully is treated as a valid posture.
The concrete operations for establishing the posture template are: estimation of the human body posture, followed by binding of the estimated posture to the human body key points. Specifically, posture estimation mainly means obtaining each human part in the input picture, i.e., the positions, sizes, and directions of the component parts of the human body, such as the head and the left and right upper arms. To detect the posture, the input picture must be scanned; since the sizes and positions of human parts in a picture are not fixed, each part must be scanned at different positions, scales, and directions. The features obtained by scanning are then fed to a binary classifier to determine whether they belong to a human body. Understandably, before detection the binary classifier must be trained to obtain its parameters. It should be noted that detection may report the same human body in the input picture as multiple different but very close postures, so a merging operation must be applied to the classification results to exclude repeated postures.
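The scan-and-classify loop described above can be sketched as follows; the window sizes, stride, 0.5 decision threshold, and the greedy merge that discards repeated nearby detections are all illustrative assumptions, and the binary classifier is passed in as any callable returning a confidence in [0, 1].

```python
def detect_parts(image, classifier, window_sizes, stride):
    """Scan the image at several window sizes; keep windows the binary
    classifier accepts, then merge near-duplicate detections."""
    hits = []
    h, w = len(image), len(image[0])
    for win in window_sizes:
        for y in range(0, h - win + 1, stride):
            for x in range(0, w - win + 1, stride):
                patch = [row[x:x + win] for row in image[y:y + win]]
                if classifier(patch) > 0.5:   # binary decision: part present
                    hits.append((classifier(patch), x, y, win))
    # merge repeats: greedily keep the best-scoring window and drop
    # overlapping ones (a simple non-maximum suppression)
    hits.sort(reverse=True)
    kept = []
    for hit in hits:
        if all(abs(hit[1] - k[1]) > stride or abs(hit[2] - k[2]) > stride
               for k in kept):
            kept.append(hit)
    return kept
```

In practice the classifier would be the trained binary classifier mentioned in the text; here any scoring function works.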
Further, the generation of the pre-stored posture template may also be based on the graph structure model used in component-based posture estimation. A graph structure model is broadly divided into three parts: the graph model, the component observation model, and graph inference. The graph model represents the human skeleton and describes the overall constraint relations of all human parts; it generally uses a tree model in which the constraint between each pair of adjacent components is modeled by a deformation model between them. The component observation model models the appearance of each human part, and the quality of the feature selection determines the quality of a component's appearance model. Graph inference uses the established graph model and component observation models to estimate the posture in the picture under detection. Before graph inference, the parameters of the human body model are obtained by classifier training.
It should be noted that a component-based human body model generally uses a skeleton model or a hinged shape model. The skeleton model, i.e., a stick-figure model, is composed of the axis line segments of the human parts, and these segments are generally connected to each other; it is simple and intuitive. The hinged shape model generally represents a human part with a rectangle and carries more information than the skeleton model: it can describe not only the position of a part but also its width. This larger describable quantity lays a foundation for the subsequent comparison.
Further, after the posture estimation operation is completed, human body key points are chosen. Any number of bone key points may be chosen from: head, right shoulder, right elbow, right wrist, right hand, left shoulder, left elbow, left wrist, left hand, right knee, right ankle, right foot, left knee, left ankle, left foot, right hip, and left hip. The chosen bone key points are bound to the action event estimated from the posture. The posture-based template thereby provides accurate data support for the action events the user subsequently performs, giving good ease of use.
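The seventeen bone key points listed above, and their binding to an estimated action event, can be held in a simple lookup structure; the name `bind_key_points` and the dictionary layout are illustrative, not taken from the patent.

```python
# The 17 bone key points named in the text.
KEY_POINTS = [
    "head", "right_shoulder", "right_elbow", "right_wrist", "right_hand",
    "left_shoulder", "left_elbow", "left_wrist", "left_hand",
    "right_knee", "right_ankle", "right_foot",
    "left_knee", "left_ankle", "left_foot",
    "right_hip", "left_hip",
]

def bind_key_points(estimated_pose, action_event):
    """Attach each estimated key-point position to the action event so a
    later comparison can look positions up by joint name; joints the
    estimator did not report are bound as None."""
    binding = {name: estimated_pose.get(name) for name in KEY_POINTS}
    return {"event": action_event, "key_points": binding}
```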
In this embodiment, because the receiving module receives the front-leg and back-leg angle data, the arm and torso angle data, and the arm and head angle data, the subsequent scoring of the posture data according to the audio-video scoring parameters has a varied data reference, and the accuracy of the scoring has multi-dimensional data support.
Fig. 3 is the workflow diagram of the scoring device shown in Fig. 2, as follows:
Step 202: Receive human body posture data.
In this embodiment, receiving the posture data includes receiving front-leg and back-leg angle data, arm and torso angle data, and arm and head angle data.
Step 204: Parse and read the audio-video scoring parameters. The application program that parses and reads the parameters uses virtual/augmented display technology.
Step 206: Score the posture data according to the audio-video scoring parameters.
The scoring method disclosed in this embodiment receives human body posture data, parses and reads audio-video scoring parameters, and scores the posture data according to those parameters. By acquiring the posture data and parsing and reading the scoring parameters, the method completes human-computer interaction in the virtual/augmented display state and achieves the advantageous effect of accurate scoring, while realizing strong interactivity and experience in the human-computer interaction.
In this embodiment, because the front-leg and back-leg, arm and torso, and arm and head angle data are received, the subsequent scoring according to the audio-video scoring parameters has a varied data reference and multi-dimensional data support for its accuracy.
Embodiment two
As shown in Fig. 4, the scoring device of this embodiment differs from Embodiment one in adding a weight assignment module 300, a first computing module 700, a second computing module 800, and a third computing module 900. Specifically: the receiving module 200 receives the human body posture data; the weight assignment module 300 performs weight assignment on the acquired groups of posture data, assigning a first weight to the front-leg and back-leg angle data, a second weight to the arm and torso angle data, and a third weight to the arm and head angle data; the parsing and reading module 400 parses and reads the audio-video scoring parameters; the scoring module 600 scores the posture data according to those parameters; the first computing module 700 multiplies the first, second, and third weights by the corresponding preset standard-template weights to obtain a first, second, and third product; the second computing module 800 multiplies the first, second, and third products by the depth values of, respectively, the front and back legs, the arms and torso, and the arm-head angle; and the third computing module 900 sums and averages the resulting products and, in combination with the audio-video scoring parameters, completes the scoring of the posture data.
In this embodiment, the added weight assignment module and first, second, and third computing modules provide accurate data support for the accuracy of the posture scoring based on virtual/augmented display technology.
Fig. 5 is the workflow diagram of the scoring device shown in Fig. 4. The detailed process steps are as follows:
Step 401: Receive human body posture data.
Step 402: Perform weight assignment on the acquired groups of posture data, assigning a first weight to the front-leg and back-leg angle data, a second weight to the arm and torso angle data, and a third weight to the arm and head angle data.
Step 403: Parse and read the audio-video scoring parameters. The application program that parses and reads the parameters uses virtual/augmented display technology.
Step 404: Multiply the first, second, and third weights by the corresponding preset standard-template weights to obtain a first, second, and third product.
Step 405: Multiply the first, second, and third products by the depth values of, respectively, the front and back legs, the arms and torso, and the arm-head angle.
Step 406: Sum and average the resulting products and, in combination with the audio-video scoring parameters, complete the scoring of the posture data.
In this embodiment, the posture data is scored according to the audio-video scoring parameters. As in Embodiment one, besides the algorithm referred to in this disclosure, a human posture template can be pre-established and the posture captured over the detection zone by the optical camera device (e.g. a mobile terminal's camera or a Kinect) compared with it, a successful comparison yielding a valid posture.
The template is established exactly as described in Embodiment one: posture estimation (scanning, binary classification, and merging of repeated detections, optionally with the component-based graph structure model), followed by the choice of the bone key points and their binding to the estimated action event.
In this embodiment, weight assignment is performed on the acquired groups of posture data: a first weight for the front-leg and back-leg angle data, a second weight for the arm and torso angle data, and a third weight for the arm and head angle data. These values are computed together with the preset standard-template weights and with the depth values of the front and back legs, the arms and torso, and the arm-head angle, providing accurate data support for the accuracy of the posture scoring based on virtual/augmented display technology.
Embodiment three
As shown in Fig. 6, the scoring device of this embodiment differs from Embodiment one in that, in addition to the added weight assignment module 300 and first, second, and third computing modules 700, 800, and 900, the parsing and reading module 400 further includes a collecting unit 401 for acquiring the sound features and action features of the preset standard audio-video in the application program. The sound features of the preset standard audio-video include loudness, pitch, timbre, and rhythm; the action features include an action completion degree value, an action-to-audio-video-rhythm match ratio value, and an action completion time value.
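The collected features can be kept in simple containers so that each action's attribute values form a feature vector, i.e., one row of the action-feature matrix discussed in this embodiment; all field names here are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SoundFeatures:
    """Sound features of the preset standard audio-video."""
    loudness: float
    pitch: float
    timbre: str
    rhythm_bpm: float

@dataclass
class ActionFeatures:
    """Per-action features collected by the collecting unit."""
    completion_degree: float   # how fully the action was performed
    rhythm_match_ratio: float  # action completion vs. audio-video rhythm
    completion_time: float     # seconds taken to finish the action

def feature_vector(a: ActionFeatures):
    """One action's feature vector: a row of the action-feature matrix."""
    return [a.completion_degree, a.rhythm_match_ratio, a.completion_time]
```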
It should be noted that the scoring module 600 scores the human body posture data according to the audio/video scoring parameters as follows: a calculation is performed on the motion feature matrix and the rating matrix of preset interactive actions of different difficulty levels to generate the user's final score matrix. Specifically, the multiple attribute values of each action form that action's motion feature vector, where the attribute values of each action include the action completion degree attribute value and the action-to-music rhythm match ratio attribute value. The motion feature vectors of at least one action form the motion feature matrix. The rating matrix of interactive actions of different difficulty levels is then determined: the skeletal node information of at least one interactive action of each difficulty level, and the match ratio between the skeletal node information and the music rhythm, are parsed and operated on to obtain the rating of an interactive action of a given level, and the ratings of interactive actions of different difficulty levels form the rating matrix. Finally, the user's final score is determined from the motion feature matrix and the rating matrix. Specifically: the product of the transpose of the motion feature matrix and the motion feature matrix is taken as a first matrix; the first matrix is summed with an adjustment identity matrix to obtain a second matrix, where the adjustment identity matrix is the product of the identity matrix and an adjustment coefficient, the adjustment coefficient being a constant greater than 0; the product of the inverse of the second matrix, the motion feature matrix and the transpose of the rating matrix is then taken as the user's score matrix in the motion feature space during interaction on the terminal device.
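The matrix computation described above resembles a ridge-regression solve. A minimal sketch, assuming the motion feature matrix X has one row of attribute values per action and the rating matrix R has one rating row per action (the patent does not fix concrete shapes, and its wording is ambiguous about which factor in the final product is transposed; we read it as Xᵀ times the ratings):

```python
import numpy as np

def score_matrix(X, R, c=0.1):
    """Compute (X^T X + c*I)^-1 X^T R.

    X : (m, d) motion feature matrix, one row of attribute values per action
    R : (m, k) rating matrix, one rating row per action
    c : adjustment coefficient, a constant greater than 0; c*I is the
        "adjustment identity matrix" that keeps X^T X + c*I invertible.
    """
    d = X.shape[1]
    first = X.T @ X                       # first matrix
    second = first + c * np.eye(d)        # second matrix
    # Solving the linear system is equivalent to multiplying by the inverse.
    return np.linalg.solve(second, X.T @ R)

# Hypothetical values: completion degree and rhythm match for three actions.
X = np.array([[0.9, 0.8], [0.7, 0.6], [1.0, 0.9]])
R = np.array([[90.0], [70.0], [95.0]])   # hypothetical ratings
S = score_matrix(X, R)
print(S.shape)  # (2, 1): one score coefficient per feature dimension
```

The adjustment coefficient c plays the role of a regularizer: even if the feature columns are nearly collinear, the second matrix remains well conditioned and invertible.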
In this embodiment, the addition of the collecting unit 401 to the parsing and reading module 400 makes the parsed and read audio/video scoring parameters more detailed and multi-dimensional, laying a data foundation for accurately scoring the human body posture data according to the audio/video scoring parameters.
Fig. 7 is a work flow diagram of the human body posture scoring apparatus shown in Fig. 6. The flow is as follows:
Step 601: receive the human body posture data.
In this embodiment, receiving the human body posture data includes receiving the front-leg/rear-leg angle data, the arm/torso angle data, and the arm/head angle data of the human body.
Step 602: assign weights to the received groups of human body posture data: a first weight value is assigned to the front-leg/rear-leg angle data, a second weight value to the arm/torso angle data, and a third weight value to the arm/head angle data.
Step 603: collect the sound features and motion features of the preset standard audio/video in the application program. The sound features of the preset standard audio/video include loudness, pitch, timbre and rhythm; the motion features include an action completion degree value, an action-to-audio/video rhythm match ratio attribute value, and an action completion time value.
Step 604: multiply the first, second and third weight values respectively by the preset standard template weight value to obtain a first product, a second product and a third product.
Step 605: multiply the first, second and third products respectively by the depth value of the front and rear legs, the depth value of the arms relative to the torso, and the depth value of the arm-to-head angle.
Step 606: sum and average the resulting products and, in combination with the audio/video scoring parameters, complete the scoring of the human body posture data.
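Steps 602 through 606 can be sketched as follows; the concrete weight, template and depth values are illustrative assumptions, since the patent does not specify numbers:

```python
def posture_score(weights, template_weight, depths):
    """Steps 604-606: multiply each assigned weight by the template weight,
    multiply each resulting product by its depth value, then sum-average."""
    products = [w * template_weight for w in weights]      # step 604
    weighted = [p * d for p, d in zip(products, depths)]   # step 605
    # Step 606: sum-average; the result would then be combined with the
    # audio/video scoring parameters to produce the final score.
    return sum(weighted) / len(weighted)

# First/second/third weight values for the front-leg/rear-leg, arm/torso and
# arm/head angle data, with hypothetical depth values for each.
score = posture_score(weights=[0.5, 0.3, 0.2], template_weight=1.0,
                      depths=[0.9, 0.8, 0.7])
print(round(score, 3))  # 0.277
```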
The human body posture scoring method disclosed in the embodiments of the present disclosure receives human body posture data, parses and reads audio/video scoring parameters, and scores the human body posture data according to the audio/video scoring parameters. By receiving the human body posture data and parsing and reading the audio/video scoring parameters, the method achieves the beneficial effect of accurate scoring of human-computer interaction in a virtual augmented display state, while also providing strong interactivity and a rich user experience.
In this embodiment, collecting the sound features and motion features of the preset standard audio/video in the application program makes the parsed and read audio/video scoring parameters more detailed and multi-dimensional, laying a data foundation for accurately scoring the human body posture data according to the audio/video scoring parameters.
Fig. 8 is a hardware block diagram illustrating a human body posture scoring apparatus according to an embodiment of the present disclosure. As shown in Fig. 8, the human body posture scoring apparatus 80 according to the embodiment of the present disclosure includes a memory 801 and a processor 802. The components of the apparatus 80 are interconnected by a bus system and/or another form of connection mechanism (not shown).
The memory 801 stores non-transitory computer-readable instructions. Specifically, the memory 801 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory; the non-volatile memory may include, for example, read-only memory (ROM), a hard disk, or flash memory.
The processor 802 may be a central processing unit (CPU) or another form of processing unit with data processing and/or instruction execution capability, and may control the other components of the apparatus 80 to perform desired functions. In one embodiment of the present disclosure, the processor 802 runs the computer-readable instructions stored in the memory 801 so that the apparatus 80 executes the human body posture scoring method described above. Since the apparatus corresponds to the embodiments already described for the scoring method, its repeated description is omitted here.
Fig. 9 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure. As shown in Fig. 9, the computer-readable storage medium 900 according to the embodiment of the present disclosure stores non-transitory computer-readable instructions 901. When the non-transitory computer-readable instructions 901 are run by a processor, the human body posture scoring method according to the embodiments of the present disclosure described above is executed.
The human body posture scoring method, apparatus and computer-readable storage medium according to the embodiments of the present disclosure have been described above. By receiving human body posture data and parsing and reading audio/video scoring parameters, they achieve the beneficial effect of accurate scoring of human-computer interaction in a virtual augmented display state, while also providing strong interactivity and a rich user experience.
The basic principles of the present disclosure have been described above in connection with specific embodiments. It should be noted, however, that the merits, advantages, effects and the like mentioned in the present disclosure are merely examples and not limitations, and must not be regarded as prerequisites of every embodiment of the present disclosure. The specific details disclosed above are provided only for the purpose of illustration and ease of understanding; they are not limiting, and the present disclosure is not required to be implemented using these specific details.
The block diagrams of devices, apparatuses, equipment and systems involved in the present disclosure are merely illustrative examples and are not intended to require or imply that they must be connected, arranged or configured in the manner shown in the diagrams. As those skilled in the art will appreciate, these devices, apparatuses, equipment and systems may be connected, arranged or configured in any manner. Words such as "include", "comprise" and "have" are open-ended terms meaning "including but not limited to" and may be used interchangeably therewith. The words "or" and "and" as used herein mean "and/or" and may be used interchangeably therewith, unless the context clearly indicates otherwise. The word "such as" as used herein means "such as, but not limited to" and may be used interchangeably therewith.
In addition, as used herein, the "or" used in an enumeration beginning with "at least one of" indicates a disjunctive enumeration, so that, for example, "at least one of A, B or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Moreover, the word "exemplary" does not mean that the described example is preferred or better than other examples.
It should also be noted that in the systems and methods of the present disclosure, the components or steps may be decomposed and/or recombined. Such decompositions and/or recombinations should be regarded as equivalent solutions of the present disclosure.
Various changes, substitutions and alterations to the techniques described herein may be made without departing from the teachings defined by the appended claims. Furthermore, the scope of the claims of the present disclosure is not limited to the specific aspects of the processes, machines, manufactures, compositions of matter, means, methods and acts described above. Processes, machines, manufactures, compositions of matter, means, methods or acts that currently exist or are later to be developed and that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims include such processes, machines, manufactures, compositions of matter, means, methods or acts within their scope.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other aspects without departing from the scope of the present disclosure. Therefore, the present disclosure is not intended to be limited to the aspects shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to restrict the embodiments of the present disclosure to the forms disclosed herein. Although a number of exemplary aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions and sub-combinations thereof.
Claims (16)
1. A human body posture scoring method, characterized by comprising the following steps:
receiving human body posture data;
parsing and reading audio/video scoring parameters;
scoring the human body posture data according to the audio/video scoring parameters;
wherein the application program from which the audio/video scoring parameters are parsed and read uses virtual augmented display technology.
2. The human body posture scoring method according to claim 1, wherein receiving the human body posture data comprises: receiving front-leg/rear-leg angle data, receiving arm/torso angle data, and receiving arm/head angle data of the human body.
3. The human body posture scoring method according to claim 2, further comprising: assigning weights to the received groups of the human body posture data, namely assigning a first weight value to the front-leg/rear-leg angle data, a second weight value to the arm/torso angle data, and a third weight value to the arm/head angle data.
4. The human body posture scoring method according to claim 1, wherein parsing and reading the audio/video scoring parameters comprises: collecting sound features and motion features of preset standard audio/video in the application program.
5. The human body posture scoring method according to claim 4, wherein the sound features of the preset standard audio/video comprise loudness, pitch, timbre and rhythm, and the motion features comprise an action completion degree value, an action-to-audio/video rhythm match ratio attribute value, and an action completion time value.
6. The human body posture scoring method according to claim 3, further comprising: multiplying the first weight value, the second weight value and the third weight value respectively by a preset standard template weight value to obtain a first product, a second product and a third product.
7. The human body posture scoring method according to claim 6, further comprising: multiplying the first product, the second product and the third product respectively by the depth value of the front and rear legs, the depth value of the arms relative to the torso, and the depth value of the arm-to-head angle;
summing and averaging the resulting products and, in combination with the audio/video scoring parameters, completing the scoring of the human body posture data.
8. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the steps of the method according to any one of claims 1-7.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the steps of the method according to any one of claims 1-7.
10. A human body posture scoring apparatus, characterized by comprising:
a receiving module, for receiving human body posture data;
a parsing and reading module, for parsing and reading audio/video scoring parameters;
a scoring module, for scoring the human body posture data according to the audio/video scoring parameters;
wherein the application program from which the parsing and reading module parses and reads the audio/video scoring parameters uses virtual augmented display technology.
11. The human body posture scoring apparatus according to claim 10, wherein the receiving module is further configured to receive front-leg/rear-leg angle data, arm/torso angle data, and arm/head angle data of the human body.
12. The human body posture scoring apparatus according to claim 11, further comprising: a weight assignment module, for assigning weights to the received groups of the human body posture data, namely assigning a first weight value to the front-leg/rear-leg angle data, a second weight value to the arm/torso angle data, and a third weight value to the arm/head angle data.
13. The human body posture scoring apparatus according to claim 10, wherein the parsing and reading module comprises: a collecting unit, for collecting sound features and motion features of preset standard audio/video in the application program.
14. The human body posture scoring apparatus according to claim 13, wherein the sound features of the preset standard audio/video comprise loudness, pitch, timbre and rhythm, and the motion features comprise an action completion degree value, an action-to-audio/video rhythm match ratio attribute value, and an action completion time value.
15. The human body posture scoring apparatus according to claim 12, further comprising: a first computing module, for multiplying the first weight value, the second weight value and the third weight value respectively by a preset standard template weight value to obtain a first product, a second product and a third product.
16. The human body posture scoring apparatus according to claim 15, further comprising: a second computing module, for multiplying the first product, the second product and the third product respectively by the depth value of the front and rear legs, the depth value of the arms relative to the torso, and the depth value of the arm-to-head angle;
and a third computing module, for summing and averaging the resulting products and, in combination with the audio/video scoring parameters, completing the scoring of the human body posture data.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2018102735405 | 2018-03-29 | ||
CN201810273540 | 2018-03-29 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108509924A true CN108509924A (en) | 2018-09-07 |
CN108509924B CN108509924B (en) | 2020-01-14 |
Family
ID=63380852
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810301255.XA Active CN108509924B (en) | 2018-03-29 | 2018-04-04 | Human body posture scoring method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108509924B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110020630A (en) * | 2019-04-11 | 2019-07-16 | 成都乐动信息技术有限公司 | Method, apparatus, storage medium and the electronic equipment of assessment movement completeness |
CN113255462A (en) * | 2021-04-29 | 2021-08-13 | 深圳大学 | Gait scoring method, system, computer program product and readable storage medium |
CN114440884A (en) * | 2022-04-11 | 2022-05-06 | 天津果实科技有限公司 | Intelligent analysis method for human body posture for intelligent posture correction equipment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106075854A (en) * | 2016-07-13 | 2016-11-09 | 牡丹江师范学院 | A kind of dance training system |
CN106448279A (en) * | 2016-10-27 | 2017-02-22 | 重庆淘亿科技有限公司 | Interactive experience method and system for dance teaching |
CN107122048A (en) * | 2017-04-21 | 2017-09-01 | 甘肃省歌舞剧院有限责任公司 | One kind action assessment system |
Also Published As
Publication number | Publication date |
---|---|
CN108509924B (en) | 2020-01-14 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 20230103 Address after: Room 1445A, No. 55 Xili Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai, 200120 Patentee after: Honey Grapefruit Network Technology (Shanghai) Co.,Ltd. Address before: 100080 408, 4th floor, 51 Zhichun Road, Haidian District, Beijing Patentee before: BEIJING MICROLIVE VISION TECHNOLOGY Co.,Ltd. |