Summary of the Invention
The present invention aims to solve at least some of the technical problems in the related art.
To this end, a first object of the present invention is to propose a method for recognizing a human action. By identifying adjacent human joints in a video picture frame, obtaining the lines connecting adjacent joints, calculating the actual angle between each line and a preset reference direction, and determining from the difference between the actual angle and a standard angle whether the human action matches a standard action, the method achieves accurate action recognition and solves the technical problem of inaccurate action recognition in the prior art.
A second object of the present invention is to propose a device for recognizing a human action.
A third object of the present invention is to propose an electronic device.
A fourth object of the present invention is to propose a non-transitory computer-readable storage medium.
To achieve the above objects, an embodiment of a first aspect of the present invention proposes a method for recognizing a human action, including:
when a standard action is displayed, collecting a video picture frame of a human action;
identifying, in the video picture frame, each joint of the human body;
connecting every two adjacent joints among the joints of the human body to obtain the line between the two adjacent joints;
calculating the actual angle between the line between the two adjacent joints and a preset reference direction;
determining, according to the difference between the actual angle and a standard angle, whether the human action matches the standard action; wherein the standard angle is the angle between the line between each pair of adjacent joints and the reference direction when the standard action is performed.
Optionally, in a first possible implementation of the first aspect, determining, according to the difference between the actual angle and the standard angle, whether the human action matches the standard action includes:
for each line between two adjacent joints, calculating the difference between the corresponding standard angle and the actual angle;
if the difference calculated for every line between two adjacent joints is within an error range, determining that the human action matches the standard action;
if the difference calculated for at least one line between two adjacent joints is not within the error range, determining that the human action does not match the standard action.
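The all-or-nothing matching rule above can be sketched in Python; the function name, the angle lists, and the ±15-degree error range are illustrative assumptions, not part of the claimed method:

```python
def action_matches(actual_angles, standard_angles, err_lo, err_hi):
    """Return True only if every joint-line angle difference
    falls within the error range [err_lo, err_hi]."""
    for actual, standard in zip(actual_angles, standard_angles):
        diff = actual - standard
        if not (err_lo <= diff <= err_hi):
            return False  # one out-of-range line is enough to fail
    return True

# Example: three joint lines, error range of +/-15 degrees
print(action_matches([35, 0, 130], [45, 0, 120], -15, 15))  # True
print(action_matches([35, 0, 160], [45, 0, 120], -15, 15))  # False
```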
Optionally, in a second possible implementation of the first aspect, after determining that the human action matches the standard action, the method further includes:
for each line between two adjacent joints, determining a scoring coefficient of the line according to the corresponding difference and the error range;
generating evaluation information of the line according to the scoring coefficient of the line and a score value corresponding to the line, the evaluation information of the line including a micromotion score value that is the product of the scoring coefficient of the line and the score value corresponding to the line;
generating evaluation information of the human action according to the evaluation information of each line between two adjacent joints, wherein the evaluation information of the human action includes a human action score value that is the sum of the micromotion score values.
Optionally, in a third possible implementation of the first aspect, determining the scoring coefficient of the line according to the corresponding difference and the error range includes:
calculating the scoring coefficient p of the line using the formula p = 1 - 2Δ/(a - b), where b is the lower limit of the error range, a is the upper limit of the error range, and Δ is the difference.
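A minimal sketch of the formula above; taking Δ as an absolute difference is an assumption made here so the coefficient stays below 1 for negative differences as well:

```python
def scoring_coefficient(delta, a, b):
    """p = 1 - 2*delta/(a - b), where a and b are the upper and
    lower limits of the error range and delta is the (absolute)
    difference between the actual angle and the standard angle."""
    return 1 - 2 * abs(delta) / (a - b)

# Worked values from the embodiment: delta = 10 deg, range [-50, 50]
print(scoring_coefficient(10, 50, -50))  # 0.8
```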
Optionally, in a fourth possible implementation of the first aspect, after determining that the human action does not match the standard action, the method further includes:
setting the human action score value in the evaluation information of the human action to zero.
Optionally, in a fifth possible implementation of the first aspect, before collecting the video picture frame of the human action when the standard action is displayed, the method further includes:
obtaining a selected audio track and the standard action corresponding to each time node in the audio;
playing the audio;
when the audio reaches each time node, displaying the corresponding standard action.
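The association between an audio track, its time nodes, and their standard actions might be represented as below; the track name, node times, action labels, and tolerance are all hypothetical, chosen only to illustrate the lookup at a time node:

```python
# Hypothetical audio track: each time node (seconds into playback)
# maps to the standard action to display at that moment.
audio_timeline = {
    "track": "demo_song.mp3",          # assumed file name
    "nodes": [
        {"time": 5.0,  "action": "arms_raised"},
        {"time": 12.0, "action": "left_arm_bent"},
    ],
}

def action_at(timeline, t, tol=0.1):
    """Return the standard action whose time node matches playback time t."""
    for node in timeline["nodes"]:
        if abs(node["time"] - t) <= tol:
            return node["action"]
    return None  # no standard action at this moment

print(action_at(audio_timeline, 12.0))  # left_arm_bent
```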
Optionally, in a sixth possible implementation of the first aspect, the method further includes:
when the audio finishes playing, obtaining the evaluation information of each human action, the evaluation information of a human action indicating the degree of difference between the human action and the corresponding standard action;
generating a target video according to the audio, each video picture frame, and the evaluation information of each human action.
In the human action recognition method of the embodiment of the present invention, when a standard action is displayed, a video picture frame of a human action is collected; each joint of the human body is identified in the video picture frame; every two adjacent joints are connected to obtain the line between them; the actual angle between each such line and a preset reference direction is calculated; and whether the human action matches the standard action is determined according to the difference between the actual angle and the standard angle. By identifying adjacent human joints in the video picture frame, obtaining the lines connecting them, calculating the actual angles between those lines and the preset reference direction, and comparing the actual angles with the standard angles, accurate action recognition is achieved, solving the technical problem of inaccurate action recognition in the prior art.
To achieve the above objects, an embodiment of a second aspect of the present invention proposes a device for recognizing a human action, including:
an acquisition module, configured to collect a video picture frame of a human action when a standard action is displayed;
an identification module, configured to identify each joint of the human body in the video picture frame;
a connection module, configured to connect every two adjacent joints among the joints of the human body to obtain the line between the two adjacent joints;
a calculation module, configured to calculate the actual angle between the line between the two adjacent joints and a preset reference direction;
a determination module, configured to determine, according to the difference between the actual angle and a standard angle, whether the human action matches the standard action, wherein the standard angle is the angle between the line between each pair of adjacent joints and the reference direction when the standard action is performed.
Optionally, in a first possible implementation of the second aspect, the determination module includes:
a calculation unit, configured to calculate, for each line between two adjacent joints, the difference between the corresponding standard angle and the actual angle;
a determination unit, configured to determine that the human action matches the standard action if the difference calculated for every line between two adjacent joints is within an error range, and to determine that the human action does not match the standard action if the difference calculated for at least one line between two adjacent joints is not within the error range.
Optionally, in a second possible implementation of the second aspect, the determination module further includes:
a first scoring unit, configured to determine, for each line between two adjacent joints, a scoring coefficient of the line according to the corresponding difference and the error range; generate evaluation information of the line according to the scoring coefficient of the line and the score value corresponding to the line, the evaluation information of the line including a micromotion score value that is the product of the scoring coefficient of the line and the score value corresponding to the line; and generate evaluation information of the human action according to the evaluation information of each line between two adjacent joints, wherein the evaluation information of the human action includes a human action score value that is the sum of the micromotion score values.
Optionally, in a third possible implementation of the second aspect, the first scoring unit is specifically configured to:
calculate the scoring coefficient p of the line using the formula p = 1 - 2Δ/(a - b), where b is the lower limit of the error range, a is the upper limit of the error range, and Δ is the difference.
Optionally, in a fourth possible implementation of the second aspect, the determination module further includes:
a second scoring unit, configured to set the human action score value in the evaluation information of the human action to zero.
Optionally, in a fifth possible implementation of the second aspect, the device further includes:
a selection module, configured to obtain a selected audio track and the standard action corresponding to each time node in the audio;
a playing module, configured to play the audio;
a display module, configured to display the corresponding standard action when the audio reaches each time node.
Optionally, in a sixth possible implementation of the second aspect, the device further includes:
a generation module, configured to obtain the evaluation information of each human action when the audio finishes playing, the evaluation information of a human action indicating the degree of difference between the human action and the corresponding standard action, and to generate a target video according to the audio, each video picture frame, and the evaluation information of each human action.
In the human action recognition device of the embodiment of the present invention, the acquisition module collects a video picture frame of a human action when a standard action is displayed; the identification module identifies each joint of the human body in the video picture frame; the connection module connects every two adjacent joints to obtain the line between them; the calculation module calculates the actual angle between each such line and a preset reference direction; and the determination module determines, according to the difference between the actual angle and the standard angle, whether the human action matches the standard action. By identifying adjacent human joints in the video picture frame, obtaining the lines connecting them, calculating the actual angles between those lines and the preset reference direction, and comparing the actual angles with the standard angles, accurate action recognition is achieved, solving the problem of inaccurate action recognition in the prior art.
To achieve the above objects, an embodiment of a third aspect of the present invention proposes an electronic device, including a housing, a processor, a memory, a circuit board, and a power supply circuit, wherein the circuit board is disposed inside the space enclosed by the housing, and the processor and the memory are disposed on the circuit board; the power supply circuit is configured to supply power to each circuit or component of the electronic device; the memory is configured to store executable program code; and the processor, by reading the executable program code stored in the memory, runs a program corresponding to the executable program code, so as to perform the human action recognition method of the first aspect.
To achieve the above objects, an embodiment of a fourth aspect of the present invention proposes a non-transitory computer-readable storage medium having a computer program stored thereon, where the program, when executed by a processor, implements the human action recognition method of the first aspect.
Additional aspects and advantages of the present invention will be set forth in part in the following description, will become apparent in part from the following description, or may be learned by practice of the present invention.
Embodiments
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, where identical or similar reference numerals throughout denote identical or similar elements or elements having identical or similar functions. The embodiments described below with reference to the drawings are exemplary and intended to explain the present invention; they are not to be construed as limiting the present invention.
The human action recognition method, device, and electronic device of the embodiments of the present invention are described below with reference to the accompanying drawings.
The electronic device in this embodiment may specifically be a mobile phone. Those skilled in the art will appreciate that the electronic device may also be another mobile terminal, which can recognize human actions by referring to the scheme provided in this embodiment. In the following embodiments, the human action recognition method is explained using a mobile phone as an example of the electronic device.
Fig. 1 is a schematic flowchart of a human action recognition method provided by an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:
Step 101: when a standard action is displayed, collect a video picture frame of a human action.
Specifically, a mobile phone application is opened to enter a video capture interface. As one possible implementation, before entering the video capture interface, an audio selection interface is entered first, where the user can pick a favorite audio track from a drop-down menu. Each time node in the audio has a corresponding standard action. The user confirms the selected audio with a confirmation button and enters the video capture interface, where the collection of video picture frames begins. While the mobile phone plays the audio, the corresponding standard action is displayed at each corresponding time node. When a standard action is displayed, the user synchronously imitates it, and the camera device simultaneously collects video picture frames of the user performing the same human action.
When a standard action is displayed, the synchronously collected video picture frames containing the human action span multiple frames. As one possible implementation, N frames containing the human action may be collected starting from the time point at which the standard action is displayed; the value of N can be determined by those skilled in the art according to the actual situation.
As another possible implementation, video picture frames containing the human action may be collected continuously throughout the playing of the audio.
Step 102: identify each joint of the human body in the video picture frame.
As one possible implementation, when the video picture frames carry depth information, the human body in each frame can be separated from the background, and each joint of the human body can then be identified. For the video picture frames to carry depth information, the camera device used to collect them may be one capable of sampling depth, such as a dual camera, a depth camera (RGBD, Red-Green-Blue Depth) that obtains depth information while imaging, or a structured-light/TOF lens; these are not enumerated exhaustively here. The human body in the image is identified from the acquired depth information.
Specifically, according to the acquired depth information and using face recognition technology, the face region in the image and its position information are identified, so that the pixels of the face region and their corresponding depth information are obtained, and the average depth of the face pixels is calculated. Since the body and the face lie substantially in the same imaging plane, pixels whose depth differs from the average face depth by no more than a threshold are identified as belonging to the human body. The human body and its contour are thus identified, the depth and position information of each pixel of the body and its contour are determined, and the body can be separated from the background. Further, to facilitate identifying the joints of the body and to exclude background interference, the image can be binarized so that background pixels have value 0 and body pixels have value 1.
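The depth-threshold segmentation and binarization described above can be sketched with NumPy; the toy depth map, face mask, and threshold value are invented purely for illustration:

```python
import numpy as np

def binarize_human(depth_map, face_mask, threshold):
    """Separate the human body from the background by depth: pixels
    whose depth is within `threshold` of the mean face depth are
    treated as body (value 1); everything else is background (0)."""
    face_depth = depth_map[face_mask].mean()
    body = np.abs(depth_map - face_depth) <= threshold
    return body.astype(np.uint8)

# Toy 3x3 depth map; the face region sits at depth ~2.0,
# the background at depth ~9.0
depth = np.array([[2.0, 2.1, 9.0],
                  [2.0, 2.2, 9.5],
                  [8.9, 2.1, 9.1]])
face = np.zeros((3, 3), dtype=bool)
face[0, :2] = True  # assumed face pixels
print(binarize_human(depth, face, threshold=0.5))
```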
Further, according to the identified position information of the face and body, and according to the proportional relationships between limbs and body height in human anatomy, the position of each joint of the body can be calculated. For example, Fig. 2 is a schematic diagram of the limb-to-height proportions in human anatomy provided in this embodiment, listing the proportional position of each joint within the limbs. From the position information of the face and body, the position of the neck joint in the video frame can be determined, giving the two-dimensional coordinates (x, y) of the neck joint. As shown in Fig. 2, the height difference between the row of the shoulder joints and the row of the neck joint is fixed; from the coordinates of the neck joint and this difference, the row where the shoulder joints lie can be determined. Since background pixels have value 0 and body pixels have value 1, the leftmost and rightmost pixels with value 1 in that row correspond to the shoulder joints, which determines the two-dimensional coordinates (x1, y1) of the left shoulder joint and (x2, y2) of the right shoulder joint.
From the determined position of the left shoulder joint and the standard distance between the left shoulder joint and the left elbow joint in Fig. 2, a circle is drawn with that standard distance as the diameter. Since the background pixels have value 0, identifying the leftmost and rightmost points with pixel value 1 determines the two-dimensional coordinates (x3, y3) of the left elbow joint.
Similarly, the two-dimensional coordinates of the other joints of the body can be identified and determined. The joints of the body include at least the neck joint, left shoulder joint, right shoulder joint, left elbow joint, right elbow joint, left wrist joint, right wrist joint, left knee joint, left ankle joint, right knee joint, right ankle joint, and so on; as the joints are numerous, they are not enumerated here. The method for identifying and determining the two-dimensional coordinates of the other joints follows the same principle and is not repeated.
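The row scan used above to place the shoulder joints can be sketched as follows, assuming a binarized image (background 0, body 1) and a known neck-to-shoulder row offset taken from the anatomical proportions; the toy image and all concrete values are illustrative:

```python
import numpy as np

def find_shoulders(binary, neck_xy, neck_to_shoulder_rows):
    """Locate the left/right shoulder joints by scanning the row a
    fixed anatomical offset below the neck joint: the leftmost and
    rightmost body pixels (value 1) in that row are the shoulders."""
    x, y = neck_xy                      # (column, row) of the neck joint
    row = y + neck_to_shoulder_rows     # fixed height difference (Fig. 2)
    cols = np.flatnonzero(binary[row])  # columns where the body is present
    if cols.size == 0:
        return None
    return (int(cols[0]), row), (int(cols[-1]), row)

# Toy binarized frame: neck at column 2 of row 0, shoulders 2 rows below
body = np.array([[0, 0, 1, 0, 0],
                 [0, 1, 1, 1, 0],
                 [1, 1, 1, 1, 1]], dtype=np.uint8)
print(find_shoulders(body, neck_xy=(2, 0), neck_to_shoulder_rows=2))
# ((0, 2), (4, 2))
```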
Step 103: connect every two adjacent joints among the joints of the human body to obtain the line between the two adjacent joints.
For example, the left shoulder joint and the left elbow joint are two adjacent joints; the left shoulder joint and the left elbow joint corresponding to the human action are connected to obtain the line between them.
Step 104: calculate the actual angle between the line between two adjacent joints and the preset reference direction.
Specifically, if the preset reference direction is the horizontal direction, the angle between the line between two adjacent joints and the horizontal direction can be calculated from the acquired positions of the two joints. For example, denote the angle by θ; if the two-dimensional coordinates of the left shoulder joint are (x1, y1) and those of the left elbow joint are (x3, y3), then from the formula tan(θ) = (y3 - y1)/(x3 - x1), the actual angle θ between the line joining the left shoulder joint and the left elbow joint and the horizontal direction can be calculated. The actual angles between the lines between the other pairs of adjacent joints and the horizontal direction can be calculated in the same way.
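The calculation of step 104 can be sketched with `atan2`, which also handles the vertical-line case that the plain tangent formula does not; the joint coordinates below are illustrative:

```python
import math

def line_angle_deg(joint_a, joint_b):
    """Angle (degrees) between the line joining two adjacent joints
    and the horizontal reference direction, from tan(theta) = dy/dx."""
    (x1, y1), (x2, y2) = joint_a, joint_b
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

# Left shoulder at (0, 0), left elbow at (1, 1): a 45-degree line
print(line_angle_deg((0, 0), (1, 1)))  # 45.0
```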
Step 105: determine, according to the difference between the actual angle and the standard angle, whether the human action matches the standard action.
Specifically, the standard angle is the angle between the line between each pair of adjacent joints and the reference direction when the standard action is performed. For each line between two adjacent joints, the difference between the actual angle measured while the user performs the action and the corresponding standard angle is calculated. If the difference calculated for every line is within the error range, the human action is determined to match the standard action; if the difference calculated for at least one line is not within the error range, the human action is determined not to match the standard action.
It should be noted that when the human action in the collected multi-frame video pictures matches the standard action, a smaller difference within the error range indicates a higher degree of matching between the human action and the standard action, i.e., the user imitates the standard action more accurately.
In the human action recognition method of the embodiment of the present invention, when a standard action is displayed, a video picture frame of a human action is collected; each joint of the human body is identified in the video picture frame; every two adjacent joints are connected to obtain the line between them; the actual angle between each such line and a preset reference direction is calculated; and whether the human action matches the standard action is determined according to the difference between the actual angle and the standard angle. By identifying adjacent human joints in the video picture frame, obtaining the lines connecting them, calculating the actual angles between those lines and the preset reference direction, and comparing the actual angles with the standard angles, accurate action recognition is achieved, solving the technical problem of inaccurate action recognition in the prior art.
Based on the above embodiment, this embodiment provides another human action recognition method. Fig. 3 is a schematic flowchart of another human action recognition method provided by an embodiment of the present invention. As shown in Fig. 3, the method may include:
Step 301: obtain the selected audio and the standard action corresponding to each time node in the audio, and play the audio.
Specifically, the mobile phone is preset with multiple audio tracks, and each time node in each audio has a corresponding standard action. The user selects an audio track according to preference and plays it; while the audio plays, the video picture frames containing the user are collected synchronously until the audio finishes playing.
Step 302: when the audio reaches each time node, display the corresponding standard action.
Specifically, when a corresponding time node is reached, the corresponding standard action is displayed on the video capture interface of the camera device. As one possible implementation, the corresponding standard action may be displayed on the video capture interface in the form of a floating frame; as another possible implementation, it may scroll across the video capture interface in the form of a bullet-screen overlay.
For example, Fig. 4A is a schematic diagram of a standard action provided by an embodiment of the present invention, showing the standard action displayed at a certain time node and the joints it involves, namely six joints: the left wrist joint, right wrist joint, left elbow joint, right elbow joint, left shoulder joint, and right shoulder joint.
Step 303: when the standard action is displayed, collect the video picture frames of the human action.
Specifically, when the audio reaches a corresponding time node and the corresponding standard action is displayed, the camera device synchronously collects the video picture frames of the human action made by the user imitating that standard action. The collected video picture frames comprise multiple frames in which the human action corresponding to the standard action is recorded. For example, Fig. 4B is a schematic diagram of an actual action provided by an embodiment of the present invention, showing the actual action made by the user when the standard action of Fig. 4A is displayed.
It should be noted that the collected video picture frames of the human action span multiple frames, each containing a corresponding human action. This embodiment is illustrated with one of those frames; the other frames are processed in the same way.
Step 304: identify each joint of the human body in the video picture frame, and obtain the lines between adjacent joints.
Each joint of the human body is identified in the collected video picture frames containing the human action; reference may be made to step 102 in the embodiment of Fig. 1, which is not repeated here.
Further, each joint of the human body is identified from the collected human action, and the lines between adjacent joints are obtained: in Fig. 4B, line 1 between the right wrist joint and the right elbow joint, line 2 between the right elbow joint and the right shoulder joint, line 3 between the right shoulder joint and the left shoulder joint, line 4 between the left shoulder joint and the left elbow joint, and line 5 between the left elbow joint and the left wrist joint. For convenience of description, the action corresponding to each line is called a micromotion of the actual action made by the user; all the micromotions together form the actual action.
Step 305: calculate the actual angle between each line between adjacent joints and the preset reference direction.
Specifically, as shown in Fig. 4B, the preset reference direction is the horizontal direction of the screen. The calculated angle between line 1 and the horizontal direction of the screen is 35 degrees, between line 2 and the horizontal direction is 0 degrees, between line 3 and the horizontal direction is 0 degrees, between line 4 and the horizontal direction is 0 degrees, and between line 5 and the horizontal direction is 130 degrees.
Step 306: for each line between two adjacent joints, calculate the difference between the corresponding standard angle and the actual angle, and determine whether the human action matches the standard action; if they do not match, perform step 307; if they match, perform step 308.
Specifically, the standard angle is the angle between the line between each pair of adjacent joints and the reference direction when the standard action is performed. Taking line 1 between the right wrist joint and the right elbow joint in Fig. 4B as an example: the standard angle corresponding to line 1 in Fig. 4A is 45 degrees, the actual angle measured from the actual action in Fig. 4B is 35 degrees, and the difference is 10 degrees. With a preset difference threshold of, for example, 15 degrees, the 10-degree difference is below the threshold, so the micromotion corresponding to line 1 is determined to match the corresponding micromotion in the standard action. Similarly, it is determined whether the micromotions corresponding to lines 2, 3, 4, and 5 match the micromotions in the standard action. If all the micromotions match the standard action, the actual human action matches the standard action; if any micromotion does not match the corresponding micromotion in the standard action, the actual human action does not match the standard action.
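The per-micromotion check of step 306 can be sketched as follows; line 1's 45-degree/35-degree pair comes from the worked example above, while the remaining standard angles (including the 135 degrees assumed for line 5) are filled in only for illustration:

```python
def micromotion_matches(standard, actual, diff_threshold=15):
    """A micromotion matches when the angle difference between the
    standard and actual line is within the preset threshold
    (15 degrees in the embodiment's example)."""
    return abs(standard - actual) <= diff_threshold

# Line 1: 45 deg in the standard action (Fig. 4A), 35 deg measured
# (Fig. 4B). The other standard angles below are assumed values.
lines_standard = [45, 0, 0, 0, 135]
lines_actual   = [35, 0, 0, 0, 130]
results = [micromotion_matches(s, a)
           for s, a in zip(lines_standard, lines_actual)]
print(all(results))  # True: every micromotion matches
```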
Step 307: set the human action score value in the evaluation information of the human action to zero.
Specifically, if the actual human action does not match the standard action, the score the user obtains for the human action is set to 0.
Step 308: for each line between two adjacent joints, determine the scoring coefficient of the line according to the corresponding difference and the error range.
Specifically, the scoring coefficient p of a line is calculated according to the formula p = 1 - 2Δ/(a - b), where b is the lower limit of the error range, a is the upper limit of the error range, and Δ is the difference. Taking line 1 between the right wrist joint and the right elbow joint in Fig. 4B as an example, the corresponding difference is 10 degrees; if, for example, the upper limit of the error range is plus 50 degrees and the lower limit is minus 50 degrees, then p = 1 - [2 × 10/(50 - (-50))] = 0.8, i.e., the scoring coefficient of line 1 is 0.8.
Similarly, the scoring coefficients of lines 2, 3, and 4 are calculated to be 1, 1, and 1 respectively, and the scoring coefficient of line 5 is 0.9.
Step 309: generate the evaluation information of each line according to its scoring coefficient and corresponding score value, and then generate the evaluation information of the human action.
Specifically, the evaluation information of a line includes a micromotion score value, which is the product of the scoring coefficient of the line and the score value corresponding to the line. In Fig. 4B, the full score of the action is 100 points shared among 5 micromotions, so the full score of each micromotion is 20 points. Multiplying the 20-point full score of the micromotion corresponding to line 1 by its scoring coefficient of 0.8 gives a score of 16 points for that micromotion, from which the evaluation information of line 1 is generated. Similarly, the micromotion scores included in the evaluation information of lines 2, 3, and 4 are each 20 points, and the micromotion score included in the evaluation information of line 5 is 18 points. Summing the micromotion scores of all the lines gives the score of the human action, 94 points, i.e., the evaluation information of the human action.
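The arithmetic of step 309 can be reproduced as a short sketch using the scoring coefficients computed in step 308; the function name is illustrative:

```python
def action_score(coefficients, per_line_value):
    """Total action score: each micromotion score is its line's
    scoring coefficient times the line's full value, and the action
    score is the sum over all lines."""
    return sum(p * per_line_value for p in coefficients)

# Worked example: 100-point action over 5 micromotions (20 each),
# with scoring coefficients 0.8, 1, 1, 1, 0.9 for lines 1-5
print(action_score([0.8, 1, 1, 1, 0.9], 20))  # 94.0
```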
Further, the video picture frames of the other human actions are processed according to the method described above, so that the evaluation information of the human actions in the different video picture frames can be obtained respectively. As a possible implementation, the video picture frames of the human actions whose action scores in the evaluation information exceed a threshold score, for example 60 points, may be used as the video frames in the generated video for displaying single-action scores; that is, the score information of the corresponding actions is added to these video picture frames, and the display duration is long enough for the user to see the specific score information.
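The threshold-based selection of which frames carry a score overlay might look like the following sketch; the 60-point threshold follows the example in the text, while the (frame id, score) pair representation is an assumed data structure.

```python
# Sketch: select the frames whose action score exceeds a threshold (60 points
# here) so that score information can be added to them for playback.

def frames_to_annotate(frame_scores, threshold=60):
    """Return the (frame_id, score) pairs whose score exceeds the threshold."""
    return [(fid, score) for fid, score in frame_scores if score > threshold]

frame_scores = [(0, 94), (1, 55), (2, 72), (3, 60)]
print(frames_to_annotate(frame_scores))  # [(0, 94), (2, 72)]
```

Frames at or below the threshold, such as frames 1 and 3 above, are left without a score overlay.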
Step 310, when the audio playback ends, the evaluation information of each human action is acquired, and a target video is generated.
Specifically, when the audio playback ends, the evaluation information of each human action corresponding to the standard actions displayed at the different time nodes is acquired. The evaluation information of a human action indicates the degree of difference between the human action and the corresponding standard action: the higher the action score in the evaluation information, the smaller the difference between the human action and the corresponding standard action; conversely, the larger the difference.
Further, a target video is generated according to the audio, the acquired video picture frames, and the action evaluation information of the corresponding human actions. When the target video is played back, each human action displays its corresponding score, so that the user learns how his or her own actions were scored, which helps the user improve the actions and provides a good user experience.
In the human action recognition method of the embodiment of the present invention, when a standard action is displayed, video picture frames of a human action are collected; adjacent human joints are identified in the video picture frames, the lines between adjacent joints are obtained, the actual angle between the line of adjacent joints and a preset reference direction is calculated, and whether the human action matches the standard action is determined according to the difference between the actual angle and a standard angle, thereby realizing accurate action recognition and solving the technical problem of inaccurate action recognition in the prior art. At the same time, the human actions in the collected video picture frames can be scored to obtain action evaluation information indicating the degree of difference between the human action and the standard action, and a target video can be generated, so that the user can understand and correct the human action during playback, making the action more standard the next time a video is recorded.
In order to realize the above embodiments, the present invention further proposes a human action recognition device.
Fig. 5 is a structural diagram of a human action recognition device provided by an embodiment of the present invention.
As shown in Fig. 5, the device includes an acquisition module 51, an identification module 52, a connection module 53, a computing module 54, and a determining module 55.
The acquisition module 51 is configured to collect video picture frames of a human action when a standard action is displayed.
The identification module 52 is configured to identify each joint of the human body in the video picture frames.
The connection module 53 is configured to connect each two adjacent joints among the joints of the human body to obtain the line between the two adjacent joints.
The computing module 54 is configured to calculate the actual angle between the line between two adjacent joints and a preset reference direction.
The determining module 55 is configured to determine, according to the difference between the actual angle and a standard angle, whether the human action matches the standard action, where the standard angle is the angle between the line between each two adjacent joints and the reference direction when the standard action is performed.
It should be noted that the foregoing explanation of the method embodiments also applies to the device of this embodiment, and details are not repeated here.
In the human action recognition device of the embodiment of the present invention, the acquisition module collects video picture frames of a human action when a standard action is displayed; the identification module identifies each joint of the human body in the video picture frames; the connection module connects each two adjacent joints among the joints of the human body to obtain the line between the two adjacent joints; the computing module calculates the actual angle between the line between two adjacent joints and a preset reference direction; and the determining module determines, according to the difference between the actual angle and a standard angle, whether the human action matches the standard action. By identifying adjacent human joints in the video picture frames, obtaining the lines of adjacent joints, calculating the actual angle between the line of adjacent joints and the preset reference direction, and determining whether the human action matches the standard action according to the difference between the actual angle and the standard angle, accurate action recognition is realized, solving the technical problem of inaccurate action recognition in the prior art.
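As a concrete sketch of the angle computation performed by the computing module, the angle between the line connecting two adjacent joints and the reference direction can be derived from the joint coordinates. The vertical reference direction and the coordinate convention below are illustrative assumptions, not part of the claimed method.

```python
import math

# Sketch: angle between the line through two adjacent joints and a preset
# reference direction (assumed here to be vertical, pointing down the image).

def line_angle(joint_a, joint_b, reference=(0.0, 1.0)):
    """Return the angle in degrees between segment a->b and the reference direction."""
    vx, vy = joint_b[0] - joint_a[0], joint_b[1] - joint_a[1]
    rx, ry = reference
    dot = vx * rx + vy * ry
    norm = math.hypot(vx, vy) * math.hypot(rx, ry)
    return math.degrees(math.acos(dot / norm))

# A limb segment hanging straight down lies at 0 degrees to the vertical
# reference; held horizontally it lies at 90 degrees.
print(round(line_angle((0, 0), (0, 5)), 1))  # 0.0
print(round(line_angle((0, 0), (5, 0)), 1))  # 90.0
```

The same function would be applied to the corresponding joint pair of the standard action to obtain the standard angle for comparison.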
Based on the above embodiments, an embodiment of the present invention further provides another possible implementation of the human action recognition device. Fig. 6 is a structural diagram of another human action recognition device provided by an embodiment of the present invention. On the basis of the previous embodiment, the determining module 55 may further include a computing unit 551, a determination unit 552, a first scoring unit 553, and a second scoring unit 554.
The computing unit 551 is configured to calculate, for the line between each two adjacent joints, the difference between the corresponding standard angle and the actual angle.
The determination unit 552 is configured to determine that the human action matches the standard action if the difference calculated for the line between each two adjacent joints is within an error range, and to determine that the human action does not match the standard action if the difference calculated for the line between at least one pair of adjacent joints is not within the error range.
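The determination unit's rule can be sketched as follows, under the assumption stated above that every per-line difference must fall within the error range; the 10-degree tolerance is an illustrative value only.

```python
# Sketch of the determination unit's rule: the human action matches the
# standard action only if, for every line, |standard angle - actual angle|
# is within the error range.

def action_matches(standard_angles, actual_angles, error_range=10.0):
    """True if all per-line angle differences are within the error range."""
    return all(abs(s - a) <= error_range
               for s, a in zip(standard_angles, actual_angles))

print(action_matches([90, 45, 0], [85, 50, 3]))  # True  (all within 10 degrees)
print(action_matches([90, 45, 0], [70, 50, 3]))  # False (one line is 20 degrees off)
```

A single line outside the error range is enough to declare a mismatch, which is why the second example returns False.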
As a possible implementation of the embodiment of the present invention, if the determination unit 552 determines that the human action matches the standard action, the first scoring unit 553 is specifically configured to: for the line between each two adjacent joints, determine the scoring coefficient of the line according to the corresponding difference and the error range; generate the evaluation information of the line according to the scoring coefficient of the line and the score value corresponding to the line, the evaluation information of the line including a decomposed-action score, which is the product of the scoring coefficient of the line and the score value corresponding to the line; and generate the evaluation information of the human action according to the evaluation information of the lines between the pairs of adjacent joints, where the evaluation information of the human action includes a human action score, which is the sum of the decomposed-action scores.
As another possible implementation of the embodiment of the present invention, if the determination unit 552 determines that the human action does not match the standard action, the second scoring unit 554 is specifically configured to determine that the human action score in the evaluation information of the human action is zero.
As a possible implementation of this embodiment, the device may further include a selection module 56, a playing module 57, a display module 58, and a generation module 59.
The selection module 56 is configured to obtain selected audio and the standard action corresponding to each time node in the audio.
The playing module 57 is configured to play the audio.
The display module 58 is configured to display the corresponding standard action when the audio is played to each time node.
The generation module 59 is configured to obtain, when the audio playback ends, the evaluation information of each human action, where the evaluation information of a human action indicates the degree of difference between the human action and the corresponding standard action, and to generate a target video according to the audio, the video picture frames, and the action evaluation information of each human action.
It should be noted that the foregoing explanation of the method embodiments also applies to the device of this embodiment, and details are not repeated here.
In the human action recognition device of the embodiment of the present invention, when a standard action is displayed, video picture frames of a human action are collected; adjacent human joints are identified in the video picture frames, the lines of adjacent joints are obtained, the actual angle between the line of adjacent joints and a preset reference direction is calculated, and whether the human action matches the standard action is determined according to the difference between the actual angle and a standard angle, thereby realizing accurate action recognition and solving the technical problem of inaccurate action recognition in the prior art. At the same time, the human actions in the collected video picture frames can be scored to obtain action evaluation information indicating the degree of difference between the human action and the standard action, and a target video can be generated, so that the user can understand and correct the human action during playback, making the action more standard the next time a video is recorded.
To realize the above embodiments, an embodiment of the present invention further proposes an electronic device. Fig. 7 is a structural diagram of an embodiment of the electronic device of the present invention. As shown in Fig. 7, the electronic device includes a housing 71, a processor 72, a memory 73, a circuit board 74, and a power circuit 75, where the circuit board 74 is arranged inside the space enclosed by the housing 71, and the processor 72 and the memory 73 are arranged on the circuit board 74; the power circuit 75 is configured to supply power to each circuit or component of the electronic device; the memory 73 is configured to store executable program code; and the processor 72 runs a program corresponding to the executable program code by reading the executable program code stored in the memory 73, so as to perform the human action recognition method described in the foregoing method embodiments.
For the specific execution of the above steps by the processor 72, and for the further steps performed by the processor 72 through running the executable program code, reference may be made to the description of the embodiments shown in Figs. 1-3 of the present invention, and details are not repeated here.
The electronic device exists in various forms, including but not limited to:
(1) Mobile communication devices: such devices are characterized by mobile communication functions, with voice and data communication as their main goal. This type of terminal includes smartphones (e.g., iPhone), multimedia phones, feature phones, low-end phones, and the like.
(2) Ultra-mobile personal computer devices: such devices belong to the category of personal computers, have computing and processing functions, and generally also have mobile Internet access characteristics. This type of terminal includes PDA, MID, and UMPC devices, such as the iPad.
(3) Portable entertainment devices: such devices can display and play multimedia content. This type of device includes audio and video players (e.g., iPod), handheld devices, e-book readers, smart toys, and portable in-vehicle navigation devices.
(4) Servers: devices that provide computing services. A server includes a processor, hard disk, memory, system bus, and the like; its architecture is similar to that of a general-purpose computer, but because highly reliable services must be provided, the requirements on processing capability, stability, reliability, security, scalability, manageability, and the like are higher.
(5) Other electronic devices with data interaction functions.
To realize the above embodiments, an embodiment of the present invention further proposes a non-transitory computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the human action recognition method described in the above method embodiments.
In the description of this specification, descriptions referring to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" mean that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, provided that no contradiction arises, those skilled in the art may combine the different embodiments or examples described in this specification, as well as the features of the different embodiments or examples.
In addition, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality of" means at least two, for example two or three, unless otherwise specifically defined.
Any process or method described in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code including one or more executable instructions for implementing the steps of a custom logic function or process, and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order according to the functions involved, as shall be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example, may be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, device, or apparatus (such as a computer-based system, a system including a processor, or another system that can fetch instructions from an instruction execution system, device, or apparatus and execute the instructions). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, an instruction execution system, device, or apparatus. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (an electronic device) having one or more wires, a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that each part of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, a plurality of steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one of the following technologies well known in the art, or a combination thereof, may be used: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and shall not be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present invention.
The above description is merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution that can readily be conceived by those familiar with the technical field, within the technical scope disclosed by the present invention, shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the scope of the claims.