CN107943291A - Human action recognition method, device, and electronic device - Google Patents

Human action recognition method, device, and electronic device

Info

Publication number
CN107943291A
CN107943291A (application CN201711182909.3A)
Authority
CN
China
Prior art keywords: line, joint, human action, adjacent, action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711182909.3A
Other languages
Chinese (zh)
Other versions
CN107943291B (en)
Inventor
严程
李震
方醒
郭宏财
张迎春
李红成
叶进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuomi Private Ltd
Original Assignee
Happy Honey Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Happy Honey Co Ltd
Priority to CN201711182909.3A
Publication of CN107943291A
Priority to PCT/CN2018/098598 (WO2019100754A1)
Application granted
Publication of CN107943291B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention proposes a human action recognition method, device, and electronic device. The method includes: collecting video picture frames of a human action while a standard action is displayed; identifying each human joint in the video picture frames; connecting each pair of adjacent joints among the human joints to obtain the line between the two adjacent joints; calculating the actual angle between each such line and a preset reference direction; and determining, according to the difference between the actual angle and a standard angle, whether the human action matches the standard action. By identifying adjacent human joints in the video picture frames, obtaining the lines connecting them, calculating the actual angles between these lines and the preset reference direction, and comparing the actual angles with the standard angles, the action is recognized accurately, solving the problem of inaccurate action recognition in the prior art.

Description

Human action recognition method, device, and electronic device
Technical field
The present invention relates to the technical field of mobile terminals, and more particularly to a human action recognition method, device, and electronic device.
Background
Motion-sensing games provide human-computer interaction over an Internet operation platform: the player holds a dedicated game controller, and the recognized movements of the player's body control the movements of the in-game character, immersing the player "whole-body" in the game and offering the new experience of motion-sensing interaction.
In the related art, motion-sensing game technology is mainly used on computers and game consoles, which are poorly portable; moreover, the correctness of a user's body movement is judged by determining the position of the user's handheld controller, which makes the judgment inaccurate.
Summary of the invention
The present invention aims to solve at least one of the technical problems in the related art.
To this end, a first object of the present invention is to propose a human action recognition method that identifies adjacent human joints in video picture frames, obtains the lines connecting them, calculates the actual angles between these lines and a preset reference direction, and determines, from the differences between the actual angles and the standard angles, whether the human action matches the standard action, thereby recognizing the action accurately and solving the problem of inaccurate action recognition in the prior art.
A second object of the present invention is to propose a human action recognition device.
A third object of the present invention is to propose an electronic device.
A fourth object of the present invention is to propose a non-transitory computer-readable storage medium.
To achieve the above objects, an embodiment of the first aspect of the present invention proposes a human action recognition method (a minimal end-to-end sketch follows the steps below), including:
while a standard action is displayed, collecting video picture frames of a human action;
identifying each human joint in the video picture frames;
connecting each pair of adjacent joints among the human joints to obtain the line between the two adjacent joints;
calculating the actual angle between the line connecting the two adjacent joints and a preset reference direction;
determining, according to the difference between the actual angle and a standard angle, whether the human action matches the standard action; wherein the standard angle is the angle between the line connecting each pair of adjacent joints and the reference direction when the standard action is performed.
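As a reading aid only, and not a characterization of the claimed method itself, the following is a minimal Python sketch of the steps above. It assumes the joints have already been located as named 2D points, takes the horizontal as the preset reference direction, and borrows the error range of -50 to +50 degrees from the worked example later in the description; the joint names and function names are our own placeholders.

```python
import math

def angle_to_reference(p1, p2):
    """Signed angle in degrees between the line p1 -> p2 and the horizontal
    reference direction."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def matches_standard(joints, standard_angles, lower=-50.0, upper=50.0):
    """Decide whether a human action matches the standard action.

    joints: {joint_name: (x, y)} identified in one video picture frame.
    standard_angles: {(joint_a, joint_b): degrees} for each adjacent pair.
    The action matches only if, for every adjacent-joint line, the
    difference between the actual and the standard angle lies within
    the error range [lower, upper].
    """
    for (a, b), standard in standard_angles.items():
        actual = angle_to_reference(joints[a], joints[b])
        if not (lower <= actual - standard <= upper):
            return False
    return True

# Example with the Fig. 4B numbers: standard 45 degrees, actual ~35 degrees.
joints = {"right_elbow": (0.0, 0.0), "right_wrist": (100.0, 70.0)}
print(matches_standard(joints, {("right_elbow", "right_wrist"): 45.0}))  # True
```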
Optionally, in a first possible implementation of the first aspect, determining whether the human action matches the standard action according to the difference between the actual angle and the standard angle includes:
for the line between each pair of adjacent joints, calculating the difference between the corresponding standard angle and the actual angle;
if the difference calculated for every such line is within an error range, determining that the human action matches the standard action;
if the difference calculated for at least one such line is not within the error range, determining that the human action does not match the standard action.
Optionally, in a second possible implementation of the first aspect, after determining that the human action matches the standard action, the method further includes:
for the line between each pair of adjacent joints, determining a scoring coefficient of the line according to the corresponding difference and the error range;
generating evaluation information of the line according to the scoring coefficient of the line and the score value assigned to the line; the evaluation information of the line includes a sub-action score, which is the product of the scoring coefficient of the line and the score value assigned to the line;
generating evaluation information of the human action according to the evaluation information of the lines between all pairs of adjacent joints; wherein the evaluation information of the human action includes a human action score, which is the sum of the sub-action scores.
Optionally, in a third possible implementation of the first aspect, determining the scoring coefficient of the line according to the corresponding difference and the error range includes:
calculating the scoring coefficient p of the line using the formula p = 1 - 2Δ/(a - b); where b is the lower limit of the error range, a is the upper limit of the error range, and Δ is the difference.
Optionally, in a fourth possible implementation of the first aspect, after determining that the human action does not match the standard action, the method further includes:
setting the human action score in the evaluation information of the human action to zero.
Optionally, in a fifth possible implementation of the first aspect, before collecting the video picture frames of the human action while the standard action is displayed, the method further includes:
obtaining a selected audio track and the standard action corresponding to each time node in the audio;
playing the audio;
displaying the corresponding standard action when the audio playback reaches each time node.
Optionally, in a sixth possible implementation of the first aspect, the method further includes:
obtaining the evaluation information of each human action when the audio playback ends; wherein the evaluation information of a human action indicates the degree of difference between the human action and the corresponding standard action;
generating a target video according to the audio, the video picture frames, and the action evaluation information of each human action.
In the human action recognition method of this embodiment of the present invention, video picture frames of a human action are collected while a standard action is displayed; each human joint is identified in the frames; each pair of adjacent joints is connected to obtain the line between them; the actual angle between each line and a preset reference direction is calculated; and whether the human action matches the standard action is determined from the difference between the actual angle and the standard angle. By identifying adjacent joints in the video picture frames, obtaining the connecting lines, calculating the actual angles against the preset reference direction, and comparing them with the standard angles, the action is recognized accurately, solving the problem of inaccurate action recognition in the prior art.
To achieve the above objects, an embodiment of the second aspect of the present invention proposes a human action recognition device, including:
an acquisition module, configured to collect video picture frames of a human action while a standard action is displayed;
an identification module, configured to identify each human joint in the video picture frames;
a connection module, configured to connect each pair of adjacent joints among the human joints to obtain the line between the two adjacent joints;
a calculation module, configured to calculate the actual angle between the line connecting the two adjacent joints and a preset reference direction;
a determination module, configured to determine, according to the difference between the actual angle and a standard angle, whether the human action matches the standard action; wherein the standard angle is the angle between the line connecting each pair of adjacent joints and the reference direction when the standard action is performed.
Optionally, in a first possible implementation of the second aspect, the determination module includes:
a calculation unit, configured to calculate, for the line between each pair of adjacent joints, the difference between the corresponding standard angle and the actual angle;
a determination unit, configured to determine that the human action matches the standard action if the difference calculated for every such line is within an error range, and to determine that the human action does not match the standard action if the difference calculated for at least one such line is not within the error range.
Optionally, in a second possible implementation of the second aspect, the determination module further includes:
a first scoring unit, configured to: determine, for the line between each pair of adjacent joints, a scoring coefficient of the line according to the corresponding difference and the error range; generate evaluation information of the line according to the scoring coefficient of the line and the score value assigned to the line, the evaluation information of the line including a sub-action score that is the product of the scoring coefficient of the line and the score value assigned to the line; and generate evaluation information of the human action according to the evaluation information of the lines between all pairs of adjacent joints, the evaluation information of the human action including a human action score that is the sum of the sub-action scores.
Optionally, in a third possible implementation of the second aspect, the first scoring unit is specifically configured to:
calculate the scoring coefficient p of the line using the formula p = 1 - 2Δ/(a - b); where b is the lower limit of the error range, a is the upper limit of the error range, and Δ is the difference.
Optionally, in a fourth possible implementation of the second aspect, the determination module further includes:
a second scoring unit, configured to set the human action score in the evaluation information of the human action to zero.
Optionally, in a fifth possible implementation of the second aspect, the device further includes:
a selection module, configured to obtain a selected audio track and the standard action corresponding to each time node in the audio;
a playing module, configured to play the audio;
a display module, configured to display the corresponding standard action when the audio playback reaches each time node.
Optionally, in a sixth possible implementation of the second aspect, the device further includes:
a generation module, configured to obtain the evaluation information of each human action when the audio playback ends, the evaluation information of a human action indicating the degree of difference between the human action and the corresponding standard action, and to generate a target video according to the audio, the video picture frames, and the action evaluation information of each human action.
In the human action recognition device of this embodiment of the present invention, the acquisition module collects video picture frames of a human action while a standard action is displayed; the identification module identifies each human joint in the frames; the connection module connects each pair of adjacent joints to obtain the line between them; the calculation module calculates the actual angle between each line and a preset reference direction; and the determination module determines, from the difference between the actual angle and the standard angle, whether the human action matches the standard action. Adjacent joints are identified in the video picture frames, their connecting lines are obtained, and the actual angles against the preset reference direction are calculated and compared with the standard angles, so that the action is recognized accurately, solving the problem of inaccurate action recognition in the prior art.
To achieve the above objects, an embodiment of the third aspect of the present invention proposes an electronic device, including a housing, a processor, a memory, a circuit board, and a power supply circuit, wherein the circuit board is arranged inside the space enclosed by the housing, and the processor and the memory are arranged on the circuit board; the power supply circuit supplies power to each circuit or component of the electronic device; the memory stores executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to perform the human action recognition method of the first aspect.
To achieve the above objects, an embodiment of the fourth aspect of the present invention proposes a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the human action recognition method of the first-aspect embodiment.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and will partly become apparent from that description or be learned through practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken with the accompanying drawings, in which:
Fig. 1 is the flow diagram of a human action recognition method provided by an embodiment of the present invention;
Fig. 2 is the schematic diagram of the limb-to-height proportions in human anatomy provided in this embodiment;
Fig. 3 is the flow diagram of another human action recognition method provided by an embodiment of the present invention;
Fig. 4A is the schematic diagram of a standard action provided by an embodiment of the present invention;
Fig. 4B is the schematic diagram of an actual action provided by an embodiment of the present invention;
Fig. 5 is the structural diagram of a human action recognition device provided by an embodiment of the present invention;
Fig. 6 is the structural diagram of another human action recognition device provided by an embodiment of the present invention; and
Fig. 7 is the structural diagram of an embodiment of an electronic device of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary; they are intended to explain the present invention and are not to be construed as limiting it.
The human action recognition method, device, and electronic device of the embodiments of the present invention are described below with reference to the accompanying drawings.
The electronic device in this embodiment may specifically be a mobile phone; those skilled in the art will appreciate that the electronic device may also be another mobile terminal, to which the scheme provided in this embodiment may likewise be applied for human action recognition.
In the following embodiments, the human action recognition method is explained taking a mobile phone as the electronic device.
Fig. 1 is the flow diagram of a human action recognition method provided by an embodiment of the present invention. As shown in Fig. 1, the method comprises the following steps:
Step 101: while a standard action is displayed, collect the video picture frames of a human action.
Specifically, a mobile phone application is opened to enter the video capture interface. In one possible implementation, an audio selection interface is entered first, where the user can pick a favorite audio track from a drop-down menu; each time node in the audio has a corresponding standard action. The audio is confirmed with an OK button, the video capture interface is entered, and capture of video picture frames begins. While the phone plays the audio, the corresponding standard action is displayed at each time node; as each standard action is displayed, the user synchronously performs the same action, and the camera captures the video picture frames of the user performing that human action.
While a standard action is displayed, multiple video picture frames containing the human action are captured synchronously. In one possible implementation, taking the moment at which the standard action is displayed as the time reference, N subsequent frames containing the human action can be captured; the value of N can be determined by those skilled in the art according to the actual application.
In another possible implementation, video picture frames containing the human action can be captured continuously throughout the audio playback.
Step 102: identify each human joint in the video picture frames.
In one possible implementation, when the video picture frames carry depth information, the human body can be separated from the background in each frame, and the joints can then be identified. So that the frames carry depth information, the camera used to capture them may be one capable of acquiring depth, such as a dual camera or an RGBD (Red-Green-Blue-Depth) depth camera, which obtains depth information while imaging; depth information can also be acquired with structured-light or TOF lenses, which are not enumerated here one by one.
Specifically, the face region in the image and its position are identified from the acquired depth information combined with face recognition, yielding the pixels of the face region and their corresponding depth information, from which the average depth of the face pixels is calculated. Since the human body and the face lie roughly in the same imaging plane, pixels whose depth differs from this average by no more than a threshold are identified as the human body; the body and its contour are thereby identified, the depth and position of each pixel within the body and contour are determined, and the body can be separated from the background. Further, to facilitate identifying the joints and exclude background interference, the image can be binarized so that background pixels have value 0 and body pixels have value 1.
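A minimal sketch of this depth-based segmentation step, assuming NumPy arrays and a face bounding box already produced by some face detector (the patent does not name one); the 200 mm tolerance is an illustrative value, not a figure from the text:

```python
import numpy as np

def segment_body(depth_mm, face_box, depth_tol_mm=200.0):
    """Binarize a depth frame into body (1) and background (0) pixels.

    depth_mm: HxW array of per-pixel depth values.
    face_box: (top, left, bottom, right) rows/columns of the detected face.
    Pixels whose depth is within depth_tol_mm of the mean face depth are
    treated as the human body, as the description above outlines.
    """
    top, left, bottom, right = face_box
    face_depth = depth_mm[top:bottom, left:right].mean()  # average face depth
    return (np.abs(depth_mm - face_depth) <= depth_tol_mm).astype(np.uint8)
```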
Further, from the identified positions of the face and the body, and from the proportional relationship between the limbs and body height in human anatomy, the position of each human joint can be calculated. For example, Fig. 2 is the schematic diagram of the limb-to-height proportions in human anatomy provided in this embodiment, listing the proportion of each joint within the limbs. From the positions of the face and the body, the position of the neck joint in the video frame can be determined, giving the two-dimensional coordinates (x, y) of the neck joint. As shown in Fig. 2, the vertical offset between the height of the shoulder joints and the height of the neck joint is fixed, so the row in which the shoulder joints lie can be determined from the coordinates of the neck joint and this offset. Because background pixels have value 0 and body pixels have value 1, the leftmost and rightmost points with pixel value 1 in that row are the points corresponding to the shoulder joints, which determines the two-dimensional coordinates (x1, y1) of the left shoulder joint and (x2, y2) of the right shoulder joint.
From the determined position of the left shoulder joint, a circle is drawn whose diameter is the standard distance between the left shoulder joint and the left elbow joint given in Fig. 2. Since background pixels have value 0, identifying where the left and right points on this circle have pixel value 1 determines the two-dimensional coordinates (x3, y3) of the left elbow joint.
Similarly, the two-dimensional coordinates of the other human joints can be identified and determined. The human joints include at least: the neck joint, left shoulder joint, right shoulder joint, left elbow joint, right elbow joint, left wrist joint, right wrist joint, left knee joint, left ankle joint, right knee joint, right ankle joint, and so on; as there are many joints, they are not all enumerated here. The methods for identifying and determining the two-dimensional coordinates of the other joints follow the same principle and are not repeated one by one.
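To make the row-scan concrete, here is a sketch of locating the two shoulder joints on the binary mask produced above; the fixed neck-to-shoulder row offset is an assumed parameter standing in for the proportion that the description reads off Fig. 2:

```python
import numpy as np

def find_shoulders(mask, neck_xy, shoulder_row_offset=30):
    """Locate the left and right shoulder joints on a binary body mask.

    mask: HxW array with body pixels 1 and background pixels 0.
    neck_xy: (x, y) coordinates of the neck joint.
    shoulder_row_offset: rows below the neck at which the shoulders lie
    (a stand-in for the fixed anatomical proportion of Fig. 2).
    """
    row = int(neck_xy[1]) + shoulder_row_offset
    body_cols = np.flatnonzero(mask[row])   # columns with pixel value 1
    if body_cols.size == 0:
        return None, None
    # The leftmost and rightmost body pixels in the row are the shoulders.
    return (int(body_cols[0]), row), (int(body_cols[-1]), row)
```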
Step 103: connect each pair of adjacent joints among the human joints to obtain the line between the two adjacent joints.
For example, the left shoulder joint and the left elbow joint are two adjacent joints; the corresponding left shoulder joint and left elbow joint of the performed human action are connected to obtain the line between them.
Step 104: calculate the actual angle between the line connecting the two adjacent joints and a preset reference direction.
Specifically, if the preset reference direction is the horizontal direction, the angle between the line connecting two adjacent joints and the horizontal can be calculated from the acquired positions of the two joints. For example, denote the angle by θ; with the left shoulder joint at (x1, y1) and the left elbow joint at (x3, y3), the formula tan θ = (y3 - y1)/(x3 - x1) yields the actual angle θ between the horizontal and the line connecting the adjacent left shoulder and left elbow joints. The actual angles between the other adjacent-joint lines and the horizontal can be calculated in the same way.
Step 105: determine, according to the difference between the actual angle and the standard angle, whether the human action matches the standard action.
Specifically, the standard angle is the angle between the line connecting each pair of adjacent joints and the reference direction when the standard action is performed. For the line between each pair of adjacent joints, the difference between the actual angle measured while the user performs the standard action and the corresponding standard angle is calculated. If the difference calculated for every such line is within the error range, the human action is determined to match the standard action; if the difference calculated for at least one such line is not within the error range, the human action is determined not to match the standard action.
It should be noted that the human action in the captured multi-frame video pictures is matched against the standard action, and the smaller the differences within the error range, the higher the matching degree between the human action and the standard action, that is, the more accurately the user imitates the standard action.
In the human action recognition method of this embodiment of the present invention, video picture frames of a human action are collected while a standard action is displayed; each human joint is identified in the frames; each pair of adjacent joints is connected to obtain the line between them; the actual angle between each line and a preset reference direction is calculated; and whether the human action matches the standard action is determined from the difference between the actual angle and the standard angle. By identifying adjacent joints in the video picture frames, obtaining the connecting lines, calculating the actual angles against the preset reference direction, and comparing them with the standard angles, the action is recognized accurately, solving the problem of inaccurate action recognition in the prior art.
Building on the previous embodiment, this embodiment provides another human action recognition method. Fig. 3 is the flow diagram of another human action recognition method provided by an embodiment of the present invention. As shown in Fig. 3, the method may include:
Step 301: obtain a selected audio track and the standard action corresponding to each time node in the audio, and play the audio.
Specifically, the mobile phone is preset with multiple audio tracks, and each time node in each audio has a corresponding standard action (an illustrative sketch of such a mapping is given below). The user selects an audio track according to preference and plays it; while the audio plays, the video picture frames containing the user are captured synchronously until playback ends.
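Purely as an illustration of such a preset mapping (the patent defines no data format), one could represent each audio track's choreography as time nodes paired with the standard angles to display and score against; all names and numbers below are hypothetical:

```python
# Hypothetical choreography for one audio track: each time node (seconds
# into playback) maps to the standard angles, in degrees versus the
# horizontal, expected for each adjacent-joint line.
CHOREOGRAPHY = [
    (5.0,  {("right_wrist", "right_elbow"): 45.0,
            ("right_elbow", "right_shoulder"): 0.0}),
    (12.0, {("left_shoulder", "left_elbow"): 0.0,
            ("left_elbow", "left_wrist"): 90.0}),
]

def actions_due(prev_t, now_t):
    """Standard actions whose time node falls in (prev_t, now_t]; the caller
    displays each one and starts capturing video picture frames."""
    return [angles for t, angles in CHOREOGRAPHY if prev_t < t <= now_t]
```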
Step 302: when the audio playback reaches a time node, display the corresponding standard action.
Specifically, when playback reaches a time node, the corresponding standard action is displayed in the video capture interface of the camera. In one possible implementation, the standard action can be shown in the video capture interface in a floating frame; in another possible implementation, it can be scrolled across the video capture interface as a bullet-screen overlay.
For example, Fig. 4A is the schematic diagram of a standard action provided by an embodiment of the present invention, showing the standard action displayed at a certain time node and the joints it involves, namely six joints in total: the left wrist, right wrist, left elbow, right elbow, left shoulder, and right shoulder joints.
Step 303: while the standard action is displayed, collect the video picture frames of the human action.
Specifically, when the audio reaches a given time node and the corresponding standard action is displayed, the camera synchronously captures the video picture frames of the human action the user makes in imitation of the standard action; the captured frames are multiple pictures in which that human action is recorded. For example, Fig. 4B is the schematic diagram of an actual action provided by an embodiment of the present invention, showing the actual action the user makes when the standard action of Fig. 4A is displayed.
It should be noted that the captured video picture frames of the human action are multiple frames, each containing the corresponding human action; this embodiment is illustrated with one of these frames, and the other frames are processed in the same way.
Step 304: identify each human joint in the video picture frames and obtain the line between each pair of adjacent joints.
Each human joint is identified in the captured video picture frames containing the human action; for details, refer to step 102 of the Fig. 1 embodiment, which is not repeated in this embodiment.
Further, the human joints are identified from the captured human action and the lines between adjacent joints are obtained: in Fig. 4B, line 1 between the right wrist and right elbow joints, line 2 between the right elbow and right shoulder joints, line 3 between the right shoulder and left shoulder joints, line 4 between the left shoulder and left elbow joints, and line 5 between the left elbow and left wrist joints. For ease of description, the action corresponding to each line is called a sub-action of the actual action made by the user; all the sub-actions together form the actual action.
Step 305: calculate the actual angle between the line connecting each pair of adjacent joints and the preset reference direction.
Specifically, as shown in Fig. 4B, the preset reference direction is the horizontal direction of the screen. The calculated angle between line 1 and the horizontal is 35 degrees, between line 2 and the horizontal is 0 degrees, between line 3 and the horizontal is 0 degrees, between line 4 and the horizontal is 0 degrees, and between line 5 and the horizontal is 130 degrees.
Step 306: for the line between each pair of adjacent joints, calculate the difference between the corresponding standard angle and the actual angle, and determine whether the human action matches the standard action; if they do not match, perform step 307, and if they match, perform step 308.
Specifically, the standard angle is the angle between each adjacent-joint line and the reference direction when the standard action is performed. Taking line 1 between the right wrist and right elbow joints in Fig. 4B as an example: the standard angle corresponding to line 1 in Fig. 4A is 45 degrees, the actual angle measured from the actual action in Fig. 4B is 35 degrees, and the difference is 10 degrees. With a preset difference threshold of, for example, 15 degrees, the 10-degree difference is below the threshold, so the sub-action corresponding to line 1 is determined to match the corresponding sub-action of the standard action. It is then determined in the same way whether the sub-actions corresponding to lines 2, 3, 4, and 5 match the sub-actions of the standard action. If all the sub-actions match the standard action, the actual human action matches the standard action; if any sub-action fails to match its counterpart in the standard action, the actual human action does not match the standard action.
Step 307: set the human action score in the evaluation information of the human action to zero.
Specifically, if the actual human action does not match the standard action, the score the user obtains for this human action is set to 0.
Step 308: for the line between each pair of adjacent joints, determine the scoring coefficient of the line according to the corresponding difference and the error range.
Specifically, the scoring coefficient p of a line is calculated by the formula p = 1 - 2Δ/(a - b), where b is the lower limit of the error range, a is the upper limit of the error range, and Δ is the difference. Taking line 1 between the right wrist and right elbow joints in Fig. 4B as an example, its corresponding difference is 10 degrees; with, for example, an upper error limit of +50 degrees and a lower error limit of -50 degrees, p = 1 - 2 × 10/(50 - (-50)) = 0.8, that is, the scoring coefficient of line 1 is 0.8.
Further, the scoring coefficients of the other lines are calculated in the same way: the scoring coefficient of line 2 is 1, of line 3 is 1, of line 4 is 1, and of line 5 is 0.9.
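A sketch of the scoring-coefficient formula with the worked values from this step (difference 10 degrees, error range -50 to +50 degrees); taking the magnitude of the difference is our reading, since it reproduces the worked result and keeps p within [0, 1]:

```python
import math

def scoring_coefficient(delta_deg, lower=-50.0, upper=50.0):
    """p = 1 - 2*Δ/(a - b), with a the upper and b the lower error limit."""
    return 1.0 - 2.0 * abs(delta_deg) / (upper - lower)

# Line 1 of the Fig. 4B example: a 10-degree difference gives p = 0.8.
assert math.isclose(scoring_coefficient(10.0), 0.8)
```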
Step 309: generate the evaluation information of each line according to its scoring coefficient and its assigned score value, and thereby generate the evaluation information of the human action.
Specifically, the evaluation information of a line includes a sub-action score, which is the product of the line's scoring coefficient and its assigned score value. In Fig. 4B the standard action is worth 100 points and comprises 5 sub-actions, so the full score of each sub-action is 20 points. Multiplying the 20-point full score of the sub-action corresponding to line 1 by its scoring coefficient of 0.8 gives a sub-action score of 16 points for line 1, from which the evaluation information of line 1 is generated. Similarly, the sub-action score included in the evaluation information of line 2 is 20 points, of line 3 is 20 points, of line 4 is 20 points, and of line 5 is 18 points. Summing the sub-action scores of all the lines gives the score of this human action, 94 points, that is, the evaluation information of this human action.
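And a sketch of aggregating the evaluation information as described: each line's sub-action score is its scoring coefficient times its per-line full score (here 100 points split evenly over 5 lines), and the human action score is their sum:

```python
def action_score(coefficients, full_score=100.0):
    """coefficients: one scoring coefficient per adjacent-joint line.
    Each sub-action score = coefficient * (full_score / number of lines);
    the human action score is the sum of the sub-action scores."""
    per_line = full_score / len(coefficients)
    return sum(p * per_line for p in coefficients)

# The Fig. 4B worked values for lines 1-5: 16 + 20 + 20 + 20 + 18 points.
print(action_score([0.8, 1.0, 1.0, 1.0, 0.9]))  # 94.0
```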
Further, the video picture frames of the other human actions are processed according to the method above, yielding the evaluation information of the human action in each video picture frame. In one possible implementation, the video picture frames of those human actions whose action score in the evaluation information exceeds a threshold score, for example 60 points, can be used as the video frames in the generated video that display the individual action scores; that is, the score information of the corresponding action is added to these frames, so that it remains visible long enough for the user to see the specific score.
Step 310: when the audio playback ends, obtain the evaluation information of each human action and generate the target video.
Specifically, when the audio playback ends, the evaluation information of each human action corresponding to the standard actions displayed at the different time nodes is obtained. The evaluation information of a human action indicates the degree of difference between the human action and the corresponding standard action: the higher the action's score in the evaluation information, the smaller the difference between the human action and the corresponding standard action, and conversely, the larger the difference.
Further, the target video is generated from the obtained audio, the video picture frames, and the action evaluation information of the corresponding human actions. When the target video is played back, each human action displays its corresponding score, so the user learns how each action was scored; this can help the user improve the actions, and the user experience is good.
In the human action recognition method of this embodiment of the present invention, video picture frames of a human action are collected while a standard action is displayed; adjacent human joints are identified in the frames; the lines connecting them are obtained; the actual angles between these lines and a preset reference direction are calculated; and whether the human action matches the standard action is determined from the differences between the actual angles and the standard angles, so that the action is recognized accurately, solving the problem of inaccurate action recognition in the prior art. In addition, the human actions in the captured video picture frames can be scored to obtain action evaluation information indicating the degree of difference between the human action and the standard action, and a target video can be generated so that the user can review and correct the actions during playback, making the actions more accurate the next time a video is recorded.
To implement the above embodiments, the present invention further proposes a human action recognition device.
Fig. 5 is the structural diagram of a human action recognition device provided by an embodiment of the present invention.
As shown in Fig. 5, the device includes: an acquisition module 51, an identification module 52, a connection module 53, a calculation module 54, and a determination module 55.
The acquisition module 51 is configured to collect video picture frames of a human action while a standard action is displayed.
The identification module 52 is configured to identify each human joint in the video picture frames.
The connection module 53 is configured to connect each pair of adjacent joints among the human joints to obtain the line between the two adjacent joints.
The calculation module 54 is configured to calculate the actual angle between the line connecting the two adjacent joints and a preset reference direction.
The determination module 55 is configured to determine, according to the difference between the actual angle and a standard angle, whether the human action matches the standard action, the standard angle being the angle between the line connecting each pair of adjacent joints and the reference direction when the standard action is performed.
It should be noted that the foregoing explanation of the method embodiments also applies to the device of this embodiment and is not repeated here.
In the human action recognition device of this embodiment of the present invention, the acquisition module collects video picture frames of a human action while a standard action is displayed; the identification module identifies each human joint in the frames; the connection module connects each pair of adjacent joints to obtain the line between them; the calculation module calculates the actual angle between each line and a preset reference direction; and the determination module determines, from the difference between the actual angle and the standard angle, whether the human action matches the standard action. Adjacent joints are identified in the video picture frames, their connecting lines are obtained, and the actual angles against the preset reference direction are calculated and compared with the standard angles, so that the action is recognized accurately, solving the problem of inaccurate action recognition in the prior art.
Based on the above embodiments, an embodiment of the present invention further provides another possible implementation of the human action recognition device. Fig. 6 is the structural diagram of another human action recognition device provided by an embodiment of the present invention. Building on the previous embodiment, the determination module 55 may further include: a calculation unit 551, a determination unit 552, a first scoring unit 553, and a second scoring unit 554.
The calculation unit 551 is configured to calculate, for the line between each pair of adjacent joints, the difference between the corresponding standard angle and the actual angle.
The determination unit 552 is configured to determine that the human action matches the standard action if the difference calculated for every such line is within the error range, and to determine that the human action does not match the standard action if the difference calculated for at least one such line is not within the error range.
In one possible implementation of this embodiment of the present invention, if the determination unit 552 determines that the human action matches the standard action, the first scoring unit 553 is specifically configured to:
determine, for the line between each pair of adjacent joints, the scoring coefficient of the line according to the corresponding difference and the error range; generate the evaluation information of the line according to the scoring coefficient of the line and the score value assigned to the line, the evaluation information of the line including a sub-action score that is the product of the scoring coefficient of the line and the score value assigned to the line; and generate the evaluation information of the human action according to the evaluation information of the lines between all pairs of adjacent joints, the evaluation information of the human action including a human action score that is the sum of the sub-action scores.
In another possible implementation of this embodiment of the present invention, if the determination unit 552 determines that the human action does not match the standard action, the second scoring unit 554 is specifically configured to:
set the human action score in the evaluation information of the human action to zero.
In one possible implementation of this embodiment, the device may further include: a selection module 56, a playing module 57, a display module 58, and a generation module 59.
The selection module 56 is configured to obtain a selected audio track and the standard action corresponding to each time node in the audio.
The playing module 57 is configured to play the audio.
The display module 58 is configured to display the corresponding standard action when the audio playback reaches each time node.
The generation module 59 is configured to obtain the evaluation information of each human action when the audio playback ends, the evaluation information of a human action indicating the degree of difference between the human action and the corresponding standard action, and to generate the target video according to the audio, the video picture frames, and the action evaluation information of each human action.
It should be noted that the foregoing explanation of the method embodiments also applies to the device of this embodiment and is not repeated here.
In the human action recognition device of this embodiment of the present invention, video picture frames of a human action are collected while a standard action is displayed; adjacent human joints are identified in the frames; the lines connecting them are obtained; the actual angles between these lines and a preset reference direction are calculated; and whether the human action matches the standard action is determined from the differences between the actual angles and the standard angles, so that the action is recognized accurately, solving the problem of inaccurate action recognition in the prior art. In addition, the human actions in the captured video picture frames can be scored to obtain action evaluation information indicating the degree of difference between the human action and the standard action, and a target video can be generated so that the user can review and correct the actions during playback, making the actions more accurate the next time a video is recorded.
To implement the above embodiments, an embodiment of the present invention further proposes an electronic device. Fig. 7 is the structural diagram of an embodiment of an electronic device of the present invention. As shown in Fig. 7, the electronic device includes: a housing 71, a processor 72, a memory 73, a circuit board 74, and a power supply circuit 75. The circuit board 74 is arranged inside the space enclosed by the housing 71, and the processor 72 and the memory 73 are arranged on the circuit board 74; the power supply circuit 75 supplies power to each circuit or component of the electronic device; the memory 73 stores executable program code; and the processor 72 runs a program corresponding to the executable program code by reading the executable program code stored in the memory 73, so as to perform the human action recognition method described in the foregoing method embodiments.
For the specific execution of the above steps by the processor 72, and for the steps the processor 72 further performs by running the executable program code, refer to the description of the embodiments of Figs. 1-3 of the present invention, which is not repeated here.
The electronic device exists in various forms, including but not limited to:
(1) Mobile communication devices: such devices are characterized by mobile communication functions, with voice and data communication as their main goal. Such terminals include smartphones (e.g., the iPhone), multimedia phones, feature phones, low-end phones, and so on.
(2) Ultra-mobile personal computer devices: such devices belong to the category of personal computers, have computing and processing functions, and generally also have mobile Internet access. Such terminals include PDA, MID, and UMPC devices, e.g., the iPad.
(3) Portable entertainment devices: such devices can display and play multimedia content. They include audio and video players (e.g., the iPod), handheld game consoles, e-book readers, smart toys, and portable in-vehicle navigation devices.
(4) Servers: devices that provide computing services. A server comprises a processor, hard disk, memory, system bus, and so on; its architecture is similar to that of a general-purpose computer, but because highly reliable services must be provided, the requirements on processing capability, stability, reliability, security, scalability, manageability, and the like are higher.
(5) Other electronic devices with data interaction functions.
To implement the above embodiments, an embodiment of the present invention further proposes a non-transitory computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it implements the human action recognition method described in the above method embodiments.
In the description of this specification, descriptions referring to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" mean that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, provided they do not contradict each other, those skilled in the art may combine different embodiments or examples described in this specification and the features of different embodiments or examples.
In addition, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "multiple" means at least two, for example two or three, unless specifically defined otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that comprises one or more executable instructions for implementing the steps of a custom logic function or process; and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in the flowcharts, or otherwise described herein, may for example be considered an ordered list of executable instructions for implementing logic functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, device, or apparatus (such as a computer-based system, a system including a processor, or another system that can fetch instructions from an instruction execution system, device, or apparatus and execute them). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, an instruction execution system, device, or apparatus. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CDROM). The computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that each part of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one of the following technologies known in the art, or a combination thereof, may be used: a discrete logic circuit with logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention; those of ordinary skill in the art may change, modify, replace, and vary the above embodiments within the scope of the present invention.
The above are merely specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can readily occur to those familiar with the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the scope of the claims.

Claims (9)

1. A human action recognition method, characterized by comprising the following steps:
while a standard action is displayed, collecting video picture frames of a human action;
identifying each human joint in the video picture frames;
connecting each pair of adjacent joints among the human joints to obtain the line between the two adjacent joints;
calculating the actual angle between the line connecting the two adjacent joints and a preset reference direction;
determining, according to the difference between the actual angle and a standard angle, whether the human action matches the standard action; wherein the standard angle is the angle between the line connecting each pair of adjacent joints and the reference direction when the standard action is performed.
2. The recognition method according to claim 1, characterised in that determining whether the human action matches the standard action according to the difference between the actual angle and the standard angle comprises:
for the line between each pair of adjacent joints, calculating the difference between the corresponding standard angle and the actual angle;
if the difference calculated for the line between every pair of adjacent joints is within an error range, determining that the human action matches the standard action;
if the difference calculated for the line between at least one pair of adjacent joints is not within the error range, determining that the human action does not match the standard action.
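
Illustrative sketch (not part of the claims): the matching rule of claim 2 is an all-or-nothing test over the per-line differences. A minimal Python sketch, assuming an invented shared error range of 0 to 10 degrees and invented line names:

# Hypothetical per-line differences (degrees) between actual and standard angles.
differences = {"shoulder-elbow": 4.2, "elbow-wrist": 7.9, "hip-knee": 2.5}
ERROR_RANGE = (0.0, 10.0)  # assumed (lower, upper) limits

def action_matches(diffs, error_range):
    lower, upper = error_range
    # Claim 2: a single out-of-range line makes the whole action a mismatch.
    return all(lower <= d <= upper for d in diffs.values())

print("match" if action_matches(differences, ERROR_RANGE) else "mismatch")
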
3. The recognition method according to claim 2, characterised in that, after determining that the human action matches the standard action, the method further comprises:
for the line between each pair of adjacent joints, determining a scoring coefficient of the line according to the corresponding difference and the error range;
generating evaluation information of the line according to the scoring coefficient of the line and a score value corresponding to the line; wherein the evaluation information of the line comprises a micro-action score value, and the micro-action score value is the product of the scoring coefficient of the line and the score value corresponding to the line;
generating evaluation information of the human action according to the evaluation information of the lines between all pairs of adjacent joints; wherein the evaluation information of the human action comprises a human action score value, and the human action score value is the sum of the micro-action score values.
4. The recognition method according to claim 3, characterised in that determining the scoring coefficient of the line according to the corresponding difference and the error range comprises:
calculating the scoring coefficient p of the line using the formula p = 1 - 2Δ/(a - b); wherein b is the lower limit of the error range, a is the upper limit of the error range, and Δ is the difference.
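
Illustrative sketch (not part of the claims): claims 3 and 4 combine into one scoring pass. Each line's coefficient is p = 1 - 2Δ/(a - b), its micro-action score is p times the line's score value, and the human action score is the sum over all lines. The score values, error limits and differences below are invented for the example:

ERROR_LOWER, ERROR_UPPER = 0.0, 10.0  # assumed limits b and a, in degrees

def scoring_coefficient(delta, a=ERROR_UPPER, b=ERROR_LOWER):
    # Claim 4: p = 1 - 2*delta/(a - b); delta = 0 gives p = 1,
    # delta = (a - b)/2 gives p = 0.
    return 1.0 - 2.0 * delta / (a - b)

# Hypothetical lines: name -> (difference in degrees, score value of the line).
lines = {"shoulder-elbow": (1.0, 30.0), "elbow-wrist": (3.0, 30.0), "hip-knee": (2.5, 40.0)}

micro_scores = {name: scoring_coefficient(d) * s for name, (d, s) in lines.items()}
for name, score in micro_scores.items():
    print(f"{name}: micro-action score = {score:.1f}")
print(f"human action score = {sum(micro_scores.values()):.1f}")  # claim 3: sum of micro-action scores

Note that with these limits the coefficient goes negative once Δ exceeds (a - b)/2; whether such scores are clamped at zero is not specified by the claims.
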
5. The recognition method according to claim 2, characterised in that, after determining that the human action does not match the standard action, the method further comprises:
setting the human action score value in the evaluation information of the human action to zero.
6. The recognition method according to any one of claims 1 to 5, characterised in that, before collecting the video picture frame of the human action while the standard action is being displayed, the method further comprises:
obtaining a selected audio and the standard action corresponding to each timing node in the audio;
playing the audio;
displaying the corresponding standard action when the audio is played to each timing node.
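
Illustrative sketch (not part of the claims): the pre-step of claim 6 can be pictured as a timeline mapping timing nodes in the selected audio to standard actions. The node times and action names are invented, and a real player would be driven by the audio clock rather than sleep:

import time

# Hypothetical timeline: timing node (seconds into the audio) -> standard action.
standard_actions = {2.0: "raise_both_arms", 5.5: "squat", 9.0: "side_kick"}

def run_show(timeline):
    # Simulate playback: display each standard action when its timing node is reached.
    start = time.monotonic()
    for node in sorted(timeline):
        time.sleep(max(0.0, node - (time.monotonic() - start)))
        print(f"t = {node:.1f}s: display standard action '{timeline[node]}'")

run_show(standard_actions)
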
7. The recognition method according to claim 6, characterised in that the method further comprises:
obtaining the evaluation information of each human action when playback of the audio ends; wherein the evaluation information of a human action indicates the degree of difference between the human action and the corresponding standard action;
generating a target video according to the audio, each video picture frame and the action evaluation information of each human action.
8. A human action recognition device, characterised in that the device comprises:
a collection module, configured to collect a video picture frame of the human action while a standard action is being displayed;
an identification module, configured to identify each joint of the human body in the video picture frame;
a connection module, configured to connect each pair of adjacent joints among the joints of the human body to obtain a line between the two adjacent joints;
a calculation module, configured to calculate an actual angle between the line between the two adjacent joints and a preset reference direction;
a determination module, configured to determine, according to a difference between the actual angle and a standard angle, whether the human action matches the standard action; wherein the standard angle is the angle between the line between each pair of adjacent joints and the reference direction when the standard action is performed.
9. An electronic device, characterised in that it comprises a housing, a processor, a memory, a circuit board and a power supply circuit, wherein the circuit board is arranged inside the space enclosed by the housing, and the processor and the memory are arranged on the circuit board; the power supply circuit is configured to supply power to each circuit or component of the electronic device; the memory is configured to store executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to perform the human action recognition method according to any one of claims 1 to 7.
CN201711182909.3A 2017-11-23 2017-11-23 Human body action recognition method and device and electronic equipment Active CN107943291B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201711182909.3A CN107943291B (en) 2017-11-23 2017-11-23 Human body action recognition method and device and electronic equipment
PCT/CN2018/098598 WO2019100754A1 (en) 2017-11-23 2018-08-03 Human body movement identification method and device, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711182909.3A CN107943291B (en) 2017-11-23 2017-11-23 Human body action recognition method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN107943291A true CN107943291A (en) 2018-04-20
CN107943291B CN107943291B (en) 2021-06-08

Family

ID=61930056

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711182909.3A Active CN107943291B (en) 2017-11-23 2017-11-23 Human body action recognition method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN107943291B (en)
WO (1) WO2019100754A1 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875687A (en) * 2018-06-28 2018-11-23 泰康保险集团股份有限公司 A kind of appraisal procedure and device of nursing quality
CN109432753A (en) * 2018-09-26 2019-03-08 Oppo广东移动通信有限公司 Act antidote, device, storage medium and electronic equipment
CN109462776A (en) * 2018-11-29 2019-03-12 北京字节跳动网络技术有限公司 A kind of special video effect adding method, device, terminal device and storage medium
CN109621332A (en) * 2018-12-29 2019-04-16 北京卡路里信息技术有限公司 A kind of attribute determining method, device, equipment and the storage medium of body-building movement
WO2019100754A1 (en) * 2017-11-23 2019-05-31 乐蜜有限公司 Human body movement identification method and device, and electronic device
CN110728181A (en) * 2019-09-04 2020-01-24 北京奇艺世纪科技有限公司 Behavior evaluation method and apparatus, computer device, and storage medium
CN111105345A (en) * 2018-10-26 2020-05-05 北京微播视界科技有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111107279A (en) * 2018-10-26 2020-05-05 北京微播视界科技有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111158486A (en) * 2019-12-31 2020-05-15 恒信东方文化股份有限公司 Method and system for recognizing action of singing and jumping program
CN112399234A (en) * 2019-08-18 2021-02-23 聚好看科技股份有限公司 Interface display method and display equipment
CN112487940A (en) * 2020-11-26 2021-03-12 腾讯音乐娱乐科技(深圳)有限公司 Video classification method and device
US11924513B2 (en) 2019-08-18 2024-03-05 Juhaokan Technology Co., Ltd. Display apparatus and method for display user interface

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112998700B (en) * 2021-05-26 2021-09-24 北京欧应信息技术有限公司 Apparatus, system and method for assisting assessment of a motor function of an object

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105307017A (en) * 2015-11-03 2016-02-03 Tcl集团股份有限公司 Method and device for correcting posture of smart television user

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5294983B2 (en) * 2009-05-21 2013-09-18 Kddi株式会社 Portable terminal, program and method for determining direction of travel of pedestrian using acceleration sensor and geomagnetic sensor
CN105278685B (en) * 2015-09-30 2018-12-21 陕西科技大学 A kind of assisted teaching system and teaching method based on EON
CN107943291B (en) * 2017-11-23 2021-06-08 卓米私人有限公司 Human body action recognition method and device and electronic equipment

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105307017A (en) * 2015-11-03 2016-02-03 Tcl集团股份有限公司 Method and device for correcting posture of smart television user

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019100754A1 (en) * 2017-11-23 2019-05-31 乐蜜有限公司 Human body movement identification method and device, and electronic device
CN108875687A (en) * 2018-06-28 2018-11-23 泰康保险集团股份有限公司 A kind of appraisal procedure and device of nursing quality
CN109432753A (en) * 2018-09-26 2019-03-08 Oppo广东移动通信有限公司 Act antidote, device, storage medium and electronic equipment
CN111107279A (en) * 2018-10-26 2020-05-05 北京微播视界科技有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111105345B (en) * 2018-10-26 2021-11-09 北京微播视界科技有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111107279B (en) * 2018-10-26 2021-06-29 北京微播视界科技有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111105345A (en) * 2018-10-26 2020-05-05 北京微播视界科技有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN109462776B (en) * 2018-11-29 2021-08-20 北京字节跳动网络技术有限公司 Video special effect adding method and device, terminal equipment and storage medium
CN109462776A (en) * 2018-11-29 2019-03-12 北京字节跳动网络技术有限公司 A kind of special video effect adding method, device, terminal device and storage medium
CN109621332A (en) * 2018-12-29 2019-04-16 北京卡路里信息技术有限公司 A kind of attribute determining method, device, equipment and the storage medium of body-building movement
CN112399234A (en) * 2019-08-18 2021-02-23 聚好看科技股份有限公司 Interface display method and display equipment
US11924513B2 (en) 2019-08-18 2024-03-05 Juhaokan Technology Co., Ltd. Display apparatus and method for display user interface
CN110728181A (en) * 2019-09-04 2020-01-24 北京奇艺世纪科技有限公司 Behavior evaluation method and apparatus, computer device, and storage medium
CN110728181B (en) * 2019-09-04 2022-07-12 北京奇艺世纪科技有限公司 Behavior evaluation method and apparatus, computer device, and storage medium
CN111158486A (en) * 2019-12-31 2020-05-15 恒信东方文化股份有限公司 Method and system for recognizing action of singing and jumping program
CN111158486B (en) * 2019-12-31 2023-12-05 恒信东方文化股份有限公司 Method and system for identifying singing jump program action
CN112487940A (en) * 2020-11-26 2021-03-12 腾讯音乐娱乐科技(深圳)有限公司 Video classification method and device
WO2022111168A1 (en) * 2020-11-26 2022-06-02 腾讯音乐娱乐科技(深圳)有限公司 Video classification method and apparatus
CN112487940B (en) * 2020-11-26 2023-02-28 腾讯音乐娱乐科技(深圳)有限公司 Video classification method and device

Also Published As

Publication number Publication date
WO2019100754A1 (en) 2019-05-31
CN107943291B (en) 2021-06-08

Similar Documents

Publication Publication Date Title
CN107943291A (en) Recognition methods, device and the electronic equipment of human action
CN107968921A (en) Video generation method, device and electronic equipment
JP6369909B2 (en) Facial expression scoring device, dance scoring device, karaoke device, and game device
US8854356B2 (en) Storage medium having stored therein image processing program, image processing apparatus, image processing system, and image processing method
CN109432753A Action correction method and device, storage medium and electronic equipment
CN107920203A Image capture method and device, and electronic equipment
CN107952238A (en) Video generation method, device and electronic equipment
US9898850B2 (en) Support and complement device, support and complement method, and recording medium for specifying character motion or animation
WO2019100757A1 (en) Video generation method and device, and electronic apparatus
US8982229B2 (en) Storage medium recording information processing program for face recognition process
US10423978B2 (en) Method and device for playing advertisements based on relationship information between viewers
CN109325450A (en) Image processing method, device, storage medium and electronic equipment
CN108537110A Virtual reality-based device and method for generating a three-dimensional face model
CN108537867A Video rendering method and apparatus according to the user's limb motion
CN109472296A Model training method and device based on gradient boosting decision trees
CN102647606B (en) Stereoscopic image processor, stereoscopic image interaction system and stereoscopic image display method
CN109068053A Image special effect display method and device, and electronic equipment
CN110209285B (en) Sand table display system based on gesture control
CN109996107A (en) Video generation method, device and system
CN106582005A Data synchronization interaction method and device in virtual games
CN108769649B Advanced processing device and three-dimensional image apparatus
CN110728191A Sign language translation method, and MR-based sign language-voice interaction method and system
CN109670385A Method and device for updating expressions in an application program
CN108629821A Animation generation method and device
CN107656611A Somatosensory game implementation method and device, and terminal device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20190626

Address after: Room 1101, Santai Commercial Building, 139 Connaught Road, Hong Kong, China

Applicant after: Hong Kong Lemi Co., Ltd.

Address before: Cayman Islands, Grand Cayman, Camana Bay, Casia District, Seitus Chamber of Commerce, 2547

Applicant before: Happy Honey Company Limited

TA01 Transfer of patent application right

Effective date of registration: 20210524

Address after: 25, 5th Floor, Shuangjingfang Office Building, 3 Frisha Street, Singapore

Applicant after: Zhuomi Private Ltd.

Address before: Room 1101, Santai Commercial Building, 139 Connaught Road, Hong Kong, China

Applicant before: HONG KONG LIVE.ME Corp., Ltd.

GR01 Patent grant