Summary of the Invention
The present invention aims to solve at least one of the technical problems in the related art.
To this end, a first object of the present invention is to provide a video generation method that synchronously captures video picture frames while an audio is played, adds images of the standard actions displayed on the video capture interface to those frames, and generates a target video when the video ends, thereby solving the technical problem in the prior art that a user playing a motion-sensing game feels little sense of immersion and that no corresponding video can be generated automatically.
A second object of the present invention is to provide a video generation apparatus.
A third object of the present invention is to provide an electronic device.
A fourth object of the present invention is to provide a non-transitory computer-readable storage medium.
To achieve the above objects, an embodiment of a first aspect of the present invention provides a video generation method, including:
obtaining a selected audio and a standard action corresponding to each timing node in the audio;
playing the audio, and capturing video picture frames while the audio is being played;
when the audio is played to each timing node, displaying the corresponding standard action;
adding, to each video picture frame, an image of the standard action displayed synchronously when the frame was captured; and
when the audio playback ends, generating a target video according to the audio and the video picture frames.
Optionally, in a first possible implementation of the first aspect, the method further includes:
performing human action recognition on the video picture frames captured synchronously while the standard action is displayed;
generating action evaluation information for the human action according to the degree of difference between the standard action and the human action; and
adding the action evaluation information of the human action to the synchronously captured video picture frames.
Optionally, in a second possible implementation of the first aspect, the action evaluation information includes a human action score and an animation effect corresponding to the interval to which the human action score belongs.
Optionally, in a third possible implementation of the first aspect, before generating the action evaluation information for the human action according to the degree of difference between the standard action and the human action, the method further includes:
determining that the standard action matches the human action.
Optionally, in a fourth possible implementation of the first aspect, after determining that the standard action matches the human action, the method further includes:
ending the display process of the standard action.
Optionally, in a fifth possible implementation of the first aspect, multiple video picture frames are captured synchronously while the standard action is displayed, and each video picture frame has corresponding action evaluation information. Adding the action evaluation information of the human action to the synchronously captured video picture frames includes:
screening the multiple pieces of generated action evaluation information and retaining the action evaluation information with the highest evaluation; and
adding the action evaluation information with the highest evaluation to at least one of the multiple synchronously captured video picture frames, wherein the at least one video picture frame displays the human action corresponding to the action evaluation information with the highest evaluation.
Optionally, in a sixth possible implementation of the first aspect, after the audio playback ends, the method further includes:
generating a total score interface according to the action evaluation information of each human action; and
displaying the total score interface.
Optionally, in a seventh possible implementation of the first aspect, after generating the target video, the method further includes:
publishing the target video.
In the video generation method of this embodiment of the present invention, a selected audio and the standard action corresponding to each timing node in the audio are obtained; the audio is played, and video picture frames are captured while the audio is being played; when the audio is played to each timing node, the corresponding standard action is displayed; an image of the standard action displayed synchronously at capture time is added to each video picture frame; and when the audio playback ends, a target video is generated according to the audio and the video picture frames. By capturing video picture frames while the selected audio is played, adding images of the standard actions displayed on the video capture interface, and generating the target video when the video ends, the method solves the technical problem in the prior art that a user playing a motion-sensing game feels little sense of immersion and that no corresponding video can be generated automatically.
To achieve the above objects, an embodiment of a second aspect of the present invention provides a video generation apparatus, including:
an obtaining module, configured to obtain a selected audio and a standard action corresponding to each timing node in the audio;
a playing module, configured to play the audio and capture video picture frames while the audio is being played;
a first display module, configured to display the corresponding standard action when the audio is played to each timing node;
a first adding module, configured to add, to each video picture frame, an image of the standard action displayed synchronously at capture time; and
a first generation module, configured to generate a target video according to the audio and the video picture frames when the audio playback ends.
Optionally, in a first possible implementation of the second aspect, the apparatus further includes:
a recognition module, configured to perform human action recognition on the video picture frames captured synchronously while the standard action is displayed;
a generation module, configured to generate action evaluation information for the human action according to the degree of difference between the standard action and the human action; and
a second adding module, configured to add the action evaluation information of the human action to the synchronously captured video picture frames.
Optionally, in a second possible implementation of the second aspect, the action evaluation information includes a human action score and an animation effect corresponding to the interval to which the human action score belongs.
Optionally, in a third possible implementation of the second aspect, the apparatus further includes:
a determining module, configured to determine that the standard action matches the human action.
Optionally, in a fourth possible implementation of the second aspect, the apparatus further includes:
an ending module, configured to end the display process of the standard action.
Optionally, in a fifth possible implementation of the second aspect, the video picture frames captured synchronously by the recognition module while the standard action is displayed are multiple, and the generation module generates corresponding action evaluation information for each video picture frame. The second adding module includes:
a screening unit, configured to screen the multiple pieces of generated action evaluation information and retain the action evaluation information with the highest evaluation; and
an adding unit, configured to add the action evaluation information with the highest evaluation to at least one of the multiple synchronously captured video picture frames, wherein the at least one video picture frame displays the human action corresponding to the action evaluation information with the highest evaluation.
Optionally, in a sixth possible implementation of the second aspect, the apparatus further includes:
a second display module, configured to generate a total score interface according to the action evaluation information of each human action, and display the total score interface.
Optionally, in a seventh possible implementation of the second aspect, the apparatus further includes:
a publishing module, configured to publish the target video.
In the video generation apparatus of this embodiment of the present invention, the obtaining module obtains a selected audio and the standard action corresponding to each timing node in the audio; the playing module plays the audio and captures video picture frames while the audio is being played; the first display module displays the corresponding standard action when the audio is played to each timing node; the first adding module adds, to each video picture frame, an image of the standard action displayed synchronously at capture time; and the first generation module generates a target video according to the audio and the video picture frames when the audio playback ends. By capturing video picture frames while the selected audio is played, adding images of the standard actions displayed on the video capture interface, and generating the target video when the video ends, the apparatus solves the technical problem in the prior art that a user playing a motion-sensing game feels little sense of immersion and that no corresponding video can be generated automatically.
To achieve the above objects, an embodiment of a third aspect of the present invention provides an electronic device, including a housing, a processor, a memory, a circuit board, and a power supply circuit, wherein the circuit board is arranged inside the space enclosed by the housing, and the processor and the memory are arranged on the circuit board; the power supply circuit is configured to supply power to each circuit or component of the electronic device; the memory is configured to store executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to perform the video generation method described in the first aspect.
To achieve the above objects, an embodiment of a fourth aspect of the present invention provides a non-transitory computer-readable storage medium having a computer program stored thereon, where the program, when executed by a processor, implements the video generation method described in the first aspect.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and will in part become apparent from the following description or be learned through practice of the present invention.
Embodiment
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout represent the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the accompanying drawings are exemplary; they are intended to explain the present invention and should not be construed as limiting the present invention.
The video generation method, apparatus, and electronic device of the embodiments of the present invention are described below with reference to the accompanying drawings.
The electronic device in this embodiment may specifically be a mobile phone. Those skilled in the art will appreciate that the electronic device may also be another mobile terminal, which may perform human action recognition with reference to the solution provided in this embodiment. In the following embodiments, the video generation method is explained by taking a mobile phone as an example of the electronic device.
Fig. 1 is a schematic flowchart of a video generation method provided by an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps.
Step 101: obtain a selected audio and a standard action corresponding to each timing node in the audio.
Specifically, a mobile phone application is opened to enter the video capture interface. As a possible implementation, before entering the video capture interface, an audio selection interface is entered first, where the user may click a favorite audio in a drop-down menu; each timing node in the audio has a corresponding standard action. The audio is selected via a confirm button, and the video capture interface is then entered.
Step 102: play the audio, and capture video picture frames while the audio is being played.
Specifically, after entering the video capture interface, the audio starts to play, and video picture frames are captured simultaneously while the audio is being played.
Step 103: when the audio is played to each timing node, display the corresponding standard action.
Specifically, when the audio is played to each timing node, the standard action corresponding to that timing node is displayed; while the user performs the same action, video picture frames of the user performing the human action according to the displayed standard action are captured.
Step 104: add, to each video picture frame, an image of the standard action displayed synchronously at capture time.
Specifically, when each timing node of the audio is played and the corresponding standard action is displayed, the synchronously captured video picture frames of the human action include multiple frames. As a possible implementation, taking the time point at which the standard action is displayed as the time reference, the N subsequent frames containing the human action are captured, where the value of N may be determined by those skilled in the art according to the practical application; the image of the standard action displayed synchronously at capture time is added to these N frames.
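As an illustrative sketch (not part of the original disclosure), the N-frame selection described above may be expressed as follows; the function name, the timestamps, and the frame layout are assumptions for illustration.

```python
# Sketch of the N-frame selection: given the time point at which a standard
# action is displayed, keep the N captured frames that follow it.

def frames_after(display_time, frames, n):
    """Return the first n frames whose timestamp is at or after display_time.

    `frames` is a list of (timestamp, frame_data) pairs in capture order.
    """
    selected = [f for f in frames if f[0] >= display_time]
    return selected[:n]

# Example: frames captured every 40 ms, standard action shown at t = 120 ms.
frames = [(t, "frame-%d" % i) for i, t in enumerate(range(0, 400, 40))]
picked = frames_after(120, frames, n=3)
# picked -> [(120, 'frame-3'), (160, 'frame-4'), (200, 'frame-5')]
```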
Step 105: when the audio playback ends, generate a target video according to the audio and the video picture frames.
Specifically, the target video is generated from the audio and the video picture frames captured while the audio was played. As a possible implementation, in the generated target video, a small cartoon figure scrolls across the screen to display the standard action; when the action performed by the user matches the standard action, the figure disappears, and the score of the action and the corresponding animation effect are displayed.
After the target video is generated, the user may publish it to a community or to a social networking site, which improves the user experience.
In the video generation method of this embodiment of the present invention, a selected audio and the standard action corresponding to each timing node in the audio are obtained; the audio is played, and video picture frames are captured while the audio is being played; when the audio is played to each timing node, the corresponding standard action is displayed; an image of the standard action displayed synchronously at capture time is added to each video picture frame; and when the audio playback ends, a target video is generated according to the audio and the video picture frames. By capturing video picture frames while the selected audio is played, adding images of the standard actions displayed on the video capture interface, and generating the target video when the video ends, the method solves the technical problem in the prior art that a user playing a motion-sensing game feels little sense of immersion and that no corresponding video can be generated automatically.
To clearly explain the previous embodiment, this embodiment provides another video generation method. Fig. 2 is a schematic flowchart of another video generation method provided by an embodiment of the present invention. As shown in Fig. 2, the method includes the following steps.
Step 201: obtain a selected audio and a standard action corresponding to each timing node in the audio.
Step 202: play the audio, and start capturing video picture frames.
Reference may be made to steps 101 to 102 in the previous embodiment; details are not repeated here.
Step 203: when the audio is played to each timing node, display the corresponding standard action along a preset trajectory.
Specifically, when a corresponding timing node is reached, the corresponding standard action is displayed on the video capture interface of the camera device. As a possible implementation, the corresponding standard action may be displayed on the video capture interface in the form of a floating frame; as another possible implementation, the corresponding standard action may scroll across the video capture interface in the form of a bullet-screen comment.
Step 204: while the corresponding standard action is displayed, perform human action recognition on the synchronously captured video picture frames.
Human action recognition is performed on the video picture frames. As a possible implementation, each joint of the human body may be identified in a video picture frame, and human action recognition is then performed according to the identified joints.
There are many ways to identify the joints of the human body. As a possible implementation of this embodiment, according to the depth information carried in the video picture frames of the human action, the human body in each frame can be separated from the background, and each joint of the human body can then be identified. In order for the video picture frames to carry depth information, the camera device used to capture the video picture frames of the human body may be one capable of capturing depth information, so that the human body in the image is identified from the captured depth information; examples include a dual camera or an RGBD (Red-Green-Blue Depth) camera, which obtains depth information while imaging. Depth information may additionally be acquired through a structured-light or TOF lens; the options are not enumerated here. Specifically, as a possible implementation, according to the acquired depth information and in combination with face recognition technology, the face region and its position information in the image are identified, so that the pixels included in the face region and their corresponding depth information are obtained, and the average of the depth information corresponding to the face pixels is calculated. Further, since the human body and the face are substantially on the same imaging plane, pixels whose depth differs from the average depth of the face pixels by no more than a threshold are identified as belonging to the human body. In this way, the human body and its contour are identified, the depth information and position information of each pixel within the body and its contour are determined, and the human body can be separated from the background. Further, to facilitate identifying the joints of the human body and to exclude interference from the background, the image may be binarized so that the pixel value of the background is 0 and the pixel value of the human body is 1.
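The depth-based binarization described above can be sketched as follows; this is a minimal illustration using NumPy, with a toy depth map and threshold that are assumptions, not values from the disclosure.

```python
# Sketch: the mean depth of the face pixels serves as a reference, and every
# pixel whose depth is within a threshold of that mean is marked as human (1),
# everything else as background (0).
import numpy as np

def binarize_by_depth(depth, face_mask, threshold):
    """Binarize a depth map: 1 = human body, 0 = background."""
    face_mean = depth[face_mask].mean()            # average face depth
    body = np.abs(depth - face_mean) <= threshold  # within-threshold pixels
    return body.astype(np.uint8)

# Toy 3x3 depth map (arbitrary units); the face occupies one pixel.
depth = np.array([[9.0, 2.1, 9.0],
                  [2.0, 2.0, 2.2],
                  [9.0, 2.1, 9.0]])
face_mask = np.zeros_like(depth, dtype=bool)
face_mask[1, 1] = True                             # face pixel at depth 2.0
mask = binarize_by_depth(depth, face_mask, threshold=0.5)
# mask:
# [[0 1 0]
#  [1 1 1]
#  [0 1 0]]
```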
Further, according to the identified position information of the face and the human body, and according to the proportional relationship between limbs and body height in human anatomy, the position information of each joint of the human body can be calculated. For example, Fig. 3 is a schematic diagram, provided by this embodiment, of the proportions of limbs to body height in human anatomy; Fig. 3 lists the proportional relationship of each joint within the limbs. According to the position information of the face and the human body, the position of the neck joint in the video frame can be determined, i.e., the two-dimensional coordinates (x, y) of the neck joint are obtained. As shown in Fig. 3, the difference between the height of the shoulder joints and the height of the neck joint is fixed; according to the coordinates of the neck joint and this difference, the row in which the shoulder joints lie can be determined. Since the background pixel value is 0 and the human body pixel value is 1, the leftmost and rightmost points in that row whose pixel value is 1 are the points corresponding to the shoulder joints, so that the two-dimensional coordinates (x1, y1) of the left shoulder joint and (x2, y2) of the right shoulder joint are determined.
According to the determined position of the left shoulder joint, a circle is drawn with the standard distance between the left shoulder joint and the left elbow joint in Fig. 3 as its diameter. Since the pixel value of the background is 0, when the leftmost and rightmost points on the circle whose pixel value is 1 are identified, the two-dimensional coordinates (x3, y3) of the left elbow joint can be determined.
Similarly, the two-dimensional coordinates of the other joints of the human body can be further identified and determined. The joints of the human body include at least: the neck joint, left shoulder joint, right shoulder joint, left elbow joint, right elbow joint, left wrist joint, right wrist joint, left knee joint, left ankle joint, right knee joint, right ankle joint, and so on; since there are many joints, they are not all enumerated here. The method for identifying and determining the two-dimensional coordinates of the other joints follows the same principle and is not repeated here.
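The shoulder-joint localization described above (scanning the known shoulder row of the binarized image for the outermost body pixels) can be sketched as follows; the toy image and the function name are illustrative assumptions.

```python
# Sketch: given a binarized image (body = 1, background = 0) and the row in
# which the shoulders lie (derived from the neck position and the fixed
# neck-to-shoulder height difference), the leftmost and rightmost body pixels
# in that row are taken as the two shoulder joints.

def shoulder_joints(binary, shoulder_row):
    """Return ((x1, y1), (x2, y2)) for the two shoulder joints."""
    row = binary[shoulder_row]
    cols = [x for x, v in enumerate(row) if v == 1]  # body pixels in the row
    return (cols[0], shoulder_row), (cols[-1], shoulder_row)

# Toy binary image: the shoulders lie in row 1.
binary = [
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],   # leftmost body pixel x = 1, rightmost x = 3
    [0, 0, 1, 0, 0],
]
left, right = shoulder_joints(binary, shoulder_row=1)
# left -> (1, 1), right -> (3, 1)
```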
Step 205: determine whether the standard action matches the human action; if they match, perform step 206; if they do not match, perform step 207.
Optionally, according to the identified human joints, every two adjacent joints are connected to obtain the line segment between them, and the actual angle between each such line segment and a preset reference direction is calculated. Whether the human action matches the standard action is determined according to the difference between the actual angle and the standard angle, where the standard angle is the angle between the line segment connecting each pair of adjacent joints and the reference direction when the standard action is performed. Specifically, for the line segment between each pair of adjacent joints, the difference between the corresponding standard angle and the actual angle is calculated; if the differences calculated for all the line segments are within the error range, it is determined that the human action matches the standard action; if the difference calculated for at least one line segment is not within the error range, it is determined that the human action does not match the standard action.
As an example, the method for matching a standard action with a human action is described by taking the standard action displayed at one timing node of the audio and the actual human action the user performs in imitation of that standard action. Fig. 4A is a schematic structural diagram of a standard action provided by an embodiment of the present invention; the figure shows the standard action displayed at a certain timing node and the joints involved in the standard action, which include six joints in total: the left wrist joint, right wrist joint, left elbow joint, right elbow joint, left shoulder joint, and right shoulder joint.
Fig. 4B is a schematic structural diagram of an actual action provided by an embodiment of the present invention; it shows the actual action the user makes while the standard action of Fig. 4A is displayed. Each joint of the human body is identified from the captured human action, and the line segments between adjacent joints are obtained: in Fig. 4B, line segment 1 between the right wrist joint and the right elbow joint, line segment 2 between the right elbow joint and the right shoulder joint, line segment 3 between the right shoulder joint and the left shoulder joint, line segment 4 between the left shoulder joint and the left elbow joint, and line segment 5 between the left elbow joint and the left wrist joint. For ease of description, the action corresponding to each line segment is referred to as a sub-action of the actual action the user makes; all the sub-actions together form the actual action.
Assuming the preset reference direction is the horizontal direction of the screen, the calculated angle between line segment 1 and the horizontal direction of the screen is 35 degrees, the angle between line segment 2 and the horizontal direction is 0 degrees, the angle between line segment 3 and the horizontal direction is 0 degrees, the angle between line segment 4 and the horizontal direction is 0 degrees, and the angle between line segment 5 and the horizontal direction is 130 degrees.
Further, whether the human action matches the standard action is judged. Taking line segment 1 between the right wrist joint and the right elbow joint in Fig. 4B as an example: the standard angle corresponding to line segment 1 in Fig. 4A is 45 degrees, the actual angle measured for the actual action in Fig. 4B is 35 degrees, and the difference is 10 degrees. If the preset difference threshold is, for example, 15 degrees, the difference of 10 degrees is less than 15 degrees, so it can be determined that the sub-action corresponding to line segment 1 matches the corresponding sub-action in the standard action. Further, whether the sub-actions corresponding to line segments 2, 3, 4, and 5 match the sub-actions in the standard action is determined respectively. If all the sub-actions match the standard action, the human action matches the standard action; if any sub-action does not match the corresponding sub-action in the standard action, the human action does not match the standard action.
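The matching rule and the worked example above can be sketched as follows. The helper names are illustrative, and only the 45-degree standard angle for segment 1 is given in the text; the standard angles assumed here for segments 2-5 simply equal the measured ones.

```python
# Sketch of angle-based matching: for each adjacent-joint segment, compute
# the actual angle against the horizontal reference direction, compare it
# with the standard angle, and declare a match only if every per-segment
# difference stays within the error threshold.
import math

def segment_angle(p1, p2):
    """Angle (degrees) between the segment p1-p2 and the horizontal axis."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return abs(math.degrees(math.atan2(dy, dx)))

def action_matches(actual_angles, standard_angles, error_limit):
    """True only if every per-segment difference is within error_limit."""
    return all(abs(a - s) <= error_limit
               for a, s in zip(actual_angles, standard_angles))

# Angles from the worked example (segments 1-5); segment 1 differs by 10
# degrees from its 45-degree standard, within the 15-degree threshold.
actual = [35.0, 0.0, 0.0, 0.0, 130.0]
standard = [45.0, 0.0, 0.0, 0.0, 130.0]
print(action_matches(actual, standard, error_limit=15.0))  # True
```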
Step 206: if they match, end the display process of the standard action.
Specifically, if it is determined that the human action matches the standard action, the picture used to display the standard action is removed from the screen, thereby ending the display process of the standard action.
Step 207: if they do not match, continue to display the standard action until the standard action moves to the end position of the preset trajectory.
Specifically, if the human action does not match the standard action, the standard action continues to be displayed; when the standard action moves to the end position of the preset trajectory, the picture used to display the standard action is removed from the screen, thereby ending the display of the standard action.
Step 208: generate and display action evaluation information for the human action according to the degree of difference between the standard action and the human action.
Optionally, according to the human action and the corresponding joints identified while the standard action was displayed, for the line segment between each pair of adjacent joints, a scoring coefficient of the line segment is determined according to the corresponding difference and the error range; the actual score of the line segment is generated according to its scoring coefficient and the score assigned to the line segment, and the evaluation information of the human action is then generated. Specifically, the scoring coefficient p can be calculated according to the formula p = 1 - 2Δ/(a - b), where b is the lower limit of the error range, a is the upper limit of the error range, and Δ is the difference. Taking line segment 1 between the right wrist joint and the right elbow joint in Fig. 4B as an example, its corresponding difference is 10 degrees; if, for example, the upper limit of the error range is +50 degrees and the lower limit is -50 degrees, then p = 1 - 2 × 10/(50 - (-50)) = 0.8, i.e., the scoring coefficient of line segment 1 is 0.8. Similarly, it can be calculated that the scoring coefficients of line segments 2, 3, and 4 are each 1, and the scoring coefficient of line segment 5 is 0.9.
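The scoring-coefficient formula above can be checked directly; this is the formula as given in the text, applied to the worked example.

```python
# p = 1 - 2*delta/(a - b), where a and b are the upper and lower limits of
# the error range and delta is the angle difference for one segment.

def scoring_coefficient(delta, a, b):
    return 1 - 2 * delta / (a - b)

# Worked example from the text: delta = 10 degrees, a = +50, b = -50.
p = scoring_coefficient(10, 50, -50)
# p -> 0.8
```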
Further, the actual score of each line segment is calculated, and the evaluation information of the human action is then generated. The evaluation information of the human action includes the human action score and the animation effect corresponding to the interval to which the score belongs. For example, with a full human action score of 100, the animation effect corresponding to the interval above 90 points is a flickering display of "perfect", and the animation effect corresponding to the interval of 80-90 points is a flickering display of "good". Specifically, the evaluation information of a line segment includes the sub-action score, which is the product of the scoring coefficient of the line segment and the score assigned to the line segment. In Fig. 4B, the full score of the action is 100 points; since there are 5 sub-actions, the full score of each sub-action is 20 points. Multiplying the full sub-action score of 20 for line segment 1 by its scoring coefficient of 0.8 gives a score of 16 points for the sub-action corresponding to line segment 1, thereby generating the evaluation information of line segment 1. Similarly, the sub-action scores included in the evaluation information of line segments 2, 3, and 4 are each 20 points, and the sub-action score included in the evaluation information of line segment 5 is 18 points. Summing the sub-action scores of all the line segments gives a human action score of 94 points, and the animation effect corresponding to the interval to which this score belongs is a flickering display of "perfect". The evaluation information of the human action is thus obtained, and the action evaluation information is displayed, so that the user knows how accurately the current action was performed and is motivated to perform the action accurately.
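The score aggregation above can be sketched as follows; the interval rule encoded here is only what the text states ("perfect" above 90 points, "good" for 80-90 points), and the function names are illustrative.

```python
# Sketch: each of the 5 segments has a full score of 100/5 = 20 points; a
# segment's actual score is its full score times its scoring coefficient,
# and the human action score is the sum over all segments.

def action_score(coefficients, full_score=100):
    per_segment = full_score / len(coefficients)
    return sum(per_segment * c for c in coefficients)

def animation_effect(score):
    # Interval rule from the text; other intervals are not specified.
    if score > 90:
        return "perfect"
    if score >= 80:
        return "good"
    return ""

coeffs = [0.8, 1.0, 1.0, 1.0, 0.9]   # segments 1-5 from the worked example
score = action_score(coeffs)
# score -> 94.0, animation_effect(score) -> "perfect"
```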
It should be noted that, for the human actions in step 207 that did not match the standard action, the score in the action evaluation information is 0 points. The action evaluation information of human actions whose score is 0 may not be displayed in this step; only the better action evaluation information is displayed.
Further, the video picture frames of the other human actions are processed according to the above method, so that the evaluation information of the human actions in the different video picture frames can be obtained respectively.
Step 209: when the audio playback ends, generate a total score interface according to the action evaluation information of each human action, and display the total score interface.
Specifically, when the audio playback ends, the human action scores included in the action evaluation information of each human action are summed to generate a total score; the total score interface is generated from this total score and the animation effect corresponding to the interval to which the total score belongs, and the total score interface is displayed.
As a possible implementation, a weight corresponding to each standard action in the audio may be preset. After the action evaluation information of each human action is determined, the human action score of each human action may be multiplied by the corresponding weight to obtain a product value; the product values are accumulated to obtain the total score, and the corresponding score grade is then determined according to the interval to which the total score belongs.
For example, when there are 100 timing nodes in the audio, that is, 100 standard actions, a weight can be set for each standard action, for instance 0.01 for each. After the action evaluation information of each human action has been determined, the human-action score value of each human action is multiplied by the corresponding weight to obtain a product value, and the product values are accumulated to obtain the total score value. If the total score value obtained is 87, the interval to which it belongs is [80, 90), and the score grade can therefore be "C".
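The weighted accumulation described above can be sketched as follows. The per-action scores and the 0.01 weight follow the example; the grade brackets other than [80, 90) mapping to "C" are illustrative assumptions rather than values fixed by the embodiment:

```python
def total_score(action_scores, weights):
    """Accumulate each human-action score multiplied by its weight."""
    return sum(score * weight for score, weight in zip(action_scores, weights))

def score_grade(total):
    """Map a total score value to the grade of the interval it falls in."""
    # Only the [80, 90) -> "C" bracket comes from the example above;
    # the other brackets are hypothetical placeholders.
    brackets = [(90, "S"), (80, "C"), (60, "B"), (0, "D")]
    for lower, grade in brackets:
        if total >= lower:
            return grade

# 100 timing nodes, each standard action weighted 0.01 as in the example
weights = [0.01] * 100
scores = [87] * 100            # assumed per-action scores out of 100
total = total_score(scores, weights)
grade = score_grade(total)     # 87 falls in [80, 90), hence grade "C"
```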
Fig. 5 is a schematic diagram of an overall score interface provided by an embodiment of the present invention. As shown in Fig. 5, after the video ends, the user's total score is 88 points and the score grade is "C".
In step 210, the action evaluation information of the human actions and the images of the standard actions are added to the captured video picture frames, and the target video is generated from the audio and the video picture frames.
On the one hand, the action evaluation information needs to be added to the video picture frames. Specifically, while a standard action is being shown, multiple video picture frames are captured synchronously, and each has corresponding action evaluation information; the action evaluation information of the human action is added to the synchronously captured video picture frames. As a possible implementation, the multiple pieces of generated action evaluation information are screened, the highest-rated action evaluation information is retained, and the highest-rated action evaluation information is added to at least one of the multiple synchronously captured video picture frames. The video picture frame to which the action evaluation information is added should be one that shows the human action corresponding to that highest-rated evaluation.
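As a sketch of the screening just described, assuming each synchronously captured frame is represented as a dict with an `id` and each piece of evaluation information as a (frame id, score) pair, the highest-rated evaluation can be retained and attached only to the frame that shows the corresponding human action:

```python
def annotate_best_frame(frames, evaluations):
    """Keep only the highest-rated evaluation and add it to its frame.

    frames: list of dicts with an "id" key (synchronously captured frames).
    evaluations: list of (frame_id, score) pairs for one standard action.
    """
    best_id, best_score = max(evaluations, key=lambda pair: pair[1])
    for frame in frames:
        if frame["id"] == best_id:
            frame["evaluation"] = best_score  # attach the retained evaluation
    return frames

frames = [{"id": 0}, {"id": 1}, {"id": 2}]
evaluations = [(0, 70), (1, 94), (2, 55)]
annotated = annotate_best_frame(frames, evaluations)
# only the frame with id 1, which earned the 94-point evaluation, is annotated
```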
It should be noted that, for a human action in step 207 above that fails to match the standard action, the score value in its action evaluation information is 0. Action evaluation information with a score value of 0 is not added to the captured video picture frames, which keeps the picture cleaner and reduces visual interference for the viewer.
On the other hand, the images of the standard actions need to be added to the video picture frames. Specifically, while a standard action is being shown, multiple video picture frames are captured synchronously, each with corresponding action evaluation information. As a possible implementation, the score value contained in the action evaluation information of each video picture frame is compared against a score threshold to determine which video picture frames need the standard action added. If the comparison result is above the score threshold, the image of the standard action that was shown synchronously while that video picture frame was captured is added to the frame.
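A minimal sketch of this threshold comparison follows, assuming frames carry their evaluation score in a `score` field; the threshold value of 60 and the field names are assumptions for illustration:

```python
SCORE_THRESHOLD = 60  # hypothetical score threshold

def overlay_standard_action(frames, standard_image):
    """Add the standard-action image to frames whose score beats the threshold."""
    for frame in frames:
        if frame.get("score", 0) > SCORE_THRESHOLD:
            frame["overlay"] = standard_image
    return frames

frames = [{"score": 94}, {"score": 40}, {"score": 72}]
result = overlay_standard_action(frames, "standard_action.png")
# the 94- and 72-point frames receive the overlay; the 40-point frame does not
```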
Fig. 6 is a schematic diagram of a picture effect in the target video provided by this embodiment. As shown in Fig. 6, the user performs human actions in synchrony with the standard actions demonstrated by the small figure scrolling on the screen. When the action performed matches the standard action, the small figure disappears; at the same time, according to the action evaluation information of the human action, the corresponding score is shown together with the animation effect of its score interval; here the score is 94 points and the animation effect is "perfect". Demonstrating the standard actions through the image of the small figure gives the user a stronger sense of immersion.
In step 211, when a click on the publish control in the overall score interface is detected, the target video is published.
Specifically, after the target video has been generated, if the user clicks the publish control in the overall score interface, the target video can be published to a community or shared, which improves the user's participation and experience.
It should be noted that step 209 and step 210 can be performed simultaneously in this embodiment. For example, while the overall score page shown in Fig. 5 is displayed, the target video can be generated in the background, which improves the efficiency of target video generation and spares the user a long wait.
In the video generation method of this embodiment of the present invention, a selected audio and the standard action corresponding to each timing node in the audio are obtained; the audio is played, and video picture frames are captured during playback; when the audio reaches each timing node, the corresponding standard action is shown; the image of the standard action shown synchronously at capture time is added to each video picture frame; and when the audio playback ends, the target video is generated from the audio and the video picture frames. By capturing video picture frames while the selected audio plays, adding the images of the standard actions shown at the video capture interface, and generating the target video when the video ends, the method solves the prior-art problems that users playing motion-sensing games feel little immersion and that a corresponding video cannot be generated automatically.
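The flow summarized above can be sketched end to end under simplified assumptions: playback is simulated at one frame per second, a captured frame is a plain dict, and the "target video" is merely the audio name paired with the annotated frame list. All names below are placeholders, not the embodiment's actual interfaces:

```python
def generate_target_video(audio_name, duration_s, standard_actions):
    """standard_actions: dict mapping timing node (in seconds) -> action name."""
    frames = []
    for t in range(duration_s):                    # play audio, capture one frame/second
        frame = {"time": t}                        # simulated captured video picture frame
        action = standard_actions.get(t)
        if action is not None:                     # playback reached a timing node
            frame["standard_action"] = action      # add the shown action's image
        frames.append(frame)
    return {"audio": audio_name, "frames": frames}  # assemble the target video

video = generate_target_video("song.mp3", 5, {1: "raise_arms", 3: "squat"})
# frames at t=1 and t=3 carry the standard-action image
```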
In order to realize the above embodiments, the present invention further proposes a video generating device.
Fig. 7 is a structural schematic diagram of a video generating device provided by an embodiment of the present invention.
As shown in Fig. 7, the device includes: an acquisition module 71, a playing module 72, a first display module 73, a first adding module 74 and a first generation module 75.
The acquisition module 71 is configured to obtain a selected audio and the standard action corresponding to each timing node in the audio.
The playing module 72 is configured to play the audio and to capture video picture frames while the audio is being played.
The first display module 73 is configured to show the corresponding standard action when the audio reaches each timing node.
The first adding module 74 is configured to add, to each video picture frame, the image of the standard action shown synchronously at capture time.
The first generation module 75 is configured to generate the target video from the audio and the video picture frames when the audio playback ends.
It should be noted that the foregoing explanation of the method embodiments also applies to the device of this embodiment and is not repeated here.
In the video generating device of this embodiment of the present invention, the acquisition module obtains a selected audio and the standard action corresponding to each timing node in the audio; the playing module plays the audio and captures video picture frames during playback; the first display module shows the corresponding standard action when the audio reaches each timing node; the first adding module adds, to each video picture frame, the image of the standard action shown synchronously at capture time; and the first generation module generates the target video from the audio and the video picture frames when the audio playback ends. By capturing video picture frames while the selected audio plays, adding the images of the standard actions shown at the video capture interface, and generating the target video when the video ends, the device solves the prior-art problems that users playing motion-sensing games feel little immersion and that a corresponding video cannot be generated automatically.
Based on the above embodiments, an embodiment of the present invention further provides another possible implementation of the video generating device. Fig. 8 is a structural schematic diagram of another video generating device provided by an embodiment of the present invention. On the basis of the previous embodiment, the device further includes: an identification module 76, a determining module 77, an ending module 78, a second generation module 79, a second adding module 80, a second display module 81 and a publishing module 82.
The identification module 76 is configured to perform human action identification on the video picture frames captured synchronously while the standard action is being shown.
The determining module 77 is configured to determine that the standard action matches the human action.
The ending module 78 is configured to end the display process of the standard action.
The second generation module 79 is configured to generate the action evaluation information of the human action according to the degree of difference between the standard action and the human action.
The second adding module 80 is configured to add the action evaluation information of the human action to the synchronously captured video picture frames.
The second display module 81 is configured to generate an overall score interface according to the action evaluation information of each human action, and to show the overall score interface.
The publishing module 82 is configured to publish the target video.
As a possible implementation, the second adding module 80 may further include: a screening unit 801 and an adding unit 802.
The screening unit 801 is configured to screen the multiple pieces of generated action evaluation information and to retain the highest-rated action evaluation information.
The adding unit 802 is configured to add the highest-rated action evaluation information to at least one of the multiple synchronously captured video picture frames, where the at least one video picture frame shows the human action corresponding to the highest-rated action evaluation information.
It should be noted that the foregoing explanation of the method embodiments also applies to the device of this embodiment and is not repeated here.
In the video generating device of this embodiment of the present invention, the acquisition module obtains a selected audio and the standard action corresponding to each timing node in the audio; the playing module plays the audio and captures video picture frames during playback; the first display module shows the corresponding standard action when the audio reaches each timing node; the first adding module adds, to each video picture frame, the image of the standard action shown synchronously at capture time; and the first generation module generates the target video from the audio and the video picture frames when the audio playback ends. By capturing video picture frames while the selected audio plays, adding the images of the standard actions shown at the video capture interface, and generating the target video when the video ends, the device solves the prior-art problems that users playing motion-sensing games feel little immersion and that a corresponding video cannot be generated automatically.
An embodiment of the present invention further provides an electronic device that includes the device described in any of the foregoing embodiments.
Fig. 9 is a structural schematic diagram of an embodiment of the electronic device of the present invention, which can implement the flow of the embodiments shown in Figs. 1-2. As shown in Fig. 9, the electronic device may include: a housing 91, a processor 92, a memory 93, a circuit board 94 and a power supply circuit 95. The circuit board 94 is arranged inside the space enclosed by the housing 91, and the processor 92 and the memory 93 are arranged on the circuit board 94. The power supply circuit 95 supplies power to each circuit or component of the electronic device. The memory 93 stores executable program code. The processor 92 reads the executable program code stored in the memory 93 and runs the program corresponding to the executable program code, so as to perform the video generation method described in any of the foregoing embodiments.
For the specific execution of the above steps by the processor 92, and the further steps performed by the processor 92 through running the executable program code, reference may be made to the description of the embodiments shown in Figs. 1-2 of the present invention, and details are not repeated here.
The electronic device exists in a variety of forms, including but not limited to:
(1) Mobile communication devices: such devices are characterized by mobile communication functions, with voice and data communication as the main goal. This type of terminal includes: smart phones (such as the iPhone), multimedia phones, feature phones, low-end phones, and the like.
(2) Ultra-mobile personal computer devices: such devices belong to the category of personal computers, have computing and processing functions, and generally also have mobile Internet access. This type of terminal includes: PDA, MID and UMPC devices, such as the iPad.
(3) Portable entertainment devices: such devices can display and play multimedia content. They include: audio and video players (such as the iPod), handheld devices, e-book readers, as well as smart toys and portable vehicle-mounted navigation devices.
(4) Servers: devices that provide computing services. A server is composed of a processor, a hard disk, a memory, a system bus and the like; its architecture is similar to that of a general-purpose computer, but because highly reliable services must be provided, the requirements on processing capacity, stability, reliability, security, scalability, manageability and so on are higher.
(5) Other electronic devices with data interaction functions.
To realize the above embodiments, an embodiment of the present invention further proposes a non-transitory computer readable storage medium on which a computer program is stored; when the program is executed by a processor, the video generation method described in the above method embodiments is realized.
In the description of this specification, descriptions referring to the terms "one embodiment", "some embodiments", "example", "specific example" or "some examples" mean that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, where no mutual contradiction arises, those skilled in the art may combine and unite the different embodiments or examples described in this specification, as well as the features of the different embodiments or examples.
In addition, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "multiple" means at least two, for example two or three, unless otherwise specifically defined.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment or portion of code that includes one or more executable instructions for realizing the steps of a custom logic function or process; and the scope of the preferred embodiments of the present invention includes other realizations, in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in the flowcharts, or otherwise described herein, may for example be considered an ordered list of executable instructions for realizing logic functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from an instruction execution system, apparatus or device and execute them). For the purposes of this specification, a "computer-readable medium" may be any apparatus that can contain, store, communicate, propagate or transmit a program for use by, or in connection with, an instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection portion (an electronic device) with one or more wirings, a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or, if necessary, processing it in another suitable manner, and then stored in a computer memory.
It should be understood that each part of the present invention may be realized by hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods may be realized by software or firmware that is stored in a memory and executed by a suitable instruction execution system. For example, if realized by hardware, as in another embodiment, any one or a combination of the following techniques well known in the art may be used: a discrete logic circuit with logic gate circuits for realizing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
Those of ordinary skill in the art will appreciate that all or part of the steps carried by the methods of the above embodiments can be completed by instructing relevant hardware through a program; the program can be stored in a computer-readable storage medium and, when executed, includes one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The above integrated module can be realized either in the form of hardware or in the form of a software functional module. If the integrated module is realized in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc or the like. Although the embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and shall not be construed as limiting the present invention; those of ordinary skill in the art may change, modify, replace and vary the above embodiments within the scope of the present invention.
The above is merely a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can readily occur to those familiar with the technical field within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the scope of the claims.