CN106097417A - Theme generating method, device and equipment - Google Patents
Theme generating method, device and equipment
- Publication number
- CN106097417A (application CN201610404634.2A / CN201610404634A)
- Authority
- CN
- China
- Prior art keywords
- animation
- trigger condition
- theme
- terminal
- exercise data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/60—3D [Three Dimensional] animation of natural phenomena, e.g. rain, snow, water or plants
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2213/00—Indexing scheme for animation
- G06T2213/08—Animation software package
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a theme generating method. The method includes: obtaining motion data of a first object; generating an animation of a second object according to the motion data of the first object and a preset model of the second object, wherein the first object is different from the second object; establishing an association relationship among the animation of the second object, a trigger condition and a response event, wherein the trigger condition is used to trigger display of the animation and execution of the response event; and generating a theme package from the animation of the second object according to the association relationship, wherein, when the theme package runs, it can instruct a terminal to detect the trigger condition, and to display the animation and execute the response event when the trigger condition is met. Embodiments of the present invention further provide a theme generating device and equipment.
Description
Technical field
The present invention relates to information processing technology, and in particular to a theme generating method, device and equipment.
Background technology
With the development of electronic products, mobile phones offer more and more functions. When choosing a phone, users no longer focus only on call performance but increasingly on whether its additional features meet their needs; among these, a rich choice of phone themes has become an important criterion for younger users when selecting a phone.
At present, making a theme first requires graphic designers to use design tools such as PS (Adobe Photoshop) to design multiple pictures that depict a continuous action and save them as separate pictures in PNG (Portable Network Graphics) or JPEG (Joint Photographic Experts Group) format. Programmers then write code so that these pictures are displayed as required to produce an animation, combine the animation with a theme program to generate a theme package, and finally import the theme package into a phone manually so that the phone can run the theme package and change its current theme.
However, making a theme in this way requires designers and programmers to cooperate, so the production cost is relatively high. Because the production steps are complex, mistakes are easy to make and the rework rate is high. In addition, the production cycle of a single theme is long, so production efficiency is low.
Summary of the invention
In view of this, embodiments of the present invention provide a theme generating method, device and equipment to solve at least one of the problems in the prior art, which can reduce the cost of theme production, shorten the production cycle, and lower the error rate.
The technical solutions of the present invention are achieved as follows:
In a first aspect, an embodiment of the present invention provides a theme generating method, the method including:
obtaining motion data of a first object;
generating an animation of a second object according to the motion data of the first object and a preset model of the second object, wherein the first object is different from the second object;
establishing an association relationship among the animation of the second object, a trigger condition and a response event, wherein the trigger condition is used to trigger display of the animation and execution of the response event;
generating a theme package from the animation of the second object according to the association relationship, wherein, when the theme package runs, it can instruct a terminal to detect the trigger condition, and to display the animation and execute the response event when the trigger condition is met.
In a second aspect, an embodiment of the present invention provides a theme generating device, the device including:
a first acquiring unit, configured to obtain motion data of a first object;
a first generating unit, configured to generate an animation of a second object according to the motion data of the first object and a preset model of the second object, wherein the first object is different from the second object;
a first establishing unit, configured to establish an association relationship among the animation of the second object, a trigger condition and a response event, wherein the trigger condition is used to trigger display of the animation and execution of the response event;
a second generating unit, configured to generate a theme package from the animation of the second object according to the association relationship, the theme package being used to beautify a system or software interface.
In a third aspect, an embodiment of the present invention provides theme generating equipment, the equipment including a communication interface and a processor, wherein the processor is configured to: obtain motion data of a first object through the communication interface; generate an animation of a second object according to the motion data of the first object and a preset model of the second object, wherein the first object is different from the second object; establish an association relationship among the animation of the second object, a trigger condition and a response event, wherein the trigger condition is used to trigger display of the animation and execution of the response event; and generate a theme package from the animation of the second object according to the association relationship, the theme package being used to beautify a system or software interface.
Embodiments of the present invention provide a theme generating method, device and equipment. The method includes: obtaining motion data of a first object; generating an animation of a second object according to the motion data of the first object and a preset model of the second object, wherein the first object is different from the second object; establishing an association relationship among the animation of the second object, a trigger condition and a response event, wherein the trigger condition is used to trigger display of the animation and execution of the response event; and generating a theme package from the animation of the second object according to the association relationship, wherein, when the theme package runs, it can instruct a terminal to detect the trigger condition, and to display the animation and execute the response event when the trigger condition is met. Compared with the prior art, a theme-making platform can be set up by programmers at initialization. Through this platform, the animation of the second object can be made from the motion data of the first object, and the trigger condition and response event of the animation of the second object can also be selected on the platform, so programmers do not need to write code every time a theme is made. This reduces the cost of theme production, simplifies the production steps and shortens the production time.
Brief description of the drawings
Fig. 1 is a schematic diagram of an implementation environment involved in embodiments of the present invention;
Fig. 2 is a first schematic flowchart of a theme generating method provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of a theme-making platform provided by an embodiment of the present invention;
Fig. 4 is a first animation schematic diagram provided by an embodiment of the present invention;
Fig. 5 is a second schematic flowchart of a theme generating method provided by an embodiment of the present invention;
Fig. 6 is a third schematic flowchart of a theme generating method provided by an embodiment of the present invention;
Fig. 7 is a second animation schematic diagram provided by an embodiment of the present invention;
Fig. 8 is a fourth schematic flowchart of a theme generating method provided by an embodiment of the present invention;
Fig. 9 is a fifth schematic flowchart of a theme generating method provided by an embodiment of the present invention;
Fig. 10 is a sixth schematic flowchart of a theme generating method provided by an embodiment of the present invention;
Fig. 11 is a schematic diagram of a scene for capturing video provided by an embodiment of the present invention;
Fig. 12 is a third animation schematic diagram provided by an embodiment of the present invention;
Fig. 13 is a fourth animation schematic diagram provided by an embodiment of the present invention;
Fig. 14 is a schematic diagram of hardware logic provided by an embodiment of the present invention;
Fig. 15 is a first structural schematic diagram of a theme generating device provided by an embodiment of the present invention;
Fig. 16 is a second structural schematic diagram of a theme generating device provided by an embodiment of the present invention;
Fig. 17 is a third structural schematic diagram of a theme generating device provided by an embodiment of the present invention;
Fig. 18 is a fourth structural schematic diagram of a theme generating device provided by an embodiment of the present invention;
Fig. 19 is a fifth structural schematic diagram of a theme generating device provided by an embodiment of the present invention;
Fig. 20 is a sixth structural schematic diagram of a theme generating device provided by an embodiment of the present invention;
Fig. 21 is a seventh structural schematic diagram of a theme generating device provided by an embodiment of the present invention.
Detailed description of the embodiments
To solve the problems described in the background art, an embodiment of the present invention provides a theme generating method that can be applied to a server. The server is generally provided by an operator or a third party, and may be a single server, a server cluster composed of multiple servers, or a cloud computing service center. Fig. 1 is a schematic diagram of the implementation environment when the method is applied to a server. As shown in Fig. 1, the implementation environment includes a first terminal 11 and a server 12 arranged on the network side. The first terminal 11 may be a mobile terminal, such as a mobile phone or a tablet computer, or a fixed terminal, such as a landline telephone or an ATM. The first terminal 11 is connected to the server 12 through a wireless or wired network, and the server 12 can send a generated theme package to the first terminal 11 at the request of the first terminal 11 so that the first terminal 11 can apply the theme package. The theme generating method can also be applied to a terminal. The terminal may be a mobile terminal, such as a mobile phone or a tablet computer, and is provided with a platform capable of executing the theme generating method; after a theme package is generated with this platform, it can be saved locally and run on the terminal.
The technical solution of the present invention is further described below with reference to the accompanying drawings and specific embodiments.
An embodiment of the present invention provides a theme generating method applied to a server. Fig. 2 is a schematic flowchart of the method. As shown in Fig. 2, the method includes:
Step 201: obtain motion data of a first object.
In this embodiment, the first object may be any object in nature, such as a human body, an animal or a tool. The motion data may be a series of parameters that characterize the motion state of the first object, such as its direction of motion, movement distance, change of angle and movement trajectory.
In a first optional acquisition mode, the motion data of the first object may be obtained through an external image acquisition component. The image acquisition component may be a camera, a video camera, a 3D (3 Dimensions, three-dimensional) video camera or other equipment capable of capturing pictures or video of an object. For example, when obtaining the motion data of the first object, a 3D framing region may be set up, and multiple video cameras may be arranged in this region around the shooting position in a circular ring to form a framing circle, all connected to a sending device. The first object is placed inside the framing circle. When the first object moves, the multiple cameras around it start shooting and record motion videos of the first object from multiple orientations; the sending device then transmits these videos to the server, and the server can extract the motion data of the first object from the motion videos of the multiple orientations.
Alternatively, a 3D video camera may be used. A 3D video camera is a camera built with 3D lenses; it usually has two or more lenses whose spacing is close to the spacing of human eyes, so it can capture different images of the same scene similar to what human eyes would see. When the first object moves, the 3D video camera is started to shoot and obtain a 3D motion video of the first object, this 3D motion video is sent to the server, and the server can extract the motion data of the first object from the 3D motion video.
In a second optional acquisition mode, a motion video of the first object may be obtained locally, or requested from a video server, and the motion data of the first object is then obtained from the motion video. Illustratively, the server locally stores motion videos of many different objects, for example a video of a person dancing, a video of an animal running, or a video of a ball rolling. When obtaining the motion data of the first object, the type of the first object is determined first; then, according to a user instruction, a motion video of a first object matching the user's requirement is selected from the locally stored videos, and the motion data of the first object is extracted from the selected video. Alternatively, at initialization a video server stores motion videos of a large number of different types of objects. When the motion data of a first object is needed, the type of the first object is determined first, the server sends a query request to the video server, and the video server allows the server to browse the stored motion videos according to the query request. The server selects a suitable motion video of the first object according to the user's requirement and clicks to download it; the video server then sends the selected motion video of the first object to the server, and the server extracts the motion data of the first object from the received motion video. Alternatively, after the type of the first object is determined, the server may, as instructed by the user, send a download request for a first video to a video server connected to the Internet, the first video being a motion video of the first object, receive the first video delivered by the video server over the Internet, and extract the motion data of the first object from the received first video.
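Illustratively, the motion data extracted from such a motion video may be organized as a set of per-frame samples. The TypeScript sketch below shows one possible representation; the field names, units and helper function are illustrative assumptions rather than part of the disclosure.

```typescript
// One possible shape for the motion data extracted from a motion video.
// Field names and units are illustrative assumptions.
interface MotionSample {
  timeMs: number;                       // timestamp of the video frame
  position: [number, number, number];   // 3D position of a tracked point
  direction: [number, number, number];  // unit vector of the motion direction
  distance: number;                     // distance moved since the previous sample
  angle: number;                        // change of angle since the previous sample, in degrees
}

interface MotionData {
  objectType: string;                   // e.g. "human", "animal", "tool"
  trajectory: MotionSample[];           // the movement trajectory over time
}

// A trivial helper that derives per-sample distance from consecutive positions.
function withDistances(samples: MotionSample[]): MotionSample[] {
  return samples.map((s, i) => {
    if (i === 0) return { ...s, distance: 0 };
    const p = samples[i - 1].position;
    const d = Math.hypot(
      s.position[0] - p[0],
      s.position[1] - p[1],
      s.position[2] - p[2]
    );
    return { ...s, distance: d };
  });
}
```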
Step 202: generate an animation of a second object according to the motion data of the first object and a preset model of the second object, wherein the first object is different from the second object.
In the animation of the second object generated from the motion data of the first object and the preset model of the second object, the second object moves according to the motion of the first object. For example, suppose the first object is a cat and the motion data of the first object records the cat pouncing on prey, including the jump height, jump direction, direction of motion of the claws and motion frequency of the claws, while the second object is a Minion cartoon character. From the motion data of the cat's pounce and the preset model of the Minion, an animation in which the Minion pounces with the cat's motions can be generated.
In an optional generation mode, multiple calibration points are first set on the first object and multiple target points are set on the second object. The movement trajectories of the calibration points are then obtained from the motion video of the first object, and a first correspondence between each calibration point of the first object and each target point of the second object is established according to a preset rule. In general, the preset rule indicates a one-to-one correspondence between calibration points and target points. The animation of the second object is then generated according to the first correspondence and the movement trajectory of each calibration point, so that in the animation each target point of the second object moves according to the movement trajectory of the calibration point corresponding to it. The preset rule may also indicate that one calibration point corresponds to multiple target points, or that multiple calibration points correspond to one target point. When the preset rule indicates that one calibration point corresponds to multiple target points, all of those target points in the animation of the second object move according to the movement trajectory of the corresponding calibration point. When the preset rule indicates that multiple calibration points correspond to one target point, that target point in the animation of the second object moves according to the combination of the movement trajectories of the corresponding calibration points; for example, if a target point corresponds to a first calibration point and a second calibration point, the movement trajectory of the first calibration point is vertical motion and the movement trajectory of the second calibration point is horizontal motion, the target point can move obliquely upward at 45°.
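Illustratively, the correspondence and trajectory transfer described above can be sketched as follows; the names used here, and the plain averaging chosen for many-to-one mappings, are assumptions for illustration only and not a prescribed implementation.

```typescript
type Vec3 = [number, number, number];

// Movement trajectory of one calibration point: one position per animation frame.
type Trajectory = Vec3[];

// First correspondence: each target point of the second object lists the
// calibration point(s) of the first object that drive it.
interface Correspondence {
  targetPointId: string;
  calibrationPointIds: string[]; // one-to-one, or several for a combined motion
}

// Generate per-frame positions for every target point of the second object.
function generateTargetTrajectories(
  calibration: Map<string, Trajectory>,
  correspondences: Correspondence[],
  frameCount: number
): Map<string, Trajectory> {
  const result = new Map<string, Trajectory>();
  for (const c of correspondences) {
    const frames: Trajectory = [];
    for (let f = 0; f < frameCount; f++) {
      // Combine the trajectories of all driving calibration points.
      // A plain average is used here; it is only one possible combination rule.
      const sum: Vec3 = [0, 0, 0];
      for (const id of c.calibrationPointIds) {
        const p = calibration.get(id)![f];
        sum[0] += p[0]; sum[1] += p[1]; sum[2] += p[2];
      }
      const n = c.calibrationPointIds.length;
      frames.push([sum[0] / n, sum[1] / n, sum[2] / n]);
    }
    result.set(c.targetPointId, frames);
  }
  return result;
}
```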
In specific implementations, different calibration points can be set depending on the type of the first object. Illustratively, objects may first be divided into several categories according to their characteristics, for example human bodies, animals, toys and other categories. If the first object is a toy, say a balloon, and its motion data describes the balloon floating up and down in the air, while the preset model of the second object is a peach from the "other" category, multiple calibration points can be set evenly on the outer surface of the balloon, the movement trajectory of each calibration point is extracted from the motion data of the first object, target points are set at the corresponding positions on the peach according to the preset rule, and the first correspondence between each calibration point of the first object and each target point of the second object is established. An animation is then made in which each target point of the peach moves according to the corresponding calibration point of the balloon, giving an animation of the peach floating up and down in the air.
Alternatively, suppose the first object is a bouncing ball from the "other" category and the second object is a Minion cartoon character. Because the outline of a bouncing ball changes when it is squeezed by an external force, the calibration points can be set evenly on the outer surface of the ball. Suppose the motion data of the first object describes the ball bouncing off the ground and falling back several times. The movement trajectory of each calibration point can be extracted from the motion data of the ball, target points are set at corresponding positions on the Minion according to the preset rule, the first correspondence between each calibration point of the first object and each target point of the second object is established, and the animation of the Minion is then obtained so that each target point moves according to the movement trajectory of the corresponding calibration point. The result is an animation of the Minion bouncing off the ground and falling back several times, including the squashed shape of the Minion each time it touches the ground.
In an embodiment of the present invention, after the animation of the second object is obtained, particle elements involved in a particle system can also be obtained, and an effect animation of the second object is generated from the animation of the second object and the particle elements. A particle system is a technique in three-dimensional computer graphics for simulating fuzzy phenomena with a collection of particle elements; the particle elements it models include fire, explosions, smoke, flowing water, sparks, falling leaves, clouds, fog, snow, dust, meteor trails, glowing trails and the like, and abstract visual effects of these elements can be simulated with this technique. The effect animation of the second object is an animation combined with the visual effect of the particle elements, so its picture is richer and the user experience is better.
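Illustratively, a particle system of the kind mentioned above can be sketched as a simple emitter that updates a set of short-lived particles each frame; the fields, the snow-like emitter and the update rule below are illustrative assumptions, not a description of any particular engine.

```typescript
// Minimal particle-system sketch: each particle has a position, a velocity
// and a remaining lifetime; every frame it moves until its lifetime runs out.
interface Particle {
  position: [number, number];
  velocity: [number, number];
  lifeMs: number;
}

function spawnSnow(count: number, width: number): Particle[] {
  return Array.from({ length: count }, (): Particle => ({
    position: [Math.random() * width, 0],
    velocity: [(Math.random() - 0.5) * 10, 30 + Math.random() * 20], // drift + fall
    lifeMs: 4000 + Math.random() * 2000,
  }));
}

function updateParticles(particles: Particle[], dtMs: number): Particle[] {
  return particles
    .map(p => ({
      position: [
        p.position[0] + p.velocity[0] * dtMs / 1000,
        p.position[1] + p.velocity[1] * dtMs / 1000,
      ] as [number, number],
      velocity: p.velocity,
      lifeMs: p.lifeMs - dtMs,
    }))
    .filter(p => p.lifeMs > 0); // dead particles are removed; the emitter spawns new ones
}
```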
Step 203: establish an association relationship among the animation of the second object, a trigger condition and a response event, wherein the trigger condition is used to trigger display of the animation and execution of the response event.
At initialization, a theme-making platform as shown in Fig. 3 can be set up on the server. The platform includes an animation-making region 301 and a setting region 302. After the animation of the second object is obtained, it is displayed in the animation-making region 301. The animation of the second object may be relatively long, and the user can click in the animation-making region 301 to set a start point 3011 and an end point 3012 as needed, and the server can then cut an animation of the required length according to the user's request. After a suitable animation of the second object is obtained, the trigger condition and response event corresponding to this animation can be set in the setting region 302, establishing the association relationship among the animation of the second object, the trigger condition and the response event. The trigger condition is the condition under which, after the theme is applied to a terminal, the terminal is triggered to display the animation of the second object; for example, various swipes of the user on the touchscreen or clicks on terminal function keys can all serve as trigger conditions for the terminal to display the animation of the second object. The response event is the operation that the terminal performs according to the trigger condition after the theme is applied to the terminal, for example opening an application, opening the SMS interface, or increasing the ringtone volume. The association relationship indicates that the animation of the second object corresponds respectively to the configured trigger condition and the configured response event.
Illustratively, with reference to Fig. 3, suppose the animation of the second object shown in the animation-making region 301 depicts a sleeping cartoon character whose quilt is suddenly pulled down, leaving the character shivering with tears streaming down; the animation includes multiple elements arranged on multiple layers, such as the quilt, an alarm clock, the cartoon character, clothes and the bed sheet. The dashed box 3021 in the setting region 302 shows information such as the size of the key frame of the animation of the second object and its position relative to the center of the terminal screen, and the designer can adjust the key frame. The key frame of the animation is the still picture that the terminal shows before the animation starts, and it can be set as the terminal's wallpaper. The dashed box 3022 shows the terminal's trigger condition and response event options, as well as options for editing each animation element. The designer can click a response event option, whereupon the platform shows a drop-down menu listing the components of the response events that the terminal can perform; the designer selects the required component, and the platform then shows the configuration items of the trigger condition corresponding to the selected response event component. For example, if the designer selects the unlock component, a release area (UA) option and a response area (RA) option are displayed below the unlock component. The release area is the preset region where the user's swipe ends: when the user swipes down and reaches this region, the terminal performs the unlock operation. The response area is the starting point of the downward swipe, and this starting point usually corresponds to an element of the animation, while the release area is typically at the bottom of the terminal screen. For example, the upper edge of the quilt can be set as the response area and the bottom of the screen as the release area: when the user's finger touches the upper edge of the quilt and swipes downward, the quilt is slowly pulled down following the user's finger, and when the finger reaches the bottom of the screen, the quilt is pulled away completely. The dashed box 3022 also shows editing options for each element; by clicking an animation element option, the designer can replace the element or change its color, pattern, proportions and so on. Once the designer has set the trigger condition and response event of the animation of the second object through the platform, the association relationship among the animation of the second object, the trigger condition and the response event is established, and the theme package can then be generated according to this association relationship.
In other embodiments of the present invention, if an effect animation of the second object has been obtained after step 202, the association relationship established in this step can be the relationship among the effect animation of the second object, the trigger condition and the response event, and it can be established in the manner described above.
Step 204: generate a theme package from the animation of the second object according to the association relationship, wherein, when the theme package runs, it can instruct the terminal to detect the trigger condition, and to display the animation and execute the response event when the trigger condition is met.
Illustratively, after the association relationship among the animation of the second object, the trigger condition and the response event has been set, a theme package can be generated from the animation of the second object according to this association relationship. For example, the position attributes of each element in the animation of the second object, the type of the animation of the second object, the trigger condition and the response event can first be saved as a dedicated scene description file; the theme package is then generated from this scene description file and related resources such as icon pictures and the preset model, and sent to the terminal. When the terminal runs the theme package, it can detect the trigger condition and, when the trigger condition is met, display the animation and execute the response event. For example, continuing the explanation of step 203, after the theme package is applied to the terminal and the screen is lit, the user can click the upper edge of the quilt and swipe downward. This triggers the terminal to display the animation of the second object, i.e. the quilt is slowly pulled down following the user's finger; as the quilt is pulled down, as shown in Fig. 4, the cartoon character shivers with tears streaming down, and when the user's finger 401 has swiped along direction X in Fig. 4 to below the terminal screen, the quilt is pulled away completely and the terminal executes the response event, i.e. the terminal is unlocked.
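Illustratively, such a dedicated scene description, packaged together with icon pictures and preset models, could take a form like the sketch below. The field names and the overall shape are assumptions made for illustration; the disclosure only requires that these attributes be saved and bundled into the theme package.

```typescript
// Illustrative in-memory form of the dedicated scene description before packaging.
interface ElementAttributes {
  elementId: string;                  // e.g. "quilt", "alarm-clock"
  layer: number;                      // which layer the element sits on
  position: { x: number; y: number }; // position relative to the screen center
  scale: number;
}

interface SceneDescription {
  animationType: string;              // type of the animation of the second object
  keyFrame: string;                   // still picture shown before the animation starts
  elements: ElementAttributes[];      // position attributes of each element
  associations: Array<{               // animation <-> trigger <-> response bindings
    animationId: string;
    trigger: string;                  // e.g. "swipe:quilt-top->screen-bottom"
    response: string;                 // e.g. "unlock"
  }>;
}

// The theme package then bundles the scene description with its resources.
interface ThemePackage {
  id: string;                         // numbering used when the terminal requests a download
  scene: SceneDescription;
  resources: string[];                // icon pictures, preset models, etc.
}
```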
If the association relationship established in step 203 is between the effect animation of the second object, the trigger condition and the response event, then in this step the theme package can be generated from the effect animation of the second object according to that association relationship.
By setting up the theme-making platform on the server, theme production requires no coding. The platform provides a powerful and easy-to-use visual animation-making interface and a logic event editing interface, allowing designers to create animations and lay out the screen freely, which improves the efficiency of theme production. At the same time, the animation of the second object is generated by capturing the motion data of the first object, which makes the animations more diverse. In addition, the theme drawing mode of the traditional Android system is optimized, which saves GPU (Graphics Processing Unit) memory and reduces system power consumption.
In other embodiments of the present invention, as shown in Fig. 5, after the server generates the theme package, the method further includes:
Step 205: receive a download request, sent by a terminal, for requesting the theme package.
After generating a theme package, the server can save it locally and then start making another theme, so the server may store multiple theme packages locally. Different theme packages can be identified by different numberings, and previews of the theme packages, carrying their numberings, can be sent to the terminal. When the terminal needs to change its theme, the user can first browse the previews of the themes, select the theme to download and click it; the terminal then sends the server a download request for the selected theme package, the download request including the numbering of the selected theme package.
Step 206: send the theme package to the terminal according to the download request.
After receiving the download request sent by the terminal, the server parses it, obtains the numbering of the theme selected by the user, queries the theme package selected by the user according to this numbering, and sends this theme package to the terminal so that the terminal can update its theme.
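Illustratively, handling such a download request on the server side could look like the following sketch; the store, the request shape and the function name are assumptions, and the only behavior taken from the description above is that the numbering is parsed and the matching theme package is returned.

```typescript
// Illustrative server-side handling of a theme-package download request.
const themeStore = new Map<string, Uint8Array>(); // numbering -> packaged theme bytes

interface DownloadRequest {
  themeId: string; // numbering of the selected theme package
}

function handleDownloadRequest(req: DownloadRequest): Uint8Array {
  // Parse the request, look up the theme package by its numbering,
  // and return it so it can be sent back to the terminal.
  const pkg = themeStore.get(req.themeId);
  if (!pkg) {
    throw new Error(`No theme package with numbering ${req.themeId}`);
  }
  return pkg;
}
```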
With the theme generating method provided by this embodiment of the present invention, programmers can set up the theme-making platform at initialization. Through this platform the animation of the second object can be made from the motion data of the first object, and the trigger condition and response event of the animation of the second object can also be selected on the platform, so programmers do not need to write code every time a theme is made. This reduces the cost of theme production, simplifies the production steps and shortens the production time.
An embodiment of the present invention provides a theme generating method applied to a terminal. Fig. 6 is a schematic flowchart of the method. As shown in Fig. 6, the method includes:
Step 601: the terminal obtains motion data of a first object.
In this embodiment, the first object may be any object in nature, such as a human body, an animal or a tool. The motion data may be a series of parameters that characterize the motion state of the first object, such as its direction of motion, movement distance, change of angle and movement trajectory.
At initialization, a built-in 3D camera can be provided in the terminal. When the motion data of the first object needs to be obtained, the 3D camera is pointed at the first object to record a motion video of the first object, and the motion data of the first object is then obtained from this motion video. For example, suppose the first object is a volleyball and the data of the volleyball flying through the air after being struck is needed: the terminal with the built-in 3D camera is pointed at the volleyball, the motion video of the volleyball after being struck by a human arm is recorded, and the motion data of the first object is then extracted from this motion video.
Step 602: the terminal generates an animation of a second object according to the motion data of the first object and a preset model of the second object, wherein the first object is different from the second object.
In the animation of the second object generated from the motion data of the first object and the preset model of the second object, the second object moves according to the actions of the first object. After the terminal obtains the motion data of the first object, multiple calibration points can be set on the first object and multiple target points can be set on the preset model of the second object, a first correspondence between each calibration point of the first object and each target point of the second object is established according to a preset rule, and the movement trajectory of each calibration point is obtained from the motion data of the first object, so that each target point of the second object moves according to the movement trajectory of the calibration point corresponding to it, thereby generating the animation of the second object. Suppose the first object is a volleyball whose motion data describes the volleyball flying through the air after being struck by a human arm, and the preset model of the second object is the cartoon character Baymax. Multiple calibration points can be set evenly on the outer surface of the volleyball, multiple target points are then set on the outer surface of Baymax following a one-to-one correspondence, and the animation of the second object is generated so that each target point on the outer surface of Baymax moves according to the movement trajectory of the corresponding calibration point. The generated animation of the second object is therefore an animation of Baymax flying through the air after being struck by a human arm.
In other embodiments of the present invention, after the animation of the second object is obtained, the terminal can also obtain the particle elements involved in a particle system and generate an effect animation of the second object from the animation of the second object and the particle elements. A particle system is a technique in three-dimensional computer graphics for simulating fuzzy phenomena with particle elements such as fire, explosions, smoke, flowing water, sparks, falling leaves, clouds, fog, snow, dust, meteor trails or glowing trails, whose abstract visual effects can be simulated with this technique. The effect animation of the second object combines the visual effect of the particle elements, so its picture is richer and the user experience is better.
Step 603: the terminal establishes an association relationship among the animation of the second object, a trigger condition and a response event, wherein the trigger condition is used to trigger display of the animation and execution of the response event.
Illustratively, at initialization, a theme-making platform as shown in Fig. 3 can be set up on the terminal. The platform includes an animation-making region 301 and a setting region 302. The designer can set the trigger condition and response event of the animation of the second object through the options shown in the animation-making region 301 and the setting region 302, establishing the association relationship among the animation of the second object, the trigger condition and the response event. The establishing process is as described in step 203 and is not repeated here.
Step 604: the terminal generates a theme package from the animation of the second object according to the association relationship, wherein, when the theme package runs, it can instruct the terminal to detect the trigger condition, and to display the animation and execute the response event when the trigger condition is met.
Illustratively, after the association relationship among the animation of the second object, the trigger condition and the response event has been set, the terminal can generate a theme package from the animation of the second object according to this association relationship. For example, the terminal can first save the position attributes of each element in the animation of the second object, the type of the animation of the second object, the trigger condition and the response event as a dedicated scene description file, and then generate and save the theme package from this scene description file and related resources such as icon pictures and the preset model.
Illustratively, suppose the first object is a balloon floating up and down in the air and the second object is a Minion cartoon character wearing a diving helmet, and a particle system is used to simulate the Minion being underwater. The animation of the second object generated from the motion data of the first object is then an animation of the Minion floating up and down in the water. In the generated theme package, the response event corresponding to the Minion's animation is unlocking the terminal, and the trigger condition is clicking the Minion's head and swiping down. As shown in Fig. 7, after the theme package is applied to the terminal, if the terminal detects that the user's finger 701 clicks the Minion's head and slides along direction X in Fig. 7, display of the animation of the Minion floating up and down in the water is triggered, and when the swipe reaches the bottom, the terminal is unlocked.
If the association relationship established in step 603 is between the effect animation of the second object, the trigger condition and the response event, then in this step the theme package can be generated from the effect animation of the second object according to that association relationship.
In other embodiments of the present invention, as shown in Fig. 8, after the terminal obtains the theme package, the method further includes:
Step 605: the terminal parses the theme package to obtain the association relationship among the animation of the second object, the trigger condition and the response event.
The terminal saves the generated theme package locally, so the terminal may store multiple theme packages. When the theme needs to be updated, the terminal displays thumbnails of the theme packages, the user selects and clicks the required theme, and the terminal then parses this theme package to obtain the association relationship among the animation of the second object, the trigger condition and the response event.
Step 606: the terminal obtains system parameters of the terminal.
Illustratively, the system parameters are parameters that describe the execution capability of the terminal, such as the processor model, the graphics card model and the operating system version. Different processor models have different processing speeds: if the processor is slow, animations or pictures with a large amount of data cannot be displayed. Different graphics cards support different resolutions: if the graphics card is weak, animations or pictures with a high resolution cannot be displayed. Different operating systems can perform different operations: if the operating system is Android, an unlocking method designed for the iOS (iPhone Operating System) operating system cannot be performed. Different rendering systems produce different rendering effects: if the rendering engine used when the animation was made differs from the rendering engine of the terminal, the terminal cannot display the animation. Therefore, after parsing the theme package, the terminal can first obtain its system parameters and then determine which animations or pictures it can display and which operations it can perform. It should be noted that, to ensure a good display effect, the rendering engine used when the theme is made and the rendering engine configured on the terminal are required to be strictly consistent. For example, in this embodiment of the present invention the server can use the TOS (Tencent Operating System) rendering engine to make and beautify the animation when making the theme, and the terminal then also needs to be configured with the same TOS rendering engine, keeping the underlying rendering engines strictly consistent.
Step 607: the terminal generates a second correspondence among the animation of the second object, the trigger condition and the response event according to the association relationship and the system parameters of the terminal.
Illustratively, after the terminal has determined which animations or pictures it can display and which operations it can perform, it generates, from the association relationship in the theme package and its own system parameters, a second correspondence among the animation of the second object, the trigger condition and the response event that the terminal can actually use.
For example, suppose the theme package includes four animations: a first, second, third and fourth animation. The first animation corresponds to a first trigger condition and a first response event, the second animation to a second trigger condition and a second response event, the third animation to a third trigger condition and a third response event, and the fourth animation to a fourth trigger condition and a fourth response event. Because the terminal's CPU is relatively weak and the data volume of the first animation is large, the terminal cannot display the first animation normally; and because the terminal's operating system is Android while the fourth trigger condition and fourth response event apply to the iOS operating system, the terminal can neither detect the fourth trigger condition nor execute the fourth response event. The terminal can therefore, according to the association relationship in the theme package and its system parameters, select the association of the second animation, the second trigger condition and the second response event, and the association of the third animation, the third trigger condition and the third response event, to generate the second correspondence among the animation of the second object, the trigger condition and the response event. The second correspondence records the second animation with its corresponding second trigger condition and second response event, and the third animation with its corresponding third trigger condition and third response event.
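Illustratively, deriving the second correspondence from the association relationship and the system parameters can be sketched as a simple filter. The capability fields below are assumptions; the point carried over from the description is only that associations the terminal cannot display or execute are dropped.

```typescript
// Illustrative filtering of the theme's association relationships by terminal capability.
interface SystemParameters {
  os: "Android" | "iOS";
  maxAnimationBytes: number;        // rough proxy for processor/graphics capability
  renderingEngine: string;          // e.g. "TOS"
}

interface Association {
  animationId: string;
  animationBytes: number;           // data volume of the animation
  requiredOs?: "Android" | "iOS";   // some triggers/responses are OS-specific
  trigger: string;
  response: string;
}

// The "second correspondence": only the associations this terminal can actually use.
function buildSecondCorrespondence(
  associations: Association[],
  params: SystemParameters
): Association[] {
  return associations.filter(a =>
    a.animationBytes <= params.maxAnimationBytes &&
    (a.requiredOs === undefined || a.requiredOs === params.os)
  );
}
```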
In other embodiments of the present invention, as shown in Fig. 9, after the terminal has established the second correspondence among the animation of the second object, the trigger condition and the response event, the method further includes:
Step 608: the terminal receives a user operation.
After the terminal runs the theme package, it can perform different functions according to the user's operation. For example, the terminal can receive a user operation, which may be a swipe on the terminal screen while it is powered on, or a click on a terminal function key; this is not limited in the embodiments of the present invention.
Step 609: the terminal determines a first trigger condition corresponding to the user operation.
Illustratively, when the terminal detects the user's finger swiping on the screen, it can first detect the start position and direction of the swipe, and then determine the first trigger condition corresponding to the current user operation from this start position and swipe direction. With reference to Fig. 4, when the user's finger 401 clicks the top of the quilt and swipes along direction X in Fig. 4, the terminal determines that the first trigger condition corresponding to the current operation is the trigger condition for unlocking the terminal.
Step 610: the terminal displays the animation corresponding to the first trigger condition and executes the response event corresponding to the first trigger condition, according to the second correspondence among the animation of the second object, the trigger condition and the response event.
Illustratively, after determining the first trigger condition corresponding to the current user operation, the terminal can determine the animation and response event corresponding to this first trigger condition from the second correspondence, display the animation corresponding to the first trigger condition during the user operation, and execute the response event corresponding to the first trigger condition. Displaying here means that the terminal plays the animation; before the animation is displayed, the current image on the terminal screen is a still image.
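Illustratively, steps 608 to 610 can be sketched as a small dispatcher that maps the detected gesture to a trigger condition and then plays the associated animation and executes the associated response. The gesture matching and the playback/response hooks are assumptions made for illustration only.

```typescript
// Illustrative dispatch of a user operation against the second correspondence.
interface UserOperation {
  kind: "swipe" | "tap";
  startRegion: string;   // e.g. "quilt-top"
  endRegion?: string;    // e.g. "screen-bottom"
}

interface Binding {
  trigger: UserOperation;   // the trigger condition
  animationId: string;
  response: () => void;     // e.g. unlock the terminal
}

function handleUserOperation(
  op: UserOperation,
  secondCorrespondence: Binding[],
  playAnimation: (id: string) => void
): void {
  // Determine the first trigger condition corresponding to the user operation.
  const match = secondCorrespondence.find(b =>
    b.trigger.kind === op.kind &&
    b.trigger.startRegion === op.startRegion &&
    (b.trigger.endRegion === undefined || b.trigger.endRegion === op.endRegion)
  );
  if (!match) return; // no trigger condition met: keep showing the still key frame

  playAnimation(match.animationId); // display the corresponding animation
  match.response();                 // execute the corresponding response event
}
```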
With the theme generating method applied to a terminal provided by this embodiment of the present invention, a theme-making platform can be set up on the terminal at initialization. Through this platform the animation of the second object can be made from the motion data of the first object, so programmers do not need to write code every time a theme is made, which reduces the cost of theme production, simplifies the production steps and shortens the production time.
An embodiment of the present invention provides a theme generating method, described here by taking application to a server as an example. In this embodiment the first object is a human body and the second object is a game character in an online game. As shown in Fig. 10, the theme generating method includes:
Step 901: the server obtains human-body motion data captured by a 3D video camera.
Illustratively, as shown in Fig. 11, when obtaining the human-body motion data, an external 3D video camera can first be set up in a preset place, and a person then moves in front of the camera, for example by dancing. In this scene, as shown in dashed box 1001 in Fig. 11, the person 1001a dances in front of the 3D video camera 1001b. After the 3D video camera has recorded the video of the person's movement, it sends the video to the server, and the server can obtain the human-body motion data from this video. The human-body motion data includes a series of parameters that characterize the person's motion state, such as the direction of motion of the limbs, the movement distance, the change of angle and the movement trajectory.
Step 902: the server extracts human-skeleton motion data from the human-body motion data.
Illustratively, after obtaining the human-body motion data, the server can extract the skeleton motion data from it, i.e. obtain the orientation and position of each bone of the human body during the motion.
Step 903: the server generates an animation of the game character according to the skeleton motion data and a preset model of the game character.
Illustratively, the preset model of the game character also has a skeleton structure composed of interconnected bones, corresponding to the limbs of the game character. After the server obtains the skeleton motion data, it establishes a correspondence between the calibration bones in the human-skeleton data and the target bones in the skeleton structure of the preset model; usually the calibration bones of the human-skeleton data and the target bones of the preset model are in one-to-one correspondence, i.e. one bone in the human-skeleton data corresponds to one bone of the skeleton structure of the preset model. The server can then make the animation of the game character according to the skeleton motion data and the preset model of the game character, so that each target bone in the skeleton structure of the preset model moves according to the orientation and position of the corresponding calibration bone in the skeleton data, thereby obtaining the animation of the game character. For example, suppose the skeleton motion data describes a person dancing (such as 1001 in Fig. 11); when the animation of the game character is generated from this skeleton motion data, the actions in the animation are the same as the person's dancing actions (such as 1002 in Fig. 11).
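Illustratively, transferring the skeleton motion onto the game character's preset model can be sketched as a simple retargeting pass; the bone names, the pose representation and the strict one-to-one mapping below are assumptions for illustration.

```typescript
// Illustrative retargeting of human-skeleton motion onto a game character's skeleton.
interface BonePose {
  orientation: [number, number, number, number]; // quaternion of the bone
  position: [number, number, number];
}

// Skeleton motion data: for each frame, a pose per calibration bone.
type SkeletonFrame = Map<string, BonePose>;

// One-to-one correspondence between calibration bones and target bones,
// e.g. "human.leftForearm" -> "character.leftForearm".
type BoneMapping = Map<string, string>;

function retargetFrames(
  frames: SkeletonFrame[],
  mapping: BoneMapping
): SkeletonFrame[] {
  return frames.map(frame => {
    const out: SkeletonFrame = new Map();
    for (const [calibrationBone, targetBone] of mapping) {
      const pose = frame.get(calibrationBone);
      if (pose) out.set(targetBone, pose); // target bone follows the calibration bone
    }
    return out;
  });
}
```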
Step 904: the server combines the animation of the game character with a particle system to obtain an effect animation of the game character.
Illustratively, a particle system is a technique in three-dimensional computer graphics for simulating fuzzy phenomena; the particle elements it models include fire, explosions, smoke, flowing water, sparks, falling leaves, clouds, fog, snow, dust, meteor trails, glowing trails and the like, and abstract visual effects of these elements can be simulated with this technique. As shown in Fig. 12, before the generated animation of the game character is combined with the particle system, the background 110 is blank and the picture, containing only the game character, looks rather plain and not rich enough. After the animation of the game character is combined with the particle system, as shown in Fig. 13, meteors 120, changing clouds 121 and the like are simulated in the background 110 by the particle system, which enriches the picture and improves its appeal.
Step 905: the server establishes an association relationship among the effect animation of the game character, a trigger condition and a response event.
At initialization, the server can set up a theme-making platform as shown in Fig. 3. After the animation of the game character is obtained, the association relationship among the effect animation of the game character, the trigger condition and the response event can be established through this platform; the establishing process is as described in step 203.
Step 906: the server generates a theme package from the effect animation of the game character according to the association relationship.
Illustratively, the position attributes of each element in the animation of the game character, the type of the animation of the game character, the trigger condition and the response event can first be saved as a dedicated scene description file, and the theme package is then generated from this scene description file and related resources such as icon pictures and the preset model of the game character.
Step 907: the server receives a download request, sent by a mobile phone, for requesting the theme package.
Illustratively, the server can store multiple theme packages, identify different theme packages by different numberings, and send previews of the theme packages, carrying their numberings, to the terminal. When the terminal needs to change its theme, the user can first browse the previews of the themes, select the theme to download and click it; the terminal then sends the server a download request for the selected theme package, the download request including the numbering of the selected theme package.
Step 908: the server sends the theme package to the terminal.
Illustratively, after receiving the download request, the server parses it, obtains the numbering of the theme selected by the user, queries the theme package selected by the user according to this numbering, and sends this theme package to the terminal.
Step 909: the terminal parses the theme package to obtain the association relationship among the animation of the game character, the trigger condition and the response event.
Illustratively, because the theme package is stored in a special format, a terminal that wants to run the theme package needs to first parse it according to the theme-package encoding to obtain the animation of the game character carried in the theme package and the association relationship among the animation of the game character, the trigger condition and the response event.
Step 910: the terminal obtains its system parameters and generates a second correspondence among the animation of the game character, the trigger condition and the response event according to the association relationship and the system parameters of the terminal.
Illustratively, the system parameters are parameters that describe the execution capability of the terminal, such as the processor model, the graphics card model and the operating system version. Different processor models have different processing speeds, different graphics cards support different resolutions, and different operating systems can perform different operations. Therefore, after parsing the theme package, the terminal can first obtain its system parameters, determine which animations or pictures it can display and which operations it can perform, and then, according to the association relationship in the theme package and the system parameters of the terminal, generate the second correspondence among the animation of the game character, the trigger condition and the response event that the terminal can use.
Step 911: the terminal receives a user operation, and proceeds to step 912.
Here, for step 911, reference may be made to step 608.
Step 912: the terminal determines a first trigger condition corresponding to the user operation, and proceeds to step 913.
Here, for step 912, reference may be made to step 609.
Step 913: the terminal, according to the second correspondence among the animation, the trigger condition and the response event of the game, displays the animation corresponding to the first trigger condition and executes the response event corresponding to the first trigger condition.
Here, for step 913, reference may be made to step 610.
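A minimal sketch of steps 911 to 913 is given below: a user operation is mapped to its trigger condition, and the matching animation and response event are then looked up in the second correspondence. The operation names and callback parameters are illustrative assumptions.

```typescript
type Association = { triggerCondition: string; animationId: string; responseEvent: string };

function handleUserOperation(
  operation: string,                              // e.g. "unlock-screen", "tap-icon"
  operationToTrigger: Map<string, string>,        // user operation -> first trigger condition
  secondCorrespondence: Association[],
  showAnimation: (animationId: string) => void,
  executeEvent: (responseEvent: string) => void,
): void {
  const trigger = operationToTrigger.get(operation);
  if (!trigger) return;                           // operation has no trigger condition attached
  for (const a of secondCorrespondence) {
    if (a.triggerCondition === trigger) {
      showAnimation(a.animationId);               // display the animation of the first trigger condition
      executeEvent(a.responseEvent);              // and execute its response event
    }
  }
}
```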
With the theme generating method provided by the embodiment of the present invention, a theme making platform can be set up by programming personnel at initialization time. Through this platform the animation of the second object can be made according to the motion data of the first object, and the trigger condition and the response event of that animation can also be selected on the platform, so that programmers are not needed every time a theme is made, which reduces the cost of theme making, simplifies the theme making steps and shortens the theme making time.
The embodiment of the present invention provides an application scenario. As shown in Figure 14, the application scenario includes a terminal side 1301, a WEB (network) side 1302 and a theme making platform 1303, where the WEB side 1302 further includes a server 1302a. Here, JS (JavaScript) is a scripting language, UI (User Interface) is the user interface, and OpenGL ES (OpenGL for Embedded Systems) is a subset of the OpenGL (Open Graphics Library) three-dimensional graphics API (Application Programming Interface) designed for embedded devices such as mobile phones, tablet computers and game consoles. WebGL is a 3D drawing standard that combines JavaScript with OpenGL ES 2.0; by adding a JavaScript binding to OpenGL ES 2.0, WebGL can provide hardware-accelerated 3D rendering for the HTML5 Canvas, so that web developers can use the system graphics card to display 3D scenes and models smoothly in the browser and can also create complex navigation and data visualization.
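As a brief illustration of the mechanism this scenario relies on, the following sketch obtains a hardware-accelerated WebGL context on an HTML5 canvas; the canvas id is an assumption, and only standard browser APIs are used.

```typescript
// Create a WebGL rendering context on an existing <canvas> element.
function createRenderingContext(canvasId: string): WebGLRenderingContext {
  const canvas = document.getElementById(canvasId) as HTMLCanvasElement | null;
  if (!canvas) throw new Error(`canvas "${canvasId}" not found`);
  const gl = canvas.getContext("webgl");          // WebGL 1.0 is based on OpenGL ES 2.0
  if (!gl) throw new Error("WebGL is not supported by this browser");
  gl.clearColor(0.0, 0.0, 0.0, 1.0);              // clear to opaque black
  gl.clear(gl.COLOR_BUFFER_BIT);
  return gl;
}
```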
First, the theme making platform 1303 obtains the animation and the multiple basic objects used for making the theme, and then establishes the association relationship among the animation, the trigger condition and the response event; the position attributes of each object, the animation type, the trigger condition, the response event and so on are then saved as a dedicated scene description file in the TOS3DSCENE format, and the theme package named theme.zip is assembled from the dedicated scene description file, icon pictures, 3D models and other related resources according to the Scene.xml protocol. The WEB side 1302 then loads this theme package into the WEB-JS (WEB JavaScript) layer, and the animation in the theme package is rendered, beautified and saved by the TOS rendering engine. Here, the TOS rendering engine runs on the WEB side 1302 through emscripten combined with WebGL, and runs on the mobile phone terminal after being integrated into the ROM system, so that strict consistency of the rendering engine bottom layer is maintained.
When the terminal side 1301 needs to update its theme, the theme package theme.zip can be loaded into the Android system layer by application software of the Android system; the application logic parses theme.zip to obtain the dedicated TOS3DSCENE scene description file, the TOS rendering engine restores the visual scene of the WEB side 1302, and at the same time the association relationship in the dedicated scene description file is analyzed to build a mapping graph of response events, trigger conditions and animations, so that the design made by the designer on the WEB side 1302 is completely restored both visually and logically.
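The mapping graph can be sketched, under assumed names and shapes (not the actual TOS format), as an index from each trigger condition to the animations and response events it drives:

```typescript
type Association = { triggerCondition: string; animationId: string; responseEvent: string };

// Build the trigger-condition mapping graph from the parsed associations.
function buildTriggerMap(
  associations: Association[],
): Map<string, { animationId: string; responseEvent: string }[]> {
  const map = new Map<string, { animationId: string; responseEvent: string }[]>();
  for (const a of associations) {
    const list = map.get(a.triggerCondition) ?? [];
    list.push({ animationId: a.animationId, responseEvent: a.responseEvent });
    map.set(a.triggerCondition, list);            // one trigger condition may drive several animations
  }
  return map;
}
```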
With the theme generating method provided by the embodiment of the present invention, a theme making platform can be set up by programming personnel at initialization time. Through this platform the animation of the second object can be made according to the motion data of the first object, and the trigger condition and the response event of that animation can also be selected on the platform, so that programmers are not needed every time a theme is made, which reduces the cost of theme making, simplifies the theme making steps and shortens the theme making time.
An embodiment of the present invention provides a theme generating apparatus 140. As shown in Figure 15, the apparatus 140 includes:
a first acquiring unit 1401, configured to obtain motion data of a first object;
a first generating unit 1402, configured to generate an animation of a second object according to the motion data of the first object and a preset model of the second object, where the first object is different from the second object;
a first establishing unit 1403, configured to establish an association relationship among the animation, a trigger condition and a response event of the second object, where the trigger condition is used to trigger display of the animation and execution of the response event; and
a second generating unit 1404, configured to generate a theme package from the animation of the second object according to the association relationship, where the theme package is used to enhance a system or software interface.
In other embodiments of the present invention, the first generating unit 1402 is configured to: obtain, from the motion data, motion trajectories of multiple calibration points of the first object; establish a first correspondence between each calibration point of the first object and each target point of the second object; and generate the animation of the second object according to the first correspondence and the motion trajectories of the calibration points, so that any target point of the second object moves along the motion trajectory of the calibration point corresponding to that target point.
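A minimal sketch of this calibration-point-to-target-point mapping is given below; the point identifiers and trajectory representation are assumptions for illustration only.

```typescript
interface Point3 { x: number; y: number; z: number }

// Each target point of the second object copies the trajectory of its paired calibration point.
function generateTargetTrajectories(
  calibrationTrajectories: Map<string, Point3[]>,   // calibration point id -> sampled trajectory
  firstCorrespondence: Map<string, string>,         // target point id -> calibration point id
): Map<string, Point3[]> {
  const targetTrajectories = new Map<string, Point3[]>();
  for (const [targetId, calibrationId] of firstCorrespondence) {
    const trajectory = calibrationTrajectories.get(calibrationId);
    if (trajectory) targetTrajectories.set(targetId, trajectory); // target follows the same path
  }
  return targetTrajectories;
}
```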
In other embodiments of the present invention, the motion data of the first object is human motion data. As shown in Figure 16, the apparatus 140 further includes an extraction unit 1405, configured to extract skeleton motion data of the human motion data from the motion data of the human body; the first generating unit 1402 is configured to generate the animation of the second object according to the skeleton motion data and the preset model.
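One very simplified way to picture the extraction step is to keep only the trajectories of a fixed set of skeleton joints out of the full human motion data; the joint names below are illustrative assumptions and not the disclosed extraction algorithm.

```typescript
interface Point3 { x: number; y: number; z: number }

const SKELETON_JOINTS = ["head", "neck", "leftShoulder", "rightShoulder", "leftElbow",
  "rightElbow", "leftHip", "rightHip", "leftKnee", "rightKnee"];

// Keep only the skeleton joint trajectories from the full human motion data.
function extractSkeletonMotion(humanMotion: Map<string, Point3[]>): Map<string, Point3[]> {
  const skeleton = new Map<string, Point3[]>();
  for (const joint of SKELETON_JOINTS) {
    const trajectory = humanMotion.get(joint);
    if (trajectory) skeleton.set(joint, trajectory);  // drop non-skeleton markers
  }
  return skeleton;
}
```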
In other embodiments of the present invention, as shown in Figure 17, the apparatus 140 further includes a second acquiring unit 1406, configured to obtain particle elements involved in a particle system and generate an effect animation of the second object according to the animation of the second object and the particle elements; the first establishing unit 1403 is configured to establish an association relationship among the effect animation, the trigger condition and the response event of the second object; and the second generating unit 1404 is configured to generate the theme package from the effect animation of the second object according to the association relationship.
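As a sketch only, the effect animation can be pictured as the object's animation augmented with particles emitted around its position in every frame; the particle fields and emission rule are assumptions, not the disclosed particle system.

```typescript
interface Point3 { x: number; y: number; z: number }
interface Particle { position: Point3; lifetimeMs: number }

// Emit a small cloud of particles around the second object's position in each animation frame.
function buildEffectAnimation(
  objectTrajectory: Point3[],                     // per-frame positions of the second object
  particlesPerFrame: number,
  lifetimeMs: number,
): Particle[][] {
  return objectTrajectory.map((position) => {
    const frameParticles: Particle[] = [];
    for (let i = 0; i < particlesPerFrame; i++) {
      frameParticles.push({
        position: {                               // jitter each particle slightly around the object
          x: position.x + (Math.random() - 0.5) * 0.1,
          y: position.y + (Math.random() - 0.5) * 0.1,
          z: position.z + (Math.random() - 0.5) * 0.1,
        },
        lifetimeMs,
      });
    }
    return frameParticles;
  });
}
```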
In other embodiments of the present invention, the first acquiring unit 1401 is configured to receive the motion data of the first object captured and sent by an image acquisition component.
In other embodiments of the present invention, the first acquiring unit 1401 is configured to: send a download request for a first video to the internet, where the first video is a motion video of the first object; receive the first video returned by the internet; and obtain the motion data of the first object according to the first video.
In other embodiments of the present invention, as shown in Figure 18, the apparatus 140 further includes: a first receiving unit 1407, configured to receive a download request sent by a terminal; and a first sending unit 1408, configured to send the theme package to the terminal according to the download request.
In other embodiments of the present invention, as shown in Figure 19, the apparatus 140 further includes: a running unit 1409, configured to run the theme package; and a second establishing unit 1410, configured to establish, according to the association relationship among the animation, the trigger condition and the response event and the system parameters of the terminal, a second correspondence among the animation, the trigger condition and the response event that are available to the terminal.
In other embodiments of the present invention, as shown in Figure 20, the apparatus 140 further includes: a second receiving unit 1411, configured to receive a first operation of a user; a determining unit 1412, configured to determine a first trigger condition corresponding to the first operation; and an execution unit 1414, configured to display, according to the second correspondence among the animation, the trigger condition and the response event, the animation corresponding to the first trigger condition and execute the response event corresponding to the first trigger condition.
The embodiment of the present invention provides a theme generating apparatus. A theme making platform can be set up by programming personnel at initialization time; through this platform the animation of the second object can be made according to the motion data of the first object, and the trigger condition and the response event of that animation can also be selected on the platform, so that programmers are not needed every time a theme is made, which reduces the cost of theme making, simplifies the theme making steps and shortens the theme making time.
An embodiment of the present invention provides a theme generating device 200. As shown in Figure 21, the device 200 includes a communication interface 2001 and a processor 2002, where the processor 2002 is configured to: obtain motion data of a first object through the communication interface 2001; generate an animation of a second object according to the motion data of the first object and a preset model of the second object, where the first object is different from the second object; establish an association relationship among the animation, a trigger condition and a response event of the second object, where the trigger condition is used to trigger display of the animation and execution of the response event; and generate a theme package from the animation of the second object according to the association relationship, where the theme package is used to enhance a system or software interface.
In other embodiments of the present invention, the processor 2002 is configured to: obtain, from the motion data, motion trajectories of multiple calibration points of the first object; establish a first correspondence between each calibration point of the first object and each target point of the second object; and generate the animation of the second object according to the first correspondence and the motion trajectories of the calibration points, so that any target point of the second object moves along the motion trajectory of the calibration point corresponding to that target point.
In other embodiments of the present invention, the motion data of the first object is human motion data; the processor 2002 is configured to: extract skeleton motion data of the human motion data; generate a skeleton animation of the preset model according to the skeleton motion data and the preset model; establish an association relationship among the skeleton animation, the trigger condition and the response event; and generate the theme package from the skeleton animation according to the association relationship.
In other embodiments of the present invention, the processor 2002 is configured to: obtain particle elements involved in a particle system; generate an effect animation according to the animation and the particle elements; establish an association relationship among the effect animation, the trigger condition and the response event; and generate the theme package from the effect animation according to the association relationship.
In other embodiments of the present invention, the processor 2002 is further configured to receive, through the communication interface 2001, the motion data of the first object captured and sent by an image acquisition component.
In other embodiments of the present invention, the processor 2002 is further configured to: send, through the communication interface 2001, a download request for a first video to the internet, where the first video is a motion video of the first object; and receive the first video returned by the internet; the processor 2002 is configured to obtain the motion data of the first object according to the first video.
In other embodiments of the present invention, the processor 2002 is further configured to receive, through the communication interface 2001, a download request sent by a terminal and, according to the download request, send the theme package to the terminal through the communication interface 2001.
In other embodiments of the present invention, the processor 2002 is configured to: run the theme package; and establish, according to the association relationship among the animation, the trigger condition and the response event and the system parameters of the terminal, a second correspondence among the animation, the trigger condition and the response event that are available to the terminal.
The processor 2002 is further configured to: receive a first operation of a user through the communication interface 2001; determine a first trigger condition corresponding to the first operation; and display, according to the second correspondence among the animation, the trigger condition and the response event, the animation corresponding to the first trigger condition and execute the response event corresponding to the first trigger condition.
The embodiment of the present invention provides a theme generating device. A theme making platform can be set up by programming personnel at initialization time; through this platform the animation of the second object can be made according to the motion data of the first object, and the trigger condition and the response event of that animation can also be selected on the platform, so that programmers are not needed every time a theme is made, which reduces the cost of theme making, simplifies the theme making steps and shortens the theme making time.
It should be noted that the description of the above apparatus embodiments is similar to the description of the method embodiments and has the same beneficial effects as the method embodiments, and is therefore not repeated. For technical details not disclosed in the device embodiments of the present invention, those skilled in the art may refer to the description of the method embodiments of the present invention; to save space, they are not repeated here.
It should be understood that reference throughout the specification to "an embodiment" or "one embodiment" means that a particular feature, structure or characteristic related to the embodiment is included in at least one embodiment of the present invention. Therefore, "in an embodiment" or "in one embodiment" appearing throughout the specification does not necessarily refer to the same embodiment. In addition, these particular features, structures or characteristics may be combined in one or more embodiments in any suitable manner. It should be understood that, in the various embodiments of the present invention, the size of the sequence numbers of the above processes does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention. The sequence numbers of the above embodiments of the present invention are merely for description and do not represent the superiority or inferiority of the embodiments.
It should also be noted that, herein, the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or apparatus that includes a series of elements not only includes those elements but also includes other elements not expressly listed, or further includes elements inherent to such a process, method, article or apparatus. Without further limitation, an element defined by the statement "including a ..." does not exclude the existence of other identical elements in the process, method, article or apparatus that includes that element.
In the several embodiments provided herein, it should be understood that the disclosed device and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a division by logical function, and other division manners may be used in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, or each unit may serve separately as a unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of hardware plus software functional units.
A person of ordinary skill in the art will appreciate that all or some of the steps of the above method embodiments may be completed by hardware related to program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments; and the storage medium includes various media capable of storing program code, such as a removable storage device, a read-only memory (Read Only Memory, ROM), a magnetic disk or an optical disc. Alternatively, if the above integrated unit of the present invention is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device or the like) to perform all or some of the methods described in the embodiments of the present invention. The storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a magnetic disk or an optical disc.
The above is only the specific implementation of the present invention, but the protection scope of the present invention is not limited thereto; any change or replacement readily conceivable by a person familiar with the technical field within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the scope of the claims.
Claims (17)
1. A theme generating method, characterized in that the method comprises:
obtaining motion data of a first object;
generating an animation of a second object according to the motion data of the first object and a preset model of the second object, wherein the first object is different from the second object;
establishing an association relationship among the animation, a trigger condition and a response event of the second object, wherein the trigger condition is used to trigger display of the animation and execution of the response event; and
generating a theme package from the animation of the second object according to the association relationship, wherein the theme package, when run, can instruct a terminal to detect the trigger condition, display the animation when the trigger condition is met and execute the response event.
2. The method according to claim 1, characterized in that generating the animation of the second object according to the motion data of the first object and the preset model of the second object comprises:
obtaining, from the motion data, motion trajectories of multiple calibration points of the first object;
establishing a first correspondence between each calibration point of the first object and each target point of the second object; and
generating the animation of the second object according to the first correspondence and the motion trajectories of the calibration points, wherein any target point of the second object in the animation of the second object moves along the motion trajectory of the calibration point corresponding to that target point.
3. The method according to claim 1, characterized in that the motion data of the first object is motion data of a human body; the method further comprises: extracting human skeleton motion data from the motion data of the human body; and
generating the animation of the second object according to the motion data of the first object and the preset model of the second object comprises: generating the animation of the second object according to the skeleton motion data and the preset model of the second object.
4. The method according to claim 3, characterized in that the method further comprises:
obtaining particle elements involved in a particle system, and generating an effect animation of the second object according to the animation of the second object and the particle elements;
establishing the association relationship among the animation, the trigger condition and the response event of the second object comprises: establishing an association relationship among the effect animation, the trigger condition and the response event of the second object; and
generating the theme package from the animation of the second object according to the association relationship comprises: generating the theme package from the effect animation of the second object according to the association relationship.
5. The method according to any one of claims 1 to 4, characterized in that obtaining the motion data of the first object comprises:
obtaining the motion data of the first object from an image acquisition component.
6. The method according to any one of claims 1 to 4, characterized in that obtaining the motion data of the first object comprises:
obtaining a motion video of the first object locally, or requesting the motion video of the first object from a video server; and
obtaining the motion data of the first object according to the motion video of the first object.
7. The method according to any one of claims 1 to 4, characterized in that the method comprises:
receiving, by a server, a download request for the theme package sent by a terminal; and
sending, by the server, the theme package to the terminal according to the download request.
8. The method according to any one of claims 1 to 4, characterized in that the method comprises:
parsing, by a terminal, the theme package to obtain the association relationship among the animation, the trigger condition and the response event of the second object;
obtaining, by the terminal, system parameters of the terminal; and
generating, by the terminal, a second correspondence among the animation, the trigger condition and the response event of the second object according to the association relationship and the system parameters of the terminal.
9. The method according to claim 8, characterized in that the method further comprises:
receiving, by the terminal, a user operation;
determining, by the terminal, a first trigger condition corresponding to the user operation; and
displaying, by the terminal, the animation corresponding to the first trigger condition and executing the response event corresponding to the first trigger condition according to the second correspondence among the animation, the trigger condition and the response event of the second object.
10. A theme generating apparatus, characterized in that the apparatus comprises:
a first acquiring unit, configured to obtain motion data of a first object;
a first generating unit, configured to generate an animation of a second object according to the motion data of the first object and a preset model of the second object, wherein the first object is different from the second object;
a first establishing unit, configured to establish an association relationship among the animation, a trigger condition and a response event of the second object, wherein the trigger condition is used to trigger display of the animation and execution of the response event; and
a second generating unit, configured to generate a theme package from the animation of the second object according to the association relationship, wherein the theme package is used to enhance a system or software interface.
11. The apparatus according to claim 10, characterized in that the first generating unit is configured to: obtain, from the motion data, motion trajectories of multiple calibration points of the first object;
establish a first correspondence between each calibration point of the first object and each target point of the second object; and
generate the animation of the second object according to the first correspondence and the motion trajectories of the calibration points, so that any target point of the second object moves along the motion trajectory of the calibration point corresponding to that target point.
12. The apparatus according to claim 10, characterized in that the motion data of the first object is human motion data;
the apparatus further comprises: an extraction unit, configured to extract skeleton motion data of the human motion data from the motion data of the human body; and
the first generating unit is configured to generate the animation of the second object according to the skeleton motion data and the preset model of the second object.
13. The apparatus according to claim 12, characterized in that the apparatus further comprises:
a second acquiring unit, configured to obtain particle elements involved in a particle system, and generate an effect animation of the second object according to the animation of the second object and the particle elements;
the first establishing unit is configured to establish an association relationship among the effect animation, the trigger condition and the response event of the second object; and
the second generating unit is configured to generate the theme package from the effect animation of the second object according to the association relationship.
14. A theme generating device, characterized in that the device comprises a communication interface and a processor, wherein the processor is configured to: obtain motion data of a first object through the communication interface; generate an animation of a second object according to the motion data of the first object and a preset model of the second object, wherein the first object is different from the second object; establish an association relationship among the animation, a trigger condition and a response event of the second object, wherein the trigger condition is used to trigger display of the animation and execution of the response event; and generate a theme package from the animation of the second object according to the association relationship, wherein the theme package is used to enhance a system or software interface.
15. The device according to claim 14, characterized in that the processor is configured to:
obtain, from the motion data, motion trajectories of multiple calibration points of the first object; establish a first correspondence between each calibration point of the first object and each target point of the second object; and generate the animation of the second object according to the first correspondence and the motion trajectories of the calibration points, wherein any target point of the second object in the animation of the second object moves along the motion trajectory of the calibration point corresponding to that target point.
16. The device according to claim 14, characterized in that the motion data of the first object is human motion data;
the processor is configured to: extract skeleton motion data of the human motion data from the motion data of the human body; and generate the animation of the second object according to the skeleton motion data and the preset model of the second object.
17. The device according to claim 16, characterized in that the processor is configured to: obtain particle elements involved in a particle system; generate an effect animation of the second object according to the animation of the second object and the particle elements; establish an association relationship among the effect animation, the trigger condition and the response event of the second object; and generate the theme package from the effect animation of the second object according to the association relationship.