CN106097417B - Theme generation method, apparatus, and device - Google Patents

Theme generation method, apparatus, and device

Info

Publication number
CN106097417B
CN106097417B (granted publication of application CN201610404634.2A / CN201610404634A)
Authority
CN
China
Prior art keywords
animation
trigger condition
theme
terminal
motion data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610404634.2A
Other languages
Chinese (zh)
Other versions
CN106097417A
Inventor
符乐安
赵松龄
黄彬
龙振海
沈志鹏
李世宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201610404634.2A priority Critical patent/CN106097417B/en
Publication of CN106097417A publication Critical patent/CN106097417A/en
Application granted granted Critical
Publication of CN106097417B publication Critical patent/CN106097417B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G06T13/20 - 3D [Three Dimensional] animation
    • G06T13/40 - 3D animation of characters, e.g. humans, animals or virtual beings
    • G06T13/60 - 3D animation of natural phenomena, e.g. rain, snow, water or plants
    • G06T2213/00 - Indexing scheme for animation
    • G06T2213/08 - Animation software package

Abstract

The invention discloses a theme generation method. The method includes: obtaining motion data of a first object; generating an animation of a second object according to the motion data of the first object and a preset model of the second object, where the first object is different from the second object; establishing an association among the animation of the second object, a trigger condition, and a response event, where the trigger condition is used to trigger display of the animation and execution of the response event; and generating a theme package from the animation of the second object according to the association, where, when the theme package runs, it can instruct a terminal to detect the trigger condition and, when the trigger condition is met, display the animation and execute the response event. Embodiments of the present invention further provide a theme generation apparatus and device.

Description

Theme generation method, apparatus, and device
Technical field
The present invention relates to information processing technology, and in particular to a theme generation method, apparatus, and device.
Background art
With the development of electronic products, mobile phones offer more and more functions. When choosing a phone, users no longer focus only on call performance; they also care about whether the phone's additional features meet their needs. Among these, a rich selection of phone themes has become an important criterion for young users when selecting a phone.
In current theme production, art designers first use a design tool such as PS (Adobe Photoshop) to draw multiple pictures that together depict a continuous action and save them as separate pictures in PNG (Portable Network Graphics) or JPEG (Joint Photographic Experts Group) format. Programmers then write code so that these pictures are displayed as required to produce an animation, combine the animation with the theme program to generate a theme package, and finally import the theme package into a phone manually so that the phone can run the theme package and replace the current theme.
However, producing a theme in this way requires art designers and programmers to cooperate, which makes production costly. Because the production steps are complex, mistakes are easy to make and the rework rate is high. In addition, producing a single theme takes a long time, so production efficiency is low.
Summary of the invention
In view of this, to solve at least one of the problems in the prior art, embodiments of the present invention provide a theme generation method, apparatus, and device, which can reduce theme production cost, shorten the theme production cycle, and lower the theme production error rate.
The technical solution of the present invention is realized as follows:
In a first aspect, an embodiment of the present invention provides a theme generation method, the method including:
obtaining motion data of a first object;
generating an animation of a second object according to the motion data of the first object and a preset model of the second object, where the first object is different from the second object;
establishing an association among the animation of the second object, a trigger condition, and a response event, where the trigger condition is used to trigger display of the animation and execution of the response event;
generating a theme package from the animation of the second object according to the association, where, when the theme package runs, it can instruct a terminal to detect the trigger condition and, when the trigger condition is met, display the animation and execute the response event.
In a second aspect, an embodiment of the present invention provides a theme generation apparatus, the apparatus including:
a first acquisition unit, configured to obtain motion data of a first object;
a first generation unit, configured to generate an animation of a second object according to the motion data of the first object and a preset model of the second object, where the first object is different from the second object;
a first establishing unit, configured to establish an association among the animation of the second object, a trigger condition, and a response event, where the trigger condition is used to trigger display of the animation and execution of the response event;
a second generation unit, configured to generate a theme package from the animation of the second object according to the association, the theme package being used to beautify a system or software interface.
In a third aspect, an embodiment of the present invention provides a theme generation device, the device including a communication interface and a processor, where the processor is configured to: obtain motion data of a first object through the communication interface; generate an animation of a second object according to the motion data of the first object and a preset model of the second object, where the first object is different from the second object; establish an association among the animation of the second object, a trigger condition, and a response event, where the trigger condition is used to trigger display of the animation and execution of the response event; and generate a theme package from the animation of the second object according to the association, the theme package being used to beautify a system or software interface.
Embodiments of the present invention provide a theme generation method, apparatus, and device. The method includes: obtaining motion data of a first object; generating an animation of a second object according to the motion data of the first object and a preset model of the second object, where the first object is different from the second object; establishing an association among the animation of the second object, a trigger condition, and a response event, where the trigger condition is used to trigger display of the animation and execution of the response event; and generating a theme package from the animation of the second object according to the association, where, when the theme package runs, it can instruct a terminal to detect the trigger condition and, when the trigger condition is met, display the animation and execute the response event. Compared with the prior art, a theme-making platform can be set up by programmers at initialization. Through this platform, the animation of the second object can be produced from the motion data of the first object, and the trigger condition and response event of that animation can also be selected on the platform, so programmers no longer have to write code every time a theme is made. This reduces theme production cost, simplifies the production steps, and shortens theme production time.
Description of the drawings
Fig. 1 is a schematic diagram of an implementation environment involved in an embodiment of the present invention;
Fig. 2 is schematic flowchart 1 of a theme generation method provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of a theme-making platform provided by an embodiment of the present invention;
Fig. 4 is animation schematic diagram 1 provided by an embodiment of the present invention;
Fig. 5 is schematic flowchart 2 of a theme generation method provided by an embodiment of the present invention;
Fig. 6 is schematic flowchart 3 of a theme generation method provided by an embodiment of the present invention;
Fig. 7 is animation schematic diagram 2 provided by an embodiment of the present invention;
Fig. 8 is schematic flowchart 4 of a theme generation method provided by an embodiment of the present invention;
Fig. 9 is schematic flowchart 5 of a theme generation method provided by an embodiment of the present invention;
Fig. 10 is schematic flowchart 6 of a theme generation method provided by an embodiment of the present invention;
Fig. 11 is a schematic diagram of a scene in which a video is captured, provided by an embodiment of the present invention;
Fig. 12 is animation schematic diagram 3 provided by an embodiment of the present invention;
Fig. 13 is animation schematic diagram 4 provided by an embodiment of the present invention;
Fig. 14 is a schematic diagram of hardware logic provided by an embodiment of the present invention;
Fig. 15 is structural schematic diagram 1 of a theme generation apparatus provided by an embodiment of the present invention;
Fig. 16 is structural schematic diagram 2 of a theme generation apparatus provided by an embodiment of the present invention;
Fig. 17 is structural schematic diagram 3 of a theme generation apparatus provided by an embodiment of the present invention;
Fig. 18 is structural schematic diagram 4 of a theme generation apparatus provided by an embodiment of the present invention;
Fig. 19 is structural schematic diagram 5 of a theme generation apparatus provided by an embodiment of the present invention;
Fig. 20 is structural schematic diagram 6 of a theme generation apparatus provided by an embodiment of the present invention;
Fig. 21 is structural schematic diagram 7 of a theme generation apparatus provided by an embodiment of the present invention.
Detailed description of embodiments
To solve the problems described in the background art, an embodiment of the present invention provides a theme generation method that can be applied to a server. The server is generally provided by an operator or a third party, and may be a single server, a server cluster composed of multiple servers, or a cloud computing service center. Fig. 1 is a schematic diagram of the implementation environment when the method is applied to a server. As shown in Fig. 1, the implementation environment includes a first terminal 11 and a server 12 located on the network side. The first terminal 11 may be a mobile terminal, such as a mobile phone or a tablet computer, or a fixed terminal, such as a fixed-line telephone or an ATM. The first terminal 11 is connected to the server 12 through a wireless or wired network, and the server 12 can send a generated theme package to the first terminal 11 upon its request, so that the first terminal 11 can apply the theme package. The theme generation method can also be applied to a terminal. The terminal may be a mobile terminal, such as a mobile phone or a tablet computer, on which a platform capable of executing the theme generation method is installed; after a theme package is generated using the platform, the terminal can store the theme package locally and run it.
The technical solution of the present invention is further described below with reference to the accompanying drawings and specific embodiments.
An embodiment of the present invention provides a theme generation method applied to a server. Fig. 2 is a schematic flowchart of the method. As shown in Fig. 2, the method includes:
Step 201: obtain motion data of a first object.
In this embodiment, the first object may be any object in nature, such as a human body, an animal, or a tool. The motion data may be a series of parameters that characterize the motion state of the first object, such as its direction of motion, movement distance, change of angle, and movement trajectory.
A first optional acquisition manner: the motion data of the first object may be obtained through an external image acquisition component, which may be any device capable of capturing pictures or video of an object, such as a camera, a video camera, or a 3D (three-dimensional) video camera. For example, when obtaining the motion data of the first object, a 3D framing region may be set up, in which multiple video cameras are arranged in a circle to form a framing ring connected to a sending device. The first object is placed inside the framing ring. When the first object moves, the cameras are started to shoot around the first object and record motion video of the first object from multiple orientations; the sending device then sends the multi-orientation motion video to the server, and the server can extract the motion data of the first object from it.
Alternatively, a 3D video camera may be set up. A 3D video camera is manufactured with 3D lenses and usually has two or more lenses, with the spacing between adjacent lenses close to the spacing of human eyes, so that it can capture different images of the same scene similar to what the two eyes would see. When the first object moves, the 3D video camera is started to shoot and obtain a 3D motion video of the first object, the 3D motion video is sent to the server, and the server can extract the motion data of the first object from the 3D motion video.
A second optional acquisition manner: a motion video of the first object may be obtained locally, or requested from a video server, and the motion data of the first object is then obtained from the motion video. Illustratively, the server locally stores motion videos of multiple different objects, for example a video of people dancing, a video of an animal running, or a video of a ball rolling. When obtaining the motion data of the first object, the type of the first object is determined first; then, according to a user instruction, a motion video of a first object that matches the user's needs is selected from the locally stored videos, and the motion data of the first object is extracted from the selected video. Alternatively, at initialization, a large number of motion videos of different types of objects are stored on a video server. When obtaining the motion data of the first object, the type of the first object is determined first, the server sends a query request to the video server, and the video server allows the server to browse the stored motion videos according to the query request; the server selects a suitable motion video of the first object according to the user's needs and clicks to download it, the video server sends the selected motion video to the server, and the server extracts the motion data of the first object from the received video. Alternatively, after the type of the first object is determined, the server may, according to a user instruction, send a download request for a first video to a video server connected to the Internet, the first video being a motion video of the first object; the server then receives the first video delivered by the video server over the Internet and extracts the motion data of the first object from it.
Step 202: generate an animation of a second object according to the motion data of the first object and a preset model of the second object, where the first object is different from the second object.
In the animation of the second object generated in this embodiment from the motion data of the first object and the preset model of the second object, the second object moves in accordance with the motion of the first object. For example, suppose the first object is a cat and its motion data records the cat's movement while hunting, including the jump height, jump direction, direction of paw motion, and paw motion frequency, and the second object is a Minion cartoon character; from the cat's hunting motion data and the Minion's preset model, an animation in which the Minion hunts with the cat's hunting actions can be generated.
An optional generation manner: first, multiple calibration points are set on the first object and multiple target points are set on the second object. The movement trajectory of each calibration point is then obtained from the motion video of the first object, and a first correspondence between the calibration points of the first object and the target points of the second object is established according to preset rules. Usually, the preset rules indicate that calibration points correspond to target points one to one; the animation of the second object is then generated according to the first correspondence and the movement trajectories of the calibration points, so that in the animation each target point of the second object moves along the movement trajectory of its corresponding calibration point. The preset rules may also indicate that one calibration point corresponds to multiple target points, or that multiple calibration points correspond to one target point. When one calibration point corresponds to multiple target points, those target points all move along the trajectory of that calibration point in the animation of the second object. When multiple calibration points correspond to one target point, that target point moves along a combination of the trajectories of the corresponding calibration points; for example, if a target point corresponds to a first calibration point whose trajectory is vertical motion and a second calibration point whose trajectory is horizontal motion, the target point can move obliquely upward at a 45° angle, as illustrated in the sketch below.
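The following is a minimal sketch, not the patented implementation, of mapping calibration-point trajectories on the first object to target points on the second object's preset model; the names Trajectory, Mapping, and retargetMotion are illustrative assumptions, and combined trajectories are simply averaged frame by frame.

    interface Point3 { x: number; y: number; z: number; }
    type Trajectory = Point3[];  // one sampled position per video frame

    interface Mapping {
      targetPointId: string;
      calibrationPointIds: string[];  // one or several, depending on the preset rules
    }

    // Average the mapped calibration trajectories frame by frame, so that a target
    // point bound to a vertical track and a horizontal track moves diagonally.
    function retargetMotion(
      calibrationTracks: Record<string, Trajectory>,
      mappings: Mapping[],
      frameCount: number
    ): Record<string, Trajectory> {
      const result: Record<string, Trajectory> = {};
      for (const m of mappings) {
        const track: Trajectory = [];
        for (let f = 0; f < frameCount; f++) {
          let x = 0, y = 0, z = 0;
          for (const id of m.calibrationPointIds) {
            const p = calibrationTracks[id][f];
            x += p.x; y += p.y; z += p.z;
          }
          const n = m.calibrationPointIds.length;
          track.push({ x: x / n, y: y / n, z: z / n });
        }
        result[m.targetPointId] = track;
      }
      return result;
    }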
During implementation, different calibration points can be set depending on the type of the first object. Illustratively, objects may first be divided into multiple categories according to their features, for example a human body category, an animal category, a toy category, and an "other" category. Suppose the first object is a toy, say a balloon, and its motion data describes the balloon floating up and down in the air, and the second object belongs to the "other" category, with a peach as the preset model. Multiple calibration points can be evenly arranged on the outer surface of the balloon and the movement trajectory of each calibration point extracted from the motion data of the first object; target points are then set at corresponding positions on the peach according to the preset rules, and the first correspondence between the calibration points of the first object and the target points of the second object is established. The animation is then produced so that each target point on the peach moves according to its corresponding calibration point on the balloon, yielding an animation of the peach floating up and down in the air.
Alternatively, suppose the first object is a bouncing ball in the "other" category and the second object is a Minion in the cartoon character category. When the bouncing ball is squeezed by an external force, its contour changes, so calibration points can be evenly arranged on the outer surface of the ball. Suppose the motion data of the first object describes the ball bouncing and falling back to the ground several times; the movement trajectory of each calibration point can be extracted from the motion data of the ball, target points are set at corresponding positions on the Minion according to the preset rules, and the first correspondence between the calibration points of the first object and the target points of the second object is established. The animation of the Minion is then obtained by making each target point move along the trajectory of its corresponding calibration point, which yields an animation of the Minion bouncing and falling to the ground several times and also reproduces the squeezed shape of the Minion each time it touches the ground.
In an embodiment of the present invention, after the animation of the second object is obtained, the particle elements involved in a particle system may also be obtained, and an effect animation of the second object may be generated according to the animation of the second object and the particle elements. A particle system is a technique in three-dimensional computer graphics for simulating the fuzzy visual phenomena of certain particle elements; the particle elements designed include fire, explosions, smoke, flowing water, sparks, falling leaves, clouds, fog, snow, dust, meteor trails, and glowing trails. Using this technique, the abstract visual effects of particle elements can be simulated. The effect animation of the second object combines the animation with the visual effects of the particle elements, so the picture is richer and the user experience is better; a simplified emitter is sketched below.
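As a hedged illustration only: the patent merely states that particle elements (snow, smoke, meteor trails, and so on) are combined with the second object's animation to form an effect animation. The snow emitter below, with assumed field names and a fixed time step, shows one simple way such an element could be simulated and composited over the animation frames.

    interface Flake { x: number; y: number; vx: number; vy: number; life: number; }

    class SnowEmitter {
      private flakes: Flake[] = [];

      constructor(private spawnPerStep: number, private sceneWidth: number) {}

      step(dt: number): Flake[] {
        // spawn new flakes at the top of the scene
        for (let i = 0; i < this.spawnPerStep; i++) {
          this.flakes.push({
            x: Math.random() * this.sceneWidth,
            y: 0,
            vx: (Math.random() - 0.5) * 20,  // slight sideways drift
            vy: 60 + Math.random() * 40,     // fall speed in pixels per second
            life: 5,                         // seconds before the flake is retired
          });
        }
        // age, retire, and integrate the remaining flakes
        this.flakes = this.flakes.filter(f => (f.life -= dt) > 0);
        for (const f of this.flakes) {
          f.x += f.vx * dt;
          f.y += f.vy * dt;
        }
        return this.flakes;  // composited over the second object's animation frame
      }
    }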
Step 203: establish an association among the animation of the second object, a trigger condition, and a response event, where the trigger condition is used to trigger display of the animation and execution of the response event.
At initialization, a theme-making platform as shown in Fig. 3 can be set up on the server. The platform includes an animation-making region 301 and a setting region 302. After the animation of the second object is obtained, it is displayed in the animation-making region 301. The animation of the second object may be fairly long, and the user can click to set a start point 3011 and an end point 3012 in the animation-making region 301 as needed, so that the server can cut out an animation of the required length. After a suitable animation of the second object is obtained, the trigger condition and response event corresponding to the animation can be set in the setting region 302, thereby establishing the association among the animation of the second object, the trigger condition, and the response event. The trigger condition is the condition that, after the theme is applied to a terminal, triggers the terminal to display the animation of the second object; for example, various swipes by the user on the touchscreen or clicks on a terminal function key can serve as trigger conditions for displaying the animation of the second object. The response event is the operation that, after the theme is applied to a terminal, the terminal executes according to the trigger condition, such as opening an application, opening the SMS interface, or raising the ringtone volume. The association indicates that the animation of the second object corresponds one to one with the configured trigger condition and response event.
Illustratively, referring to Fig. 3, suppose the animation of the second object displayed by the platform in the animation-making region 301 depicts a cartoon character who is sleeping when the quilt is suddenly pulled down, leaving the character freezing with tears streaming down. The animation contains multiple elements arranged in multiple layers, such as the quilt, an alarm clock, the cartoon character, clothes, and the bed sheet. The dashed box 3021 in the setting region 302 shows information such as the size of the key frame of the animation of the second object and its position relative to the center of the terminal screen, and the designer can adjust the key frame; the key frame is the static picture displayed by the terminal before the animation starts playing and can be set as the terminal wallpaper. The dashed box 3022 shows the terminal's trigger condition and response event options, as well as options for editing each animation element. The designer can click a response event option, and the platform then shows a drop-down menu listing the components for the various response events the terminal can perform; the designer selects the required component, and the platform then shows the design elements of the trigger condition corresponding to that component. For example, if the designer selects the unlock component, a release region (UA) option and a response region (RA) option are shown under the unlock component. The release region is the preset region where the user's swipe ends; when the user swipes downward and reaches this region, the terminal performs the unlock operation. The response region is the starting point of the downward swipe and usually corresponds to one element of the animation. For example, the response region can be set to the upper edge of the quilt and the release region to the area below the terminal screen: when the user's finger taps the upper edge of the quilt and swipes downward, the quilt is slowly pulled along with the finger, and when the finger swipes below the bottom of the screen, the quilt is pulled away completely. The dashed box 3022 also shows editing options for each element; by clicking an element's option, the designer can replace the element or change its color, pattern, scale, and so on. Once the designer has set the trigger condition and response event for the animation of the second object through the platform, the association among the animation of the second object, the trigger condition, and the response event is established, and a theme package can then be generated according to the association. One possible way to record such an association is sketched below.
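The structure below is an assumption, not the patent's actual file format; it simply records the one-to-one binding the designer configures on the platform between an animation clip, a trigger condition (with its response region and release region), and a response event, using the quilt example above with made-up coordinates.

    type ResponseEvent = "unlock" | "openApp" | "openSms" | "raiseRingVolume";

    interface Area { x: number; y: number; w: number; h: number; }

    interface TriggerCondition {
      kind: "swipe" | "tap" | "keyPress";
      responseArea?: Area;  // RA: where the gesture must start
      releaseArea?: Area;   // UA: where the gesture must end
    }

    interface Association {
      animationId: string;        // the clip cut between the chosen start and end points
      trigger: TriggerCondition;
      response: ResponseEvent;
    }

    // Example binding for the quilt scene described above (coordinates are made up).
    const quiltUnlock: Association = {
      animationId: "quilt-pull-down",
      trigger: {
        kind: "swipe",
        responseArea: { x: 100, y: 200, w: 400, h: 120 },   // upper edge of the quilt
        releaseArea:  { x: 0, y: 1800, w: 1080, h: 120 },   // below the bottom of the screen
      },
      response: "unlock",
    };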
In other embodiments of the present invention, if an effect animation of the second object has been obtained after step 202, this step can establish the association among the effect animation of the second object, the trigger condition, and the response event in the manner described above.
Step 204: generate a theme package from the animation of the second object according to the association, where, when the theme package runs, it can instruct a terminal to detect the trigger condition and, when the trigger condition is met, display the animation and execute the response event.
Illustratively, after the association among the animation of the second object, the trigger condition, and the response event has been set, the theme package can be generated from the animation of the second object according to the association. For example, the position attributes of each element in the animation of the second object, the type of the animation, the trigger condition, and the response event can first be saved as a dedicated scene description file, and the theme package is then generated from the dedicated scene description file together with related resources such as icon pictures and the preset model, and sent to the terminal. When the terminal runs the theme package, the terminal can detect the trigger condition and, when the trigger condition is met, display the animation and execute the response event. For example, following the explanation in step 203, after the theme package is applied to the terminal and the screen is lit, the user can tap the upper edge of the quilt and drag downward; this triggers the terminal to display the animation of the second object, i.e., the quilt being slowly pulled down with the user's finger. As the quilt is pulled away, as shown in Fig. 4, the cartoon character shivers with cold, tears streaming down; after the user's finger 401 swipes along direction X in Fig. 4 to below the bottom of the screen, the quilt is pulled away completely and the terminal executes the response event, i.e., the terminal is unlocked. A packaging sketch follows.
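As a sketch under stated assumptions: the patent names only a dedicated scene description file packaged together with resources such as icon pictures and the preset model. The entry name "scene.json" and all field names below are assumptions; the actual archiving into a zip file is left to whatever packaging tool is used.

    interface ElementPlacement { id: string; layer: number; x: number; y: number; }

    interface SceneDescription {
      secondObjectType: string;
      elements: ElementPlacement[];
      bindings: { animationId: string; trigger: string; response: string }[];
    }

    // Returns the logical entries of the theme package; actual archiving into a
    // zip file is left to the packaging tool.
    function buildThemePackageEntries(
      scene: SceneDescription,
      resourceFiles: string[]
    ): Map<string, string> {
      const entries = new Map<string, string>();
      entries.set("scene.json", JSON.stringify(scene, null, 2));  // dedicated scene description
      for (const path of resourceFiles) {
        entries.set(path, "<binary resource>");  // icon pictures, preset models, ...
      }
      return entries;
    }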
If the association among the effect animation of the second object, the trigger condition, and the response event was established in step 203, then in this step the theme package can be generated from the effect animation of the second object according to that association.
By setting up the theme-making platform on the server, zero-code theme production is achieved. The platform provides a powerful and easy-to-use visual animation-making interface and a logic event editing interface, allowing designers to freely create animations and lay out the screen, which improves the efficiency of theme production. Meanwhile, generating the animation of the second object by capturing the motion data of the first object achieves animation diversity. In addition, it optimizes the theme drawing manner under a traditional Android system, saves GPU (Graphics Processing Unit) memory, and reduces system power consumption.
In other embodiments of the present invention, as shown in Fig. 5, after the server generates the theme package, the method further includes:
Step 205: receive a download request sent by a terminal requesting the theme package.
After generating a theme package, the server can store it locally and then start on another theme, so the server may hold multiple theme packages locally. Different theme packages can be identified by different numbers, and previews of the numbered theme packages can be sent to the terminal. When the terminal needs to change its theme, it can first browse the previews of the themes, select the theme to download, and click it; the terminal then sends the server a download request for the selected theme package, the download request carrying the number of the selected theme package.
Step 206: send the theme package to the terminal according to the download request.
After receiving the download request sent by the terminal, the server parses it to obtain the number of the theme selected by the user, looks up the selected theme package according to the number, and sends the theme package to the terminal so that the terminal can update its theme. A minimal handler is sketched below.
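A minimal sketch of the lookup in steps 205-206, assuming the request carries only the theme number and that packages are kept in an in-memory map; the patent does not specify the transport or storage.

    const themeStore = new Map<string, Uint8Array>();  // theme number -> theme package bytes

    function handleDownloadRequest(themeNumber: string): Uint8Array {
      const pkg = themeStore.get(themeNumber);
      if (!pkg) {
        throw new Error(`theme package ${themeNumber} not found`);
      }
      return pkg;  // sent back to the terminal, which then applies the theme
    }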
With the theme generation method provided by this embodiment of the present invention, a theme-making platform can be set up by programmers at initialization. Through the platform, the animation of the second object can be produced from the motion data of the first object, and the trigger condition and response event of the animation can also be selected on the platform, so programmers no longer have to write code every time a theme is made. This reduces theme production cost, simplifies the production steps, and shortens theme production time.
An embodiment of the present invention provides a theme generation method applied to a terminal. Fig. 6 is a schematic flowchart of the method. As shown in Fig. 6, the method includes:
Step 601: the terminal obtains motion data of a first object.
In this embodiment, the first object may be any object in nature, such as a human body, an animal, or a tool. The motion data may be a series of parameters that characterize the motion state of the first object, such as its direction of motion, movement distance, change of angle, and movement trajectory.
At initialization, a built-in 3D camera can be provided in the terminal. When the motion data of the first object needs to be obtained, the 3D camera is pointed at the first object to record a motion video of the first object, and the motion data of the first object is then obtained from the motion video. For example, suppose the first object is a volleyball and data describing its trajectory through the air after being struck is needed; the terminal with the built-in 3D camera can be pointed at the volleyball to shoot a motion video of the volleyball after it is hit by a person's arm, and the motion data of the first object is then extracted from the motion video.
Step 602: the terminal generates an animation of a second object according to the motion data of the first object and a preset model of the second object, where the first object is different from the second object.
In the animation of the second object generated in this embodiment from the motion data of the first object and the preset model of the second object, the second object moves in accordance with the actions of the first object. After the terminal obtains the motion data of the first object, multiple calibration points can be set on the first object and multiple target points on the preset model of the second object, and a first correspondence between the calibration points of the first object and the target points of the second object is established according to preset rules. The movement trajectory of each calibration point is then obtained from the motion data of the first object, each target point of the second object is made to move along the trajectory of its corresponding calibration point, and the animation of the second object is thereby generated. Suppose the first object is a volleyball and the motion data describes its trajectory through the air after being hit by a person's arm, and the preset model of the second object is the cartoon character Baymax; multiple calibration points can be evenly arranged on the outer surface of the volleyball, and, following a one-to-one principle, multiple target points are set on the outer surface of Baymax. The animation of the second object is then generated so that each target point on Baymax's outer surface moves along the trajectory of its corresponding calibration point, so the generated animation shows Baymax flying through the air after being hit by a person's arm.
In other embodiments of the present invention, after obtaining the animation of the second object, the terminal may also obtain the particle elements involved in a particle system and generate an effect animation of the second object according to the animation of the second object and the particle elements. A particle system is a technique in three-dimensional computer graphics for simulating the fuzzy visual phenomena of certain particle elements; the particle elements designed include fire, explosions, smoke, flowing water, sparks, falling leaves, clouds, fog, snow, dust, meteor trails, and glowing trails. Using this technique, the abstract visual effects of particle elements can be simulated. The effect animation of the second object combines the animation with the visual effects of the particle elements, so the picture is richer and the user experience is better.
Step 603: the terminal establishes an association among the animation of the second object, a trigger condition, and a response event, where the trigger condition is used to trigger display of the animation and execution of the response event.
Illustratively, at initialization, a theme-making platform as shown in Fig. 3 can be set up on the terminal. The platform includes an animation-making region 301 and a setting region 302; the designer can set and select the trigger condition and response event of the animation of the second object through the options shown in the animation-making region 301 and the setting region 302, thereby establishing the association among the animation of the second object, the trigger condition, and the response event. The establishment process is as described in step 203 and is not detailed here.
Step 604: the terminal generates a theme package from the animation of the second object according to the association, where, when the theme package runs, it can instruct the terminal to detect the trigger condition and, when the trigger condition is met, display the animation and execute the response event.
Illustratively, after the association among the animation of the second object, the trigger condition, and the response event has been set, the terminal can generate the theme package from the animation of the second object according to the association. For example, the terminal can first save the position attributes of each element in the animation of the second object, the type of the animation of the second object, the trigger condition, and the response event as a dedicated scene description file, and then generate and store the theme package from the dedicated scene description file together with related resources such as icon pictures and the preset model.
Illustratively, suppose the first object is a balloon floating up and down in the air and the second object is a Minion cartoon character wearing a diving helmet, so that a scene of the Minion underwater can be simulated by the particle system. The animation of the second object generated from the motion data of the first object is then an animation of the Minion floating up and down in the water. In the generated theme package, the response event corresponding to the Minion's animation is unlocking the terminal, and the trigger condition is tapping the Minion's head and swiping downward. As shown in Fig. 7, after the theme package is applied to the terminal, if the terminal detects that the user's finger 701 taps the Minion's head and slides along direction X in Fig. 7, it triggers display of the animation of the Minion floating up and down in the water, and when the downward swipe reaches the bottom, the terminal is unlocked.
If the association among the effect animation of the second object, the trigger condition, and the response event was established in step 603, then in this step the theme package can be generated from the effect animation of the second object according to that association.
In other embodiments of the present invention, as shown in Fig. 8, after the terminal obtains the theme package, the method further includes:
Step 605: the terminal parses the theme package to obtain the association among the animation of the second object, the trigger condition, and the response event.
The terminal stores the generated theme package locally, so the terminal may hold multiple theme packages. When the theme needs to be updated, the terminal displays thumbnails of the theme packages, the user selects and clicks the required theme as needed, and the terminal then parses the theme package to obtain the association among the animation of the second object, the trigger condition, and the response event.
Step 606: the terminal obtains system parameters of the terminal.
Illustratively, the system parameters are parameters that describe the terminal's execution capabilities, such as the processor model, graphics card model, and operating system version. Different processor models have different processing speeds: if the processor is slow, animations or pictures with a large data volume cannot be displayed. Different graphics cards support different resolutions: if the graphics card is weak, animations or pictures with a high resolution cannot be displayed. Different operating systems can perform different operations: if the operating system is Android, an unlock manner applicable to the iOS (iPhone Operating System) operating system cannot be executed. Different rendering systems produce different rendering effects: if the rendering engine used when making the animation differs from the terminal's rendering engine, the terminal cannot display the animation. Therefore, after parsing the theme package, the terminal can first obtain its system parameters and then determine which animations or pictures the terminal can display and which operations it can execute. It should be noted that, to ensure a good display effect, the rendering engine used when making the theme and the rendering engine configured on the terminal must be kept strictly consistent. For example, in this embodiment of the present invention, when the server makes a theme it may select the TOS (Tencent Operating System) rendering engine to make and polish the animation, in which case the terminal also needs to configure the same TOS rendering engine, maintaining strict consistency at the rendering engine level. A capability-check sketch follows.
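A sketch of the capability check in step 606, with assumed field names; the patent only lists the kinds of parameters involved and the requirement that the design-time and terminal rendering engines match.

    interface SystemParameters {
      cpuModel: string;
      gpuModel: string;
      osType: "Android" | "iOS";
      renderEngine: string;            // e.g. "TOS"
      maxResolution: [number, number];
    }

    // The patent stresses strict consistency between the engine used at design time
    // and the engine configured on the terminal.
    function canRenderTheme(params: SystemParameters, themeRenderEngine: string): boolean {
      return params.renderEngine === themeRenderEngine;
    }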
Step 607: the terminal generates a second correspondence among the animation of the second object, the trigger condition, and the response event according to the association and the system parameters of the terminal.
Illustratively, after the terminal determines which animations or pictures it can display and which operations it can execute, it generates, according to the association in the theme package and the system parameters of the terminal, the second correspondence among the animation of the second object, the trigger condition, and the response event that this terminal can actually use.
For example, suppose the theme package contains four animations: a first animation, a second animation, a third animation, and a fourth animation, where the first animation corresponds to a first trigger condition and a first response event, the second animation to a second trigger condition and a second response event, the third animation to a third trigger condition and a third response event, and the fourth animation to a fourth trigger condition and a fourth response event. Because the terminal's CPU is weak and the data volume of the second animation is large, the terminal cannot display the second animation normally; and because the terminal's operating system is Android while the fourth trigger condition and fourth response event apply to the iOS operating system, the terminal can neither detect the fourth trigger condition nor execute the fourth response event. The terminal can then, according to the association in the theme package and its system parameters, select the association among the first animation, first trigger condition, and first response event and the association among the third animation, third trigger condition, and third response event to generate the second correspondence among the animation of the second object, the trigger condition, and the response event; the second correspondence records the first animation with its corresponding first trigger condition and first response event, and the third animation with its corresponding third trigger condition and third response event. A filtering sketch follows.
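A hedged sketch of step 607: from the associations carried in the theme package, keep only those the terminal can actually display and execute, producing the second correspondence. The predicate fields (data size, required operating system) are illustrative assumptions drawn from the example above.

    interface AssociationEntry {
      animationId: string;
      animationDataSize: number;              // bytes
      requiredOs: "Android" | "iOS" | "any";
      trigger: string;
      response: string;
    }

    function buildSecondCorrespondence(
      entries: AssociationEntry[],
      os: "Android" | "iOS",
      maxAnimationBytes: number
    ): AssociationEntry[] {
      return entries.filter(e =>
        (e.requiredOs === "any" || e.requiredOs === os) &&  // drop e.g. iOS-only unlock styles
        e.animationDataSize <= maxAnimationBytes            // drop clips too heavy for this CPU/GPU
      );
    }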
In other embodiments of the present invention, as shown in Fig. 9, after the terminal establishes the second correspondence among the animation of the second object, the trigger condition, and the response event, the method further includes:
Step 608: the terminal receives a user operation.
After the terminal runs the theme package, it can perform different functions according to the user's operation. For example, the terminal can receive a user operation, which may be a swipe by the user on the lit terminal screen or a click on a terminal function key; this is not limited in the embodiments of the present invention.
Step 609: the terminal determines a first trigger condition corresponding to the user operation.
Illustratively, when the terminal detects the user's finger swiping on the screen, it can first detect the starting position and direction of the swipe, and then determine the first trigger condition corresponding to the current user operation according to the starting position and swipe direction. Referring to Fig. 4, when the user's finger 401 taps the top of the quilt and swipes along direction X in Fig. 4, it is determined that the first trigger condition corresponding to the current operation is the trigger condition for unlocking the terminal.
Step 610: the terminal displays the animation corresponding to the first trigger condition and executes the response event corresponding to the first trigger condition according to the second correspondence among the animation of the second object, the trigger condition, and the response event.
Illustratively, after determining the first trigger condition corresponding to the current user operation, the terminal can determine the animation and response event corresponding to the first trigger condition according to the second correspondence, display the animation corresponding to the first trigger condition during the user operation, and execute the response event corresponding to the first trigger condition. Here, "displaying" means that the terminal plays a moving picture, whereas "showing" means that the current picture on the terminal screen is a static image. A gesture-handling sketch follows.
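A sketch of steps 608 to 610 under simple screen-coordinate assumptions: the terminal maps a detected swipe to a trigger condition through the second correspondence, plays the bound animation while the finger moves, and fires the response event once the swipe ends inside the release region. The Binding shape and the play callback are assumptions.

    interface Rect { x: number; y: number; w: number; h: number; }

    interface Binding {
      startArea: Rect;        // response region: where the swipe must begin
      endArea: Rect;          // release region: where the swipe must end
      animationId: string;
      response: () => void;   // e.g. unlock the terminal
    }

    function inside(r: Rect, px: number, py: number): boolean {
      return px >= r.x && px <= r.x + r.w && py >= r.y && py <= r.y + r.h;
    }

    function onSwipe(
      bindings: Binding[],
      start: [number, number],
      end: [number, number],
      play: (animationId: string) => void
    ): void {
      for (const b of bindings) {
        if (inside(b.startArea, start[0], start[1])) {
          play(b.animationId);                 // animation follows the finger
          if (inside(b.endArea, end[0], end[1])) {
            b.response();                      // trigger condition fully met
          }
          return;
        }
      }
    }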
With the theme generation method applied to a terminal provided by this embodiment of the present invention, a theme-making platform can be set up on the terminal at initialization. The platform can produce the animation of the second object from the motion data of the first object, so programmers no longer have to write code every time a theme is made, which reduces theme production cost, simplifies the production steps, and shortens theme production time.
An embodiment of the present invention provides a theme generation method, described here taking application to a server as an example. In this embodiment, the first object is a human body and the second object is a game character in an online game. As shown in Fig. 10, the theme generation method includes:
Step 901: the server obtains human body motion data captured by a 3D video camera, and proceeds to step 902.
Illustratively, as shown in Fig. 11, when obtaining human body motion data, an external 3D video camera can first be set up at a preset location, and a person then moves in front of the camera, for example dancing; this scene is shown in the dashed box 1001 in Fig. 11, where person 1001a dances in front of the 3D video camera 1001b. After the 3D video camera has shot the video of the person's motion, it sends the video to the server, and the server can obtain the human body motion data from the video. The human body motion data includes a series of parameters characterizing the person's motion state, such as the direction of motion of the limbs, movement distance, change of angle, and movement trajectory.
Step 902: the server extracts skeleton motion data from the human body motion data, and proceeds to step 903.
Illustratively, after obtaining the human body motion data, the server can obtain skeleton motion data from it, i.e., the orientation and position of each bone of the human body during the motion.
Step 903: the server generates an animation of the game character according to the skeleton motion data and a preset model of the game character, and proceeds to step 904.
Illustratively, the preset model of the game character also has a skeleton structure composed of interconnected bones, corresponding to the limbs of the game character. After obtaining the skeleton motion data, the server establishes a correspondence between the calibration bones in the human skeleton data and the target bones in the skeleton structure of the preset model. Usually this correspondence is one to one, i.e., one bone in the human skeleton data corresponds to one bone in the skeleton structure of the preset model. The server can then make the animation of the game character according to the skeleton motion data and the preset model of the game character, so that each target bone in the preset model's skeleton structure moves according to the orientation and position of the corresponding calibration bone in the human skeleton data, yielding the animation of the game character. For example, if the skeleton motion data describes a person dancing (1001 in Fig. 11), the animation of the game character dancing generated from that skeleton motion data has the same actions as the person's dance (1002 in Fig. 11). A retargeting sketch follows.
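A minimal sketch of the retargeting in step 903, assuming a one-to-one bone mapping: each bone of the game character's preset model copies the orientation and position of the corresponding human bone frame by frame. Quaternion math is omitted for brevity, and all type names are assumptions.

    interface BonePose {
      position: [number, number, number];
      rotation: [number, number, number, number];  // quaternion
    }
    type SkeletonFrame = Record<string, BonePose>;  // bone name -> pose for one frame

    function retargetSkeleton(
      humanFrames: SkeletonFrame[],
      boneMap: Record<string, string>               // character bone -> human calibration bone
    ): SkeletonFrame[] {
      return humanFrames.map(frame => {
        const out: SkeletonFrame = {};
        for (const [charBone, humanBone] of Object.entries(boneMap)) {
          out[charBone] = frame[humanBone];          // character moves exactly like the person
        }
        return out;
      });
    }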
Step 904: the server combines the animation of the game character with a particle system to obtain an effect animation of the game character, and proceeds to step 905.
Illustratively, the particle system is a technique in three-dimensional computer graphics for simulating the fuzzy visual phenomena of certain particle elements; the particle elements designed include fire, explosions, smoke, flowing water, sparks, falling leaves, clouds, fog, snow, dust, meteor trails, and glowing trails. Using this technique, the abstract visual effects of particle elements can be simulated. As shown in Fig. 12, before the generated animation of the game character is combined with the particle system, the background 110 is blank and the picture is rather dull, containing only the game character, so it looks insufficiently rich. After the animation of the game character is combined with the particle system, as shown in Fig. 13, a meteor 120, changing clouds 121, and the like are simulated in the background 110 by the particle system, enriching the picture effect and improving its appeal.
Step 905: the server establishes an association among the effect animation of the game character, a trigger condition, and a response event, and proceeds to step 906.
At initialization, a theme-making platform as shown in Fig. 3 can be set up on the server. After the animation of the game character is obtained, the association among the effect animation of the game character, the trigger condition, and the response event can be established through the platform; for the establishment process, refer to step 203.
Step 906: the server generates a theme package from the effect animation of the game character according to the association, and proceeds to step 907.
Illustratively, the position attributes of each element in the animation of the game character, the type of the animation, the trigger condition, and the response event can first be saved as a dedicated scene description file, and the theme package is then generated from the dedicated scene description file together with related resources such as icon pictures and the preset model of the game character.
Step 907: the server receives a download request for the theme package sent by a mobile phone, and proceeds to step 908.
Illustratively, the server can store multiple theme packages, identify different theme packages by different numbers, and send previews of the numbered theme packages to the terminal. When the terminal needs to change its theme, it can first browse the previews of the themes, select the theme to download, and click it; the terminal then sends the server a download request for the selected theme package, the download request carrying the number of the selected theme package.
Step 908: the server sends the theme package to the terminal, and proceeds to step 909.
Illustratively, after receiving the download request, the server parses it to obtain the number of the theme selected by the user, looks up the selected theme package according to the number, and sends the theme package to the terminal.
Step 909: the terminal parses the theme package to obtain the association among the animation of the game character, the trigger condition, and the response event, and proceeds to step 910.
Illustratively, because the theme package is stored in a special format, a terminal that needs to run it must first parse it according to the theme package encoding, obtaining the animation of the game character carried in the theme package and the association among the animation of the game character, the trigger condition, and the response event.
Step 910: the terminal obtains its system parameters, generates a second correspondence among the animation of the game character, the trigger condition, and the response event according to the association and the system parameters of the terminal, and proceeds to step 911.
Illustratively, the system parameters are parameters that describe the terminal's execution capabilities, such as the processor model, graphics card model, and operating system version. Different processor models have different processing speeds, different graphics cards support different resolutions, and different operating systems can perform different operations. Therefore, after parsing the theme package, the terminal can first obtain its system parameters, then determine which animations or pictures it can display and which operations it can execute, and then, according to the association in the theme package and the system parameters of the terminal, generate the second correspondence among the animation, trigger condition, and response event that this terminal can use.
Step 911, terminal receive user's operation, execute step 912.
Here, step 911 can refer to described in step 608.
Step 912, terminal determine corresponding first trigger condition of the user's operation, execute step 913.
Here, step 912 may refer to the description of step 609.
Step 913: the terminal, according to the second correspondence among the animation of the game character, the trigger condition and the response event, displays the animation corresponding to the first trigger condition and executes the response event corresponding to the first trigger condition.
Here, step 913 may refer to the description of step 610.
With the theme generating method provided by this embodiment of the present invention, a platform for making themes can be set up by programming personnel at initialization; through the platform, the animation of the second object can be made according to the exercise data of the first object, and the trigger condition and the response event of the animation of the second object can also be selected on the platform, so that programming personnel are not required to write a program every time a theme is made, which reduces the theme making cost, simplifies the theme making steps and shortens the theme making time.
The embodiment of the present invention provides an application scenario. As shown in Figure 14, the scenario includes a terminal side 1301, a WEB (network) side 1302 and a theme making platform 1303, where the WEB side 1302 further includes a server 1302a. Here, JS (JavaScript) is a scripting language, UI (User Interface) is the user interface, and OpenGL ES (OpenGL for Embedded Systems) is a subset of the 3D graphics API (Application Programming Interface) of OpenGL (Open Graphics Library), designed for embedded devices such as mobile phones, tablet computers and game consoles. WebGL is a 3D drawing standard that allows JavaScript to be combined with OpenGL ES 2.0; by adding a JavaScript binding for OpenGL ES 2.0, WebGL can provide hardware-accelerated 3D rendering for the HTML5 Canvas, so that web developers can display 3D scenes and models more smoothly in the browser through the system graphics card, and can also create complex navigation and data visualizations.
First, the theme making platform 1303 obtains the animations and multiple basic objects used to make the theme, then establishes the incidence relation among the animations, the trigger conditions and the response events, and then saves the position attribute of each object, the type of each animation, the trigger conditions, the response events and so on as a dedicated scene description file in TOS3DSCENE format; according to the Scene.xml protocol, the dedicated scene description file, related resources such as icon pictures, and the 3D models form a theme packet named theme.zip. The WEB side 1302 then loads the theme packet into the WEB-JS (WEB-JavaScript) layer, and the animations in the theme packet are rendered, beautified and saved by the TOS rendering engine. The TOS rendering engine runs on the WEB side 1302 by combining emscripten with WebGL, and runs on the mobile phone after being integrated with the ROM system, so that the bottom layer of the rendering engine remains strictly consistent on both sides. When the terminal side 1301 needs to update its theme, the application software of the Android system can load the theme packet theme.zip into the Android system layer, parse theme.zip through the application logic, obtain the TOS3DSCENE dedicated scene description file in theme.zip, restore the visual scene of the WEB side 1302 through the TOS rendering engine, and at the same time analyze the incidence relation in the dedicated scene description file and build the mapping graph of response events, trigger conditions and animations, thereby completely restoring, at both the visual and the logical level, the design that the designer made on the WEB side 1302.
With the theme generating method provided by this embodiment of the present invention, a platform for making themes can be set up by programming personnel at initialization; through the platform, the animation of the second object can be made according to the exercise data of the first object, and the trigger condition and the response event of the animation of the second object can also be selected on the platform, so that programming personnel are not required to write a program every time a theme is made, which reduces the theme making cost, simplifies the theme making steps and shortens the theme making time.
The embodiment of the present invention provides a theme generating device 140. As shown in Figure 15, the device 140 includes:
a first acquisition unit 1401, configured to obtain the exercise data of a first object;
a first generation unit 1402, configured to generate the animation of a second object according to the exercise data of the first object and a preset model of the second object, wherein the first object is different from the second object;
a first establishing unit 1403, configured to establish the incidence relation among the animation, the trigger condition and the response event of the second object, wherein the trigger condition is used to trigger display of the animation and to trigger execution of the response event;
a second generation unit 1404, configured to generate a theme packet from the animation of the second object according to the incidence relation, the theme packet being used for the system or software interface.
In other embodiments of the invention, the first generation unit 1402 is configured to: obtain, from the exercise data, the movement locus of multiple calibration points of the first object; establish a first correspondence between each calibration point of the first object and each target point of the second object; and generate the animation of the second object according to the first correspondence and the movement locus of each calibration point, so that any target point of the second object moves according to the movement locus of the calibration point corresponding to that target point.
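The sketch below illustrates, under assumed data structures, how each target point of the second object could follow the recorded movement locus of its corresponding calibration point; the point names and the frame-based representation are illustrative, not the patent's own format.

```typescript
// Hypothetical retargeting: a target point of the second object copies, frame by
// frame, the trajectory of the calibration point mapped to it.

type Point3 = { x: number; y: number; z: number };

interface CalibrationTrack {
  calibrationPointId: string;
  trajectory: Point3[]; // one sampled position per frame
}

// first correspondence: calibration point id -> target point id (illustrative names)
const firstCorrespondence = new Map<string, string>([["leftHand", "modelLeftHand"]]);

function targetPositionsAtFrame(
  tracks: CalibrationTrack[],
  frame: number,
): Map<string, Point3> {
  const positions = new Map<string, Point3>();
  for (const track of tracks) {
    const targetId = firstCorrespondence.get(track.calibrationPointId);
    const sample = track.trajectory[frame];
    if (targetId && sample) {
      positions.set(targetId, sample); // target point follows the calibration point
    }
  }
  return positions;
}
```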
In other embodiments of the invention, the exercise data of the first object is human body movement data. As shown in Figure 16, the device 140 further includes: an extraction unit 1405, configured to extract the skeleton motion data of the human body movement data from the exercise data of the human body; the first generation unit 1402 is configured to generate the animation of the second object according to the skeleton motion data and the preset model.
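As an illustration of driving the preset model with skeleton motion data, the sketch below applies per-bone poses to a model's bones; the bone names and the rotation representation are assumptions made for the example.

```typescript
// Hypothetical sketch: one frame of skeleton motion data drives the preset model.

interface BonePose {
  bone: string;                        // e.g. "spine" or "leftForearm" (illustrative)
  rotation: [number, number, number];  // Euler angles, an assumed representation
}

type SkeletonFrame = BonePose[];

function applySkeletonFrame(
  modelBones: Map<string, [number, number, number]>, // bone name -> current rotation
  frame: SkeletonFrame,
): void {
  for (const pose of frame) {
    if (modelBones.has(pose.bone)) {
      modelBones.set(pose.bone, pose.rotation); // drive the preset model's bone
    }
  }
}
```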
In other embodiments of the invention, as shown in Figure 17, the device 140 further includes:
a second acquisition unit 1406, configured to obtain the particle elements involved in a particle system, and to generate the effect animation of the second object according to the animation of the second object and the particle elements; the first establishing unit 1403 is configured to establish the incidence relation among the effect animation of the second object, the trigger condition and the response event; the second generation unit 1404 is configured to generate the theme packet from the effect animation of the second object according to the incidence relation.
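A minimal particle sketch follows: particle elements are emitted along the second object's animation trajectory to form the effect animation. The emission rate and lifetimes are illustrative assumptions, not parameters defined by the embodiment.

```typescript
// Hypothetical effect animation: emit particle elements along the animation trajectory.

interface Particle {
  position: { x: number; y: number; z: number };
  lifetimeMs: number;
}

function emitEffectParticles(
  trajectory: { x: number; y: number; z: number }[],
  particlesPerPoint: number,
): Particle[] {
  const particles: Particle[] = [];
  for (const point of trajectory) {
    for (let i = 0; i < particlesPerPoint; i++) {
      particles.push({
        position: { ...point },
        lifetimeMs: 500 + Math.random() * 500, // randomized lifetime for visual variety
      });
    }
  }
  return particles;
}
```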
In other embodiments of the invention, the first acquisition unit 1401 is configured to receive the exercise data of the first object captured and sent by an image acquisition component.
In other embodiments of the invention, the first acquisition unit 1401 is configured to: send a download request for a first video to the Internet, the first video being the sport video of the first object; receive the first video issued by the Internet; and obtain the exercise data of the first object according to the first video.
In other embodiments of the invention, as shown in Figure 18, the device 140 further includes: a first receiving unit 1407, configured to receive a download request sent by a terminal; and a first transmission unit 1408, configured to send the theme packet to the terminal according to the download request.
In other embodiments of the invention, as shown in Figure 19, the device 140 further includes: a running unit 1409, configured to run the theme packet; and a second establishing unit 1410, configured to establish, according to the incidence relation among the animation, the trigger condition and the response event and according to the system parameters of the terminal, the second correspondence among the animation, the trigger condition and the response event available to the terminal.
In other embodiments of the invention, as shown in Figure 20, the device 140 further includes: a second receiving unit 1411, configured to receive a first operation of the user; a determination unit 1412, configured to determine the first trigger condition corresponding to the first operation; and an execution unit 1414, configured to display, according to the second correspondence among the animation, the trigger condition and the response event, the animation corresponding to the first trigger condition, and to execute the response event corresponding to the first trigger condition.
An embodiment of the present invention provides a theme generating device with which a platform for making themes can be set up by programming personnel at initialization; through the platform, the animation of the second object can be made according to the exercise data of the first object, and the trigger condition and the response event of the animation of the second object can also be selected on the platform, so that programming personnel are not required to write a program every time a theme is made, which reduces the theme making cost, simplifies the theme making steps and shortens the theme making time.
The embodiment of the present invention provides a theme generating equipment 200. As shown in Figure 21, the equipment 200 includes a communication interface 2001 and a processor 2002, wherein the processor 2002 is configured to: obtain the exercise data of a first object through the communication interface 2001; generate the animation of a second object according to the exercise data of the first object and a preset model of the second object, wherein the first object is different from the second object; establish the incidence relation among the animation, the trigger condition and the response event of the second object, wherein the trigger condition is used to trigger display of the animation and to trigger execution of the response event; and generate a theme packet from the animation of the second object according to the incidence relation, the theme packet being used for the system or software interface.
In other embodiments of the invention, the processor 2002 is configured to: obtain, from the exercise data, the movement locus of multiple calibration points of the first object; establish the first correspondence between each calibration point of the first object and each target point of the second object; and generate the animation of the second object according to the first correspondence and the movement locus of each calibration point, so that any target point of the second object moves according to the movement locus of the calibration point corresponding to that target point.
In other embodiments of the invention, the exercise data of the first object is human body movement data; the processor 2002 is configured to: extract the skeleton motion data of the human body movement data; generate the skeleton animation of the preset model according to the skeleton motion data and the preset model; establish the incidence relation among the skeleton animation, the trigger condition and the response event; and generate the theme packet from the skeleton animation according to the incidence relation.
In other embodiments of the invention, the processor 2002 is configured to: obtain the particle elements involved in a particle system, and generate the effect animation according to the animation and the particle elements; establish the incidence relation among the effect animation, the trigger condition and the response event; and generate the theme packet from the effect animation according to the incidence relation.
In other embodiments of the invention, the processor 2002 is further configured to receive, through the communication interface 2001, the exercise data of the first object captured and sent by the image acquisition component.
In other embodiments of the invention, the processor 2002 is further configured to send a download request for a first video to the Internet through the communication interface 2001, the first video being the sport video of the first object, and to receive the first video issued by the Internet; the processor 2002 is configured to obtain the exercise data of the first object according to the first video.
In other embodiments of the invention, the processor 2002 is further configured to receive, through the communication interface 2001, a download request sent by a terminal, and to send the theme packet to the terminal according to the download request.
In other embodiments of the invention, the processor 2002 is configured to: run the theme packet; and establish, according to the incidence relation among the animation, the trigger condition and the response event and according to the system parameters of the terminal, the second correspondence among the animation, the trigger condition and the response event available to the terminal.
The processor 2002 is further configured to: receive a first operation of the user through the communication interface 2001; determine the first trigger condition corresponding to the first operation; and, according to the second correspondence among the animation, the trigger condition and the response event, display the animation corresponding to the first trigger condition and execute the response event corresponding to the first trigger condition.
An embodiment of the present invention provides a theme generating equipment with which a platform for making themes can be set up by programming personnel at initialization; through the platform, the animation of the second object can be made according to the exercise data of the first object, and the trigger condition and the response event of the animation of the second object can also be selected on the platform, so that programming personnel are not required to write a program every time a theme is made, which reduces the theme making cost, simplifies the theme making steps and shortens the theme making time.
It should be noted that the above description of the apparatus embodiments is similar to the description of the method embodiments, and the apparatus embodiments have beneficial effects similar to those of the method embodiments, so the description is not repeated. For technical details not disclosed in the equipment embodiments of the present invention, those skilled in the art may refer to the description of the method embodiments of the present invention; to save space, the details are not repeated here.
It should be understood that references throughout the specification to "one embodiment" or "an embodiment" mean that a particular feature, structure or characteristic related to the embodiment is included in at least one embodiment of the present invention. Therefore, "in one embodiment" or "in an embodiment" appearing throughout the specification does not necessarily refer to the same embodiment. Furthermore, these particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present invention, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention. The serial numbers of the embodiments of the present invention are for description only and do not represent the merits of the embodiments.
It should also be noted that, herein, the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements that are not explicitly listed, or elements that are inherent to such a process, method, article or device. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes that element.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a division of logical functions, and there may be other division manners in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection between the components shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described above as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment. In addition, the functional units in the various embodiments of the present invention may all be integrated into one processing unit, or each unit may serve separately as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
A person of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be completed by program instructions executed on related hardware; the aforementioned program may be stored in a computer-readable storage medium, and when the program is executed, the steps of the above method embodiments are performed; the aforementioned storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (Read Only Memory, ROM), a magnetic disk or an optical disc. Alternatively, if the above integrated unit of the present invention is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present invention, in essence, or the part that contributes to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server or a network device, etc.) execute all or part of the methods of the various embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a magnetic disk or an optical disc.
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any person familiar with the technical field can easily think of changes or replacements within the technical scope disclosed by the present invention, and these should all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention should be based on the protection scope of the claims.

Claims (14)

1. A theme generating method, characterized in that the method comprises:
obtaining the exercise data of a first object;
generating the animation of a second object according to the exercise data of the first object and a preset model of the second object, wherein the first object is different from the second object;
establishing the incidence relation among the animation, the trigger condition and the response event of the second object, wherein the trigger condition is used to trigger display of the animation and to trigger execution of the response event;
generating a theme packet from the animation of the second object according to the incidence relation, wherein, when the theme packet runs, the theme packet can instruct a terminal to detect the trigger condition, and to display the animation and execute the response event when the trigger condition is met;
wherein the generating the animation of the second object according to the exercise data of the first object and the preset model of the second object comprises: obtaining, from the exercise data, the movement locus of multiple calibration points of the first object; establishing a first correspondence between each calibration point of the first object and each target point of the second object; and generating the animation of the second object according to the first correspondence and the movement locus of each calibration point, wherein, in the animation of the second object, any target point of the second object moves according to the movement locus of the calibration point corresponding to that target point.
2. The method according to claim 1, characterized in that the exercise data of the first object is the exercise data of a human body; the method further comprises: extracting the human skeleton exercise data from the exercise data of the human body;
the generating the animation of the second object according to the exercise data of the first object and the preset model of the second object comprises: generating the animation of the second object according to the skeleton motion data and the preset model of the second object.
3. The method according to claim 2, characterized in that the method further comprises:
obtaining the particle elements involved in a particle system, and generating the effect animation of the second object according to the animation of the second object and the particle elements;
the establishing the incidence relation among the animation, the trigger condition and the response event of the second object comprises: establishing the incidence relation among the effect animation of the second object, the trigger condition and the response event;
the generating the theme packet from the animation of the second object according to the incidence relation comprises: generating the theme packet from the effect animation of the second object according to the incidence relation.
4. The method according to any one of claims 1 to 3, characterized in that the obtaining the exercise data of the first object comprises:
obtaining the exercise data of the first object from an image acquisition component.
5. The method according to any one of claims 1 to 3, characterized in that the obtaining the exercise data of the first object comprises:
obtaining the sport video of the first object locally, or requesting the sport video of the first object from a video server;
obtaining the exercise data of the first object according to the sport video of the first object.
6. The method according to any one of claims 1 to 3, characterized in that the method comprises:
a server receiving a download request, sent by a terminal, for requesting the theme packet;
the server sending the theme packet to the terminal according to the download request.
7. The method according to any one of claims 1 to 3, characterized in that the method comprises:
a terminal parsing the theme packet and obtaining the incidence relation among the animation, the trigger condition and the response event of the second object;
the terminal obtaining the system parameters of the terminal;
the terminal generating a second correspondence among the animation, the trigger condition and the response event of the second object according to the incidence relation and the system parameters of the terminal.
8. The method according to claim 7, characterized in that the method further comprises:
the terminal receiving a user operation;
the terminal determining a first trigger condition corresponding to the user operation;
the terminal, according to the second correspondence among the animation, the trigger condition and the response event of the second object, displaying the animation corresponding to the first trigger condition and executing the response event corresponding to the first trigger condition.
9. A theme generating device, characterized in that the device comprises:
a first acquisition unit, configured to obtain the exercise data of a first object;
a first generation unit, configured to generate the animation of a second object according to the exercise data of the first object and a preset model of the second object, wherein the first object is different from the second object;
a first establishing unit, configured to establish the incidence relation among the animation, the trigger condition and the response event of the second object, wherein the trigger condition is used to trigger display of the animation and to trigger execution of the response event;
a second generation unit, configured to generate a theme packet from the animation of the second object according to the incidence relation, the theme packet being used for the system or software interface;
the first generation unit is further configured to: obtain, from the exercise data, the movement locus of multiple calibration points of the first object; establish a first correspondence between each calibration point of the first object and each target point of the second object; and generate the animation of the second object according to the first correspondence and the movement locus of each calibration point, so that any target point of the second object moves according to the movement locus of the calibration point corresponding to that target point.
10. The device according to claim 9, characterized in that the exercise data of the first object is human body movement data;
the device further comprises: an extraction unit, configured to extract the skeleton motion data of the human body movement data from the exercise data of the human body;
the first generation unit is configured to: generate the animation of the second object according to the skeleton motion data and the preset model of the second object.
11. The device according to claim 10, characterized in that the device further comprises:
a second acquisition unit, configured to obtain the particle elements involved in a particle system, and to generate the effect animation of the second object according to the animation of the second object and the particle elements;
the first establishing unit is configured to: establish the incidence relation among the effect animation of the second object, the trigger condition and the response event;
the second generation unit is configured to: generate the theme packet from the effect animation of the second object according to the incidence relation.
12. A theme generating equipment, characterized in that the equipment comprises a communication interface and a processor, wherein the processor is configured to: obtain the exercise data of a first object through the communication interface; generate the animation of a second object according to the exercise data of the first object and a preset model of the second object, wherein the first object is different from the second object; establish the incidence relation among the animation, the trigger condition and the response event of the second object, wherein the trigger condition is used to trigger display of the animation and to trigger execution of the response event; and generate a theme packet from the animation of the second object according to the incidence relation, the theme packet being used for the system or software interface;
the processor is further configured to: obtain, from the exercise data, the movement locus of multiple calibration points of the first object; establish a first correspondence between each calibration point of the first object and each target point of the second object; and generate the animation of the second object according to the first correspondence and the movement locus of each calibration point, wherein, in the animation of the second object, any target point of the second object moves according to the movement locus of the calibration point corresponding to that target point.
13. The equipment according to claim 12, characterized in that the exercise data of the first object is human body movement data;
the processor is configured to: extract the skeleton motion data of the human body movement data from the exercise data of the human body; and generate the animation of the second object according to the skeleton motion data and the preset model of the second object.
14. The equipment according to claim 13, characterized in that the processor is configured to: obtain the particle elements involved in a particle system, and generate the effect animation of the second object according to the animation of the second object and the particle elements; establish the incidence relation among the effect animation of the second object, the trigger condition and the response event; and generate the theme packet from the effect animation of the second object according to the incidence relation.
CN201610404634.2A 2016-06-07 2016-06-07 Subject generating method, device, equipment Active CN106097417B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610404634.2A CN106097417B (en) 2016-06-07 2016-06-07 Subject generating method, device, equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610404634.2A CN106097417B (en) 2016-06-07 2016-06-07 Subject generating method, device, equipment

Publications (2)

Publication Number Publication Date
CN106097417A CN106097417A (en) 2016-11-09
CN106097417B true CN106097417B (en) 2018-07-27

Family

ID=57227597

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610404634.2A Active CN106097417B (en) 2016-06-07 2016-06-07 Subject generating method, device, equipment

Country Status (1)

Country Link
CN (1) CN106097417B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108259496B (en) * 2018-01-19 2021-06-04 北京市商汤科技开发有限公司 Method and device for generating special-effect program file package and special effect, and electronic equipment
CN108388434B (en) 2018-02-08 2021-03-02 北京市商汤科技开发有限公司 Method and device for generating special-effect program file package and special effect, and electronic equipment
CN109107160B (en) * 2018-08-27 2021-12-17 广州要玩娱乐网络技术股份有限公司 Animation interaction method and device, computer storage medium and terminal
CN109636884A (en) * 2018-10-25 2019-04-16 阿里巴巴集团控股有限公司 Animation processing method, device and equipment
CN109544665A (en) * 2018-11-21 2019-03-29 万翼科技有限公司 Generation method, device and the storage medium of animation poster
CN109657233B (en) * 2018-11-23 2024-01-02 东软集团股份有限公司 Method and device for generating theme, storage medium and electronic equipment
CN110175061B (en) * 2019-05-20 2022-09-09 北京大米科技有限公司 Animation-based interaction method and device and electronic equipment
CN110569096B (en) * 2019-08-20 2022-10-18 上海沣沅星科技有限公司 System, method, medium, and apparatus for decoding human-computer interaction interface
CN112543352B (en) * 2019-09-23 2022-07-08 腾讯科技(深圳)有限公司 Animation loading method, device, terminal, server and storage medium
CN112328277B (en) * 2020-10-19 2023-04-07 武汉木仓科技股份有限公司 Resource updating method and device of application and server
CN115115316B (en) * 2022-07-25 2023-01-06 天津市普迅电力信息技术有限公司 Method for simulating storage material flowing-out and warehousing operation direction-guiding type animation based on Cesium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101192308A (en) * 2007-03-28 2008-06-04 腾讯科技(深圳)有限公司 Roles animations accomplishing method and system
CN104978758A (en) * 2015-06-29 2015-10-14 世优(北京)科技有限公司 Animation video generating method and device based on user-created images
CN105468353A (en) * 2015-11-06 2016-04-06 网易(杭州)网络有限公司 Implementation method and apparatus for interface animation, mobile terminal, and computer terminal

Also Published As

Publication number Publication date
CN106097417A (en) 2016-11-09

Similar Documents

Publication Publication Date Title
CN106097417B (en) Subject generating method, device, equipment
US11948260B1 (en) Streaming mixed-reality environments between multiple devices
US11494993B2 (en) System and method to integrate content in real time into a dynamic real-time 3-dimensional scene
EP3306444A1 (en) Controls and interfaces for user interactions in virtual spaces using gaze tracking
CN112037311B (en) Animation generation method, animation playing method and related devices
CN110147231A (en) Combine special efficacy generation method, device and storage medium
CN109598777A (en) Image rendering method, device, equipment and storage medium
KR20230096043A (en) Side-by-side character animation from real-time 3D body motion capture
CN111640202B (en) AR scene special effect generation method and device
KR20230107655A (en) Body animation sharing and remixing
CN106575446A (en) Facial gesture driven animation communication system
KR20230107654A (en) Real-time motion delivery for prosthetic rims
CN101127621A (en) Virtual Internet cross-media system
CN109254650A (en) A kind of man-machine interaction method and device
CN112891943B (en) Lens processing method and device and readable storage medium
CN116710182A (en) Avatar customization system
Gillis The Matrix Trilogy: Cyberpunk Reloaded
US11673054B2 (en) Controlling AR games on fashion items
US20220366653A1 (en) Full Body Virtual Reality Utilizing Computer Vision From a Single Camera and Associated Systems and Methods
US10885691B1 (en) Multiple character motion capture
CN109462768A (en) A kind of caption presentation method and terminal device
CN110711388B (en) Virtual object control method, device and storage medium
WO2023055825A1 (en) 3d upper garment tracking
Cheok et al. Social and physical interactive paradigms for mixed-reality entertainment
Tang et al. Emerging human-toy interaction techniques with augmented and mixed reality

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant