CN105719330B - Animation curve generation method and device - Google Patents

Publication number: CN105719330B
Application number: CN201410740719.9A
Authority: CN (China)
Inventor: 罗琦
Original/Current Assignee: Tencent Technology Beijing Co Ltd
Other versions: CN105719330A (Chinese)
Legal status: Active (granted)

Abstract

The invention discloses a method and a device for generating an animation curve. The method comprises: acquiring the starting time and the ending time of a target animation, the state value of a starting key frame of the target animation and the state value of an ending key frame of the target animation; obtaining the state values of the target animation at all times within the range from the starting time to the ending time according to the state values of the starting key frame and the ending key frame; generating an animation curve of the target animation according to the state values at all the times; and displaying the animation curve on a screen. The invention solves the technical problem of low animation curve generation efficiency caused by the fact that complex animation curves, such as two-dimensional and three-dimensional animation curves, need to be labeled frame by frame by designers.

Description

Animation curve generation method and device
Technical Field
The invention relates to the field of image processing, in particular to a method and a device for generating an animation curve.
Background
With the rapid development of the animation industry, people's fondness for animation has also increased. Turning a static picture into a moving picture requires processing by means of an animation curve. In the prior art, a simple animation curve, such as a one-dimensional animation curve, can be obtained by linear interpolation calculation.
However, complex animation curves, such as two-dimensional and three-dimensional animation curves, need to be labeled frame by frame by a designer. Obtaining such complex animation curves therefore consumes a great deal of time, which reduces the generation efficiency of the animation curves.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide a method and a device for generating an animation curve, so as to at least solve the technical problem of low animation curve generation efficiency caused by the fact that complex animation curves, such as two-dimensional and three-dimensional animation curves, need to be labeled frame by frame by designers.
According to an aspect of an embodiment of the present invention, there is provided a method for generating an animation curve, including: acquiring the starting time and the ending time of a target animation, the state value of a starting key frame of the target animation and the state value of an ending key frame of the target animation; obtaining the state values of the target animation at all times within the range from the starting time to the ending time according to the state values of the starting key frame and the ending key frame; generating an animation curve of the target animation according to the state values at all the moments; displaying the animation curve in a screen.
According to another aspect of the embodiments of the present invention, there is also provided an animation curve generation apparatus, including: an acquisition unit configured to acquire the starting time and the ending time of a target animation, the state value of a starting key frame of the target animation and the state value of an ending key frame of the target animation; a calculating unit configured to obtain the state value of the target animation at each time within the range from the starting time to the ending time according to the state value of the starting key frame and the state value of the ending key frame; a generating unit configured to generate an animation curve of the target animation according to the state values at all the times; and a display unit configured to display the animation curve on a screen.
In the embodiments of the invention, an animation curve is generated automatically: the starting time and the ending time of the target animation, the state value of the starting key frame of the target animation and the state value of the ending key frame of the target animation are acquired; the state values of the target animation at all times within the range from the starting time to the ending time are obtained according to the state values of the starting key frame and the ending key frame; and the animation curve of the target animation is then generated according to the state values at all the times. This achieves the aim of automatically generating the animation curve, realizes the technical effects of low labor consumption and high production efficiency, and solves the technical problem of low animation curve generation efficiency caused by the fact that complex animation curves, such as two-dimensional and three-dimensional animation curves, need to be labeled frame by frame by designers.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of an application scenario of an alternative animation curve generation method according to an embodiment of the present invention;
FIG. 2 is a flow chart diagram illustrating an alternative animation curve generation method according to an embodiment of the invention;
FIG. 3 is a flow chart diagram illustrating an alternative animation curve generation method according to an embodiment of the invention;
FIG. 4 is a schematic diagram of an alternative animation curve according to an embodiment of the invention;
FIG. 5 is a schematic diagram of an alternative animation curve according to an embodiment of the invention;
FIG. 6 is a schematic diagram of an alternative animation curve according to an embodiment of the invention;
FIG. 7 is a schematic diagram of an alternative animation curve according to an embodiment of the invention;
FIG. 8 is a schematic diagram of an alternative animation curve according to an embodiment of the invention;
FIG. 9 is a schematic diagram of an alternative animation curve according to an embodiment of the invention;
FIG. 10 is a schematic diagram of an alternative animation curve according to an embodiment of the invention;
FIG. 11 is a schematic diagram of an alternative animation curve according to an embodiment of the invention;
FIG. 12 is a schematic diagram of an alternative animation curve according to an embodiment of the invention;
FIG. 13 is a schematic diagram of an alternative animation curve according to an embodiment of the invention;
FIG. 14 is a schematic diagram of an alternative animation curve according to an embodiment of the invention;
FIG. 15 is a schematic diagram of an alternative animation curve according to an embodiment of the invention;
FIG. 16 is a schematic diagram of an alternative animation curve according to an embodiment of the invention;
FIG. 17 is a schematic diagram of an alternative animation curve according to an embodiment of the invention;
FIG. 18 is a schematic diagram of an alternative animation curve according to an embodiment of the invention;
FIG. 19 is a schematic diagram of an alternative animation curve according to an embodiment of the invention;
FIG. 20 is a schematic diagram of an alternative "micro-view" client display interface in accordance with embodiments of the present invention;
FIG. 21 is a schematic diagram of an alternative "micro-view" client display interface in accordance with embodiments of the invention;
FIG. 22 is a block diagram of an alternative animation curve generation apparatus according to an embodiment of the invention;
FIG. 23 is a schematic diagram of an alternative animation curve generation apparatus according to an embodiment of the invention;
FIG. 24 is a schematic structural diagram of an alternative animation curve generation apparatus according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
According to an embodiment of the present invention, there is provided a method for generating an animation curve. In this embodiment, the method may be applied to a hardware environment of a terminal 102 provided with an animation curve generation tool as shown in fig. 1. As shown in fig. 1, the terminal 102 may include, but is not limited to, one of the following: a mobile phone or a tablet computer. When generating an animation curve, the animation curve generation tool of the terminal 102 may generate an animation curve of the target animation according to the start time and the end time of the target animation, the state value of the start key frame of the target animation, and the state value of the end key frame of the target animation.
According to an embodiment of the present invention, there is provided a method for generating an animation curve, as shown in fig. 2, the method including:
S202: acquiring the start time and the end time of the target animation, the state value of the start key frame of the target animation and the state value of the end key frame of the target animation;
S204: obtaining the state values of the target animation at all times within the range from the start time to the end time according to the state values of the start key frame and the end key frame;
S206: generating an animation curve of the target animation according to the state values at all the times;
S208: displaying the animation curve on the screen.
It should be noted that the animation curve in the embodiment of the present invention may be a one-dimensional animation curve, or may also be a complex animation curve such as a two-dimensional animation curve and a three-dimensional animation curve, which is not limited in this embodiment of the present invention.
Under the above circumstances, according to the animation curve generation method provided by the embodiment of the present invention, in step S202, it is necessary to obtain the start time and the end time of the target animation, the state value of the start key frame of the target animation, and the state value of the end key frame of the target animation.
In the embodiments of the invention, the key frames refer to the first frame at which the target animation starts and the last frame at which the target animation ends; the start time of the target animation is the time corresponding to the first frame at which the target animation starts; the end time of the target animation is the time corresponding to the last frame at which the target animation ends. A state value may be the length, width, height, transparency and the like of an object; for example, if the target animation changes the length, width, height and transparency of an object, the target animation is considered to have 4 state values, namely the length, width, height and transparency of the object. An animation value is a pair p(t, v) formed by the state value v of each frame of the target animation and the time t of that frame. The animation curve is the curve composed of the series of animation values p(t, v), generated by taking the state value v as the ordinate and the time t of each frame of the target animation as the abscissa.
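To make the terms above concrete, the animation value p(t, v) and the animation curve could be represented by data structures such as the following minimal C sketch; the type and field names (AnimationValue, AnimationCurve) are illustrative assumptions and do not appear in the original disclosure.

    /* One animation value p(t, v): the state value v at frame time t. */
    typedef struct {
        double t;   /* frame time of the target animation (abscissa)          */
        double v;   /* state value at that time, e.g. transparency (ordinate) */
    } AnimationValue;

    /* An animation curve: a series of animation values p(t, v). */
    typedef struct {
        AnimationValue *points;   /* sampled points of the curve */
        int count;                /* number of sampled frames    */
    } AnimationCurve;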
Alternatively, the animation curve generating device may receive the duration of the target animation input by the designer, and determine the starting time and the ending time of the target animation according to the duration of the target animation. For example, the duration of the target animation inputted by the designer is 5s, and the animation curve generating device may determine the start time of the target animation to be 0s and the end time of the target animation to be 5s based on the 5 s.
Alternatively, the state value of the start key frame of the target animation and the state value of the end key frame of the target animation may be input to the animation curve generation device by the designer. For example, if the designer wants to change the transparency of picture A from opaque to transparent, the state value V0 of the start key frame of the target animation may be input as 1 and the state value V1 of the end key frame of the target animation as 0.
Under the above circumstances, according to the animation curve generation method provided by the embodiment of the present invention, in step S204, the state values of the target animation at each time point within the range from the start time point to the end time point may be obtained according to the state value of the start key frame and the state value of the end key frame. Further, in step S206, an animation curve of the target animation is generated from the state values at the respective times.
In the embodiment of the present invention, after the start time and the end time of the target animation, the state value of the start key frame of the target animation, and the state value of the end key frame of the target animation are obtained, the state values of the target animation at each time within the range from the start time to the end time can be obtained according to the above parameters. For example, as shown in fig. 3, before step S204, the method further includes:
s302: and normalizing each time in the range from the starting time to the ending time.
Optionally, obtaining the state value of the target animation at each time within the range from the start time to the end time according to the state value of the start key frame and the state value of the end key frame, including: and obtaining the state value of the target animation at each moment according to the state value of the starting key frame, the state value of the ending key frame and each moment after normalization processing.
Normalization is a dimensionless processing means that turns an absolute physical value into a relative value, i.e., transforms a dimensional expression into a dimensionless one (a pure number). For example, within the range from the start time 0 s of the target animation to the end time 5 s of the target animation, the value obtained by normalizing the 3rd second is 3/5 = 0.6.
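A minimal C sketch of this normalization step follows; it assumes the times are given in seconds and that the end time is later than the start time (the function name is illustrative).

    /* Map an absolute time t to the normalized range [0, 1]. */
    double normalize_time(double t, double start, double end) {
        double T = (t - start) / (end - start);
        if (T < 0.0) T = 0.0;   /* clamp for safety */
        if (T > 1.0) T = 1.0;
        return T;               /* e.g. t = 3 s in [0 s, 5 s] gives 3/5 = 0.6 */
    }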
In a first possible implementation manner, obtaining a state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing includes:
calculating a state value at each time by the formula V = V0 + T × (V1 - V0), where V represents the state value at the normalized time T, V0 represents the state value of the start key frame, and V1 represents the state value of the end key frame.
For example, the state value V0 of the start key frame of the target animation is 1, the state value V1 of the end key frame of the target animation is 0, and T is each time in the range from the start time 0 s to the end time 5 s after normalization; then, for example:
when T = 0/5 = 0, V = 1 + 0 × (0 - 1) = 1;
when T = 1/5 = 0.2, V = 1 + 0.2 × (0 - 1) = 0.8;
when T = 2/5 = 0.4, V = 1 + 0.4 × (0 - 1) = 0.6;
when T = 3/5 = 0.6, V = 1 + 0.6 × (0 - 1) = 0.4;
when T = 4/5 = 0.8, V = 1 + 0.8 × (0 - 1) = 0.2;
when T = 5/5 = 1, V = 1 + 1 × (0 - 1) = 0.
By analogy, the state values at the respective times are obtained, and the animation curve shown in fig. 4 is then generated according to the state values at the respective times. For this target animation, the transparency of picture A changes from opaque to transparent at a uniform speed between 0 s and 5 s.
The first possible implementation manner is a linear interpolation algorithm.
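For reference, the following minimal C sketch reproduces the linear interpolation of the first implementation and samples the worked example above once per second; the function name lerp and the sampling step are illustrative assumptions.

    #include <stdio.h>

    /* First implementation: V = V0 + T * (V1 - V0). */
    double lerp(double v0, double v1, double T) {
        return v0 + T * (v1 - v0);
    }

    int main(void) {
        double v0 = 1.0, v1 = 0.0;   /* opaque -> transparent */
        int frames = 5;              /* one sample per second over 0 s..5 s */
        for (int i = 0; i <= frames; i++) {
            double T = (double)i / frames;   /* normalized time */
            printf("T = %.1f  V = %.2f\n", T, lerp(v0, v1, T));
        }
        return 0;   /* prints 1.00, 0.80, 0.60, 0.40, 0.20, 0.00 as above */
    }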
In a second possible implementation manner, obtaining the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing includes:
calculating a state value at each time by the formula V = (V1 - V0) × T × T + V0, where V represents the state value at the normalized time T, V0 represents the state value of the start key frame, and V1 represents the state value of the end key frame.
For example, the state value V0 of the start key frame of the target animation is 1, the state value V1 of the end key frame of the target animation is 0, and T is each time in the range from the start time 0 s to the end time 5 s after normalization; then, for example:
when T = 0/5 = 0, V = (0 - 1) × 0 × 0 + 1 = 1;
when T = 1/5 = 0.2, V = (0 - 1) × 0.2 × 0.2 + 1 = 0.96;
when T = 2/5 = 0.4, V = (0 - 1) × 0.4 × 0.4 + 1 = 0.84;
when T = 3/5 = 0.6, V = (0 - 1) × 0.6 × 0.6 + 1 = 0.64;
when T = 4/5 = 0.8, V = (0 - 1) × 0.8 × 0.8 + 1 = 0.36;
when T = 5/5 = 1, V = (0 - 1) × 1 × 1 + 1 = 0.
By analogy, the state values at the respective times are obtained, and the animation curve shown in fig. 5 is generated according to the state values at the respective times. For this target animation, the transparency of picture A changes from opaque to transparent between 0 s and 5 s with a gradually increasing speed.
In a third possible implementation manner, obtaining the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing includes:
calculating a state value at each time by the formula V = -(V1 - V0) × T × (T - 2) + V0, where V represents the state value at the normalized time T, V0 represents the state value of the start key frame, and V1 represents the state value of the end key frame.
For example, the state value V0 of the start key frame of the target animation is 1, the state value V1 of the end key frame of the target animation is 0, and T is each time in the range from the start time 0 s to the end time 5 s after normalization; then, for example:
when T = 0/5 = 0, V = -(0 - 1) × 0 × (0 - 2) + 1 = 1;
when T = 1/5 = 0.2, V = -(0 - 1) × 0.2 × (0.2 - 2) + 1 = 0.64;
when T = 2/5 = 0.4, V = -(0 - 1) × 0.4 × (0.4 - 2) + 1 = 0.36;
when T = 3/5 = 0.6, V = -(0 - 1) × 0.6 × (0.6 - 2) + 1 = 0.16;
when T = 4/5 = 0.8, V = -(0 - 1) × 0.8 × (0.8 - 2) + 1 = 0.04;
when T = 5/5 = 1, V = -(0 - 1) × 1 × (1 - 2) + 1 = 0.
By analogy, the state values at the respective times are obtained, and the animation curve shown in fig. 6 is generated according to the state values at the respective times. For this target animation, the transparency of picture A changes from opaque to transparent between 0 s and 5 s with a gradually decreasing speed.
In a fourth possible implementation manner, obtaining the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing includes:
calculating a state value at each time by the piecewise formula
V = (V1 - V0)/2 × (2T) × (2T) + V0, for T < 0.5;
V = -(V1 - V0)/2 × [(2T - 1) × (2T - 3) - 1] + V0, for T ≥ 0.5,
where V represents the state value at the normalized time T, V0 represents the state value of the start key frame, and V1 represents the state value of the end key frame.
For example, the state value V0 of the start key frame of the target animation is 1, the state value V1 of the end key frame of the target animation is 0, and T is each time in the range from the start time 0 s to the end time 5 s after normalization; then, for example:
when T = 0/5 = 0, V = (0 - 1)/2 × (2 × 0) × (2 × 0) + 1 = 1;
when T = 1/5 = 0.2, V = (0 - 1)/2 × (2 × 0.2) × (2 × 0.2) + 1 = 0.92;
when T = 2/5 = 0.4, V = (0 - 1)/2 × (2 × 0.4) × (2 × 0.4) + 1 = 0.68;
when T = 3/5 = 0.6, V = -(0 - 1)/2 × [(2 × 0.6 - 1) × (2 × 0.6 - 3) - 1] + 1 = 0.32;
when T = 4/5 = 0.8, V = -(0 - 1)/2 × [(2 × 0.8 - 1) × (2 × 0.8 - 3) - 1] + 1 = 0.08;
when T = 5/5 = 1, V = -(0 - 1)/2 × [(2 × 1 - 1) × (2 × 1 - 3) - 1] + 1 = 0.
By analogy, the state values at the respective times are obtained, and the animation curve shown in fig. 7 is generated according to the state values at the respective times. For this target animation, the transparency of picture A changes from opaque to transparent between 0 s and 5 s, first slowly, then quickly, then slowly again.
The second possible implementation manner to the fourth possible implementation manner are quadratic interpolation algorithms.
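The three quadratic variants can be sketched in C as follows. The signs and the split at T = 0.5 follow the worked examples above; the function names are illustrative.

    /* Second to fourth implementations (quadratic interpolation). */
    double quad_speed_up(double v0, double v1, double T) {
        return (v1 - v0) * T * T + v0;                    /* gradually speeds up  */
    }
    double quad_slow_down(double v0, double v1, double T) {
        return -(v1 - v0) * T * (T - 2.0) + v0;           /* gradually slows down */
    }
    double quad_slow_fast_slow(double v0, double v1, double T) {
        if (T < 0.5)
            return (v1 - v0) / 2.0 * (2.0 * T) * (2.0 * T) + v0;
        return -(v1 - v0) / 2.0 * ((2.0 * T - 1.0) * (2.0 * T - 3.0) - 1.0) + v0;
    }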
In a fifth possible implementation manner, obtaining the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing includes:
calculating a state value at each time by the formula V = (V1 - V0) × T × T × T + V0, where V represents the state value at the normalized time T, V0 represents the state value of the start key frame, and V1 represents the state value of the end key frame.
For example, the state value V0 of the start key frame of the target animation is 1, the state value V1 of the end key frame of the target animation is 0, and T is each time in the range from the start time 0 s to the end time 5 s after normalization; then, for example:
when T = 0/5 = 0, V = (0 - 1) × 0 × 0 × 0 + 1 = 1;
when T = 1/5 = 0.2, V = (0 - 1) × 0.2 × 0.2 × 0.2 + 1 = 0.992;
when T = 2/5 = 0.4, V = (0 - 1) × 0.4 × 0.4 × 0.4 + 1 = 0.936;
when T = 3/5 = 0.6, V = (0 - 1) × 0.6 × 0.6 × 0.6 + 1 = 0.784;
when T = 4/5 = 0.8, V = (0 - 1) × 0.8 × 0.8 × 0.8 + 1 = 0.488;
when T = 5/5 = 1, V = (0 - 1) × 1 × 1 × 1 + 1 = 0.
By analogy, the state values at the respective times are obtained, and the animation curve shown in fig. 8 is generated according to the state values at the respective times. For this target animation, the transparency of picture A changes from opaque to transparent between 0 s and 5 s with a gradually increasing speed.
In a sixth possible implementation manner, obtaining the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing includes:
calculating a state value at each time by the formula V = (V1 - V0) × [(T - 1) × (T - 1) × (T - 1) + 1] + V0, where V represents the state value at the normalized time T, V0 represents the state value of the start key frame, and V1 represents the state value of the end key frame.
For example, the state value V0 of the start key frame of the target animation is 1, the state value V1 of the end key frame of the target animation is 0, and T is each time in the range from the start time 0 s to the end time 5 s after normalization; then, for example:
when T = 0/5 = 0, V = (0 - 1) × [(0 - 1) × (0 - 1) × (0 - 1) + 1] + 1 = 1;
when T = 1/5 = 0.2, V = (0 - 1) × [(0.2 - 1) × (0.2 - 1) × (0.2 - 1) + 1] + 1 = 0.512;
when T = 2/5 = 0.4, V = (0 - 1) × [(0.4 - 1) × (0.4 - 1) × (0.4 - 1) + 1] + 1 = 0.216;
when T = 3/5 = 0.6, V = (0 - 1) × [(0.6 - 1) × (0.6 - 1) × (0.6 - 1) + 1] + 1 = 0.064;
when T = 4/5 = 0.8, V = (0 - 1) × [(0.8 - 1) × (0.8 - 1) × (0.8 - 1) + 1] + 1 = 0.008;
when T = 5/5 = 1, V = (0 - 1) × [(1 - 1) × (1 - 1) × (1 - 1) + 1] + 1 = 0.
By analogy, the state values at the respective times are obtained, and the animation curve shown in fig. 9 is generated according to the state values at the respective times. For this target animation, the transparency of picture A changes from opaque to transparent between 0 s and 5 s with a gradually decreasing speed.
In a seventh possible implementation manner, obtaining the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing includes:
calculating a state value at each time by the piecewise formula
V = (V1 - V0)/2 × (2T) × (2T) × (2T) + V0, for T < 0.5;
V = (V1 - V0)/2 × [(2T - 2) × (2T - 2) × (2T - 2) + 2] + V0, for T ≥ 0.5,
where V represents the state value at the normalized time T, V0 represents the state value of the start key frame, and V1 represents the state value of the end key frame.
For example, the state value V0 of the start key frame of the target animation is 1, the state value V1 of the end key frame of the target animation is 0, and T is each time in the range from the start time 0 s to the end time 5 s after normalization; then, for example:
when T = 0/5 = 0, V = (0 - 1)/2 × (2 × 0) × (2 × 0) × (2 × 0) + 1 = 1;
when T = 1/5 = 0.2, V = (0 - 1)/2 × (2 × 0.2) × (2 × 0.2) × (2 × 0.2) + 1 = 0.968;
when T = 2/5 = 0.4, V = (0 - 1)/2 × (2 × 0.4) × (2 × 0.4) × (2 × 0.4) + 1 = 0.744;
when T = 3/5 = 0.6, V = (0 - 1)/2 × [(2 × 0.6 - 2) × (2 × 0.6 - 2) × (2 × 0.6 - 2) + 2] + 1 = 0.256;
when T = 4/5 = 0.8, V = (0 - 1)/2 × [(2 × 0.8 - 2) × (2 × 0.8 - 2) × (2 × 0.8 - 2) + 2] + 1 = 0.032;
when T = 5/5 = 1, V = (0 - 1)/2 × [(2 × 1 - 2) × (2 × 1 - 2) × (2 × 1 - 2) + 2] + 1 = 0.
By analogy, the state values at the respective times are obtained, and the animation curve shown in fig. 10 is generated according to the state values at the respective times. For this target animation, the transparency of picture A changes from opaque to transparent between 0 s and 5 s, first slowly, then quickly, then slowly again.
The fifth possible implementation manner to the seventh possible implementation manner are cubic interpolation algorithms.
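A corresponding C sketch of the cubic variants, again following the worked examples above (function names are illustrative):

    /* Fifth to seventh implementations (cubic interpolation). */
    double cubic_speed_up(double v0, double v1, double T) {
        return (v1 - v0) * T * T * T + v0;
    }
    double cubic_slow_down(double v0, double v1, double T) {
        double u = T - 1.0;
        return (v1 - v0) * (u * u * u + 1.0) + v0;
    }
    double cubic_slow_fast_slow(double v0, double v1, double T) {
        if (T < 0.5) {
            double u = 2.0 * T;
            return (v1 - v0) / 2.0 * u * u * u + v0;
        } else {
            double u = 2.0 * T - 2.0;
            return (v1 - v0) / 2.0 * (u * u * u + 2.0) + v0;
        }
    }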
Similar to the above method, in an eighth possible implementation manner, obtaining a state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing includes:
calculating a state value at each time by the formula V = (V1 - V0) × T × T × T × T + V0, where V represents the state value at the normalized time T, V0 represents the state value of the start key frame, and V1 represents the state value of the end key frame.
The state values at the respective times are obtained from the formula, and the animation curve shown in fig. 11 is generated according to the state values at the respective times. For this target animation, the transparency of picture A changes from opaque to transparent between 0 s and 5 s with a gradually increasing speed.
Similar to the above method, in a ninth possible implementation manner, obtaining a state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing includes:
calculating a state value at each time by the formula V = (V0 - V1) × [(T - 1) × (T - 1) × (T - 1) × (T - 1) - 1] + V0, where V represents the state value at the normalized time T, V0 represents the state value of the start key frame, and V1 represents the state value of the end key frame.
The state values at the respective times are obtained from the formula, and the animation curve shown in fig. 12 is generated according to the state values at the respective times. For this target animation, the transparency of picture A changes from opaque to transparent between 0 s and 5 s with a gradually decreasing speed.
Similar to the above method, in a tenth possible implementation manner, obtaining a state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing includes:
calculating a state value at each time by the piecewise formula shown as an image in the original publication, where V represents the state value at the normalized time T, V0 represents the state value of the start key frame, and V1 represents the state value of the end key frame.
The state values at the respective times are obtained from the formula, and the animation curve shown in fig. 13 is generated according to the state values at the respective times. For this target animation, the transparency of picture A changes from opaque to transparent between 0 s and 5 s, first slowly, then quickly, then slowly again.
Similar to the above method, in an eleventh possible implementation manner, obtaining a state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing includes:
calculating a state value at each time by the formula V = -(V1 - V0) × cos[T/1 × (π/2)] + (V1 - V0) + V0, where V represents the state value at the normalized time T, V0 represents the state value of the start key frame, and V1 represents the state value of the end key frame.
The state values at the respective times are obtained from the formula, and the animation curve shown in fig. 14 is generated according to the state values at the respective times. For this target animation, the transparency of picture A changes from opaque to transparent between 0 s and 5 s with a gradually increasing speed.
Similar to the above method, in a twelfth possible implementation manner, obtaining a state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing includes:
calculating a state value at each time by the formula V = (V1 - V0) × sin[T/1 × (π/2)] + V0, where V represents the state value at the normalized time T, V0 represents the state value of the start key frame, and V1 represents the state value of the end key frame.
The state values at the respective times are obtained from the formula, and the animation curve shown in fig. 15 is generated according to the state values at the respective times. For this target animation, the transparency of picture A changes from opaque to transparent between 0 s and 5 s with a gradually decreasing speed.
Similar to the above method, in a thirteenth possible implementation manner, obtaining a state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing includes:
calculating a state value at each time by the formula V = -(V1 - V0)/2 × [cos(π × T/1) - 1] + V0, where V represents the state value at the normalized time T, V0 represents the state value of the start key frame, and V1 represents the state value of the end key frame.
The state values at the respective times are obtained from the formula, and an animation curve as shown in fig. 16 is generated from the state values at the respective times. For the target animation, the transparency of picture a changes from opaque to transparent according to the animation curve between 0s and 5 s.
Similar to the above method, in a fourteenth possible implementation manner, obtaining a state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing includes:
calculating a state value at each time by the formula V = (V1 - V0) × pow[2, 10 × (T/1 - 1)] + V0, where V represents the state value at the normalized time T, V0 represents the state value of the start key frame, and V1 represents the state value of the end key frame.
The state values at the respective times are obtained from the formula, and an animation curve as shown in fig. 17 is generated from the state values at the respective times. For the target animation, the transparency of picture a changes from opaque to transparent according to the animation curve between 0s and 5 s.
Similar to the above method, in a fifteenth possible implementation manner, obtaining a state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing includes:
calculating a state value at each time by the formula V = (V1 - V0) × [-pow(2, -10 × T/1) + 1] + V0, where V represents the state value at the normalized time T, V0 represents the state value of the start key frame, and V1 represents the state value of the end key frame.
The state values at the respective times are obtained from the formula, and an animation curve as shown in fig. 18 is generated from the state values at the respective times. For the target animation, the transparency of picture a changes from opaque to transparent according to the animation curve between 0s and 5 s.
Similar to the above method, in a sixteenth possible implementation manner, obtaining the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing includes:
calculating a state value at each time by the piecewise formula shown as an image in the original publication, where V represents the state value at the normalized time T, V0 represents the state value of the start key frame, and V1 represents the state value of the end key frame.
The state values at the respective times are obtained from the equation, and an animation curve as shown in fig. 19 is generated from the state values at the respective times. For the target animation, the transparency of picture a changes from opaque to transparent according to the animation curve between 0s and 5 s.
It should be noted that, in the various possible implementations described above, sin(x) refers to the function defined in the math library of the C language that returns the sine of a given value, with the prototype double sin(double x); cos(x) refers to the function defined in the math library of the C language that returns the cosine of a given value, with the prototype double cos(double x); and pow(x, y) refers to the function defined in the math library of the C language that raises x to the power y, with the prototype double pow(double x, double y), where x and y are variables.
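As an illustration of how these library functions enter the formulas, the following C sketch implements the sine and exponential interpolations of the eleventh, twelfth, fourteenth and fifteenth implementations; the leading signs are chosen so that V runs from V0 at T = 0 to V1 at T = 1 (exactly for the sine forms, approximately for the exponential ones, as in the formulas above), and the function names are illustrative assumptions.

    #include <math.h>

    #define PI 3.14159265358979323846

    double sine_speed_up(double v0, double v1, double T) {
        return -(v1 - v0) * cos(T * (PI / 2.0)) + (v1 - v0) + v0;
    }
    double sine_slow_down(double v0, double v1, double T) {
        return (v1 - v0) * sin(T * (PI / 2.0)) + v0;
    }
    double expo_speed_up(double v0, double v1, double T) {
        return (v1 - v0) * pow(2.0, 10.0 * (T - 1.0)) + v0;
    }
    double expo_slow_down(double v0, double v1, double T) {
        return (v1 - v0) * (-pow(2.0, -10.0 * T) + 1.0) + v0;
    }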
The method for generating an animation curve can be applied to a client for making animation special effects, and the client can process images, videos and the like with different animation curves to form a specific animation effect. For example, the "micro-vision" client may be used to produce a dynamic album. As shown in fig. 20, after clicking the "dynamic album" button, the user may select photos for producing the dynamic album and then jump to a dynamic album display interface with an animation special effect as shown in fig. 21. Taking the "beach" effect as an example, each photo moves along with the movement of a photo frame on the beach, so each photo needs an animation with a changing position; the quadratic interpolation algorithm of the second possible implementation manner, in which the movement starts slowly and then gradually speeds up, can then be used to form the desired animation effect. Of course, the designer may select appropriate animation curves for different scenarios, which is not limited in the embodiments of the present invention.
Optionally, after step S204, the method may further include:
s1: and storing the state values at all the time points in a json format.
Specifically, the animation curve generation device according to the embodiment of the present invention may store the state values at each time point in json format as a file, where the format of the file may be { "value": [ V0, …, V1], "keytimies": [ T0, …, T1] }, which is convenient for designers to directly apply to the target animation.
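For instance, if the state values of the uniform-speed example above are sampled once per second and the key times are stored in normalized form, the saved file could look like the following line; the key names follow the format string quoted above, while the sampling density and the use of normalized times are assumptions.

    { "value": [1, 0.8, 0.6, 0.4, 0.2, 0], "keytimies": [0, 0.2, 0.4, 0.6, 0.8, 1] }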
Under the above circumstances, according to the generation method of the animation curve provided by the embodiment of the present invention, in step S208, the animation curve may be displayed in the screen.
In the above embodiments, the transparency of the picture a is taken as an example, and the method for generating the animation curve is exemplarily described, but the method can be applied to other state values, such as length, width, height, position, and the like, and is not described herein again.
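As a sketch of such a multi-dimensional case, the following C example interpolates two state values (an x and a y position) independently per frame to obtain a two-dimensional animation curve; the start and end positions and the choice of easing per axis are illustrative assumptions.

    #include <stdio.h>

    int main(void) {
        double x0 = 0.0, x1 = 100.0;   /* start/end x position */
        double y0 = 0.0, y1 = 50.0;    /* start/end y position */
        int frames = 5;
        for (int i = 0; i <= frames; i++) {
            double T = (double)i / frames;       /* normalized time */
            double x = x0 + T * (x1 - x0);       /* linear in x */
            double y = (y1 - y0) * T * T + y0;   /* quadratic speed-up in y */
            printf("T = %.1f  p = (%.1f, %.1f)\n", T, x, y);
        }
        return 0;
    }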
In the embodiments of the invention, an animation curve is generated automatically: the start time and the end time of the target animation, the state value of the start key frame of the target animation and the state value of the end key frame of the target animation are acquired; the state values of the target animation at each time within the range from the start time to the end time are obtained according to the state value of the start key frame and the state value of the end key frame; and the animation curve of the target animation is then generated according to the state values at all the times. With the various interpolation algorithms, the animation curve is generated automatically, which helps designers realize complex animations quickly. Compared with the prior art, in which a designer labels every frame, this achieves the technical effects of automation, low labor consumption and greatly improved production efficiency, and solves the technical problem of low animation curve generation efficiency caused by the fact that complex animation curves, such as two-dimensional and three-dimensional animation curves, need to be labeled frame by frame by designers.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
According to an embodiment of the present invention, an animation curve generation apparatus is provided. In this embodiment, the apparatus may be applied to a hardware environment of a terminal 102 provided with an animation curve generation tool as shown in fig. 1. As shown in fig. 1, the terminal 102 may include, but is not limited to, one of the following: a mobile phone or a tablet computer. When generating an animation curve, the animation curve generation tool of the terminal 102 may generate an animation curve of the target animation according to the start time and the end time of the target animation, the state value of the start key frame of the target animation, and the state value of the end key frame of the target animation.
According to an embodiment of the present invention, there is also provided an animation curve generation apparatus for implementing the animation curve generation method described above, as shown in fig. 22, the apparatus including:
an obtaining unit 2002 for obtaining a start time and an end time of the target animation, a state value of a start key frame of the target animation, and a state value of an end key frame of the target animation;
a calculating unit 2004, configured to obtain state values of the target animation at each time within a range from the start time to the end time according to the state value of the start key frame and the state value of the end key frame;
a generating unit 2006 configured to generate an animation curve of the target animation according to the state values at the respective times;
the display unit 2008 is configured to display the animation curve in the screen.
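The division into units could be mirrored in C roughly as follows; this is an illustrative decomposition only, and the type and member names are assumptions rather than part of the original disclosure.

    /* Illustrative decomposition of the apparatus of fig. 22. */
    typedef struct {
        /* acquisition unit 2002: start/end time and key-frame state values */
        void (*acquire)(double *t_start, double *t_end, double *v0, double *v1);
        /* calculating unit 2004: state value for each sampled normalized time */
        void (*calculate)(double v0, double v1, int count, double *values);
        /* generating unit 2006: build the curve from the sampled state values */
        void (*generate)(const double *values, int count);
        /* display unit 2008: show the generated curve on the screen */
        void (*display)(void);
    } AnimationCurveApparatus;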
It should be noted that the animation curve in the embodiment of the present invention may be a one-dimensional animation curve, or may also be a complex animation curve such as a two-dimensional animation curve and a three-dimensional animation curve, which is not limited in this embodiment of the present invention.
In the embodiments of the invention, the key frames refer to the first frame at which the target animation starts and the last frame at which the target animation ends; the start time of the target animation is the time corresponding to the first frame at which the target animation starts; the end time of the target animation is the time corresponding to the last frame at which the target animation ends. A state value may be the length, width, height, transparency and the like of an object; for example, if the target animation changes the length, width, height and transparency of an object, the target animation is considered to have 4 state values, namely the length, width, height and transparency of the object. An animation value is a pair p(t, v) formed by the state value v of each frame of the target animation and the time t of that frame. The animation curve is the curve composed of the series of animation values p(t, v), generated by taking the state value v as the ordinate and the time t of each frame of the target animation as the abscissa.
Alternatively, the animation curve generating device may receive the duration of the target animation input by the designer, and determine the starting time and the ending time of the target animation according to the duration of the target animation. For example, the duration of the target animation inputted by the designer is 5s, and the animation curve generating device may determine the start time of the target animation to be 0s and the end time of the target animation to be 5s based on the 5 s.
Alternatively, the state value of the start key frame of the target animation and the state value of the end key frame of the target animation may be input to the animation curve generation device by the designer. For example, if the designer wants to change the transparency of picture A from opaque to transparent, the state value V0 of the start key frame of the target animation may be input as 1 and the state value V1 of the end key frame of the target animation as 0.
In the embodiment of the present invention, after the start time and the end time of the target animation, the state value of the start key frame of the target animation, and the state value of the end key frame of the target animation are obtained, the state values of the target animation at each time within the range from the start time to the end time can be obtained according to the above parameters. For example, optionally, as shown in fig. 23, the apparatus further includes:
a processing unit 2102 for performing normalization processing for each time within a range from a start time to an end time;
the calculating unit 2004 is configured to perform the following steps to obtain the state values of the target animation at each time within the range from the start time to the end time according to the state value of the start key frame and the state value of the end key frame: and obtaining the state value of the target animation at each moment according to the state value of the starting key frame, the state value of the ending key frame and each moment after normalization processing.
Normalization is a dimensionless processing means that turns an absolute physical value into a relative value, i.e., transforms a dimensional expression into a dimensionless one (a pure number). For example, within the range from the start time 0 s of the target animation to the end time 5 s of the target animation, the value obtained by normalizing the 3rd second is 3/5 = 0.6.
Optionally, the calculating unit 2004 is configured to perform the following steps to obtain the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing: calculating a state value at each time by the formula V = V0 + T × (V1 - V0), where V represents the state value at the normalized time T, V0 represents the state value of the start key frame, and V1 represents the state value of the end key frame.
For example, the state value V0 of the start key frame of the target animation is 1, the state value V1 of the end key frame of the target animation is 0, and T is each time in the range from the start time 0 s to the end time 5 s after normalization; then, for example:
when T = 0/5 = 0, V = 1 + 0 × (0 - 1) = 1;
when T = 1/5 = 0.2, V = 1 + 0.2 × (0 - 1) = 0.8;
when T = 2/5 = 0.4, V = 1 + 0.4 × (0 - 1) = 0.6;
when T = 3/5 = 0.6, V = 1 + 0.6 × (0 - 1) = 0.4;
when T = 4/5 = 0.8, V = 1 + 0.8 × (0 - 1) = 0.2;
when T = 5/5 = 1, V = 1 + 1 × (0 - 1) = 0.
By analogy, the state values at the respective times are obtained, and the animation curve shown in fig. 4 is then generated according to the state values at the respective times. For this target animation, the transparency of picture A changes from opaque to transparent at a uniform speed between 0 s and 5 s.
Optionally, the calculating unit 2004 is configured to perform the following steps to obtain the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing: calculating a state value at each time by the formula V = (V1 - V0) × T × T + V0, where V represents the state value at the normalized time T, V0 represents the state value of the start key frame, and V1 represents the state value of the end key frame.
For example, the state value V0 of the start key frame of the target animation is 1, the state value V1 of the end key frame of the target animation is 0, and T is each time in the range from the start time 0 s to the end time 5 s after normalization; then, for example:
when T = 0/5 = 0, V = (0 - 1) × 0 × 0 + 1 = 1;
when T = 1/5 = 0.2, V = (0 - 1) × 0.2 × 0.2 + 1 = 0.96;
when T = 2/5 = 0.4, V = (0 - 1) × 0.4 × 0.4 + 1 = 0.84;
when T = 3/5 = 0.6, V = (0 - 1) × 0.6 × 0.6 + 1 = 0.64;
when T = 4/5 = 0.8, V = (0 - 1) × 0.8 × 0.8 + 1 = 0.36;
when T = 5/5 = 1, V = (0 - 1) × 1 × 1 + 1 = 0.
By analogy, the state values at the respective times are obtained, and the animation curve shown in fig. 5 is generated according to the state values at the respective times. For this target animation, the transparency of picture A changes from opaque to transparent between 0 s and 5 s with a gradually increasing speed.
Optionally, the calculating unit 2004 is configured to perform the following steps to obtain the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing: calculating a state value at each time by the formula V = -(V1 - V0) × T × (T - 2) + V0, where V represents the state value at the normalized time T, V0 represents the state value of the start key frame, and V1 represents the state value of the end key frame.
For example, the state value V0 of the start key frame of the target animation is 1, the state value V1 of the end key frame of the target animation is 0, and T is each time in the range from the start time 0 s to the end time 5 s after normalization; then, for example:
when T = 0/5 = 0, V = -(0 - 1) × 0 × (0 - 2) + 1 = 1;
when T = 1/5 = 0.2, V = -(0 - 1) × 0.2 × (0.2 - 2) + 1 = 0.64;
when T = 2/5 = 0.4, V = -(0 - 1) × 0.4 × (0.4 - 2) + 1 = 0.36;
when T = 3/5 = 0.6, V = -(0 - 1) × 0.6 × (0.6 - 2) + 1 = 0.16;
when T = 4/5 = 0.8, V = -(0 - 1) × 0.8 × (0.8 - 2) + 1 = 0.04;
when T = 5/5 = 1, V = -(0 - 1) × 1 × (1 - 2) + 1 = 0.
By analogy, the state values at the respective times are obtained, and the animation curve shown in fig. 6 is generated according to the state values at the respective times. For this target animation, the transparency of picture A changes from opaque to transparent between 0 s and 5 s with a gradually decreasing speed.
Optionally, the calculating unit 2004 is configured to perform the following steps to obtain the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing: calculating a state value at each time by the piecewise formula
V = (V1 - V0)/2 × (2T) × (2T) + V0, for T < 0.5;
V = -(V1 - V0)/2 × [(2T - 1) × (2T - 3) - 1] + V0, for T ≥ 0.5,
where V represents the state value at the normalized time T, V0 represents the state value of the start key frame, and V1 represents the state value of the end key frame.
For example, the state value V0 of the start key frame of the target animation is 1, the state value V1 of the end key frame of the target animation is 0, and T is each time in the range from the start time 0 s to the end time 5 s after normalization; then, for example:
when T = 0/5 = 0, V = (0 - 1)/2 × (2 × 0) × (2 × 0) + 1 = 1;
when T = 1/5 = 0.2, V = (0 - 1)/2 × (2 × 0.2) × (2 × 0.2) + 1 = 0.92;
when T = 2/5 = 0.4, V = (0 - 1)/2 × (2 × 0.4) × (2 × 0.4) + 1 = 0.68;
when T = 3/5 = 0.6, V = -(0 - 1)/2 × [(2 × 0.6 - 1) × (2 × 0.6 - 3) - 1] + 1 = 0.32;
when T = 4/5 = 0.8, V = -(0 - 1)/2 × [(2 × 0.8 - 1) × (2 × 0.8 - 3) - 1] + 1 = 0.08;
when T = 5/5 = 1, V = -(0 - 1)/2 × [(2 × 1 - 1) × (2 × 1 - 3) - 1] + 1 = 0.
By analogy, the state values at the respective times are obtained, and the animation curve shown in fig. 7 is generated according to the state values at the respective times. For this target animation, the transparency of picture A changes from opaque to transparent between 0 s and 5 s, first slowly, then quickly, then slowly again.
Optionally, the calculating unit 2004 is configured to perform the following steps to obtain the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing: calculating a state value at each time by the formula V = (V1 - V0) × T × T × T + V0, where V represents the state value at the normalized time T, V0 represents the state value of the start key frame, and V1 represents the state value of the end key frame.
For example, the state value V0 of the start key frame of the target animation is 1, the state value V1 of the end key frame of the target animation is 0, and T is each time in the range from the start time 0 s to the end time 5 s after normalization; then, for example:
when T = 0/5 = 0, V = (0 - 1) × 0 × 0 × 0 + 1 = 1;
when T = 1/5 = 0.2, V = (0 - 1) × 0.2 × 0.2 × 0.2 + 1 = 0.992;
when T = 2/5 = 0.4, V = (0 - 1) × 0.4 × 0.4 × 0.4 + 1 = 0.936;
when T = 3/5 = 0.6, V = (0 - 1) × 0.6 × 0.6 × 0.6 + 1 = 0.784;
when T = 4/5 = 0.8, V = (0 - 1) × 0.8 × 0.8 × 0.8 + 1 = 0.488;
when T = 5/5 = 1, V = (0 - 1) × 1 × 1 × 1 + 1 = 0.
By analogy, the state values at the respective times are obtained, and the animation curve shown in fig. 8 is generated according to the state values at the respective times. For this target animation, the transparency of picture A changes from opaque to transparent between 0 s and 5 s with a gradually increasing speed.
Optionally, the calculating unit 2004 is configured to perform the following steps to obtain the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing: calculating a state value at each time by the formula V = (V1 - V0) × [(T - 1) × (T - 1) × (T - 1) + 1] + V0, where V represents the state value at the normalized time T, V0 represents the state value of the start key frame, and V1 represents the state value of the end key frame.
For example, the state value V0 of the start key frame of the target animation is 1, the state value V1 of the end key frame of the target animation is 0, and T is each time in the range from the start time 0 s to the end time 5 s after normalization; then, for example:
when T = 0/5 = 0, V = (0 - 1) × [(0 - 1) × (0 - 1) × (0 - 1) + 1] + 1 = 1;
when T = 1/5 = 0.2, V = (0 - 1) × [(0.2 - 1) × (0.2 - 1) × (0.2 - 1) + 1] + 1 = 0.512;
when T = 2/5 = 0.4, V = (0 - 1) × [(0.4 - 1) × (0.4 - 1) × (0.4 - 1) + 1] + 1 = 0.216;
when T = 3/5 = 0.6, V = (0 - 1) × [(0.6 - 1) × (0.6 - 1) × (0.6 - 1) + 1] + 1 = 0.064;
when T = 4/5 = 0.8, V = (0 - 1) × [(0.8 - 1) × (0.8 - 1) × (0.8 - 1) + 1] + 1 = 0.008;
when T = 5/5 = 1, V = (0 - 1) × [(1 - 1) × (1 - 1) × (1 - 1) + 1] + 1 = 0.
By analogy, the state values at the respective times are obtained, and the animation curve shown in fig. 9 is generated according to the state values at the respective times. For this target animation, the transparency of picture A changes from opaque to transparent between 0 s and 5 s with a gradually decreasing speed.
Optionally, the calculating unit is configured to execute the following steps to obtain the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing: by the formula
Figure BDA0000626257740000241
Calculating a state value at each time, whichIn the equation, V represents a state value at time T after normalization, and V0State value, V, representing the starting key frame1A state value indicating a termination key frame.
For example, if the state value V0 of the starting key frame of the target animation is 1, the state value V1 of the termination key frame of the target animation is 0, and T is each time, after normalization processing, in the range from the start time 0s to the end time 5s, then:
when T = 0/5 = 0, V = (0 - 1)/2 × (2 × 0) × (2 × 0) × (2 × 0) + 1 = 1;
when T = 1/5 = 0.2, V = (0 - 1)/2 × (2 × 0.2) × (2 × 0.2) × (2 × 0.2) + 1 = 0.968;
when T = 2/5 = 0.4, V = (0 - 1)/2 × (2 × 0.4) × (2 × 0.4) × (2 × 0.4) + 1 = 0.744;
when T = 3/5 = 0.6, V = (0 - 1)/2 × [(2 × 0.6 - 2) × (2 × 0.6 - 2) × (2 × 0.6 - 2) + 2] + 1 = 0.256;
when T = 4/5 = 0.8, V = (0 - 1)/2 × [(2 × 0.8 - 2) × (2 × 0.8 - 2) × (2 × 0.8 - 2) + 2] + 1 = 0.032;
when T = 5/5 = 1, V = (0 - 1)/2 × [(2 × 1 - 2) × (2 × 1 - 2) × (2 × 1 - 2) + 2] + 1 = 0.
By analogy, the state values at the remaining times are obtained, and an animation curve as shown in fig. 10 is generated from the state values at the various times. For the target animation, the transparency of picture A changes from opaque to transparent between 0s and 5s, with the rate of change going from slow to fast and back to slow.
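The piecewise formula can be checked against the values above with a short C sketch; this is an illustrative rendering under the same V0 = 1, V1 = 0 assumption, not code from the patent:

```c
#include <stdio.h>

/* Piecewise cubic ease-in-out over normalized time T in [0, 1]. */
static double cubic_ease_in_out(double v0, double v1, double T)
{
    double h = 2.0 * T;
    if (h < 1.0)
        return (v1 - v0) / 2.0 * h * h * h + v0;        /* accelerating half  */
    h -= 2.0;
    return (v1 - v0) / 2.0 * (h * h * h + 2.0) + v0;    /* decelerating half  */
}

int main(void)
{
    /* Reproduces the sample values 1, 0.968, 0.744, 0.256, 0.032, 0. */
    for (int i = 0; i <= 5; i++) {
        double T = i / 5.0;
        printf("T=%.1f V=%.3f\n", T, cubic_ease_in_out(1.0, 0.0, T));
    }
    return 0;
}
```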
Optionally, the calculating unit 2004 is configured to perform the following steps to obtain the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing: calculating the state value at each time by the formula V = (V1 - V0) × T × T × T × T + V0, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
The state values at the respective times are obtained from the formula, and an animation curve as shown in fig. 11 is generated from the state values at the respective times. For the target animation, the transparency of picture A changes from opaque to transparent between 0s and 5s, with the rate of change gradually increasing.
Optionally, the calculating unit 2004 is configured to perform the following steps to obtain the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing: calculating the state value at each time by the formula V = (V0 - V1) × [(T - 1) × (T - 1) × (T - 1) × (T - 1) - 1] + V0, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
The state values at the respective times are obtained from the formula, and an animation curve as shown in fig. 12 is generated from the state values at the respective times. For the target animation, the transparency of picture A changes from opaque to transparent between 0s and 5s, with the rate of change gradually slowing.
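For illustration only, and again assuming V0 = 1 and V1 = 0, the two quartic formulas above can be written as small C helpers (the names are hypothetical):

```c
#include <stdio.h>

/* Quartic ease-in:  V = (V1 - V0) * T^4 + V0 */
static double quart_ease_in(double v0, double v1, double T)
{
    return (v1 - v0) * T * T * T * T + v0;
}

/* Quartic ease-out: V = (V0 - V1) * ((T - 1)^4 - 1) + V0 */
static double quart_ease_out(double v0, double v1, double T)
{
    double u = T - 1.0;
    return (v0 - v1) * (u * u * u * u - 1.0) + v0;
}

int main(void)
{
    for (int i = 0; i <= 5; i++) {
        double T = i / 5.0;
        printf("T=%.1f in=%.3f out=%.3f\n", T,
               quart_ease_in(1.0, 0.0, T), quart_ease_out(1.0, 0.0, T));
    }
    return 0;
}
```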
Optionally, the calculating unit is configured to execute the following steps to obtain the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing: calculating the state value at each time by the formula shown as an image (BDA0000626257740000251) in the original, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
The state values at the respective times are obtained from the formula, and an animation curve as shown in fig. 13 is generated from the state values at the respective times. For the target animation, the transparency of picture A changes from opaque to transparent between 0s and 5s, with the rate of change going from slow to fast and back to slow.
Optionally, the calculating unit 2004 is configured to perform the following steps to obtain the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing: calculating the state value at each time by the formula V = -(V1 - V0) × cos[T/1 × (π/2)] + (V1 - V0) + V0, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
The state values at the respective times are obtained from the formula, and an animation curve as shown in fig. 14 is generated from the state values at the respective times. For the target animation, the transparency of picture A changes from opaque to transparent between 0s and 5s, with the rate of change gradually increasing.
Optionally, the calculating unit 2004 is configured to perform the following steps to obtain the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing: calculating the state value at each time by the formula V = (V1 - V0) × sin[T/1 × (π/2)] + V0, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
The state values at the respective times are obtained from the formula, and an animation curve as shown in fig. 15 is generated from the state values at the respective times. For the target animation, the transparency of picture A changes from opaque to transparent between 0s and 5s, with the rate of change gradually slowing.
Optionally, the calculating unit 2004 is configured to perform the following steps to obtain the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing: calculating the state value at each time by the formula V = (V1 - V0)/2 × [cos(π × 2T/1) - 1] + V0, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
The state values at the respective times are obtained from the formula, and an animation curve as shown in fig. 16 is generated from the state values at the respective times. For the target animation, the transparency of picture A changes from opaque to transparent according to the animation curve between 0s and 5s.
Optionally, the calculating unit 2004 is configured to perform the following steps to obtain the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing: calculating the state value at each time by the formula V = (V1 - V0) × pow[2, 10 × (T/1 - 1)] + V0, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
The state values at the respective times are obtained from the formula, and an animation curve as shown in fig. 17 is generated from the state values at the respective times. For the target animation, the transparency of picture A changes from opaque to transparent according to the animation curve between 0s and 5s.
Optionally, the calculating unit 2004 is configured to perform the following steps to obtain the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing: calculating the state value at each time by the formula V = (V1 - V0) × [-pow(2, -10 × T/1) + 1] + V0, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
The state values at the respective times are obtained from the formula, and an animation curve as shown in fig. 18 is generated from the state values at the respective times. For the target animation, the transparency of picture A changes from opaque to transparent according to the animation curve between 0s and 5s.
Optionally, the calculating unit 2004 is configured to perform the following steps to obtain the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after the normalization processing: calculating the state value at each time by the formula shown as an image (BDA0000626257740000271) in the original, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
The state values at the respective times are obtained from the formula, and an animation curve as shown in fig. 19 is generated from the state values at the respective times. For the target animation, the transparency of picture A changes from opaque to transparent according to the animation curve between 0s and 5s.
It should be noted that, in the various possible implementations described above, sin(x) refers to a function defined in the Math library of the C language, which is used to compute the sine of a given value; its prototype is: double sin(double x). cos(x) is a function defined in the Math library of the C language, which is used to compute the cosine of a given value; its prototype is: double cos(double x). pow(x, y) refers to a function defined in the Math library of the C language, which is used to raise x to the power of y; its prototype is: double pow(double x, double y), where x and y represent variables.
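As a hedged sketch of how those Math-library calls map onto the trigonometric and exponential formulas above, assuming normalized T in [0, 1] and the same V0/V1 convention (this is illustrative, not the patent's source code):

```c
#include <stdio.h>
#include <math.h>   /* sin, cos, pow from the C Math library; link with -lm */

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Sine ease-in:  V = -(V1 - V0) * cos(T * pi/2) + (V1 - V0) + V0 */
static double sine_ease_in(double v0, double v1, double T)
{
    return -(v1 - v0) * cos(T * (M_PI / 2.0)) + (v1 - v0) + v0;
}

/* Sine ease-out: V = (V1 - V0) * sin(T * pi/2) + V0 */
static double sine_ease_out(double v0, double v1, double T)
{
    return (v1 - v0) * sin(T * (M_PI / 2.0)) + v0;
}

/* Exponential ease-in: V = (V1 - V0) * pow(2, 10 * (T - 1)) + V0 */
static double expo_ease_in(double v0, double v1, double T)
{
    return (v1 - v0) * pow(2.0, 10.0 * (T - 1.0)) + v0;
}

int main(void)
{
    for (int i = 0; i <= 5; i++) {
        double T = i / 5.0;
        printf("T=%.1f sine_in=%.3f sine_out=%.3f expo_in=%.3f\n", T,
               sine_ease_in(1.0, 0.0, T), sine_ease_out(1.0, 0.0, T),
               expo_ease_in(1.0, 0.0, T));
    }
    return 0;
}
```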
Optionally, as shown in fig. 24, the apparatus for generating an animation curve according to an embodiment of the present invention further includes:
a storage unit 2202, configured to store the state values at the respective times in json format.
Specifically, the animation curve generation device according to the embodiment of the present invention may store the state values at the respective times as a file in json format, where the format of the file may be {"value": [V0, …, V1], "keytimes": [T0, …, T1]}, in which keytimes represents the respective times and value represents the state value at each time, so that the designer can directly apply the file to the target animation.
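A minimal sketch of writing sampled state values in that json layout; the output file name, sample count, and the use of linear interpolation here are assumptions made for illustration:

```c
#include <stdio.h>

static double linear_ease(double v0, double v1, double T)
{
    return v0 + T * (v1 - v0);
}

int main(void)
{
    const int samples = 6;                 /* 0s .. 5s, one sample per second */
    FILE *fp = fopen("curve.json", "w");   /* hypothetical output file        */
    if (fp == NULL)
        return 1;

    fprintf(fp, "{\"value\": [");
    for (int i = 0; i < samples; i++)
        fprintf(fp, "%s%.3f", i ? ", " : "", linear_ease(1.0, 0.0, i / 5.0));

    fprintf(fp, "], \"keytimes\": [");
    for (int i = 0; i < samples; i++)
        fprintf(fp, "%s%.1f", i ? ", " : "", i / 5.0);

    fprintf(fp, "]}\n");
    fclose(fp);
    return 0;
}
```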
In the embodiment of the invention, an animation curve is generated automatically: the start time and end time of the target animation, the state value of its start key frame, and the state value of its end key frame are acquired; the state values of the target animation at each time within the range from the start time to the end time are obtained from the state values of the start and end key frames; and the animation curve of the target animation is then generated from the state values at those times. By supporting a variety of interpolation algorithms, the animation curve is generated automatically, which helps designers realize complex animations quickly. Compared with the prior art, in which a designer labels every frame, this achieves automation and low labor cost and greatly improves production efficiency, thereby solving the technical problem of low animation-curve generation efficiency caused by designers having to label complex two-dimensional and three-dimensional animation curves frame by frame.
Example 3
According to the embodiment of the invention, a terminal for implementing the animation curve generation method is also provided, and the terminal can be applied to a hardware environment as shown in fig. 1.
As shown in fig. 1, the terminal 102 may include, but is not limited to, one of the following: a mobile phone or a tablet computer. When generating an animation curve, the animation curve generation tool of the terminal 102 may generate an animation curve of the target animation according to the start time and the end time of the target animation, the state value of the start key frame of the target animation, and the state value of the end key frame of the target animation.
Optionally, in this embodiment, the terminal includes:
1) a memory configured to store a start time and an end time of a target animation, a state value of a start key frame of the target animation, and a state value of an end key frame of the target animation;
2) a processor configured to acquire a start time and an end time of a target animation, a state value of a start key frame of the target animation, and a state value of an end key frame of the target animation; obtaining the state values of the target animation at all moments within the range from the starting moment to the ending moment according to the state values of the starting key frame and the ending key frame; generating an animation curve of the target animation according to the state values at all the moments; an animation curve is displayed in the screen.
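To make the processor's four steps concrete, the following compact C sketch walks through acquisition, normalization, state-value computation, and output, with console printing standing in for on-screen display; it is an illustration under assumed values, not the terminal's actual implementation:

```c
#include <stdio.h>

#define SAMPLES 6

int main(void)
{
    /* Step 1: acquire the start/end time and the key-frame state values. */
    const double t_start = 0.0, t_end = 5.0;   /* seconds                 */
    const double v0 = 1.0, v1 = 0.0;           /* start / end key frames  */

    /* Steps 2 and 3: normalize each sampled time and compute its state value. */
    double curve[SAMPLES];
    for (int i = 0; i < SAMPLES; i++) {
        double t = t_start + i * (t_end - t_start) / (SAMPLES - 1);
        double T = (t - t_start) / (t_end - t_start);   /* normalization        */
        curve[i] = v0 + T * (v1 - v0);                  /* linear interpolation */
    }

    /* Step 4: "display" the curve (printed here instead of drawn on a screen). */
    for (int i = 0; i < SAMPLES; i++)
        printf("sample %d: %.3f\n", i, curve[i]);
    return 0;
}
```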
Optionally, in this embodiment, the memory may be further configured to store other data involved in the process described in embodiment 1.
Optionally, the specific examples in this embodiment may refer to the examples described in embodiment 1 and embodiment 2, and this embodiment is not described herein again.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence or the part contributing to the prior art, may be embodied in whole or in part in the form of a software product stored in a storage medium, including instructions for causing one or more computer devices (which may be personal computers, servers, or network devices) to execute all or part of the steps of the method according to the embodiments of the present invention.
Example 4
Embodiments of the present invention also provide a storage medium, which can be applied to a hardware environment as shown in fig. 1.
As shown in fig. 1, the terminal 102 may include, but is not limited to, one of the following: a mobile phone or a tablet computer. When generating an animation curve, the animation curve generation tool of the terminal 102 may generate an animation curve of the target animation according to the start time and the end time of the target animation, the state value of the start key frame of the target animation, and the state value of the end key frame of the target animation.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
s1, acquiring the starting time and the ending time of the target animation, the state value of the starting key frame of the target animation and the state value of the ending key frame of the target animation;
s2, obtaining the state value of the target animation at each moment within the range from the starting moment to the ending moment according to the state value of the starting key frame and the state value of the ending key frame;
s3, generating an animation curve of the target animation according to the state values at each moment;
s4, displaying the animation curve on the screen.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
s1, normalization processing is performed for each time point within the range from the start time point to the end time point.
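As a small illustrative helper (variable names assumed), the normalization in this step maps each absolute time to the [0, 1] range:

```c
#include <stdio.h>

/* Map an absolute time t in [t_start, t_end] to a normalized T in [0, 1]. */
static double normalize_time(double t, double t_start, double t_end)
{
    return (t - t_start) / (t_end - t_start);   /* assumes t_end > t_start */
}

int main(void)
{
    /* Times 0s..5s normalize to 0, 0.2, 0.4, 0.6, 0.8, 1. */
    for (double t = 0.0; t <= 5.0; t += 1.0)
        printf("t=%.0fs -> T=%.1f\n", t, normalize_time(t, 0.0, 5.0));
    return 0;
}
```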
Optionally, the storage medium is further arranged to store program code for performing the steps of:
and S1, obtaining the state value of the target animation at each moment according to the state value of the start key frame, the state value of the end key frame and each moment after the normalization processing.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
s1, calculating a state value at each time by the formula V = V0 + T × (V1 - V0), wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
s1, calculating a state value at each time by the formula V = (V1 - V0) × T × T + V0, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
s1, calculating a state value at each time by the formula V = -(V1 - V0) × T × (T - 2) + V0, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
s1, calculating a state value at each time by the formula shown as an image (BDA0000626257740000311) in the original, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
s1, calculating a state value at each time by the formula V = (V1 - V0) × T × T × T + V0, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
s1, calculating a state value at each time by the formula V = (V1 - V0) × [(T - 1) × (T - 1) × (T - 1) + 1] + V0, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
s1, calculating a state value at each time by the piecewise formula V = (V1 - V0)/2 × (2T) × (2T) × (2T) + V0 when 2T < 1, and V = (V1 - V0)/2 × [(2T - 2) × (2T - 2) × (2T - 2) + 2] + V0 when 2T ≥ 1, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
s1, calculating a state value at each time by the formula V = (V1 - V0) × T × T × T × T + V0, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
s1, calculating a state value at each time by the formula V = (V0 - V1) × [(T - 1) × (T - 1) × (T - 1) × (T - 1) - 1] + V0, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
s1, calculating a state value at each time by the formula shown as an image (BDA0000626257740000321) in the original, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
s1, calculating a state value at each time by the formula V = -(V1 - V0) × cos[T/1 × (π/2)] + (V1 - V0) + V0, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
s1, calculating a state value at each time by the formula V = (V1 - V0) × sin[T/1 × (π/2)] + V0, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
s1, calculating a state value at each time by the formula V = -(V1 - V0)/2 × [cos(π × 2T/1) - 1] + V0, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
s1, calculating a state value at each time by the formula V = (V1 - V0) × pow[2, 10 × (T/1 - 1)] + V0, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
s1, calculating a state value at each time by the formula V = (V1 - V0) × [-pow(2, -10 × T/1) + 1] + V0, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
s1, calculating a state value at each time by the formula shown as an image (BDA0000626257740000331) in the original, wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
s1, the state values at the respective times are stored in json format.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Optionally, the specific examples in this embodiment may refer to the examples described in embodiment 1 and embodiment 2, and this embodiment is not described herein again.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention. It should be noted that, for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and such modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (13)

1. A method for generating an animation curve, which is applied to a client for making an animation special effect, comprises the following steps:
selecting a photo for creating a dynamic album;
acquiring the starting time and the ending time of a target animation, the state value of a starting key frame of the target animation and the state value of an ending key frame of the target animation;
normalizing each time within the range from the starting time to the ending time;
obtaining the state values of the target animation at all times within the range from the starting time to the ending time according to the state value of the starting key frame and the state value of the ending key frame; wherein the obtaining the state values of the target animation at all times within the range from the starting time to the ending time according to the state value of the starting key frame and the state value of the ending key frame comprises: obtaining the state value of the target animation at each time in the range from the starting time to the ending time through a quadratic interpolation algorithm, a cubic interpolation algorithm, a quartic interpolation algorithm, a trigonometric function interpolation algorithm or an exponential interpolation algorithm according to the state value of the starting key frame, the state value of the ending key frame and each time after normalization processing; wherein the obtaining the state value of the target animation at each time according to the state value of the starting key frame, the state value of the ending key frame and each time after normalization processing further comprises: calculating the state value at each time by the formula V = V0 + T × (V1 - V0), wherein V represents the state value at the time T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame;
after the state values of the target animation at all times within the range from the starting time to the ending time are obtained according to the state values of the starting key frame and the state values of the ending key frame, storing the state values at all the times into files in a json format, wherein the files stored in the json format are directly applied to the target animation;
generating a two-dimensional animation curve or a three-dimensional animation curve of the target animation according to the state values at all the moments;
displaying the two-dimensional animation curve or the three-dimensional animation curve in a screen;
wherein the method further comprises: processing the selected photo for the dynamic album by using the two-dimensional animation curve or the three-dimensional animation curve, so as to make a dynamic album with an animation effect.
2. The method according to claim 1, wherein the obtaining the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after normalization processing, further comprises:
by the formula V = (V1 - V0) × T × T + V0, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame; alternatively,
by the formula V = -(V1 - V0) × T × (T - 2) + V0, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame; alternatively,
by the formula shown as an image (FDA0002450369820000021) in the original, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
3. The method according to claim 1, wherein the obtaining the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after normalization processing, further comprises:
by the formula V = (V1 - V0) × T × T × T + V0, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame; alternatively,
by the formula V = (V1 - V0) × [(T - 1) × (T - 1) × (T - 1) + 1] + V0, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame; alternatively,
by the formula shown as an image (FDA0002450369820000031) in the original, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
4. The method according to claim 1, wherein the obtaining the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after normalization processing, further comprises:
by the formula V = (V1 - V0) × T × T × T × T + V0, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame; alternatively,
by the formula V = (V0 - V1) × [(T - 1) × (T - 1) × (T - 1) × (T - 1) - 1] + V0, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame; alternatively,
by the formula shown as an image (FDA0002450369820000032) in the original, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
5. The method according to claim 1, wherein the obtaining the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after normalization processing, further comprises:
by the formula V = -(V1 - V0) × cos[T/1 × (π/2)] + (V1 - V0) + V0, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame; alternatively,
by the formula V = (V1 - V0) × sin[T/1 × (π/2)] + V0, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame; alternatively,
by the formula V = (V1 - V0)/2 × [cos(π × 2T/1) - 1] + V0, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
6. The method according to claim 1, wherein the obtaining the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after normalization processing, further comprises:
by the formula V = (V1 - V0) × pow[2, 10 × (T/1 - 1)] + V0, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame; alternatively,
by the formula V = (V1 - V0) × [-pow(2, -10 × T/1) + 1] + V0, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame; alternatively,
by the formula shown as an image (FDA0002450369820000041) in the original, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
7. An animation curve generation device, which is applied to a client for making an animation special effect, comprises:
the system comprises an acquisition unit, a display unit and a control unit, wherein the acquisition unit is used for acquiring the starting time and the ending time of a target animation, the state value of a starting key frame of the target animation and the state value of an ending key frame of the target animation;
the calculating unit is used for obtaining the state value of the target animation at each moment within the range from the starting moment to the ending moment according to the state value of the starting key frame and the state value of the ending key frame; wherein the obtaining the state value of the target animation at each time within the range from the starting time to the ending time according to the state value of the starting key frame and the state value of the ending key frame comprises: obtaining the state value of the target animation at each moment within the range from the starting moment to the ending moment through a quadratic interpolation algorithm, a cubic interpolation algorithm, a quartic interpolation algorithm, a trigonometric function interpolation algorithm or an exponential interpolation algorithm according to the state value of the starting key frame and the state value of the ending key frame;
the generating unit is used for generating a two-dimensional animation curve or a three-dimensional animation curve of the target animation according to the state values at all the moments;
the display unit is used for displaying the two-dimensional animation curve or the three-dimensional animation curve in a screen;
the storage unit is used for storing the state values at all the moments into files in a json format, wherein the files stored in the json format are directly applied to the target animation;
wherein the apparatus further comprises: the processing unit is used for carrying out normalization processing on each time in the range from the starting time to the ending time;
the calculation unit is used for executing the following steps to obtain the state values of the target animation at each moment within the range from the starting moment to the ending moment according to the state value of the starting key frame and the state value of the ending key frame: obtaining the state value of the target animation at each moment according to the state value of the starting key frame, the state value of the ending key frame and each moment after normalization processing;
the calculating unit is further configured to execute the following steps to obtain the state value of the target animation at each time according to the state value of the start key frame, the state value of the end key frame, and each time after normalization processing: by the formula V = V0 + T × (V1 - V0), calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame;
wherein the apparatus is further configured to: select a photo for creating a dynamic album; and process the selected photo by using the two-dimensional animation curve or the three-dimensional animation curve, so as to make a dynamic album with an animation effect.
8. The apparatus of claim 7,
the calculation unit is used for executing the following steps to obtain the state value of the target animation at each moment according to the state value of the starting key frame, the state value of the ending key frame and each moment after normalization processing: by the formula V = (V1 - V0) × T × T + V0, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame; alternatively,
the calculation unit is used for executing the following steps to obtain the state value of the target animation at each moment according to the state value of the starting key frame, the state value of the ending key frame and each moment after normalization processing: by the formula V = -(V1 - V0) × T × (T - 2) + V0, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame; alternatively,
the calculation unit is used for executing the following steps to obtain the state value of the target animation at each moment according to the state value of the starting key frame, the state value of the ending key frame and each moment after normalization processing: by the formula shown as an image (FDA0002450369820000071) in the original, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
9. The apparatus of claim 7,
the calculation unit is used for executing the following steps to obtain the state value of the target animation at each moment according to the state value of the starting key frame, the state value of the ending key frame and each moment after normalization processing: by the formula V = (V1 - V0) × T × T × T + V0, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame; alternatively,
the calculation unit is used for executing the following steps to obtain the state value of the target animation at each moment according to the state value of the starting key frame, the state value of the ending key frame and each moment after normalization processing: by the formula V = (V1 - V0) × [(T - 1) × (T - 1) × (T - 1) + 1] + V0, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame; alternatively,
the calculation unit is used for executing the following steps to obtain the state value of the target animation at each moment according to the state value of the starting key frame, the state value of the ending key frame and each moment after normalization processing: by the formula shown as an image (FDA0002450369820000072) in the original, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
10. The apparatus of claim 7,
the calculation unit is used for executing the following steps to obtain the state value of the target animation at each moment according to the state value of the starting key frame, the state value of the ending key frame and each moment after normalization processing: by the formula V = (V1 - V0) × T × T × T × T + V0, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame; alternatively,
the calculation unit is used for executing the following steps to obtain the state value of the target animation at each moment according to the state value of the starting key frame, the state value of the ending key frame and each moment after normalization processing: by the formula V = (V0 - V1) × [(T - 1) × (T - 1) × (T - 1) × (T - 1) - 1] + V0, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame; alternatively,
the calculation unit is used for executing the following steps to obtain the state value of the target animation at each moment according to the state value of the starting key frame, the state value of the ending key frame and each moment after normalization processing: by the formula shown as an image (FDA0002450369820000081) in the original, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
11. The apparatus of claim 7,
the calculation unit is used for executing the following steps to obtain the state value of the target animation at each moment according to the state value of the starting key frame, the state value of the ending key frame and each moment after normalization processing: by the formula V = -(V1 - V0) × cos[T/1 × (π/2)] + (V1 - V0) + V0, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame; alternatively,
the calculation unit is used for executing the following steps to obtain the state value of the target animation at each moment according to the state value of the starting key frame, the state value of the ending key frame and each moment after normalization processing: by the formula V = (V1 - V0) × sin[T/1 × (π/2)] + V0, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame; alternatively,
the calculation unit is used for executing the following steps to obtain the state value of the target animation at each moment according to the state value of the starting key frame, the state value of the ending key frame and each moment after normalization processing: by the formula V = (V1 - V0)/2 × [cos(π × 2T/1) - 1] + V0, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
12. The apparatus of claim 7,
the calculation unit is used for executing the following steps to obtain the state value of the target animation at each moment according to the state value of the starting key frame, the state value of the ending key frame and each moment after normalization processing: by the formula V = (V1 - V0) × pow[2, 10 × (T/1 - 1)] + V0, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame; alternatively,
the calculation unit is used for executing the following steps to obtain the state value of the target animation at each moment according to the state value of the starting key frame, the state value of the ending key frame and each moment after normalization processing: by the formula V = (V1 - V0) × [-pow(2, -10 × T/1) + 1] + V0, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame; alternatively,
the calculation unit is used for executing the following steps to obtain the state value of the target animation at each moment according to the state value of the starting key frame, the state value of the ending key frame and each moment after normalization processing: by the formula shown as an image (FDA0002450369820000101) in the original, calculating the state values at the time points, wherein V represents the state value at the time point T after the normalization processing, V0 represents the state value of the starting key frame, and V1 represents the state value of the termination key frame.
13. A storage medium, characterized in that the storage medium comprises a stored program, wherein the program when executed performs the method of any of the preceding claims 1 to 6.
CN201410740719.9A 2014-12-05 2014-12-05 Animation curve generation method and device Active CN105719330B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410740719.9A CN105719330B (en) 2014-12-05 2014-12-05 Animation curve generation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410740719.9A CN105719330B (en) 2014-12-05 2014-12-05 Animation curve generation method and device

Publications (2)

Publication Number Publication Date
CN105719330A CN105719330A (en) 2016-06-29
CN105719330B true CN105719330B (en) 2020-07-28

Family

ID=56144525

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410740719.9A Active CN105719330B (en) 2014-12-05 2014-12-05 Animation curve generation method and device

Country Status (1)

Country Link
CN (1) CN105719330B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111857812A (en) * 2020-07-29 2020-10-30 珠海天燕科技有限公司 Method and device for transferring animation curves in game interface
CN112634409B (en) * 2020-12-28 2022-04-19 稿定(厦门)科技有限公司 Custom animation curve generation method and device
CN116894893A (en) * 2023-09-11 2023-10-17 山东捷瑞数字科技股份有限公司 Nonlinear animation regulation and control method and system based on three-dimensional engine

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102194245A (en) * 2010-03-18 2011-09-21 微软公司 Stateless animation, such as bounce easing

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000251086A (en) * 1999-02-26 2000-09-14 Sony Corp Device and method for generating curve and program providing medium
US7483030B2 (en) * 2005-01-26 2009-01-27 Pixar Interactive spacetime constraints: wiggly splines
CN101441773B (en) * 2008-11-11 2011-09-14 宇龙计算机通信科技(深圳)有限公司 Cartoon implementing method, system and mobile terminal
CN102169595B (en) * 2010-02-26 2015-09-23 新奥特(北京)视频技术有限公司 A kind of many arrowhead path animation implementation methods and device
CN102376100A (en) * 2010-08-20 2012-03-14 北京盛开互动科技有限公司 Single-photo-based human face animating method
CN102682458A (en) * 2011-03-15 2012-09-19 新奥特(北京)视频技术有限公司 Synchronous regulating method of multi-stunt multi-parameter of key frame animation curve
US9524651B2 (en) * 2011-07-25 2016-12-20 Raymond Fix System and method for electronic communication using a voiceover in combination with user interaction events on a selected background
CN102902533A (en) * 2012-09-17 2013-01-30 乐视网信息技术(北京)股份有限公司 Frame revealing system and method of generating diagram by combining Java and HTML5 (Hypertxt Markup Language)
CN103824059B (en) * 2014-02-28 2017-02-15 东南大学 Facial expression recognition method based on video image sequence
CN103838842A (en) * 2014-02-28 2014-06-04 北京奇虎科技有限公司 Method and device for loading new tab page
CN104091360A (en) * 2014-07-28 2014-10-08 周立刚 Method and device for generating movement data through dynamic cinema

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102194245A (en) * 2010-03-18 2011-09-21 微软公司 Stateless animation, such as bounce easing

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A Brief Analysis of the Application of Key Frame Animation Curves; Wang Yingzhi; TV Subtitles (Special Effects and Animation); 2002-07-15; pp. 44-46 *
Research on Motion Generation and Control in Computer Animation; Li Dan; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2009-05-15; I138-47 *

Also Published As

Publication number Publication date
CN105719330A (en) 2016-06-29

Similar Documents

Publication Publication Date Title
CN107992304B (en) Method and device for generating display interface
CN109344352B (en) Page loading method and device and electronic equipment
WO2017035971A1 (en) Method and device for generating emoticon
CN109783757B (en) Method, device and system for rendering webpage, storage medium and electronic device
CN111383308B (en) Method for generating animation expression and electronic equipment
CN105719330B (en) Animation curve generation method and device
WO2017032078A1 (en) Interface control method and mobile terminal
JP2010541045A5 (en)
Vizireanu et al. Visual-oriented morphological foreground content grayscale frames interpolation method
CN103427789A (en) Library graphic and text information denoising filter based on fractional order calculating equation
WO2017101390A1 (en) Picture display method and apparatus
US20140325404A1 (en) Generating Screen Data
Spina et al. Point cloud segmentation for cultural heritage sites
Schmidt Part-based representation and editing of 3d surface models
CN111158840B (en) Image carousel method and device
CN115908116A (en) Image processing method, device, equipment and storage medium
CN110990104B (en) Texture rendering method and device based on Unity3D
CN109933749B (en) Method and device for generating information
CN106843472B (en) Gesture recognition method and device, virtual reality equipment and programmable equipment
US10318796B2 (en) Age progression of subject facial image
Rosman et al. Articulated motion segmentation of point clouds by group-valued regularization
US11868701B1 (en) Template for creating content item
Colton Stroke matching for paint dances
CN106775222B (en) Dimension information display method and device
Luo et al. The Method for Micro Expression Recognition Based on Improved Light-Weight CNN

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant