CN112116688B - Method and device for realizing line animation - Google Patents

Method and device for realizing line animation

Info

Publication number
CN112116688B
CN112116688B (application number CN202011001669.4A)
Authority
CN
China
Prior art keywords
track point
time frame
animation
current
line
Prior art date
Legal status
Active
Application number
CN202011001669.4A
Other languages
Chinese (zh)
Other versions
CN112116688A (en)
Inventor
季益明
Current Assignee
Hangzhou Hikvision System Technology Co Ltd
Original Assignee
Hangzhou Hikvision System Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision System Technology Co Ltd filed Critical Hangzhou Hikvision System Technology Co Ltd
Priority to CN202011001669.4A priority Critical patent/CN112116688B/en
Publication of CN112116688A publication Critical patent/CN112116688A/en
Application granted granted Critical
Publication of CN112116688B publication Critical patent/CN112116688B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/20 Processor architectures; Processor configuration, e.g. pipelining

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a method for realizing line animation, which comprises the following steps: loading track point data of a given trajectory line, wherein for any track point the track point data comprises a track point time frame, the track point time frame representing the number of frames required for the animation end to reach the track point at a given frame rate; determining a current global time frame, the current global time frame representing the time frame to which the animation end drawing the current animation line belongs, and determining the target track point under the current global time frame according to the relation between the current global time frame and the track point time frames; determining a drawing position according to the target track point time frame and the global time frame, wherein the target track point time frame is the time frame for drawing the target track point; and drawing the animation line according to the drawing position. The invention simplifies the processing of line animation and improves animation processing efficiency, thereby enabling large-scale line animation on the order of millions of lines.

Description

Method and device for realizing line animation
Technical Field
The present invention relates to the field of line animation, and in particular, to a method and apparatus for implementing line animation.
Background
Dynamic visualization of computer data includes the migration map (O-D map), which expresses the migration patterns of things in the two dimensions of location and time, such as population migration, urban traffic, and typhoon trajectories. The line animation is one kind of O-D map: an animation line with attributes such as length and speed moves along a predetermined trajectory line, visually presenting the effect of dynamic line flow. The length, speed and other attributes of the animation lines may be custom-defined or derived from the data in the O-D map. As shown in fig. 1, the line connecting the four points A, B, C, D constitutes a predetermined trajectory line, and an animation line is the segment between its head (also referred to as the animation end) and its tail (also referred to as the animation end tail).
The traditional implementation method of the line animation is as follows:
1. Interpolate the given trajectory line according to the animation speed of the line animation so that interpolation points are densely distributed, store all interpolation points into an array, and calculate the interpolation points needed for the length of the animation line itself.
2. For each animation line to be drawn, set a time parameter that increments frame by frame, then take out from the interpolation-point array all interpolation points between the animation line's start track point and end track point, and store them as the animation position points of the current animation line.
3. Connect the animation position points in sequence to obtain the animation line data to be drawn in the current frame.
4. Output the animation line data to be drawn frame by frame to display the current frame, obtaining the visual line animation.
The traditional line-animation computation is complex, involves intricate calculation steps and data dependencies, and runs mainly on the CPU, so animation efficiency is low; in particular, the computing power cannot satisfy large-scale line animation, such as millions of animation lines. In general, when the browser refresh rate (Frames Per Second, FPS) is 30 or more, the animation appears smooth; below 20 FPS it appears stuck and discontinuous. When the traditional line animation implementation is applied in a browser or web page, the browser refresh rate, i.e., the number of frames transmitted per second, drops sharply, and the animation stutters severely.
Disclosure of Invention
The invention provides a method and a device for realizing line animation, which are used for improving the animation efficiency of the line animation.
The method for realizing the line animation provided by the invention is realized as follows:
A method of implementing a line animation, the method comprising,
loading track point data of a given track line, wherein for any track point, the track point data comprises a track point time frame, and the track point time frame is used for representing the number of frames required by an animation end to reach the track point at a given frame rate;
determining a current global time frame, wherein the current global time frame is used for representing a time frame to which an animation end for drawing a current animation line belongs,
determining a target track point under the current global time frame according to the relation between the current global time frame and the track point time frame;
determining a drawing position according to a target track point time frame and the global time frame, wherein the target track point time frame is a time frame for drawing the target track point;
and drawing the animation line according to the drawing position.
The invention provides a device for realizing line animation, which comprises,
the track point data acquisition module is configured to load track point data of a given track line, wherein for any track point, the track point data comprises a track point time frame, and the track point time frame is used for representing the number of frames required by an animation end to reach the track point at a given frame rate;
A global time frame determining module configured to determine a current global time frame for representing a time frame to which an animation end for drawing a current animation line belongs,
the drawing module is configured to determine a target track point under the current global time frame according to the relation between the current global time frame and the track point time frame;
the drawing module is further configured to determine a drawing position according to a target track point time frame and the global time frame, wherein the target track point time frame is a time frame for drawing the target track point;
and the drawing module is further configured to draw the animation line according to the drawing position.
The invention also provides an electronic device comprising a memory storing a computer program and a processor configured to perform the steps of the method of implementing a line animation described above.
The present invention further provides a computer storage medium having stored therein a computer program which, when executed by a processor, causes the processor to perform the steps of the method for implementing a line animation as described above.
Based on this scheme, no interpolation over the given trajectory line is needed; it is only necessary to judge the relation between each track point time frame in the given trajectory line and the global time frame, and the drawing position can be determined from the judgment result, so that the animation line is drawn according to the drawing position. This simplifies the drawing process of the animation line and improves the processing efficiency of the line animation. If the judging process is executed in parallel in the GPU, the CPU is freed and has more spare computing power for tasks other than animating line trajectories. When the method is applied at the browser end, the animation display performance of the browser is improved, and stuttering and choppiness in large-scale animation display are avoided.
Drawings
FIG. 1 is a schematic diagram of a line animation.
Fig. 2 is an exemplary flow chart of a method for implementing line animation in one embodiment of the invention.
FIG. 3 is a flow chart of the method of FIG. 2 for implementing large-scale line animation based on a GPU.
FIG. 4a is a schematic diagram of a vertex attribute suitable for use in the process of FIG. 3.
FIG. 4b is a diagram illustrating the integration of all vertex data into a vertex attribute array.
FIG. 5 is a schematic diagram of a specific process for determining a rendering location based on a shader mechanism in the process shown in FIG. 3.
FIG. 6 is a schematic diagram of processing vertices at global time frames based on the embodiment of FIG. 5.
FIG. 7 is a schematic diagram of the current animation progress and vertex relationship of a line animation.
FIG. 8 is a schematic diagram of the method of FIG. 2 for implementing large-scale line animation based on a GPU.
Fig. 9 is a screenshot of the line animation effect when the method shown in fig. 2 is applied at the browser end.
Fig. 10 is a schematic diagram of an electronic device for implementing line animation according to an embodiment of the present invention.
FIG. 11 is a schematic diagram of an apparatus for implementing line animation according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical means and advantages of the present application more apparent, the present application is further described in detail below with reference to the accompanying drawings.
FIG. 2 is an exemplary flow diagram of a method of implementing line animation in one embodiment of the present application. Referring to fig. 2, in this embodiment, a method for implementing line animation may include:
in step 201, track point data of a predetermined track line is loaded, wherein for any track point, the track point data includes a track point time frame, and the track point time frame is used for representing the number of frames required for an animation end to reach the track point at a predetermined frame rate.
Step 202, determining a current global time frame, where the current global time frame is used to represent a time frame to which an animation end for drawing a current animation line belongs.
Step 203, determining the target track point under the current global time frame according to the relationship between the current global time frame and the track point time frame.
Step 204, determining a drawing position according to the target track point time frame and the global time frame, wherein the target track point time frame is the time frame for drawing the target track point.
Step 205, draw the animation line according to the drawing position.
By using this method for realizing line animation, no interpolation over the given trajectory line is needed; instead, the relation between each track point time frame in the given trajectory line and the global time frame is judged, and the drawing position is determined from the judgment result, so that the animation line is drawn according to the drawing position. This simplifies the drawing process of the animation line and improves the processing efficiency of the line animation. When a large number of track points exist, the GPU's drawing mechanism is used to determine and draw the target track points, so that the large number of track points is processed in the GPU in parallel; the CPU is thereby freed and has more spare computing power for tasks other than animating line trajectories. When the method is applied at the browser end, the animation display performance of the browser is improved, and stuttering and choppiness in large-scale animation display are avoided.
The following description will be given by taking an embodiment of implementing a line animation using a graphic processor.
In connection with the processing of the graphics processor (Graphics Processing Unit, GPU), the GPU identifies track points as vertices, and the track point data used in the method shown in fig. 2 can be loaded into the GPU as vertex data suitable for GPU processing. That is, for each line animation, each frame of its track data is disassembled into vertex data (i.e., track point data) that are uniform in type, mutually independent, and suitable for GPU processing; the GPU computes the vertex data in parallel, and after the drawing positions are determined, the GPU's shader (geometry shader) is used to draw the animation line. In this way, for example, multiple vertices (track points) belonging to the same time frame across multiple animation lines can be drawn simultaneously, which raises the rate at which the browser draws many animation lines and avoids stuttering and choppiness in large-scale animation display (i.e., when drawing large numbers of animation lines). The vertex is the minimum unit of graphic drawing, and the animation line is drawn on the basis of vertex drawing.
FIG. 3 is a flow chart of the method of FIG. 2 for implementing large-scale line animation based on a GPU. Referring to fig. 3, when the method shown in fig. 2 is implemented based on a GPU, the method may include,
Step 300, constructing vertex data (i.e., trajectory point data) for a graphics processor for an animation line to be drawn. This step may be performed in the CPU.
Given that each animation line has attributes such as its length, its speed, and the three-dimensional space coordinates representing the positions of its track points, drawing the animation line on the GPU requires processing this attribute information into the parameters (i.e., vertex data representing the track point data) required by the GPU.
The parameters required by the GPU specifically include the frame identification of a vertex (i.e., of a track point), indicating the number of frames required for the animation end to reach that track point at the given frame rate, namely the vertex frame number (track point frame number), as well as the track length frame number and the animation line length frame number. Processing the attribute information of the animation line into these parameters additionally requires the frame rate. The parameters are described below.
Frame rate: since the animation takes a frame as a refresh unit, and the final line animation speed needs to be represented on the refresh rate of the frame, the frame rate (FV) is the ratio of the line animation speed to the number of frames per unit time, and the mathematical expression is expressed as:
FV=V/FPS,
wherein FPS is the number of frames transmitted per second, which at the web end can be understood as the number of browser refreshes per second; content sliding, updating, animation and the like at the web end are realized through browser frame refreshes, and the FPS also reflects the performance of hardware and programs (for example, an animation generally appears smooth at 60 FPS, while below 20 FPS it appears stuck and the picture discontinuous); V is the line animation speed, representing the change of track point coordinate position per unit time. For example, at V = 30 coordinate units per second and FPS = 60, FV = 0.5 coordinate units per frame.
Frame number of a vertex (i.e., track point frame number): since the track points of a given trajectory line are given as a coordinate array of three-dimensional space coordinates, and the track points serve as vertices in the GPU, they can be numbered by animation frame count, the number representing the number of frames required for the animation end to reach the track point at the given frame rate; the track points are thereby given a time dimension.
For any track point, from its distance L along the given trajectory line to the start track point (for example, track point A in fig. 1) and the frame rate FV, the number of frames required for the animation end to travel from the start track point to this track point can be calculated as:
currNum = ⌊L / FV⌋,
where ⌊·⌋ denotes rounding down.
When the distance L between the track point and the start track point of the given trajectory line and the frame rate FV are unchanged, the frame number of the track point is unchanged. Different track points have different vertex frame numbers.
The positional offset (three-dimensional spatial offset) between any two track points can be obtained by multiplying the frame rate by the difference between the frame numbers of the two track points. By comparing the global time frame with the vertex frame number, the positional relationship between the position reached by the animation line and the vertex can be determined.
In an alternative embodiment, the animation frame numbers of the previous vertex (i.e., the previous track point) and the next vertex (i.e., the next track point) of the current vertex (i.e., the current track point) may also be recorded, denoted prevNum and nextNum respectively.
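Since the positional offset between two track points is the frame rate times their frame-number difference, the per-frame step vectors toward a vertex's neighbors can be precomputed on the CPU. The following is a minimal sketch under assumed names (perFrameStep, backwardStep, forwardStep; these reappear in the shader sketch later), not the patent's code:

```typescript
// Hedged sketch: per-frame offset vectors from the current vertex toward its
// previous and next vertices; all names are illustrative assumptions.
type Vec3 = [number, number, number];

function perFrameStep(from: Vec3, to: Vec3, fromNum: number, toNum: number): Vec3 {
  const frames = Math.abs(toNum - fromNum); // frame-number difference
  if (frames === 0) return [0, 0, 0];
  return [                                  // (to - from) spread over `frames` frames
    (to[0] - from[0]) / frames,
    (to[1] - from[1]) / frames,
    (to[2] - from[2]) / frames,
  ];
}

declare const prev: Vec3, curr: Vec3, next: Vec3;               // vertex positions
declare const prevNum: number, currNum: number, nextNum: number; // vertex frame numbers

const backwardStep = perFrameStep(curr, prev, currNum, prevNum); // one frame toward prev
const forwardStep = perFrameStep(curr, next, currNum, nextNum);  // one frame toward next
```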
Track length frame number: from the length S of the predetermined trajectory line and the frame rate FV, the number of frames required for the animation to run from the start track point to the end track point of the predetermined trajectory line (i.e., for the animation end tail to run to the end track point) is calculated and recorded as routeFrameSum; in one embodiment, the track length frame number may be saved separately to each vertex. routeFrameSum is expressed as:
routeFrameSum = ⌊S / FV⌋.
length frame number of animation lines: based on the length D of the animation line itself and the frame rate FV, the number of frames occupied by the animation passing a track point can be calculated and denoted as selfSum. The mathematical formula of selfSum is expressed as:
referring to fig. 4a, fig. 4a is a schematic diagram of vertex attribute data suitable for the flow shown in fig. 3. The vertex attribute of the vertex corresponding to each track point comprises coordinates (three-dimensional space coordinates of the vertex corresponding to the track point) representing the position of the track point, the current frame number of the vertex corresponding to the track point, the previous vertex frame number of the vertex corresponding to the track point, the next vertex frame number of the vertex corresponding to the track point, the forward position offset (three-dimensional space offset) of the vertex corresponding to the track point, the backward position offset (three-dimensional space offset) of the vertex corresponding to the track point, the length frame number of the animation line and the length frame number of the track line.
Referring to fig. 4b, fig. 4b is a schematic diagram of integrating all vertex data into vertex attribute arrays. The parameters corresponding to each attribute of each vertex (i.e., track point) form the vertex attribute data of that vertex. Each vertex may include multiple attributes, such as position, current vertex frame number (currNum), previous vertex frame number (prevNum), next vertex frame number (nextNum), etc., and thus multiple pieces of vertex attribute data. For the same attribute, the vertex attribute data of all vertices for that attribute are combined into a vertex attribute array, such as a position array or a current-vertex frame number array; if there are multiple attributes, there are multiple vertex attribute arrays. All vertex attribute arrays together constitute the vertex data of the line animation. The array form allows the GPU vertex shader to fetch the data of the different attributes of the same vertex from the same index in each vertex attribute array.
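The attribute-array layout of fig. 4b can be sketched as follows, under assumed type and function names (TrackPoint, buildAttributeArrays); one flat array per attribute, with index i across all arrays describing vertex i:

```typescript
// Hedged sketch of packing per-vertex attributes into flat arrays (fig. 4b).
interface TrackPoint {
  position: [number, number, number];
  currNum: number;
  prevNum: number;
  nextNum: number;
}

function buildAttributeArrays(points: TrackPoint[], selfSum: number, routeFrameSum: number) {
  const positions = new Float32Array(points.length * 3);
  const currNums = new Float32Array(points.length);
  const prevNums = new Float32Array(points.length);
  const nextNums = new Float32Array(points.length);
  points.forEach((p, i) => {
    positions.set(p.position, i * 3); // three floats per vertex
    currNums[i] = p.currNum;
    prevNums[i] = p.prevNum;
    nextNums[i] = p.nextNum;
  });
  // selfSum and routeFrameSum are constant per line; they can be stored
  // per-vertex, as the text suggests, or passed as uniforms instead.
  return { positions, currNums, prevNums, nextNums, selfSum, routeFrameSum };
}
```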
In an alternative embodiment, if the given trajectory line is a straight line, its track point data (vertex data) may include at least the start track point data (including the start track point time frame) and the end track point data (including the end track point time frame) of the given trajectory line, wherein the start track point time frame is the number of frames required for the animation end to reach the start track point at the given frame rate and is the same as the initial value of the global time frame, and the end track point time frame is the number of frames required for the animation end to reach the end track point at the given frame rate. This embodiment reduces the number of track points on a straight line, which helps improve the drawing efficiency of the trajectory.
In an alternative embodiment, if the predetermined trajectory line is a broken line, its track point data (vertex data) may, in addition to the start track point data (including the start track point time frame) and the end track point data (including the end track point time frame), include at least the inflection point data (including the inflection point time frame) of each track point forming an inflection point in the predetermined trajectory line, wherein the inflection point time frame is the number of frames required for the animation end to reach the inflection point at the given frame rate. In this embodiment, since each inflection point is a vertex, changes in track point position can be characterized, which helps improve the accuracy of trajectory line drawing.
In step 301, the vertex data is bound to the GPU's vertex buffer object (VBO, Vertex Buffer Object), so that the track point data of the given trajectory line is loaded into the GPU as vertex data. This step may be performed in the CPU.
The VBO is the GPU's way of loading a vertex array: the vertex data is allocated and cached in GPU memory and rendered from there, improving rendering performance while reducing memory bandwidth and power consumption.
In this step, gl.createBuffer is called to create a VBO object, the vertex attribute index the VBO object points to is specified, and the vertex attribute array at that index is enabled.
Then gl.bindBuffer is called to bind the buffer object, and gl.bufferData is called to bind the vertex data to the buffer object.
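Step 301's WebGL calls can be sketched like this, assuming a compiled shader program, a WebGL context, and the attribute arrays built in the earlier sketch are in scope; attribute names such as a_position are assumptions:

```typescript
// Hedged WebGL sketch of step 301; gl, program and attrs are assumed context.
declare const gl: WebGLRenderingContext;
declare const program: WebGLProgram;
declare const attrs: {
  positions: Float32Array; currNums: Float32Array;
  prevNums: Float32Array; nextNums: Float32Array;
};

function bindAttribute(name: string, data: Float32Array, size: number): void {
  const vbo = gl.createBuffer();                        // create the VBO object
  gl.bindBuffer(gl.ARRAY_BUFFER, vbo);                  // bind the buffer object
  gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW); // bind vertex data to it
  const loc = gl.getAttribLocation(program, name);      // vertex attribute index
  gl.enableVertexAttribArray(loc);                      // enable the attribute array
  gl.vertexAttribPointer(loc, size, gl.FLOAT, false, 0, 0);
}

bindAttribute('a_position', attrs.positions, 3); // three floats per vertex
bindAttribute('a_currNum', attrs.currNums, 1);
bindAttribute('a_prevNum', attrs.prevNums, 1);
bindAttribute('a_nextNum', attrs.nextNums, 1);
```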
In step 302, a global time frame is set to record the current animation progress of the line animation, so that the GPU obtains the set global time frame and takes it as the current global time frame. This step may be performed in the CPU.
The global time frame is denoted globalFrameNum and is incremented frame by frame. For example, in a web page the global time frame accumulates the number of page refreshes. Before the line animation starts, the global time frame must be initialized, and its initial frame number is the same as the start track point time frame of the given trajectory line; after the line animation begins, the global time frame increments frame by frame.
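A minimal sketch of step 302 at the web end might look as follows; the uniform name u_globalFrameNum, startFrameNum and vertexCount are assumptions. The CPU's only per-frame work is advancing the counter and issuing the draw call:

```typescript
// Hedged sketch of step 302: advance the global time frame once per browser
// refresh and hand it to the vertex shader as a uniform.
declare const gl: WebGLRenderingContext;
declare const program: WebGLProgram;
declare const vertexCount: number;   // total vertices bound in step 301
declare const startFrameNum: number; // start track point time frame

let globalFrameNum = startFrameNum;  // initialization before the animation starts
const frameLoc = gl.getUniformLocation(program, 'u_globalFrameNum')!;

function tick(): void {
  globalFrameNum += 1;                          // incremented frame by frame
  gl.uniform1f(frameLoc, globalFrameNum);
  gl.drawArrays(gl.LINE_STRIP, 0, vertexCount); // step 303 runs on the GPU
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);
```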
In step 303, a shader mechanism based on the GPU performs drawing determination to generate an animation line. This step may be performed by the GPU and the execution may refer to steps 203 to 205 in the flow as shown in fig. 2.
The shader mechanism compares the frame number of each vertex with the global time frame globalFrameNum in the vertex shader.
In this step, based on the comparison result, it is determined which vertices are target track points, the drawing positions are determined according to the target track points, and the drawing positions are then handed to primitive assembly, which finally connects them in sequence into a continuous strip of line segments (LINE_STRIP) to be drawn.
Referring to fig. 5, fig. 5 is a schematic diagram of a specific process for determining a drawing position based on a shader mechanism in the process shown in fig. 3. It should be appreciated that the GPU may determine the drawing positions for the respective animation lines to be drawn in parallel according to the flow diagram. For each animation line to be drawn, the GPU vertex shader comprises the following steps:
step 510, the current global time frame is entered.
Step 520, for the vertex data of each track point in the predetermined trajectory corresponding to the animation line, judge whether the vertex is a target track point to be drawn according to the relation between the current animation progress of the line animation and the vertex; that is, as in step 203 of the flow shown in fig. 2, determine the target track point under the current global time frame according to the relation between the current global time frame and the track point time frame.
For example, each track point in the track point data may be taken as the current track point:
if the current global time frame is greater than the previous track point time frame and less than or equal to the current track point time frame, determining the current track point as a first target track point;
if the current global time frame is larger than the current track point time frame and the current animation end tail time frame is smaller than the current track point time frame, judging that the current track point is a second target track point;
If the current animation end tail time frame is larger than the current track point time frame and smaller than or equal to the next track point time frame, judging that the current track point is a third target track point;
the method comprises the steps that a previous track point time frame is the number of frames required by an animation end to reach a previous track point at a set frame rate, a next track point time frame is the number of frames required by the animation end to reach a next track point at the set frame rate, and a current animation end tail time frame is the time frame for drawing a current animation end tail; determining a current animation end tail time frame according to the current global time frame and the animation line length frame number; the animation line length frame number is used to represent the number of frames required to draw the animation line itself.
In the flow shown in fig. 5, the relationships of "greater than" and "less than" among the current global time frame, the previous track point time frame, the current animation end tail time frame, and the next track point time frame can be determined according to the frame numbers.
If the vertex is a target track point to be drawn, step 530 is executed to judge whether the vertex needs to be offset in order to determine the drawing position. When an offset is needed, the position offset required by the vertex is calculated in step 531, the offset position of the vertex is determined from the required position offset, and the offset position is passed to the fragment stage as the drawing position in step 532; when no offset is needed, step 531 is skipped and the vertex's own position is taken directly as the drawing position in step 532. That is, steps 530 to 532 may be regarded as a preferred implementation of step 204 in the flow shown in fig. 2;
If the vertex is not the target trajectory point, step 540 is performed to determine that the vertex does not require rendering.
Step 550, after step 532 or step 540, it is determined whether all points on the animation line have been judged; if so, the current flow ends and proceeds to step 205 of the flow shown in fig. 2; otherwise, step 560 is performed.
Step 560, take the next vertex as the current vertex and return to step 520. This repeats until step 550 confirms that every vertex in the given trajectory line has been judged as to whether it requires rendering, thereby determining the drawing positions of the animation line.
The current animation progress of the line animation can be characterized by the current global time frame. For example, the global time frame increments from 1, and when the current global time frame is n, the image frame of the line animation has been refreshed to the nth frame. In terms of visual effect, at the first refreshed frame the line animation visually starts animating from the start track point, with the animation end (head) coinciding with the start point; by the nth frame, the animation end has visually moved from the start track point of the given trajectory line to the current track point position on it. The animation end position in each global time frame therefore corresponds to the current track point position under that global time frame, with corresponding frame number n. The relation between the current animation progress of the line animation and the vertex frame numbers is thus the relation, under the current global time frame, between the current animation end and the vertex frame numbers, i.e., between the current animation end and the vertices; and the current track point position under any global time frame can be obtained from the current global time frame.
Referring to fig. 6, fig. 6 is a schematic diagram of processing vertices in each global time frame based on the embodiment of fig. 5. Since the frame number of the current animation end equals the current global time frame, in each global time frame each vertex in the vertex data is compared with the current global time frame, which is in effect a comparison with the animation end: for example, in the first time frame, vertices 1 to v are compared with the animation end of the first time frame, and in the second time frame, vertices 1 to v are compared with the animation end of the second time frame. Whether a vertex serves as a target track point under the current global time frame depends on the positional relation between the vertex and the current track point, i.e., on the relation between the vertex frame number and the current animation end frame number; from the perspective of the current animation end, this amounts to determining which vertices to draw as the current animation end or the current animation end tail.
In an alternative embodiment, another procedure for determining a rendering location based on a vertex shader is as follows: for each line animation, the GPU vertex shader performs the following steps:
Step 700, the current global time frame is entered.
In step 710, based on the vertex data of each predetermined trajectory line, the vertices of the predetermined trajectory lines whose animation end time frame is greater than the current global time frame are selected, where for a predetermined trajectory line the animation end time frame is the sum of the track length frame number and the animation line length frame number. If the animation end time frame is less than or equal to the current global time frame, the animation end tail has already reached the end track point of the predetermined trajectory line corresponding to the track length frame number, i.e., the predetermined trajectory line has finished its animation; there is then no need to judge whether each of its track points is a target track point, as none of them is a potential target track point. Conversely, if the animation end time frame is greater than the current global time frame, the animation end tail has not yet reached the end track point of the predetermined trajectory line corresponding to the track length frame number, i.e., the line animation of the predetermined trajectory line is not finished; each of its track points is then a potential target track point, and whether these potential target track points are target track points is judged subsequently.
It can be seen that vertices whose animation end time frame is less than or equal to the current global time frame are not potential target track points, and the flow jumps to step 750; vertices whose animation end time frame is greater than the current global time frame are potential target track points, and the flow proceeds to step 720.
Step 720, judge whether a selected vertex is a target track point according to the relation between the current animation progress of the line animation and the selected vertex (i.e., as in step 203 of the flow shown in fig. 2, determine the target track point under the current global time frame according to the relation between the current global time frame and the track point time frame); the judgment in step 720 may be performed in the same way as step 520 in the flow shown in fig. 5.
If the vertex is a target track point to be drawn, step 730 is executed to judge whether the vertex needs to be offset. When an offset is needed, the position offset required by the vertex is calculated in step 731, the offset position of the vertex is determined from the required position offset, the offset position is passed to the fragment stage as the drawing position, and the flow goes to step 750; when no offset is needed, step 731 is skipped and the vertex's own position is taken directly as the drawing position in step 732. That is, steps 730 to 732 may be regarded as a preferred implementation of step 204 in the flow shown in fig. 2;
If the vertex is not the target trajectory point, then step 740 is performed to determine that the vertex does not require rendering.
Step 750, after step 732 or step 740, it is determined whether all points on the selected animation line have been judged; if so, the current flow ends and proceeds to step 205 of the flow shown in fig. 2; otherwise, step 760 is performed.
Step 760, take the next vertex as the current vertex and return to step 710. This repeats until step 750 confirms that every vertex in the given trajectory line has been judged as to whether it requires rendering, thereby determining the drawing positions of the animation line.
Compared with the flow shown in fig. 5, the flow of this embodiment adds a screening step that uses the animation end time frame to build the candidate set of track points that may be target track points, so that step 720 is not executed for vertices that cannot be target track points, which helps improve the efficiency of determining the drawing positions.
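In shader terms, the screening of step 710 reduces to one comparison per line. A hedged GLSL fragment, as it might appear at the top of the vertex shader of step 303, with u_routeFrameSum and u_selfSum as assumed per-line uniforms and v_hidden as an assumed varying:

```typescript
// Hedged GLSL fragment of step 710; names are assumptions, not patent code.
const screeningSnippet = `
  if (u_routeFrameSum + u_selfSum <= u_globalFrameNum) {
    v_hidden = 1.0;                          // line finished: nothing to draw
    gl_Position = vec4(0.0, 0.0, 2.0, 1.0);  // park the vertex outside the clip volume
    return;                                  // skip the per-case judgment of step 720
  }`;
```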
Referring to fig. 7, fig. 7 is a schematic diagram showing the current animation progress and vertex relationship of the line animation. The current animation progress and vertex relationship of the line animation includes the following cases:
Each vertex is used as the current vertex to carry out the following judgment:
case one: the frame number of the end of the current animation is smaller than the frame number of the vertex before the current vertex, namely: globalFrameNum < prevNum, when the end of the animation does not reach the vertex before the current vertex, the current vertex is judged not to be the target track point.
Case two: the frame number of the current animation end is greater than the frame number of the vertex preceding the current vertex and less than or equal to the frame number of the current vertex, namely prevNum < globalFrameNum ≤ currNum. The animation end has passed the previous vertex but not yet reached the current vertex; the current vertex is judged to be a first target track point, the position offset between the current vertex and the current animation end is calculated (as the product of the frame rate and the time-frame difference between the current vertex and the current animation end), and the position obtained by applying this offset is determined as the drawing position, i.e., the animation end position in case two.
In other words, the current vertex is taken as a first target track point, and the position offset between the current animation end and the first target track point is determined; and determining a first drawing position of an animation end for drawing the current animation along a preset track line according to the position of the first target track point and the position offset.
Case three: the frame number of the current animation end is greater than the frame number of the current vertex, but the difference between the current animation end frame number and the animation line length frame number (which gives the current animation end tail frame number) is smaller than the frame number of the current vertex, namely:
globalFrameNum > currNum but globalFrameNum - selfSum < currNum.
The animation head has passed the current vertex while the tail has not, so the position of the current vertex itself is determined as the drawing position.
That is, the current vertex is taken as a second target track point, and the position of the current track point is determined to be a second drawing position;
case four: the difference between the current animation end frame number and the animation line length frame number (obtained by the current animation end tail frame number) is larger than the current vertex frame number, and the difference between the current animation end frame number and the animation line length frame number (obtained by the current animation end tail frame number) is smaller than or equal to the frame number of the next vertex of the current vertex, namely:
currNum<globalFrameNum–selfSum≤nextNum,
at this time, the end tail of the animation has passed the current vertex but has not reached the next vertex, the position offset between the current vertex and the end tail of the animation is calculated, the position offset is determined according to the product of the frame rate and the frame number difference between the current vertex and the end tail of the animation, and the position offset according to the position offset is determined as the drawing position.
Namely, taking the current vertex as a third target track point, and determining the position offset between the current animation end tail and the third target track point according to the difference value between the current animation end tail time frame and the third target track point time frame and the frame rate; and determining a third drawing position of the end tail of the animation for drawing the current animation along the preset track line according to the position of the third target track point and the position offset.
Case five: the difference between the current animation end frame number and the animation line length frame number (i.e., the current animation end tail frame number) is greater than the frame number of the vertex following the current vertex, namely globalFrameNum - selfSum > nextNum. The animation end tail has passed the vertex following the current vertex, so the current vertex is judged not to be a target track point.
It can be understood that when the animation is initialized, the animation end head and the animation end tail are both drawn at the position of the start track point. As the global time frame increases, while the current global time frame is less than or equal to the animation line length frame number, the animation end is drawn at the drawing position determined in case two and the animation end tail is still drawn at the position of the start track point; once the current global time frame exceeds the animation line length frame number, the animation end tail is drawn at the drawing position determined in case four. When the animation end reaches the end track point, it is drawn at the position of the end track point; as the current global time frame continues to increase, the animation end stays drawn at the end track point while the animation end tail continues to be drawn at the drawing position determined in case four, until the animation end tail also reaches the end track point, at which point the animation of the given trajectory line ends.
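Putting cases one to five together, a hedged GLSL vertex-shader sketch follows (embedded as a TypeScript string). The attribute and uniform names, the per-frame step vectors a_backwardStep / a_forwardStep (one frame's positional offset from the current vertex toward the previous / next vertex, precomputed on the CPU as in the earlier perFrameStep sketch), and the hiding of non-target vertices through a v_hidden varying discarded in the fragment shader are all assumptions and simplifications of the patent's flow, not its verbatim shader:

```typescript
// Hedged vertex-shader sketch of cases one to five; not the patent's code.
const lineAnimationVS = `
  attribute vec3 a_position;      // track point position
  attribute float a_currNum;      // current vertex frame number
  attribute float a_prevNum;      // previous vertex frame number
  attribute float a_nextNum;      // next vertex frame number
  attribute vec3 a_backwardStep;  // one frame's offset toward the previous vertex (assumed)
  attribute vec3 a_forwardStep;   // one frame's offset toward the next vertex (assumed)
  uniform float u_globalFrameNum; // current animation end (head) time frame
  uniform float u_selfSum;        // animation line length frame number
  uniform mat4 u_mvp;
  varying float v_hidden;         // >0: fragment shader discards (simplification)

  void main() {
    float head = u_globalFrameNum;
    float tail = u_globalFrameNum - u_selfSum; // current animation end tail time frame
    vec3 pos = a_position;
    v_hidden = 0.0;

    if (head < a_prevNum) {
      v_hidden = 1.0;                             // case one: head not yet at previous vertex
    } else if (head <= a_currNum) {
      pos += a_backwardStep * (a_currNum - head); // case two: draw at the head position
    } else if (tail < a_currNum) {
      // case three: head has passed, tail has not; keep the vertex's own position
    } else if (tail <= a_nextNum) {
      pos += a_forwardStep * (tail - a_currNum);  // case four: draw at the tail position
    } else {
      v_hidden = 1.0;                             // case five: tail has passed the next vertex
    }
    gl_Position = u_mvp * vec4(pos, 1.0);
  }`;
```

With this per-vertex classification, a single draw call over all bound vertices lets the GPU evaluate every animation line under the same global time frame in parallel, which is the basis of the scheme shown in fig. 8.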
The invention constructs vertex data for the graphics processor for the animation lines to be drawn, so that the animation processing can be moved into the GPU; using the GPU's shader mechanism, the drawing of vertices and the position offsets at drawing time are determined in the GPU via the global time frame, thereby realizing the line animation.
Referring to fig. 8, fig. 8 is a schematic diagram of the method shown in fig. 2 implementing large-scale line animation based on a GPU. For an animation line 1 to be drawn, vertex data 1 of animation line 1 is constructed; for an animation line 2 to be drawn, vertex data 2 of animation line 2 is constructed; and so on, until for an animation line m to be drawn, vertex data m of that animation line is constructed; in an alternative embodiment, m is on the order of millions. A single global time frame is set for the m animation lines.
In the first global time frame, the vertex judgments of the m animation lines in each set of vertex data under the current global time frame are processed in parallel;
in the second global time frame, the vertex judgments of the m animation lines in each set of vertex data under the current global time frame are processed in parallel;
……
and so on for every subsequent global time frame.
As the per-vertex judgments show, for any animation line to be drawn, the computation of each frame consists only of obtaining the current global time frame, judging whether each vertex under the current global time frame is a target track point, and, if so, determining how to draw according to it; compared with the traditional CPU animation implementation, the computation is greatly reduced. Moreover, because the GPU has parallel computing capability, the drawing of every animation line to be drawn under a global time frame can be processed in parallel, which greatly improves the efficiency of realizing the animation and makes line animation at the scale of millions achievable.
Compared with the traditional method, since the animated effect of the line animation cannot be conveyed in text, FPS is used as the evaluation index: with the same number of line animations, a higher FPS means higher animation efficiency. The refresh rate of the computer used in the test is 60 Hz. Taking two track points per line animation as an example, the measured results are shown in the following table:
as can be seen, the invention greatly improves the efficiency of the line animation, and when the number of the animation lines is 100 ten thousand, the efficiency is improved by at least 60 times compared with the traditional method.
When the line animation implementation method of the invention is applied at the browser end, the animation capacity the browser can bear is greatly increased while the line animation effect is maintained, as shown in fig. 9; fig. 9 is a screenshot of the line animation effect at the browser end. As the FPS values show, with 100,000 animation lines the traditional line animation has already lost its frame rate.
Referring to fig. 10, fig. 10 is a schematic diagram of an electronic device for implementing line animation according to the present invention. The electronic device may comprise a memory 1210 and a first processor 1220, the memory 1210 storing a computer program, the first processor 1220 being configured to perform the steps of the method of implementing a line animation as described hereinbefore. For example, the first processor 1220 may be a GPU.
If the first processor 1220 is a GPU, the electronic device may further include a second processor 1230, such as a CPU, for constructing vertex data (i.e., track point data) for the graphics processor for the animation lines to be drawn and binding the vertex data to the VBO of the first processor 1220 (GPU), so that the track point data of the given trajectory line is loaded into the first processor 1220 (GPU) as vertex data.
The Memory 1210 may be a Non-Volatile Memory (NVM) and exist in a stand-alone form as a computer storage medium.
Referring to fig. 11, fig. 11 is a schematic diagram of an apparatus for implementing a line animation according to an embodiment of the present invention. The apparatus may comprise a device for receiving a signal,
a track point data obtaining module 1310 configured to load track point data of a predetermined track line, wherein, for any track point, the track point data includes a track point time frame, and the track point time frame is used for representing a frame number required for the animation end to reach the track point at a predetermined frame rate;
a global time frame determining module 1320, configured to determine a current global time frame, the current global time frame being used to represent a time frame to which an animation end for drawing a current animation line belongs,
A drawing module 1330 configured to determine a target track point in the current global time frame according to the relationship between the current global time frame and the track point time frame;
the drawing module 1330 is further configured to determine a drawing position according to a target track point time frame and a global time frame, where the target track point time frame is a time frame for drawing the target track point;
the drawing module 1330 is further configured to draw the animation line according to the drawing position.
In an alternative embodiment, the drawing module 1330 may include,
a target trajectory point determination submodule configured to:
each track point in the track point data is taken as the current track point respectively:
if the current global time frame is greater than the previous track point time frame and less than or equal to the current track point time frame, determining the current track point as a first target track point;
if the current global time frame is larger than the current track point time frame and the current animation end tail time frame is smaller than the current track point time frame, judging that the current track point is a second target track point;
if the current animation end tail time frame is larger than the current track point time frame and smaller than or equal to the next track point time frame, judging that the current track point is a third target track point;
Here, the previous track point time frame is the number of frames required for the animation end to reach the previous track point at the given frame rate; the next track point time frame is the number of frames required for the animation end to reach the next track point at the given frame rate; and the current animation end tail time frame is the time frame for drawing the current animation end tail, determined from the current global time frame and the animation line length frame number; the animation line length frame number represents the number of frames required to draw the animation line itself.
In an alternative embodiment, the drawing module 1330 may further include:
a rendering position determination sub-module configured to:
if the target track point is the first target track point, determining the position offset between the current animation end and the first target track point according to the difference value of the current global time frame and the track point time frame of the first target track point and the frame rate; determining a first drawing position of an animation end for drawing the current animation along a preset track line according to the position of the first target track point and the position offset;
if the target track point is the second target track point, determining the position of the current track point as a second drawing position;
If the target track point is the third target track point, determining the position offset between the tail of the current animation end and the third target track point according to the difference value between the tail time frame of the current animation end and the time frame of the third target track point and the frame rate; and determining a third drawing position of the end tail of the animation for drawing the current animation along the preset track line according to the position of the third target track point and the position offset.
For the apparatus/network side device/storage medium embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and the relevant points are referred to in the description of the method embodiment.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing describes preferred embodiments of the invention and is not intended to limit the invention; any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the invention shall fall within the scope of the invention.

Claims (10)

1. A method for realizing line animation is characterized in that the method comprises the following steps,
loading track point data of a given track line, wherein for any track point, the track point data comprises a track point time frame, and the track point time frame is used for representing the number of frames required by an animation end to reach the track point at a given frame rate;
determining a current global time frame, wherein the current global time frame is used for representing a time frame to which an animation end for drawing a current animation line belongs;
determining a target track point under the current global time frame according to the relation between the current global time frame and the track point time frame;
determining a drawing position according to a target track point time frame and the global time frame, wherein the target track point time frame is a time frame for drawing the target track point;
drawing the animation line according to the drawing position;
wherein the determining the target track point under the current global time frame according to the relation between the current global time frame and the track point time frame comprises,
Each track point in the track point data is taken as a current track point respectively:
if the current global time frame is greater than the previous track point time frame and is less than or equal to the current track point time frame, determining the current track point as a first target track point;
if the current global time frame is larger than the current track point time frame and the current animation end tail time frame is smaller than the current track point time frame, judging that the current track point is a second target track point;
if the current animation end tail time frame is larger than the current track point time frame and smaller than or equal to the next track point time frame, judging that the current track point is a third target track point;
wherein the previous track point time frame is the number of frames required for the animation end to reach the previous track point at the given frame rate, the next track point time frame is the number of frames required for the animation end to reach the next track point at the given frame rate, and the current animation end tail time frame is the time frame for drawing the current animation end tail; the current animation end tail time frame is determined according to the current global time frame and the animation line length frame number; the animation line length frame number is used to represent the number of frames required to draw the animation line itself.
2. The method of claim 1, wherein if the given trajectory is a straight line, the trajectory point data comprises at least a start trajectory point time frame and an end trajectory point time frame;
the start track point time frame is the number of frames required for the animation end to reach the start track point at the given frame rate, and the end track point time frame is the number of frames required for the animation end to reach the end track point at the given frame rate; the start track point time frame is the same as the initial value of the global time frame.
3. The method of claim 2, wherein,
if the predetermined trajectory is a broken line, the trajectory point data at least further includes an inflection point time frame;
the inflection point time frame is the number of frames required for the animation end to reach the inflection point at a given frame rate.
4. The method according to any one of claims 1 to 3, wherein the determining a drawing position according to a target track point time frame and the global time frame comprises:
if the target track point is the first target track point, determining the position offset between the current animation end and the first target track point according to the difference between the current global time frame and the first target track point time frame, and the frame rate; and determining a first drawing position for drawing the animation end of the current animation line along the given track line according to the position of the first target track point and the position offset;
if the target track point is the second target track point, determining the position of the current track point as a second drawing position;
if the target track point is the third target track point, determining the position offset between the current animation end tail and the third target track point according to the difference between the current animation end tail time frame and the third target track point time frame, and the frame rate; and determining a third drawing position for drawing the animation end tail of the current animation line along the given track line according to the position of the third target track point and the position offset.
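As an illustration of the first branch of claim 4, the drawing position can be recovered from the frame difference alone; headPosition, prev, and speed are assumptions introduced here, and TrackPoint and Pt are reused from the sketches above:

```typescript
function headPosition(
  prev: TrackPoint,   // track point that starts the segment
  target: TrackPoint, // first target track point (segment end)
  globalFrame: number,
  speed: number,      // assumed constant speed, map units per second
  frameRate: number
): Pt {
  // Frame difference between the head and the target, converted to a distance.
  // It is zero or negative, since the head has not yet passed the target.
  const offset = ((globalFrame - target.timeFrame) / frameRate) * speed;
  const segLen = Math.hypot(target.x - prev.x, target.y - prev.y);
  const t = 1 + offset / segLen; // fraction of the segment already covered, in [0, 1]
  return {
    x: prev.x + (target.x - prev.x) * t,
    y: prev.y + (target.y - prev.y) * t,
  };
}
```

The second branch needs no interpolation, since the drawing position is the track point itself; the third branch is symmetric, substituting the current animation end tail time frame for the global time frame.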
5. An apparatus for realizing line animation, characterized in that the apparatus comprises:
the track point data acquisition module is configured to load track point data of a given track line, wherein for any track point, the track point data comprises a track point time frame, and the track point time frame is used for representing the number of frames required by an animation end to reach the track point at a given frame rate;
the global time frame determining module is configured to determine a current global time frame, wherein the current global time frame is used for representing a time frame to which an animation end for drawing a current animation line belongs;
the drawing module is configured to determine a target track point under the current global time frame according to the relation between the current global time frame and the track point time frame;
the drawing module is further configured to determine a drawing position according to a target track point time frame and the global time frame, wherein the target track point time frame is a time frame for drawing the target track point;
the drawing module is further configured to draw the animation line according to the drawing position;
wherein the drawing module comprises:
a target track point determination submodule configured to:
take each track point in the track point data in turn as the current track point:
if the current global time frame is greater than the previous track point time frame and is less than or equal to the current track point time frame, determining the current track point as a first target track point;
if the current global time frame is greater than the current track point time frame and the current animation end tail time frame is less than the current track point time frame, determining that the current track point is a second target track point;
if the current animation end tail time frame is greater than the current track point time frame and less than or equal to the next track point time frame, determining that the current track point is a third target track point;
wherein the previous track point time frame is the number of frames required by the animation end to reach the previous track point at the given frame rate, and the next track point time frame is the number of frames required by the animation end to reach the next track point at the given frame rate; the current animation end tail time frame is the time frame for drawing the current animation end tail, and is determined according to the current global time frame and the animation line length frame number; and the animation line length frame number is used to represent the number of frames required to draw the animation line itself.
6. The apparatus of claim 5, wherein if the given track line is a straight line, the track point data comprises at least a starting track point time frame and an ending track point time frame;
the starting track point time frame is the number of frames required by the animation end to reach the starting track point at the given frame rate, and the ending track point time frame is the number of frames required by the animation end to reach the ending track point at the given frame rate; the starting track point time frame is the same as the initial value of the global time frame.
7. The apparatus of claim 6, wherein if the given track line is a polyline, the track point data further comprises at least an inflection point time frame;
the inflection point time frame is the number of frames required by the animation end to reach the inflection point at the given frame rate.
8. The apparatus of any one of claims 5 to 7, wherein the drawing module further comprises:
a rendering position determination sub-module configured to:
if the target track point is the first target track point, determining the position offset between the current animation end and the first target track point according to the difference between the current global time frame and the first target track point time frame, and the frame rate; and determining a first drawing position for drawing the animation end of the current animation line along the given track line according to the position of the first target track point and the position offset;
if the target track point is the second target track point, determining the position of the current track point as a second drawing position;
if the target track point is the third target track point, determining the position offset between the current animation end tail and the third target track point according to the difference between the current animation end tail time frame and the third target track point time frame, and the frame rate; and determining a third drawing position for drawing the animation end tail of the current animation line along the given track line according to the position of the third target track point and the position offset.
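As a sketch of how the submodules of claims 5 to 8 could combine on each frame; the single-pass traversal and the symmetric tail interpolation are inferences, not claim language, and all names are reused from the sketches above:

```typescript
// Collects the drawing positions for one animation line at one global time
// frame. Because third, second, and first targets occur at increasing point
// indices, one pass emits the vertices in tail-to-head order.
function animationVertices(
  points: TrackPoint[],
  globalFrame: number,
  lineLengthFrames: number, // animation line length frame number
  speed: number,
  frameRate: number
): Pt[] {
  const tailFrame = globalFrame - lineLengthFrames;
  const out: Pt[] = [];
  for (let i = 0; i < points.length; i++) {
    const kind = classifyTrackPoint(
      points[i - 1], points[i], points[i + 1], globalFrame, tailFrame
    );
    if (kind === "third" && i + 1 < points.length) {
      // Tail lies on the segment [i, i+1]; interpolate with the tail frame.
      out.push(headPosition(points[i], points[i + 1], tailFrame, speed, frameRate));
    } else if (kind === "second") {
      out.push({ x: points[i].x, y: points[i].y }); // point inside the drawn line
    } else if (kind === "first" && i > 0) {
      // Head lies on the segment [i-1, i]; interpolate with the global frame.
      out.push(headPosition(points[i - 1], points[i], globalFrame, speed, frameRate));
    }
  }
  return out; // the caller draws a polyline through these vertices
}
```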
9. An electronic device, comprising a memory storing a computer program and a processor configured to perform the steps of the method for realizing line animation according to any one of claims 1 to 4.
10. A computer storage medium, wherein a computer program is stored in the storage medium, and the computer program, when executed by a processor, causes the processor to perform the steps of the method for realizing line animation according to any one of claims 1 to 4.
CN202011001669.4A 2020-09-22 2020-09-22 Method and device for realizing line animation Active CN112116688B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011001669.4A CN112116688B (en) 2020-09-22 2020-09-22 Method and device for realizing line animation

Publications (2)

Publication Number Publication Date
CN112116688A CN112116688A (en) 2020-12-22
CN112116688B (en) 2024-02-02

Family

ID=73800254

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011001669.4A Active CN112116688B (en) 2020-09-22 2020-09-22 Method and device for realizing line animation

Country Status (1)

Country Link
CN (1) CN112116688B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4505760B2 (en) * 2007-10-24 2010-07-21 ソニー株式会社 Information processing apparatus and method, program, and recording medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651998A (en) * 2015-10-27 2017-05-10 北京国双科技有限公司 Canvas-based animation playing speed adjusting method and device
JP2018078431A (en) * 2016-11-09 2018-05-17 日本放送協会 Object tracker and its program
CN111080754A (en) * 2019-12-12 2020-04-28 广东智媒云图科技股份有限公司 Character animation production method and device for connecting characteristic points of head and limbs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant