CN116389832A - Animation resource playing method and device and network live broadcast system - Google Patents

Animation resource playing method and device and network live broadcast system

Info

Publication number
CN116389832A
CN116389832A
Authority
CN
China
Prior art keywords
animation
frame
picture
playing
pictures
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211644885.XA
Other languages
Chinese (zh)
Inventor
梁伟杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Cubesili Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Cubesili Information Technology Co Ltd filed Critical Guangzhou Cubesili Information Technology Co Ltd
Priority to CN202211644885.XA priority Critical patent/CN116389832A/en
Publication of CN116389832A publication Critical patent/CN116389832A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440281Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44004Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving video buffer management, e.g. video decoder buffer or video display buffer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to a method and device for playing animation resources, a network live broadcast system, an electronic device, and a computer storage medium. The method comprises the following steps: obtaining an animation resource and parsing it to obtain each frame of the animation, the animation resource having been generated according to the minimum size played by the client; calculating a magnification factor according to the actual size and the minimum size of the currently playing animation picture, and selecting a corresponding target interpolation algorithm according to the magnification factor, different magnification ranges corresponding to different interpolation algorithms; interpolating the animation pictures with the target interpolation algorithm to obtain pictures to be rendered, enlarged to the actual size; and invoking an animation player to play the pictures to be rendered frame by frame. With this technical scheme, animation resources are loaded with low memory overhead, frame dropping and stuttering are avoided, the sharpness of the played images is preserved to the greatest possible extent, and the playback effect of the animation resources is improved.

Description

Animation resource playing method and device and network live broadcast system
Technical Field
The present application relates to the technical field of video processing, and in particular to a method and device for playing animation resources and a network live broadcast system.
Background
At present, with the large-scale adoption of network live broadcast and short-video applications, all kinds of special-effect animations are designed with graphic design tools. Typically, an engineer designs a set of animation resources and then provides it to client devices for playback. To guarantee that every frame is high definition when the animation is played on the client, the engineer customizes each frame of the animation resource at the maximum size required by the screens of the various clients: if the resource were produced at a small size, pixelation or jagged edges would appear once the animation picture is enlarged, whereas maximum-size customization needs no enlargement and thus guarantees the display quality of the animation on every client.
However, in actual use, when a client such as a small-screen device loads a large-size animation resource, it must allocate as much memory for the image pixels as a large-screen device would, so the performance cost on the client device is particularly high; in severe cases this brings a risk of OOM (Out Of Memory), causing the app to be killed by the system and directly affecting the user experience.
Therefore, although large-size customized animation resources guarantee sharp playback on all kinds of client devices, they easily cause excessive memory usage when played on small-screen clients, carry an OOM risk, and so compromise the playback of the animation resources.
Disclosure of Invention
Based on this, it is necessary to provide a method and device for playing animation resources, a network live broadcast system, an electronic device, and a computer storage medium that guarantee the image sharpness of animation playback on the basis of low memory overhead.
A method for playing animation resources, comprising the following steps:
obtaining an animation resource and parsing it to obtain each frame of the animation; wherein the animation resource is generated according to the minimum size played by the client;
calculating a magnification factor according to the actual size and the minimum size of the currently playing animation picture, and selecting a corresponding target interpolation algorithm according to the magnification factor; wherein different magnification ranges correspond to different interpolation algorithms;
performing interpolation processing on the animation picture according to the target interpolation algorithm to obtain a picture to be rendered that is enlarged to the actual size; and
invoking an animation player to play the picture to be rendered frame by frame.
In one embodiment, the obtaining and parsing of the animation resource to obtain each frame of the animation includes:
when the animation resource is played, parsing it with the system ImageIO framework to obtain each frame of the animation.
In one embodiment, calculating the magnification factor according to the actual size and the minimum size of the currently playing animation picture, and selecting the corresponding target interpolation algorithm according to the magnification factor, includes:
acquiring the actual size of the currently playing animation picture;
calculating the area magnification factor of the animation picture from the actual size and the minimum size; and
selecting the target interpolation algorithm according to a pre-configured correspondence between magnification factors and interpolation algorithms.
In one embodiment, the interpolation algorithms include nearest-neighbor interpolation, bilinear interpolation, and bicubic interpolation:
when p > β, nearest-neighbor interpolation is used;
when α ≤ p ≤ β, bilinear interpolation is used;
when p < α, bicubic interpolation is used;
where p is the magnification factor, α and β are constants, and β > α.
In one embodiment, interpolating the animation picture according to the target interpolation algorithm to obtain a picture to be rendered enlarged to the actual size includes:
acquiring the offset relation between the vertex coordinates of the animation picture and the UI coordinates of the client; and
traversing each pixel of the animation picture, invoking the target interpolation algorithm through an API, and interpolating each pixel according to the offset relation to obtain an image byte array to be rendered.
In one embodiment, the invoking the animation player to play the picture to be rendered frame by frame includes:
drawing the image byte array to be rendered into textures by using an OpenGL texture mapping function;
rendering the texture into a display picture by using an OpenGL rendering function;
storing the display picture into a frame buffer;
and calling the animation player to read the display picture from the frame buffer at the set play frame rate for display.
In one embodiment, the frame buffer is a three-level buffer structure, including frame buffer a, frame buffer B, and frame buffer C;
the storing the display picture into a frame buffer includes:
storing the display picture into a frame buffer A;
when frame buffer A is occupied with reading or writing data and frame buffer B is empty, storing the display picture into frame buffer B;
when the frame buffer B is full, storing the display picture into a frame buffer C;
the calling the animation player to read the display picture from the frame buffer for display at the set play frame rate comprises the following steps:
and calling an animation player to read the display picture from the frame buffer A at a preset frame rate for display.
In one embodiment, before the animation player is invoked for frame-by-frame playing, the method further comprises:
acquiring the current memory utilization of the playback;
acquiring the type of a target interpolation algorithm used;
and calculating the playing frame rate of the animation player according to the memory utilization rate and the type of the target interpolation algorithm, and calling the animation player to read the display picture from a frame buffer according to the playing frame rate for display.
In one embodiment, the method for playing the animation resource further includes:
acquiring the number of frame pictures in the animation resource;
calculating the starting delay time of the animation player according to the playing frame rate and the number of frame pictures;
and controlling the animation player to display the display picture after the starting delay time.
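The start-delay computation described above can be illustrated with a minimal sketch. The patent states only the inputs (playing frame rate and number of frame pictures), so the prebuffer fraction and the formula itself are assumptions:

```python
def start_delay(num_frames: int, fps: float, prebuffer_ratio: float = 0.1) -> float:
    """Hypothetical start-delay rule: wait long enough for a fraction of the
    animation's frames to be decoded and buffered before playback begins.
    prebuffer_ratio is an assumed tuning constant, not from the patent."""
    prebuffer_frames = max(1, int(num_frames * prebuffer_ratio))
    return prebuffer_frames / fps
```

Under these assumptions, a 60-frame animation played at 30 fps waits for 6 frames, i.e. 0.2 s, before the first display picture is shown.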
A playback apparatus of an animation resource, comprising:
the analysis module is used for acquiring animation resources and analyzing to obtain animation pictures of each frame; wherein, the animation resource is generated according to the minimum size played by the client;
the selection module is used for calculating the amplification factor according to the actual size and the minimum size of the current playing animation picture, and selecting a corresponding target interpolation algorithm according to the amplification factor; wherein, different magnification ranges correspond to different interpolation algorithms;
the amplifying module is used for carrying out interpolation processing on the animation pictures according to the target interpolation algorithm to obtain pictures to be rendered amplified to the actual size;
and the playing module is used for calling the animation player to play the pictures to be rendered frame by frame.
A network live broadcast system, comprising: a live broadcast server and a plurality of clients; the client is connected to the live broadcast server through a network;
the live broadcast server is used for generating animation resources according to the minimum size played by the client and transmitting the animation resources to each client;
the client is configured to execute the steps of the playing method of the animation resource.
An electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to perform the method of playing animation resources described above.
A computer storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to perform the method of playing an animation resource described above.
According to the above technical scheme, the animation resource is first generated at the minimum size played by any client. During playback, the resource is parsed into individual animation frames, the magnification factor is calculated, a corresponding target interpolation algorithm is selected, each animation picture is interpolated by that algorithm to obtain a picture to be rendered, and the pictures to be rendered are placed into a frame buffer in turn and played frame by frame. In this way, animation resources are loaded with low memory overhead, and frame dropping and stuttering are avoided; moreover, because a suitable interpolation algorithm is selected for each magnification factor when the resource is played at different sizes, the sharpness of the played images is preserved as far as possible and the playback effect of the animation resources is improved.
Furthermore, the playing frame rate of the animation player is calculated from the memory utilization and the type of target interpolation algorithm, so that animation resources play smoothly on different kinds of client devices, frame dropping during playback is avoided, and the sharpness of the animation pictures is raised as far as possible.
Furthermore, the start delay of the animation player is calculated from the playing frame rate and the number of frame pictures, so that the player starts after a certain delay when playing the animation pictures; this prevents the client from stuttering when playing the animation resource and improves the playback effect.
Drawings
FIG. 1 is a schematic diagram of an exemplary hardware environment;
FIG. 2 is a flow chart of a method of playing an animation resource of an embodiment;
FIG. 3 is a flow chart of an example animation player playing an animated picture;
FIG. 4 is a schematic diagram of an exemplary frame buffer structure;
FIG. 5 is a schematic diagram of a playback apparatus of an animation resource according to an embodiment;
FIG. 6 is a schematic diagram of a webcast system architecture of one embodiment;
fig. 7 is a schematic diagram of an exemplary electronic device structure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In the embodiments of the present application, references to "first", "second", etc. are used to distinguish identical or similar items that have substantially the same function and effect; "at least one" means one or more, and "a plurality" means two or more, e.g. a plurality of objects means two or more objects. The words "comprise" or "comprising" mean that the information preceding them encompasses the information listed after them and its equivalents, without excluding additional information. References to "and/or" indicate that three relationships may exist, and the character "/" generally indicates that the associated objects are in an "or" relationship.
Referring to fig. 1, fig. 1 is a schematic diagram of an exemplary hardware environment in which a designer produces various animation resources that are played by client devices over a network. By applying the technical scheme of the embodiments, a client playing an animation resource can meet the high-definition requirements of client devices of different sizes while keeping memory overhead low; that is, the OOM risk and stuttering during playback are avoided while the greatest possible sharpness is obtained.
Referring to fig. 2, fig. 2 is a flowchart of an exemplary playing method of an animation resource, which mainly includes the following steps:
s10, obtaining animation resources and analyzing to obtain animation pictures of each frame; the animation resource is generated according to the minimum size played by the client.
In this step, the designer uses an image processing tool to produce the various animation resources, authoring them at the minimum size among the screen sizes of all playing clients, so that a client can load an animation resource with the least possible memory.
When playing an animation resource, the client downloads it from the server and then parses it to obtain each frame of the animation. During parsing, the system ImageIO framework (a low-level image read/write framework) can be used to extract the frames; using ImageIO reduces the extra memory consumption that picture scaling would otherwise cause.
S20, calculating the magnification factor according to the actual size and the minimum size of the current playing animation picture, and selecting a corresponding target interpolation algorithm according to the magnification factor; wherein, different magnification ranges correspond to different interpolation algorithms.
In this step, different interpolation algorithms corresponding to different magnification ranges are configured in advance at the client, and various interpolation algorithms are packaged in advance at the client and provide an API for calling.
Accordingly, in one embodiment, when the client plays the animation picture, the actual size of the animation picture that the client needs to play currently is first obtained, then the magnification is calculated according to the ratio of the actual size to the minimum size, and the corresponding target interpolation algorithm is selected according to the magnification.
As one example, the interpolation algorithms of the present application may include nearest-neighbor interpolation, bilinear interpolation, bicubic interpolation, and the like, so that the client can dynamically select the required interpolation algorithm according to the actual size of the played animation picture. On the one hand, this keeps the memory overhead of the running interpolation algorithm within a suitable range and avoids stuttering and similar effects that would degrade the playback performance of the client device; on the other hand, by selecting the most effective interpolation algorithm that the client device's memory resources allow, the best possible sharpness can be obtained.
Preferably, the magnification ranges and interpolation algorithms can be configured in the following form:
(1) when p > β, nearest-neighbor interpolation is used;
(2) when α ≤ p ≤ β, bilinear interpolation is used;
(3) when p < α, bicubic interpolation is used.
Here α and β are constants with β > α, and the magnification factor is p = (L×H)/(l×h), where L and H are the length and width of the actual size at which the client plays the animation picture, and l and h are the length and width of the minimum size. The concrete magnification ranges and the interpolation algorithm assigned to each can be determined as required.
And S30, carrying out interpolation processing on the animation picture according to the target interpolation algorithm to obtain a picture to be rendered, which is amplified to the actual size.
In this step, sharpness is preserved by interpolation during enlargement: the animation picture is interpolated according to the selected target interpolation algorithm, yielding a picture to be rendered that is enlarged to the actual size.
In one embodiment, for the interpolation processing method of step S30, the following may be included:
s301, acquiring an offset relation between the vertex coordinates of the animation picture and UI interface coordinates of the client.
Because image data is a two-dimensional array, when OpenGL is used to render an image, the coordinate values of the image matrix must be mapped into a two-dimensional coordinate system. The OpenGL vertex coordinate system takes the center of the image as its origin, whereas the UI of a typical client takes the top-left corner as its origin, so there is an offset relation between the two coordinate systems, which can be written as (x1, y1) = (x − 0.5, y − 0.5). Correspondingly, when the matrix coordinates of the enlarged animation picture are mapped back to the matrix coordinates of the original picture, the relation (x2, y2) = k·(x − 0.5) + 0.5 must be satisfied, where (x, y) are the matrix coordinates of the original animation picture, (x1, y1) are the vertex coordinates, (x2, y2) are the matrix coordinates of the enlarged picture, and k is the scaling ratio.
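Assuming the offset relation described above is the usual half-pixel center-alignment mapping, a minimal sketch of mapping an enlarged-picture pixel back to fractional source coordinates looks like this (the function name and signature are illustrative, not from the patent):

```python
def src_coord(dst_x, dst_y, scale_x, scale_y):
    """Map a pixel of the enlarged picture back to fractional source
    coordinates with the half-pixel (center-alignment) offset, assuming
    scale_* = destination size / source size."""
    x = (dst_x + 0.5) / scale_x - 0.5
    y = (dst_y + 0.5) / scale_y - 0.5
    return x, y
```

The fractional coordinates returned here are exactly what an interpolation routine consumes: the integer part selects the neighboring source pixels, the fractional part weights them.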
S302, traversing each pixel point of the animation picture, calling a target interpolation algorithm through an API, and carrying out interpolation processing on each pixel point according to the offset relation to obtain an image byte array to be rendered.
Specifically, taking nearest neighbor interpolation, bilinear interpolation, and bicubic interpolation as examples, the interpolation process may specifically be as follows:
(1) Nearest neighbor interpolation (i.e., single linear interpolation):
Nearest-neighbor interpolation needs two coordinates: each pixel of the animation picture is traversed, and the current pixel coordinate, the next pixel coordinate in the horizontal direction, and the mathematical relation are passed into the system API; the nearest-neighbor interpolation method is then invoked to perform the interpolation and obtain the image byte array to be rendered.
(2) Bilinear interpolation:
Bilinear interpolation is a two-dimensional interpolation algorithm built on nearest-neighbor interpolation and needs three coordinates: each pixel of the animation picture is traversed, and the current pixel coordinate, the next coordinate in the horizontal direction, the next coordinate in the vertical direction, and the mathematical relation are passed into the system API; the bilinear interpolation method is then invoked to perform the interpolation and obtain the image byte array to be rendered.
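A self-contained sketch of the bilinear sampling step, using a grayscale image stored as a list of rows; the border clamping and the function shape are assumptions, since the patent delegates the actual computation to a system API:

```python
def bilinear(img, x, y):
    """Bilinearly sample a 2-D grayscale image at fractional (x, y),
    clamping at the border. Illustrative only; not the patent's exact API."""
    h, w = len(img), len(img[0])
    x0 = max(0, min(w - 1, int(x))); x1 = min(w - 1, x0 + 1)
    y0 = max(0, min(h - 1, int(y))); y1 = min(h - 1, y0 + 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx   # blend along x, upper row
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx   # blend along x, lower row
    return top * (1 - fy) + bot * fy                  # blend along y
```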
(3) Bicubic interpolation:
The boundary condition is formulated first: the cubic interpolation function needs a border-handling conditional expression, which the client can customize, for example as: x < 1 or y < 1 or x > col or y > row;
where col and row are the width and height of the animation picture parsed by ImageIO.
Then the bicubic interpolation is performed: the image bytes, the mathematical relation, and the boundary condition expression are passed into the system API with the bicubic interpolation function, and the bicubic interpolation method is invoked to perform the interpolation and obtain the image byte array to be rendered.
S40, calling an animation player to play the picture to be rendered frame by frame.
In this step, each animation frame is traversed: each picture to be rendered is rendered into a picture with OpenGL and placed into a frame buffer, after which the client's animation player is invoked to display the pictures frame by frame.
In one embodiment, referring to fig. 3, fig. 3 is a flowchart of an example animation player playing an animation picture, and the playing process of step S40 may include the following:
s401, drawing the image byte array to be rendered into textures by using an OpenGL texture mapping function.
S402, rendering the texture into a display picture by using an OpenGL rendering function.
S403, storing the display picture in a frame buffer.
S404, calling an animation player to read the display picture from the frame buffer at the set play frame rate for display.
In one embodiment, to avoid frame dropping of the picture to be rendered, the frame buffer may employ a three-level buffer structure, as shown in fig. 4, and fig. 4 is a schematic diagram of an exemplary frame buffer structure, including frame buffer a, frame buffer B, and frame buffer C; accordingly, in the process of storing the display picture into the frame buffer, the following may be adopted:
storing the display picture into a frame buffer A; when the frame buffer A reads and writes data and the frame buffer B is empty, storing the display picture into the frame buffer B; when the frame buffer B is full, storing the display picture into a frame buffer C; the buffer sizes of the frame buffer B and the frame buffer C may be set according to actual needs.
When a display picture is played, the animation player is invoked to read display pictures from frame buffer A at the preset frame rate fps, i.e. at intervals of 1/fps.
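The three-level buffer scheme can be sketched as follows. The capacities and the refill-on-read policy are assumptions; the patent fixes only the A → B → C overflow order and that the player always reads from buffer A:

```python
from collections import deque

class TripleFrameBuffer:
    """Sketch of the frame buffer A/B/C structure: the renderer writes into
    the first buffer with room; the player reads from A at 1/fps intervals."""
    def __init__(self, capacity=8):
        self.buffers = {name: deque() for name in "ABC"}
        self.capacity = capacity

    def put(self, frame):
        for name in "ABC":  # overflow order A -> B -> C
            if len(self.buffers[name]) < self.capacity:
                self.buffers[name].append(frame)
                return name
        return None         # every buffer full: the frame is dropped

    def read(self):
        frame = self.buffers["A"].popleft() if self.buffers["A"] else None
        if self.buffers["B"]:                       # refill A from B,
            self.buffers["A"].append(self.buffers["B"].popleft())
        if self.buffers["C"]:                       # and B from C
            self.buffers["B"].append(self.buffers["C"].popleft())
        return frame
```

The design decouples the renderer from the player: bursts of rendered frames overflow into B and C instead of being lost, while the player sees a single steady queue.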
In one embodiment, in order to ensure smoothness of the client when playing the animation resources and avoid a frame dropping phenomenon, the technical solution of the present application may further include the following steps before invoking the animation player to perform frame-by-frame rendering in step S40:
(a) Acquiring the current memory utilization rate of playback; specifically, the current memory overhead can be monitored through the memory utilization rate.
(b) Acquiring the type of the target interpolation algorithm in use; specifically, nearest neighbor interpolation, bilinear interpolation, or bicubic interpolation as described above. Each algorithm has a different memory overhead and can serve as a reference for setting the playing frame rate.
(c) And calculating the playing frame rate of the animation player according to the memory utilization rate and the type of the target interpolation algorithm, and calling the animation player to read the display picture from a frame buffer according to the playing frame rate for display.
According to the technical scheme of this embodiment, the playing frame rate of the animation player is calculated from the memory utilization rate and the type of the target interpolation algorithm: when the memory utilization rate is high and the memory overhead of the target interpolation algorithm is large, the playing frame rate of the animation player is reduced; otherwise, it can be raised. In this way, the animation resources can be played smoothly on different types of client devices, frame dropping during playback is avoided, and the definition of the animation pictures can be improved as much as possible.
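The patent does not publish a concrete formula for this calculation; the sketch below is one plausible realization, with the overhead weights, base rate, and minimum rate all being illustrative assumptions:

```python
# Hedged sketch: deriving the playing frame rate from the memory
# utilization rate and the chosen interpolation algorithm. The
# overhead weights and the base/minimum frame rates are invented
# for illustration; the patent specifies no concrete values.

ALGO_OVERHEAD = {"nearest": 1.0, "bilinear": 1.5, "bicubic": 2.5}

def playing_frame_rate(memory_utilization, algo, base_fps=30, min_fps=10):
    """Scale the frame rate down as memory pressure and overhead rise."""
    load = memory_utilization * ALGO_OVERHEAD[algo] / ALGO_OVERHEAD["bicubic"]
    fps = base_fps * (1.0 - min(load, 1.0))
    return max(int(fps), min_fps)

low = playing_frame_rate(0.2, "nearest")        # light load: near base rate
high_pressure = playing_frame_rate(0.9, "bicubic")  # heavy load: floor rate
```

Any monotonically decreasing mapping from load to frame rate would satisfy the scheme's requirement; the clamping to a minimum rate keeps the animation from stalling entirely.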
In one embodiment, in order to avoid stuttering of the client when playing the animation resource, the technical scheme of the present application may further include the following steps before playing the display picture:
(d) Acquiring the number of frame pictures in the animation resource; specifically, the animation resource is traversed and parsed to count the animation pictures it contains.
(e) Calculating the starting delay time of the animation player according to the playing frame rate and the number of frame pictures; specifically, after the animation resources are loaded, the time by which the animation player needs to delay its start is calculated from the playing frame rate and the number of frame pictures, thereby avoiding a situation in which rendering cannot keep up with the playing speed because the player started too early.
(f) Controlling the animation player to display the display picture after the starting delay time; specifically, the starting delay time elapses while each frame of animation picture is traversed for interpolation processing, and the player then begins display.
According to this technical scheme, the starting delay time of the animation player is calculated from the playing frame rate and the number of frame pictures, so that the animation player starts only after a certain delay when playing the animation pictures. This avoids stuttering of the client when playing the animation resources and improves the playing effect.
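The patent states that the delay combines the playing frame rate and the frame count but gives no formula; the sketch below prebuffers a fixed fraction of the frames, which is purely an assumption for illustration:

```python
# Hedged sketch: start-delay calculation from frame rate and frame
# count. The prebuffer ratio is an invented assumption; the patent
# specifies only that both quantities enter the calculation.

def start_delay(frame_count, fps, prebuffer_ratio=0.25):
    """Seconds to wait so that a fraction of the frames is rendered
    (one play interval each) before the player starts."""
    prebuffered = max(1, int(frame_count * prebuffer_ratio))
    return prebuffered / fps

delay = start_delay(frame_count=40, fps=20)  # -> 0.5 seconds
```

Larger animations or slower interpolation would argue for a larger ratio; the point is only that the delay grows with the frame count and shrinks with the frame rate.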
According to the above technical scheme, by producing animation resources at the minimum size, memory overhead is well controlled when the resources are played on client devices of all screen sizes. For example, when a small-size animation resource is played on a small screen, the animation pictures play normally with the aspect ratio unchanged and OOM risks avoided; when it is played on a large screen, the resource can be loaded with lower memory overhead, and an appropriate interpolation algorithm is selected according to the amplification factor to interpolate the animation pictures, so that the enlarged pictures still retain high definition. The performance and effect of playing the animation resources are thereby improved.
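The magnification-to-algorithm mapping used in this scheme (also stated in claim 4 below) can be sketched directly; the threshold values for alpha and beta are illustrative assumptions, since the patent leaves them as configurable constants:

```python
# Hedged sketch of the magnification-based selection of the target
# interpolation algorithm. ALPHA and BETA are assumed values; the
# patent only requires that they are constants with BETA > ALPHA.

ALPHA, BETA = 1.5, 3.0

def select_interpolation(p):
    """Pick the target interpolation algorithm for magnification p."""
    if p > BETA:
        return "nearest"    # p > beta: nearest neighbor interpolation
    if p >= ALPHA:
        return "bilinear"   # alpha <= p <= beta: bilinear interpolation
    return "bicubic"        # p < alpha: bicubic interpolation

chosen = select_interpolation(2.0)  # -> "bilinear"
```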
An embodiment of a playback apparatus for animation resources is described below.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a playing device of an animation resource according to an embodiment, where the device includes:
the analysis module 10 is used for acquiring animation resources and analyzing to obtain animation pictures of each frame; wherein, the animation resource is generated according to the minimum size played by the client;
the selection module 20 is configured to calculate a magnification factor according to an actual size and the minimum size of the currently played animation picture, and select a corresponding target interpolation algorithm according to the magnification factor; wherein, different magnification ranges correspond to different interpolation algorithms;
the amplifying module 30 is configured to perform interpolation processing on the animation image according to the target interpolation algorithm to obtain a to-be-rendered image amplified to the actual size;
and the playing module 40 is used for calling the animation player to play the pictures to be rendered frame by frame.
The playing device of animation resources of this embodiment may execute the playing method of animation resources provided by the embodiments of the present application, and its implementation principle is similar. The actions executed by each module of the playing device correspond to the steps of the playing method in the embodiments of the present application; for detailed functional descriptions of each module, reference may be made to the descriptions of the corresponding playing method shown above, which are not repeated here.
An embodiment of a webcast system is set forth below.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a network live broadcast system according to an embodiment, including: a live broadcast server and a plurality of clients; the client is connected to the live broadcast server through a network;
the live broadcast server is used for generating animation resources according to the minimum size played by the client and transmitting the animation resources to each client; the client is configured to execute the steps of the playing method of the animation resource of any of the above embodiments.
When resources such as animation special effects need to be delivered to client devices for playing, the network live broadcast system first generates the animation resources on the live broadcast server according to the minimum size played by the clients. When the animation resources are played, the client parses them to obtain each frame of animation picture, calculates the amplification factor, selects a corresponding target interpolation algorithm, performs interpolation processing on the animation pictures with that algorithm during enlargement to obtain the pictures to be rendered, and finally places the pictures to be rendered into a frame buffer in sequence for frame-by-frame rendering.
The network live broadcast system enables various clients to load animation resources with lower memory overhead, avoids frame dropping and stuttering, guarantees the picture definition of animation resource playback to the maximum extent, and improves the playing effect.
Embodiments of an electronic device and a computer storage medium are described below.
An electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to perform the method of playing animation resources described above.
Referring to fig. 7, fig. 7 is a schematic diagram of an exemplary electronic device including a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device includes a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for the operation of the operating system and the computer program. The communication interface of the electronic device is used for wired or wireless communication with external devices; the wireless communication can be realized through WIFI, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements the playing method of animation resources described above. The display screen of the electronic device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, keys, a trackball, or a touch pad arranged on the housing of the electronic device, or an external keyboard, touch pad, or mouse.
A computer storage medium storing at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded by the processor and executing the method of playing an animation resource as described above.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded nonvolatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory can include random access memory (RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM is available in a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The foregoing examples represent only a few embodiments of the present application; they are described in some detail, but are not to be construed as limiting the scope of the invention. It should be noted that various modifications and improvements can be made by those skilled in the art without departing from the spirit of the present application, and these fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (13)

1. A method for playing an animation resource, comprising:
obtaining animation resources and analyzing to obtain animation pictures of each frame; wherein, the animation resource is generated according to the minimum size played by the client;
calculating the magnification factor according to the actual size and the minimum size of the current playing animation picture, and selecting a corresponding target interpolation algorithm according to the magnification factor; wherein, different magnification ranges correspond to different interpolation algorithms;
performing interpolation processing on the animation picture according to the target interpolation algorithm to obtain a picture to be rendered, which is amplified to the actual size;
and calling an animation player to play the picture to be rendered frame by frame.
2. The method for playing an animation resource according to claim 1, wherein the steps of obtaining an animation resource and parsing the animation resource to obtain each frame of animation picture comprise:
when the animation resources are played, parsing the animation resources by using the system ImageIO framework to obtain each frame of animation picture.
3. The method for playing an animation resource according to claim 1, wherein calculating a magnification according to an actual size of a currently played animation picture and the minimum size, and selecting a corresponding target interpolation algorithm according to the magnification comprises:
acquiring the actual size of the animation picture currently played;
calculating the magnification factor of the area of the animation picture according to the actual size and the minimum size;
and selecting a target interpolation algorithm according to the corresponding relation of the pre-configured magnification and interpolation algorithm.
4. A playing method of animation resources according to claim 3, wherein the interpolation algorithm comprises nearest neighbor interpolation, bilinear interpolation and bicubic interpolation;
when p > β, the nearest neighbor interpolation method is adopted;
when α ≤ p ≤ β, the bilinear interpolation method is adopted;
when p < α, the bicubic interpolation method is adopted;
wherein p is the magnification, α and β are constants, and β > α.
5. The method for playing an animation resource according to claim 3, wherein the interpolating the animation picture according to the target interpolation algorithm to obtain a picture to be rendered enlarged to the actual size comprises:
acquiring an offset relation between vertex coordinates of the animation picture and UI interface coordinates of a client;
and traversing each pixel point of the animation picture, calling a target interpolation algorithm through an API, and carrying out interpolation processing on each pixel point according to the offset relation to obtain an image byte array to be rendered.
6. The method for playing an animation resource according to claim 5, wherein the invoking the animation player to play the picture to be rendered frame by frame comprises:
drawing the image byte array to be rendered into textures by using an OpenGL texture mapping function;
rendering the texture into a display picture by using an OpenGL rendering function;
storing the display picture into a frame buffer;
and calling the animation player to read the display picture from the frame buffer at the set play frame rate for display.
7. The method for playing back animation resources according to claim 6, wherein the frame buffer is a three-level buffer structure comprising a frame buffer A, a frame buffer B and a frame buffer C;
the storing the display picture into a frame buffer includes:
storing the display picture into a frame buffer A;
when the frame buffer A reads and writes data and the frame buffer B is empty, storing the display picture into the frame buffer B;
when the frame buffer B is full, storing the display picture into a frame buffer C;
the calling the animation player to read the display picture from the frame buffer for display at the set play frame rate comprises the following steps:
and calling an animation player to read the display picture from the frame buffer A at a preset frame rate for display.
8. The playback method of an animation resource as recited in any one of claims 1-7, further comprising, prior to invoking the animation player to render frame-by-frame:
acquiring the utilization rate of the memory currently played;
acquiring the type of a target interpolation algorithm used;
and calculating the playing frame rate of the animation player according to the memory utilization rate and the type of the target interpolation algorithm, and calling the animation player to read the display picture from a frame buffer according to the playing frame rate for display.
9. The method for playing an animation resource according to claim 8, further comprising:
acquiring the number of frame pictures in the animation resource;
calculating the starting delay time of the animation player according to the playing frame rate and the number of frame pictures;
and controlling the animation player to display the display picture after the starting delay time.
10. A playback apparatus for an animation resource, comprising:
the analysis module is used for acquiring animation resources and analyzing to obtain animation pictures of each frame; wherein, the animation resource is generated according to the minimum size played by the client;
the selection module is used for calculating the amplification factor according to the actual size and the minimum size of the current playing animation picture, and selecting a corresponding target interpolation algorithm according to the amplification factor; wherein, different magnification ranges correspond to different interpolation algorithms;
the amplifying module is used for carrying out interpolation processing on the animation pictures according to the target interpolation algorithm to obtain pictures to be rendered amplified to the actual size;
and the playing module is used for calling the animation player to play the pictures to be rendered frame by frame.
11. A network live broadcast system, comprising: a live broadcast server and a plurality of clients; the client is connected to the live broadcast server through a network;
the live broadcast server is used for generating animation resources according to the minimum size played by the client and transmitting the animation resources to each client;
the client being configured to perform the steps of the playing method of the animation resource of any of claims 1-9.
12. An electronic device, comprising:
one or more processors;
a memory;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more application programs configured to perform the steps of the method of playing an animation resource of any of claims 1-9.
13. A computer storage medium storing at least one instruction, at least one program, code set, or instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded by a processor to perform the steps of the playing method of the animation resource of any of claims 1-9.
CN202211644885.XA 2022-12-16 2022-12-16 Animation resource playing method and device and network live broadcast system Pending CN116389832A (en)


Publications (1)

Publication Number Publication Date
CN116389832A true CN116389832A (en) 2023-07-04



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination