CN112150591B - Intelligent animation and layered multimedia processing device - Google Patents

Intelligent animation and layered multimedia processing device

Info

Publication number
CN112150591B
Authority
CN
China
Prior art keywords: frame buffer, texture, layer, information, image data
Prior art date
Legal status
Active
Application number
CN202011056771.4A
Other languages
Chinese (zh)
Other versions
CN112150591A
Inventor
林青山 (Lin Qingshan)
Current Assignee
Guangzhou Guangzhuiyuan Information Technology Co., Ltd.
Original Assignee
Guangzhou Guangzhuiyuan Information Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Guangzhou Guangzhuiyuan Information Technology Co., Ltd.
Priority to CN202011056771.4A
Publication of CN112150591A
Application granted
Publication of CN112150591B
Status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation

Abstract

The invention relates to an intelligent animation and layered multimedia processing device, which comprises a layer rendering module, an image content input module, a special effect processing module, an animation control module, a rendering display control module and a compression coding export module. Through the cooperation of these modules, the device supports intelligent animation and layer-based editing of multimedia content.

Description

Intelligent animation and layered multimedia processing device
Technical Field
The invention belongs to the technical field of multimedia, and particularly relates to an intelligent animation and layered multimedia processing device.
Background
With the worldwide popularization of mobile devices, the growth of their computing power and the maturation of internet infrastructure, users increasingly want to express their individuality by making videos on mobile devices. However, existing mobile image and video processing software cannot edit multiple multimedia files of different types in layers, cannot flexibly select the layers a special effect applies to, and offers only monotonous, non-customizable animation, so it cannot support users' flexible and diversified expression needs.
In the related art, image and video editing software on the market allows various content materials such as video, GIF, still pictures and text to be selected for editing, but it has the following problems:
1. Multiple content materials cannot be organized into layers, so the display hierarchy between content materials of different types cannot be adjusted freely. Content materials of different types can only be displayed by the corresponding system controls provided by the Android platform, and the hierarchical relationship between controls of different types is fixed and cannot be adjusted.
2. The target of a special effect cannot be set to an arbitrary layer. A special effect resource can only be applied to a single layer of a specific type; it cannot be applied to an arbitrary group of adjacent layers as a whole.
3. Only the layer animations of the predefined spatial-motion types built into the application system are supported. Because the animation scheme provided by the system is used, the user can neither define the motion trajectory of a layer animation autonomously nor customize richer animation types beyond spatial motion, such as changes in filter effect intensity.
Disclosure of Invention
In view of the above, the present invention aims to overcome the shortcomings of the prior art and provide an intelligent animation and layered multimedia processing device, so as to solve the prior-art problems that multiple multimedia files of different types cannot be edited in layers, special effect layers cannot be selected flexibly, and animation is monotonous and non-customizable, leaving users' flexible and diversified expression needs unsupported.
In order to achieve the above purpose, the invention adopts the following technical scheme: an intelligent animation and layered multimedia processing apparatus, comprising: a layer rendering module, an image content input module, a special effect processing module, an animation control module, a rendering display control module and a compression coding export module;
the image content input module is used for receiving input information, analyzing it to obtain basic information, and acquiring image data in a unified format according to the basic information; the input information comprises multimedia files in various formats and/or text information;
the layer rendering module is used for calculating the spatial position of a layer according to the basic information, updating the spatial position of the layer, and uploading the image data into a texture that stores the initial image data;
the special effect processing module is used for generating a special effect processing pipeline according to the special effect settings, and the special effect processing pipeline is used for processing the texture of the initial image data and transmitting the processing result image to the layer rendering module;
the animation control module is used for calculating the current animation parameters according to the animation settings and a clock signal, wherein the animation parameters comprise layer position change parameters and special effect parameters; the layer position change parameters are transmitted to the layer rendering module to participate in determining the spatial position of a layer, and the special effect parameters are transmitted to the special effect processing module to participate in determining the processing effect on the image data;
the layer rendering module is also used for merging a plurality of layers according to the spatial positions of the layers and the processing result images to obtain the final image processing data;
the rendering display control module is used for displaying the image processing data, and is also used for providing a playing and rendering clock and coordinating the rates of rendering input and rendering output;
the compression coding export module is used for selecting a coding export mode according to a preset configuration and outputting the image processing data as a multimedia file.
Further, when the input information is a multimedia file, the basic information includes: file type, width, height and rotation angle;
when the input information is text information, the basic information includes: text content and word size information.
Further, receiving the input information, analyzing it to obtain the basic information, and obtaining the image data in the unified format according to the basic information includes:
receiving the input information and judging its file type;
selecting a decoding program corresponding to the file type;
and decoding the multimedia file with the decoding program and unifying the decoding results into RGBA-format image data, and/or rendering the text information to generate RGBA-format text image data.
Further, calculating the spatial position of the layer according to the basic information and updating the spatial position of the layer include:
calculating the initial size and display position of the layer according to the file type, width, height and rotation angle of the multimedia file and the screen resolution of the current device;
and updating the size and display position of the layer according to the user's move operations, the user's layer adjustment operations and the animation parameters.
Further, the carrier of the image data is a layer;
the animation control module adds animation settings for the layers;
the animation settings are provided in the form of a list of key-value pairs, the keys being time stamps and the values being animation parameters.
Further, the special effect processing module comprises a chained processing unit;
the chained processing unit is used for linking the special effects end to end in a preset order, taking the texture of the input initial image data and outputting the texture of the processing result image data.
Further, taking the texture of the input initial image data and outputting the texture of the processing result image data includes:
constructing an input texture TS and an output texture TE; given N special effects, marking them F1, F2, ..., FN in the set processing order; and constructing a frame buffer pool, denoted Pool, which creates, recycles and manages frame buffer resources and externally exposes interfaces for requesting and returning frame buffers;
inputting the texture TS of the initial image data into F1, requesting a first frame buffer from Pool, denoted FB1, denoting the texture attached to FB1 as T1, and setting FB1 as the output target of F1;
executing F1, and outputting the result to FB1;
inputting T1 into F2, requesting a second frame buffer from Pool, denoted FB2, denoting the texture attached to FB2 as T2, and setting FB2 as the output target of F2;
executing F2, outputting the result to FB2, and returning FB1 to Pool;
inputting T2 into F3, requesting a frame buffer from Pool and obtaining FB1 again, and multiplexing FB1 as the output target of F3;
executing F3, outputting the result to FB1, and returning FB2 to Pool;
inputting T1 into F4, requesting a frame buffer from Pool and obtaining FB2 again, and multiplexing FB2 as the output target of F4;
executing F4, and outputting the result to FB2;
repeating these steps until the final special effect FN;
and executing FN, outputting the result to TE, and returning to Pool the frame buffer to which the input texture of FN is attached.
Further, the internal mechanism of Pool includes:
an idle frame buffer table for recording the frame buffers that are currently idle and available;
an occupied frame buffer table for recording the frame buffers that have been requested and are in use;
and a maximum idle frame buffer count threshold: when the number of idle frame buffers exceeds this threshold, a release operation on idle frame buffer resources is triggered;
when a frame buffer request is received, checking whether the idle frame buffer table contains a frame buffer meeting the conditions; if so, removing it from the idle frame buffer table, adding it to the occupied frame buffer table and returning it to the requester; if not, directly creating a frame buffer and adding it to the occupied frame buffer table;
when a frame buffer return request is received, adding the frame buffer to the idle frame buffer table, then checking whether the idle frame buffer table exceeds the maximum idle frame buffer count threshold and, if so, executing the idle frame buffer resource release operation to reduce system memory occupation;
and when a system memory shortage notification is received, triggering the idle frame buffer resource release operation.
Further, the decoding program decodes the input information by means of soft decoding, hard decoding, multi-threading and a caching mechanism.
Further, merging the plurality of layers according to the spatial positions of the layers and the processing result images to obtain the final image processing data includes:
receiving image data in the unified format, where the image data takes an OpenGL ES texture as its data carrier, denoted texture src, with width srcW and height srcH;
creating a texture of the same size as the input texture to copy and store the input content, denoted texture dst;
creating a frame buffer, denoted fb, attaching dst to fb, and setting fb as the current frame buffer;
setting the width of the current OpenGL ES viewport to srcW and its height to srcH;
loading the vertex shader code and the fragment shader code, compiling the shaders and linking them into a shader program to obtain a shader program handle, and enabling the shader program through the handle;
normalizing the coordinates of the four vertices of the viewport and inputting them into the vertex shader;
inputting the texture src into the fragment shader and setting the texture sampling coordinates;
running the shader program to copy the content of texture src into texture dst;
inputting the texture dst into the special effect processing module and receiving its processing result image, which is also a texture, denoted p;
according to the user operations, if the user selects N multimedia files and obtains N layers, executing the above steps to obtain the N corresponding special-effect-processed textures, denoted p1, p2, ..., pN;
loading the vertex shader code and the fragment shader code, compiling the shaders and linking them into a shader program that merges a plurality of texture inputs into one output, denoted g, obtaining its handle and enabling it through the handle;
setting the size of the viewport to the size of the topmost layer;
creating a texture of the same size as the topmost layer, denoted pM, as the carrier of the final merged image data; creating a frame buffer, denoted fbM, attaching pM to fbM, and setting fbM as the current frame buffer;
and inputting the layer position calculation results and the textures p1, p2, ..., pN into the shader program g, running the shader program, and outputting the merged result to the texture pM to obtain the final image processing data.
By adopting the above technical scheme, the invention has the following beneficial effects:
The invention provides an intelligent animation and layered multimedia processing device comprising a layer rendering module, an image content input module, a special effect processing module, an animation control module, a rendering display control module and a compression coding export module. Through the cooperation of these modules, the device supports intelligent animation and layer-based editing.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required in the embodiments or in the description of the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a smart animation and layered multimedia processing apparatus according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention are described in detail below. It is apparent that the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the protection scope of the invention.
A specific smart animation and layered multimedia processing apparatus provided in an embodiment of the present application is described below with reference to the accompanying drawings.
As shown in fig. 1, the intelligent animation and layered multimedia processing apparatus provided in the embodiment of the present application includes: a layer rendering module, an image content input module, a special effect processing module, an animation control module, a rendering display control module and a compression coding export module;
the image content input module is used for receiving input information, analyzing it to obtain basic information, and acquiring image data in a unified format according to the basic information; the input information comprises multimedia files in various formats and/or text information;
the layer rendering module is used for calculating the spatial position of a layer according to the basic information, updating the spatial position of the layer, and uploading the image data into a texture that stores the initial image data;
the special effect processing module is used for generating a special effect processing pipeline according to the special effect settings, and the special effect processing pipeline is used for processing the texture of the initial image data and transmitting the processing result image to the layer rendering module;
the animation control module is used for calculating the current animation parameters according to the animation settings and a clock signal, wherein the animation parameters comprise layer position change parameters and special effect parameters; the layer position change parameters are transmitted to the layer rendering module to participate in determining the spatial position of a layer, and the special effect parameters are transmitted to the special effect processing module to participate in determining the processing effect on the image data;
the layer rendering module is also used for merging a plurality of layers according to the spatial positions of the layers and the processing result images to obtain the final image processing data;
the rendering display control module is used for displaying the image processing data, and is also used for providing a playing and rendering clock and coordinating the rates of rendering input and rendering output;
the compression coding export module is used for selecting a coding export mode according to a preset configuration and outputting the image processing data as a multimedia file.
The working principle of the intelligent animation and layered multimedia processing device is as follows. A layer is the carrier of image data and has size and position attributes. A layer also has a parent-layer attribute and a child-layer list attribute: each layer has at most one parent layer, and each layer may have zero or more child layers. Through these two attributes the layers are organized into a tree structure in which only the topmost layer node has no parent layer. The index of a child layer in its parent's child-layer list determines its level within the parent: the larger the index, the higher the level, and when the positions of two sibling layers overlap, the overlapping area displays the content of the higher-level layer. The image content of a layer that has child layers is the result of merging those child layers.
Each layer has width, height, abscissa, ordinate and rotation angle attributes, denoted width, height, x, y and rotation respectively. For every layer except the topmost one, a rectangular coordinate system is established with the top-left corner of the parent layer as the origin, the direction from the parent's top-left vertex to its top-right vertex as the positive x-axis, and the direction from its top-left vertex to its bottom-left vertex as the positive y-axis; the x and y attributes are the coordinates of the layer's top-left corner in this coordinate system. The x and y of the topmost layer have no effect, and its width and height are the width and height of the screen display area determined by the rendering display control module. The anchor coordinates of the rotation are (x + width/2, y + height/2). The topmost layer is created in advance, before the user selects any multimedia file to add as a new layer, and every new layer is a child layer of the topmost layer.
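To make this structure concrete, the following is a minimal Java sketch of the layer tree described above; the class and member names are illustrative assumptions, not identifiers from the patent.

    import java.util.ArrayList;
    import java.util.List;

    class Layer {
        float x, y;            // top-left corner in the parent's coordinate system
        float width, height;   // layer size
        float rotation;        // rotation angle around the layer's center
        Layer parent;          // at most one parent; null only for the topmost layer
        final List<Layer> children = new ArrayList<>(); // larger index = higher level

        // Rotation anchor as defined above: the center of the layer.
        float anchorX() { return x + width / 2f; }
        float anchorY() { return y + height / 2f; }

        // Appends a child at the end of the list, i.e. at the highest level,
        // so it is drawn on top when sibling layers overlap.
        void addChild(Layer child) {
            child.parent = this;
            children.add(child);
        }
    }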
The layer rendering module calculates the display positions of the layers on the screen and the hierarchical relationships among them according to the information of the source multimedia file associated with each layer and the user's operations, and merges and caches the final image processing results of the layers. The image content input module parses multimedia files of different types into unified image data; the user may select one or more multimedia files of various types or input text. The special effect processing module takes the unified image data generated by the image content input module as input and outputs new image data carrying the special effects. The animation control module calculates the specific parameters of the layer animation at any moment by interpolating from the layer states at key moments; these parameters participate in determining the layer position, thereby producing the animation effect. The rendering display control module controls the display of the rendering results on the screen, provides the playing and rendering clock, and coordinates the rendering input and output rates. The compression coding export module outputs the editing result of the plurality of multimedia files of different types as a single multimedia file.
In some embodiments, when the input information is a multimedia file, the basic information includes: file type, width, height and rotation angle;
when the input information is text information, the basic information includes: text content and word size information.
Specifically, the image content input module parses the file to obtain information such as its type, width, height and rotation angle. It selects a decoding mode according to the file type, determines the target frame to decode according to the clock signal, and converts the decoded image data into image data in the unified format, or directly generates image data in the unified format from text information input by the user. The beneficial effect of this module is that multimedia data of different types are converted into image data in a unified format in device memory, laying the foundation for the special effect processing module to act on different layers without distinction.
In some embodiments, receiving the input information, analyzing it to obtain the basic information, and obtaining the image data in the unified format according to the basic information includes:
receiving the input information and judging its file type;
selecting a decoding program corresponding to the file type;
and decoding the multimedia file with the decoding program and unifying the decoding results into RGBA-format image data, and/or rendering the text information to generate RGBA-format text image data.
Specifically, the file type is determined by parsing the file information, and the corresponding decoding program is selected. The decoding process uses soft decoding, hard decoding, multi-threading and a caching mechanism; the benefit of this design is that the computing power of the processor is used to the fullest and decoding efficiency is maximized, which provides important support for the real-time performance of the overall rendering and for the continuity and fluency of the animation. After fast decoding completes, image data in its original format is obtained. Because the image data formats decoded from different files may differ, the original-format image data must be converted into a uniform format so that the subsequent special effect processing module can handle layers of different types generically; here it is unified into RGBA-format image data and uploaded into a texture through OpenGL ES. For a video file, hardware decoding is preferred if the device processor supports decoding that video type, and software decoding is selected otherwise. When decoding a video, the file is read and parsed to obtain compressed binary image data packets; each packet is fed into the selected decoder, which outputs one frame of complete image data in the original format; finally, the original-format data is converted into unified RGBA-format image data. Parsing, decoding and conversion run in parallel, and the intermediate data transfers use buffers to reconcile the stages' different processing speeds.
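As an illustration of this parallel parse/decode/convert arrangement, here is a minimal Java sketch in which bounded blocking queues act as the buffers that reconcile the different stage speeds; the class name, queue capacities and stub methods are assumptions, and the actual codec integration is elided.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    class DecodePipeline {
        // Buffers between stages; bounded, so a fast stage blocks instead of
        // outrunning a slow one.
        private final BlockingQueue<byte[]> packets = new ArrayBlockingQueue<>(8);
        private final BlockingQueue<int[]> rgbaFrames = new ArrayBlockingQueue<>(4);

        void start() {
            // Stage 1: read and parse the file into compressed data packets.
            new Thread(() -> {
                try {
                    while (true) packets.put(readNextPacket());
                } catch (InterruptedException ignored) { }
            }).start();
            // Stage 2: decode each packet and convert the frame to RGBA.
            new Thread(() -> {
                try {
                    while (true) rgbaFrames.put(decodeToRgba(packets.take()));
                } catch (InterruptedException ignored) { }
            }).start();
        }

        // The renderer pulls the next unified-format frame when its clock fires.
        int[] nextFrame() throws InterruptedException { return rgbaFrames.take(); }

        private byte[] readNextPacket() { return new byte[0]; }        // parsing elided
        private int[] decodeToRgba(byte[] packet) { return new int[0]; } // codec call elided
    }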
In some embodiments, calculating the spatial position of the layer according to the basic information and updating the spatial position of the layer include:
calculating the initial size and display position of the layer according to the file type, width, height and rotation angle of the multimedia file and the screen resolution of the current device;
and updating the size and display position of the layer according to the user's move operations, the user's layer adjustment operations and the animation parameters.
Specifically, the layer rendering module parses the width, height and rotation angle information, or the text content and word size information, from the obtained file information and calculates the spatial position of the layer, and then updates the spatial position of the layer according to the user's move and level adjustment operations and the animation settings.
Preferably, the carrier of the image data is a layer;
the animation control module adds animation settings for the layers;
the animation settings are provided in the form of a list of key-value pairs, the keys being time stamps and the values being animation parameters.
Specifically, the user adds animation settings to a layer. The animation settings are provided as a list of key-value pairs in which the keys are timestamps and the values are animation parameters; the animation parameters comprise parameters describing the layer position changes and special effect parameters, and rich animation effects are achieved by feeding these parameters into the layer rendering module and the special effect processing module. An animation is a change over a continuous time period, but the user does not need to set animation parameters densely across that period: it suffices to set them at the key moments, and the animation parameters between two key moments are interpolated automatically by the animation control module. The beneficial effect of this design is that a great deal of dense and tedious repetitive work is saved, greatly improving the convenience and efficiency of animation design. The animation control module supports user-defined animation, and the method of automatically interpolating between keyframes to compute the complete animation supports position movement and special effect changes simultaneously.
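A minimal Java sketch of this keyframe interpolation follows, assuming a single scalar parameter track (for example an x coordinate or a filter intensity) and linear interpolation; the names are illustrative, and the patent does not fix the interpolation curve.

    import java.util.Map;
    import java.util.TreeMap;

    class AnimationTrack {
        // Key-value list: key = timestamp in milliseconds, value = parameter.
        private final TreeMap<Long, Float> keyframes = new TreeMap<>();

        void setKeyframe(long timeMs, float value) { keyframes.put(timeMs, value); }

        // Value at an arbitrary moment: the two nearest keyframes are found
        // and interpolated; outside the keyframe range the boundary value is held.
        float valueAt(long timeMs) {
            Map.Entry<Long, Float> before = keyframes.floorEntry(timeMs);
            Map.Entry<Long, Float> after = keyframes.ceilingEntry(timeMs);
            if (before == null) return after.getValue(); // assumes at least one keyframe
            if (after == null || after.getKey().equals(before.getKey()))
                return before.getValue();
            float t = (timeMs - before.getKey()) / (float) (after.getKey() - before.getKey());
            return before.getValue() + t * (after.getValue() - before.getValue());
        }
    }

At each clock tick the animation control module would evaluate valueAt for every track and forward the results to the layer rendering module and the special effect processing module.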
In some embodiments, the special effects processing module comprises a chained processing unit;
the chained processing unit is used for linking the special effects end to end in a preset order, taking the texture of the input initial image data and outputting the texture of the processing result image data.
Specifically, the chained processing unit provided by the application links a plurality of special effects end to end in the order specified by the user: the texture to be processed is input at one end of the unit, and the texture carrying the processing result image data is output at the other end. The unit automatically manages the intermediate processing results, which use frame buffers as carriers. A frame buffer multiplexing mechanism is designed so that a small number of frame buffers alternately store the intermediate results of different processing stages, reducing the frequency of memory allocation and release and maximizing memory efficiency. A typical linear processing flow of the special effect chain is as follows:
preferably, the outputting the texture of the input initial image data as the texture of the processing result image data includes:
constructing an input texture TS and an output texture TE, wherein N special effects exist, and the special effects are sequentially marked as F1 and F2 … … FN according to a set processing sequence, and constructing a frame buffer Pool for creating and recycling and managing frame buffer resources, and externally opening an application and returning frame buffer interface which is marked as Pool;
inputting a texture T of initial image data into F1, applying for a first frame buffer to Pool, marking as FB1, marking the texture attached to the FB1 as T1, and setting the FB1 as an output target of the F1;
executing F1, and outputting a result to FB1;
inputting T1 into F2, applying for a second frame buffer to Pool, marking as FB2, marking the texture attached to the FB2 as T2, and setting the FB2 as an output target of the F2;
executing F2, outputting a result to FB2, and returning FB1 to Pool;
inputting T2 into F3, applying frame buffering to Pool, obtaining FB1 again, multiplexing FB1 as an output target of F3;
executing F3, outputting a result to FB1, and returning FB2 to Pool;
inputting T1 into F4, applying frame buffer to Pool, obtaining FB2 again, multiplexing FB2 as the output target of F4,
executing F4, outputting the result to FB2,
repeating the steps until the final special effect FN;
and executing FN, outputting a result to TE, and returning frame buffer attached to the texture of the input FN to Pool.
In the chained processing, the special effect processing module stores the intermediate special effect results in frame buffers and uses a multiplexing mechanism to maximize memory efficiency. It will be appreciated that the above procedure buffers the intermediate results of the entire special effect chain using only two frame buffers; the key is that a frame buffer returned to Pool can be handed out again by a later request, so requesting frame buffers from Pool repeatedly cycles between the same two buffers.
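The flow above can be written as a single loop. The following Java sketch assumes minimal Effect, FrameBuffer and FrameBufferPool interfaces (illustrative, not the patent's API), models the output texture TE as a frame buffer target, and makes the key point visible: each buffer is returned the moment its contents have been consumed, so the next request reuses it and the whole chain alternates between two buffers.

    import java.util.List;

    final class EffectChain {
        interface Effect { void apply(int srcTexture, FrameBuffer target); }
        interface FrameBuffer { int attachedTexture(); }
        interface FrameBufferPool { FrameBuffer acquire(); void release(FrameBuffer fb); }

        // Runs F1..FN on the input texture TS; the final effect writes into
        // the output target that carries TE.
        static int run(FrameBufferPool pool, List<Effect> effects, int ts, FrameBuffer te) {
            int current = ts;             // texture feeding the next effect
            FrameBuffer previous = null;  // pool buffer whose texture is 'current'
            for (int i = 0; i < effects.size(); i++) {
                boolean last = (i == effects.size() - 1);
                FrameBuffer target = last ? te : pool.acquire(); // FB1, FB2, FB1, FB2, ...
                effects.get(i).apply(current, target);
                if (previous != null) pool.release(previous);    // consumed: back to Pool
                previous = target;
                current = target.attachedTexture();
            }
            return te.attachedTexture(); // TE now holds the fully processed image
        }
    }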
Preferably, the internal mechanism of Pool includes:
an idle frame buffer table for recording the frame buffers that are currently idle and available;
an occupied frame buffer table for recording the frame buffers that have been requested and are in use;
and a maximum idle frame buffer count threshold: when the number of idle frame buffers exceeds this threshold, a release operation on idle frame buffer resources is triggered;
when a frame buffer request is received, checking whether the idle frame buffer table contains a frame buffer meeting the conditions; if so, removing it from the idle frame buffer table, adding it to the occupied frame buffer table and returning it to the requester; if not, directly creating a frame buffer and adding it to the occupied frame buffer table;
when a frame buffer return request is received, adding the frame buffer to the idle frame buffer table, then checking whether the idle frame buffer table exceeds the maximum idle frame buffer count threshold and, if so, executing the idle frame buffer resource release operation to reduce system memory occupation;
and when a system memory shortage notification is received, triggering the idle frame buffer resource release operation.
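A Java sketch of this pool mechanism follows, with the idle table, the occupied table, the maximum-idle threshold and the low-memory hook. It is a concrete version of the pool assumed in the previous sketch, here extended with width/height matching; the names are illustrative assumptions, and the OpenGL ES frame buffer creation and deletion calls are reduced to stub comments.

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.Iterator;
    import java.util.List;

    class FrameBufferPool {
        static class Fb {
            final int width, height;
            Fb(int w, int h) { width = w; height = h; } // glGenFramebuffers etc. elided
            void destroy() { }                          // glDeleteFramebuffers etc. elided
        }

        private final Deque<Fb> idleTable = new ArrayDeque<>();   // idle frame buffer table
        private final List<Fb> occupiedTable = new ArrayList<>(); // occupied frame buffer table
        private final int maxIdle; // maximum idle frame buffer count threshold

        FrameBufferPool(int maxIdle) { this.maxIdle = maxIdle; }

        // Frame buffer request: reuse a matching idle buffer if one exists,
        // otherwise create one; either way it joins the occupied table.
        Fb acquire(int width, int height) {
            Fb fb = null;
            for (Iterator<Fb> it = idleTable.iterator(); it.hasNext(); ) {
                Fb candidate = it.next();
                if (candidate.width == width && candidate.height == height) {
                    it.remove();
                    fb = candidate;
                    break;
                }
            }
            if (fb == null) fb = new Fb(width, height);
            occupiedTable.add(fb);
            return fb;
        }

        // Frame buffer return: move it to the idle table, then trim any
        // surplus past the maximum idle threshold.
        void release(Fb fb) {
            occupiedTable.remove(fb);
            idleTable.addLast(fb);
            while (idleTable.size() > maxIdle) idleTable.removeFirst().destroy();
        }

        // System memory-shortage notification: free all idle buffers.
        void onLowMemory() {
            while (!idleTable.isEmpty()) idleTable.removeFirst().destroy();
        }
    }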
Preferably, the decoding program decodes the input information by means of soft decoding, hard decoding, multi-threading and a caching mechanism.
The image content input module converts the original-format image data of the various multimedia files into unified-format image data, while multi-threading, buffering and similar mechanisms accelerate decoding, ensuring rendering real-time performance and animation fluency.
Preferably, merging the plurality of layers according to the spatial positions of the layers and the processing result images to obtain the final image processing data includes:
receiving image data in the unified format, where the image data takes an OpenGL ES texture as its data carrier, denoted texture src, with width srcW and height srcH;
creating a texture of the same size as the input texture to copy and store the input content, denoted texture dst;
creating a frame buffer, denoted fb, attaching dst to fb, and setting fb as the current frame buffer;
setting the width of the current OpenGL ES viewport to srcW and its height to srcH;
loading the vertex shader code and the fragment shader code, compiling the shaders and linking them into a shader program to obtain a shader program handle, and enabling the shader program through the handle;
normalizing the coordinates of the four vertices of the viewport and inputting them into the vertex shader;
inputting the texture src into the fragment shader and setting the texture sampling coordinates;
running the shader program to copy the content of texture src into texture dst;
inputting the texture dst into the special effect processing module and receiving its processing result image, which is also a texture, denoted p;
according to the user operations, if the user selects N multimedia files and obtains N layers, executing the above steps to obtain the N corresponding special-effect-processed textures, denoted p1, p2, ..., pN;
loading the vertex shader code and the fragment shader code, compiling the shaders and linking them into a shader program that merges a plurality of texture inputs into one output, denoted g, obtaining its handle and enabling it through the handle;
setting the size of the viewport to the size of the topmost layer;
creating a texture of the same size as the topmost layer, denoted pM, as the carrier of the final merged image data; creating a frame buffer, denoted fbM, attaching pM to fbM, and setting fbM as the current frame buffer;
and inputting the layer position calculation results and the textures p1, p2, ..., pN into the shader program g, running the shader program, and outputting the merged result to the texture pM to obtain the final image processing data.
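The copy and merge steps above rest on two OpenGL ES idioms: compiling and linking a shader program, and rendering into a texture attached to a frame buffer. The following Java sketch shows both against the standard android.opengl.GLES20 API; shader sources, vertex setup and error checking are elided, and the helper names are assumptions.

    import android.opengl.GLES20;

    final class GlHelpers {
        // Compiles the vertex and fragment shaders and links them into a
        // program, returning the shader program handle (enable with glUseProgram).
        static int buildProgram(String vertexSrc, String fragmentSrc) {
            int vs = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER);
            GLES20.glShaderSource(vs, vertexSrc);
            GLES20.glCompileShader(vs);
            int fs = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);
            GLES20.glShaderSource(fs, fragmentSrc);
            GLES20.glCompileShader(fs);
            int program = GLES20.glCreateProgram();
            GLES20.glAttachShader(program, vs);
            GLES20.glAttachShader(program, fs);
            GLES20.glLinkProgram(program);
            return program;
        }

        // Creates a texture of the given size, attaches it to a new frame
        // buffer, makes that frame buffer current and sets the viewport,
        // mirroring the dst/fb (and pM/fbM) steps above. Returns {fbId, texId}.
        static int[] makeRenderTarget(int width, int height) {
            int[] tex = new int[1];
            int[] fb = new int[1];
            GLES20.glGenTextures(1, tex, 0);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
            GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
                    width, height, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
            GLES20.glGenFramebuffers(1, fb, 0);
            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fb[0]);
            GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER,
                    GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D, tex[0], 0);
            GLES20.glViewport(0, 0, width, height);
            return new int[] { fb[0], tex[0] };
        }
    }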
In summary, the present invention provides an intelligent animation and layered multimedia processing device, which includes a layer rendering module, an image content input module, a special effect processing module, an animation control module, a rendering display control module and a compression coding export module. Through the cooperation of these modules, the device supports intelligent animation and layer-based editing.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as an apparatus, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of apparatus, devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing describes only specific embodiments of the present invention, and the present invention is not limited thereto; any variation or substitution that a person skilled in the art can readily conceive within the technical scope disclosed herein shall fall within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An intelligent animation and layered multimedia processing apparatus, comprising: a layer rendering module, an image content input module, a special effect processing module, an animation control module, a rendering display control module and a compression coding export module;
the image content input module is used for receiving input information, analyzing it to obtain basic information, and acquiring image data in a unified format according to the basic information; the input information comprises multimedia files in various formats and/or text information;
the layer rendering module is used for calculating the spatial position of a layer according to the basic information, updating the spatial position of the layer, and uploading the image data into a texture that stores the initial image data;
the special effect processing module is used for generating a special effect processing pipeline according to the special effect settings, and the special effect processing pipeline is used for processing the texture of the initial image data and transmitting the processing result image to the layer rendering module;
the animation control module is used for calculating the current animation parameters according to the animation settings and a clock signal, wherein the animation parameters comprise layer position change parameters and special effect parameters; the layer position change parameters are transmitted to the layer rendering module to participate in determining the spatial position of a layer, and the special effect parameters are transmitted to the special effect processing module to participate in determining the processing effect on the image data;
the layer rendering module is also used for merging a plurality of layers according to the spatial positions of the layers and the processing result images to obtain the final image processing data;
the rendering display control module is used for displaying the image processing data, and is also used for providing a playing and rendering clock and coordinating the rates of rendering input and rendering output;
the compression coding export module is used for selecting a coding export mode according to a preset configuration and outputting the image processing data as a multimedia file.
2. The apparatus of claim 1, wherein:
when the input information is a multimedia file, the basic information includes: file type, width, height and rotation angle;
when the input information is text information, the basic information includes: text content and word size information.
3. The apparatus of claim 2, wherein receiving the input information, analyzing it to obtain the basic information, and obtaining the image data in the unified format according to the basic information comprises:
receiving the input information and judging its file type;
selecting a decoding program corresponding to the file type;
and decoding the multimedia file with the decoding program and unifying the decoding results into RGBA-format image data, and/or rendering the text information to generate RGBA-format text image data.
4. The apparatus of claim 2, wherein calculating the spatial position of the layer according to the basic information and updating the spatial position of the layer comprises:
calculating the initial size and display position of the layer according to the file type, width, height and rotation angle of the multimedia file and the screen resolution of the current device;
and updating the size and display position of the layer according to the user's move operations, the user's layer adjustment operations and the animation parameters.
5. The apparatus of claim 1, wherein the carrier of image data is a layer;
the animation control module adds animation settings for the layers;
the animation settings are provided in the form of a list of key-value pairs, the keys being time stamps and the values being animation parameters.
6. The apparatus of claim 1, wherein the special effects processing module comprises a chained processing unit;
the chained processing unit is used for linking the special effects end to end in a preset order, taking the texture of the input initial image data and outputting the texture of the processing result image data.
7. The apparatus of claim 6, wherein taking the texture of the input initial image data and outputting the texture of the processing result image data comprises:
constructing an input texture TS and an output texture TE; given N special effects, marking them F1, F2, ..., FN in the set processing order; and constructing a frame buffer pool, denoted Pool, which creates, recycles and manages frame buffer resources and externally exposes interfaces for requesting and returning frame buffers;
inputting the texture TS of the initial image data into F1, requesting a first frame buffer from Pool, denoted FB1, denoting the texture attached to FB1 as T1, and setting FB1 as the output target of F1;
executing F1, and outputting the result to FB1;
inputting T1 into F2, requesting a second frame buffer from Pool, denoted FB2, denoting the texture attached to FB2 as T2, and setting FB2 as the output target of F2;
executing F2, outputting the result to FB2, and returning FB1 to Pool;
inputting T2 into F3, requesting a frame buffer from Pool and obtaining FB1 again, and multiplexing FB1 as the output target of F3;
executing F3, outputting the result to FB1, and returning FB2 to Pool;
inputting T1 into F4, requesting a frame buffer from Pool and obtaining FB2 again, and multiplexing FB2 as the output target of F4;
executing F4, and outputting the result to FB2;
repeating these steps until the final special effect FN;
and executing FN, outputting the result to TE, and returning to Pool the frame buffer to which the input texture of FN is attached.
8. The apparatus of claim 7, wherein the internal mechanism of Pool comprises:
an idle frame buffer table for recording the frame buffers that are currently idle and available;
an occupied frame buffer table for recording the frame buffers that have been requested and are in use;
and a maximum idle frame buffer count threshold: when the number of idle frame buffers exceeds this threshold, a release operation on idle frame buffer resources is triggered;
when a frame buffer request is received, checking whether the idle frame buffer table contains a frame buffer meeting the conditions; if so, removing it from the idle frame buffer table, adding it to the occupied frame buffer table and returning it to the requester; if not, directly creating a frame buffer and adding it to the occupied frame buffer table;
when a frame buffer return request is received, adding the frame buffer to the idle frame buffer table, then checking whether the idle frame buffer table exceeds the maximum idle frame buffer count threshold and, if so, executing the idle frame buffer resource release operation to reduce system memory occupation;
and when a system memory shortage notification is received, triggering the idle frame buffer resource release operation.
9. The apparatus of claim 3, wherein the decoding program decodes the input information by means of soft decoding, hard decoding, multi-threading, and a caching mechanism.
10. The apparatus of claim 1, wherein merging the plurality of layers according to the spatial positions of the layers and the processing result images to obtain the final image processing data comprises:
receiving image data in the unified format, where the image data takes an OpenGL ES texture as its data carrier, denoted texture src, with width srcW and height srcH;
creating a texture of the same size as the input texture to copy and store the input content, denoted texture dst;
creating a frame buffer, denoted fb, attaching dst to fb, and setting fb as the current frame buffer;
setting the width of the current OpenGL ES viewport to srcW and its height to srcH;
loading the vertex shader code and the fragment shader code, compiling the shaders and linking them into a shader program to obtain a shader program handle, and enabling the shader program through the handle;
normalizing the coordinates of the four vertices of the viewport and inputting them into the vertex shader;
inputting the texture src into the fragment shader and setting the texture sampling coordinates;
running the shader program to copy the content of texture src into texture dst;
inputting the texture dst into the special effect processing module and receiving its processing result image, which is also a texture, denoted p;
according to the user operations, if the user selects N multimedia files and obtains N layers, executing the above steps to obtain the N corresponding special-effect-processed textures, denoted p1, p2, ..., pN;
loading the vertex shader code and the fragment shader code, compiling the shaders and linking them into a shader program that merges a plurality of texture inputs into one output, denoted g, obtaining its handle and enabling it through the handle;
setting the size of the viewport to the size of the topmost layer;
creating a texture of the same size as the topmost layer, denoted pM, as the carrier of the final merged image data; creating a frame buffer, denoted fbM, attaching pM to fbM, and setting fbM as the current frame buffer;
and inputting the layer position calculation results and the textures p1, p2, ..., pN into the shader program g, running the shader program, and outputting the merged result to the texture pM to obtain the final image processing data.
CN202011056771.4A 2020-09-30 2020-09-30 Intelligent animation and layered multimedia processing device Active CN112150591B (en)

Priority Applications (1)

Application Number: CN202011056771.4A; Priority Date: 2020-09-30; Filing Date: 2020-09-30; Title: Intelligent animation and layered multimedia processing device

Applications Claiming Priority (1)

Application Number: CN202011056771.4A; Priority Date: 2020-09-30; Filing Date: 2020-09-30; Title: Intelligent animation and layered multimedia processing device

Publications (2)

Publication Number and Publication Date:
CN112150591A: 2020-12-29
CN112150591B: 2024-02-02

Family

Family ID: 73895940

Family Applications (1)

Application Number: CN202011056771.4A; Title: Intelligent animation and layered multimedia processing device; Priority Date: 2020-09-30; Filing Date: 2020-09-30; Status: Active

Country Status (1)

Country Link
CN (1) CN112150591B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116980544B (en) * 2023-09-22 2023-12-01 北京淳中科技股份有限公司 Video editing method, device, electronic equipment and computer readable storage medium


Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
KR20020064888A (en) * 1999-10-22 2002-08-10 액티브스카이 인코포레이티드 An object oriented video system
WO2014205460A2 (en) * 2013-05-02 2014-12-24 Waterston Entertainment (Pty) Ltd System and method for incorporating digital footage into a digital cinematographic template

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN102184720A (en) * 2010-06-22 2011-09-14 上海盈方微电子有限公司 A method and a device for image composition display of multi-layer and multi-format input
CN104978413A (en) * 2015-06-24 2015-10-14 北京超图软件股份有限公司 Apparatus and method for visualizing GIS line data at browser
CN109389661A (en) * 2017-08-04 2019-02-26 阿里健康信息技术有限公司 A kind of animation file method for transformation and device
CN110377264A (en) * 2019-07-17 2019-10-25 Oppo广东移动通信有限公司 Layer composition, device, electronic equipment and storage medium
CN111612878A (en) * 2020-05-21 2020-09-01 广州光锥元信息科技有限公司 Method and device for making static photo into three-dimensional effect video

Non-Patent Citations (1)

Title
New applications of computer image technology in animation production; Yuan Yi; Wireless Internet Technology (无线互联科技); Vol. 4; p. 121 *

Also Published As

Publication Number and Publication Date:
CN112150591A: 2020-12-29

Similar Documents

Publication Publication Date Title
CN112184856B (en) Multimedia processing device supporting multi-layer special effect and animation mixing
US9583140B1 (en) Real-time playback of an edited sequence of remote media and three-dimensional assets
JP7446468B2 (en) Video special effects processing methods, devices, electronic equipment and computer programs
US7336280B2 (en) Coordinating animations and media in computer display output
Bulterman et al. Structured multimedia authoring
US6741242B1 (en) Multimedia documents integrating and displaying system
US6266053B1 (en) Time inheritance scene graph for representation of media content
US20150088977A1 (en) Web-based media content management
US20070139418A1 (en) Control of animation timeline
CN104091607A (en) Video editing method and device based on IOS equipment
CN104091608B (en) A kind of video editing method and device based on ios device
CN107644019A (en) A kind of hypermedia eBook content manufacturing system
CN114450966A (en) Data model for representation and streaming of heterogeneous immersive media
EP4218252A1 (en) Temporal alignment of mpeg and gltf media
CN112150591B (en) Intelligent cartoon and layered multimedia processing device
CN112802168A (en) Animation generation method and device and television terminal
Flotyński et al. Building multi-platform 3D virtual museum exhibitions with Flex-VR
KR20160120128A (en) Display apparatus and control method thereof
CN102479387A (en) Method for generating multimedia animation and playing multimedia animation and apparatus thereof
US9620167B2 (en) Broadcast-quality graphics creation and playout
Morris et al. CyAnimator: simple animations of Cytoscape networks
CN115314732B (en) Multi-user collaborative film examination method and system
CN111327941B (en) Offline video playing method, device, equipment and medium
CN115134658B (en) Video processing method, device, equipment and storage medium
Arntzen et al. Data-independent sequencing with the timing Object: a JavaScript sequencer for single-device and Multi-device Web Media

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant