CN116781993A - Video animation generation and playing method and device, electronic equipment and storage medium


Info

Publication number
CN116781993A
Authority
CN
China
Prior art keywords
mask layer
animation
video animation
user
video
Prior art date
Legal status
Pending
Application number
CN202310824531.1A
Other languages
Chinese (zh)
Inventor
舒伟
郭曼丽
尹志强
Current Assignee
Guangzhou Michang Network Technology Co ltd
Original Assignee
Guangzhou Michang Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Michang Network Technology Co ltd filed Critical Guangzhou Michang Network Technology Co ltd
Priority to CN202310824531.1A
Publication of CN116781993A


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/816 Monomedia components thereof involving special video data, e.g. 3D video
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a video animation generation and playing method and device, an electronic device and a storage medium, belonging to the technical field of video animation processing. The generation method comprises: acquiring a video frame sequence of a background video animation; acquiring first fusion resource information and second fusion resource information; generating a first mask layer for user-defined content and a second mask layer for a superimposed video animation, the first mask layer comprising a transparency-channel mask layer and the second mask layer comprising a transparency-channel mask layer and an RGB-channel mask layer; sequentially inserting the first mask layer and the second mask layer at corresponding positions in the video frame sequence to generate an MP4 file of the target video animation; and writing the first fusion resource information and the second fusion resource information into the play configuration. This achieves the visual effect of the superimposed video animation "covering" the user-defined content. No additional resource files need to be added, which saves file size; the scheme can be extended without limit, improving the diversity of video animations.

Description

Video animation generation and playing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of video animation processing technologies, and in particular, to a method, an apparatus, an electronic device, and a storage medium for generating and playing a video animation.
Background
A virtual gift is a common interactive feature: after a user gives a virtual gift, an animation effect corresponding to the gift is played on the client. To support personalized customization and diversity of virtual gifts, inserting user-defined content (such as the user's avatar or user name) into the gift's animation effect has become a trend.
Current implementation schemes that support inserting user-defined content include: SVGA animation (Scalable Vector Graphics Animation), rendering-engine 3D animation, and VAP animation (Video Animation Player). SVGA animation supports a limited set of animation effects with good runtime efficiency, and can insert text and pictures. Rendering-engine 3D animation supports very advanced and complex gift animation effects with good runtime efficiency, but introduces a relatively large engine library and requires professional developers and designers to cooperate, so its cost is relatively high. VAP animation essentially adds custom configuration on top of the MP4 format; it supports complex animation effects with good runtime efficiency, and, since it is a modification of MP4, its implementation cost is low.
In VAP-format video animation, inserted user-defined content is drawn in a second pass after the video frames of the background video animation are drawn, so the inserted content is necessarily on the top layer of every video frame. User-defined content in the video animation can therefore occlude part of the animation effect, affecting its overall presentation, and the visual effect of the animation effect being "overlaid on top of" the customized content cannot be achieved.
Disclosure of Invention
The invention provides a video animation generation and playing method and device, an electronic device and a storage medium, to overcome the defect in the prior art that an animation effect overlapping user-defined content in a video animation is occluded, affecting its overall presentation. The invention allows part of the animation effect to be placed on a layer above the user-defined content, presenting the visual effect of the animation effect covering the user-defined content.
The invention provides a method for generating video animation, which comprises the following steps:
acquiring a video frame sequence of a background video animation;
acquiring first fusion resource information and second fusion resource information of the background video animation, wherein the first fusion resource information comprises a placeholder, mask information, transparency information, size information and position information of user-defined content, the second fusion resource information comprises a placeholder, mask information, RGB color information, transparency information, size information and position information of a superimposed video animation, and the superimposed video animation is set as an animation effect superimposed on the user-defined content;
generating a first mask layer of the user-defined content according to the first fusion resource information, and generating a second mask layer of the superimposed video animation according to the second fusion resource information, wherein the first mask layer comprises a transparency-channel mask layer and the second mask layer comprises a transparency-channel mask layer and an RGB-channel mask layer;
sequentially inserting the first mask layer and the second mask layer at corresponding positions in the video frame sequence, and generating an MP4 file of a target video animation according to the video frame sequence into which the first mask layer and the second mask layer have been inserted;
and writing the first fusion resource information and the second fusion resource information into a play configuration in the MP4 file.
According to the method for generating a video animation provided by the invention, the hierarchical relationship between the user-defined content and the superimposed video animation in the target video animation is determined by the order in which the first fusion resource information and the second fusion resource information are set;
the method further comprises:
determining the rendering order of the user-defined content and the superimposed video animation according to the setting order;
writing the rendering order into the play configuration.
According to the method for generating a video animation provided by the invention, the user-defined content comprises at least one of a user avatar, a user virtual image and a user-uploaded image.
According to the method for generating a video animation provided by the invention, the target video animation is a virtual gift animation.
The invention also provides a method for playing a video animation, comprising:
acquiring target user-defined content and the MP4 file generated by the method according to any one of the above;
synthesizing the target user-defined content and the first mask layer to generate a target user-defined content image;
and sequentially rendering the video frame sequence, the target user-defined content image and the superimposed video animation according to the play configuration, to play the target video animation.
According to the method for playing a video animation provided by the invention, acquiring the target user-defined content comprises:
receiving a download address of the target user-defined content sent by a server;
and downloading the target user-defined content according to the download address.
The invention also provides a device for generating a video animation, comprising:
a first acquisition module, configured to acquire a video frame sequence of a background video animation;
a second acquisition module, configured to acquire first fusion resource information and second fusion resource information of the background video animation, wherein the first fusion resource information comprises a placeholder, mask information, transparency information, size information and position information of user-defined content, the second fusion resource information comprises a placeholder, mask information, RGB color information, transparency information, size information and position information of a superimposed video animation, and the superimposed video animation is set as an animation effect superimposed on the user-defined content;
a first generation module, configured to generate a first mask layer of the user-defined content according to the first fusion resource information, and generate a second mask layer of the superimposed video animation according to the second fusion resource information, wherein the first mask layer comprises a transparency-channel mask layer and the second mask layer comprises a transparency-channel mask layer and an RGB-channel mask layer;
a second generation module, configured to sequentially insert the first mask layer and the second mask layer at corresponding positions in the video frame sequence, and generate an MP4 file of a target video animation according to the video frame sequence into which the first mask layer and the second mask layer have been inserted;
and a writing module, configured to write the first fusion resource information and the second fusion resource information into a play configuration in the MP4 file.
The invention also provides a device for playing a video animation, comprising:
an acquisition module, configured to acquire target user-defined content and the MP4 file;
a generation module, configured to synthesize the target user-defined content and the first mask layer to generate a target user-defined content image;
and a playing module, configured to sequentially render the video frame sequence, the target user-defined content image and the superimposed video animation according to the play configuration, and play the target video animation.
The invention also provides an electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the method for generating a video animation or the method for playing a video animation described above.
The invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for generating a video animation or the method for playing a video animation as described in any one of the above.
According to the video animation generation and playing method and device, electronic device and storage medium provided by the invention, the superimposed video animation that needs to be overlaid on the user-defined content is added to the background video animation by fusion, in the same way as the user-defined content. Unlike the user-defined content, whose transparency information is stored in a single mask layer, the RGB color information and transparency information of the superimposed video animation are stored in two separate mask layers. A transparency mask layer is first generated from the fusion resource information of the user-defined content and inserted into the video frame sequence; a transparency mask layer and an RGB-channel mask layer are then generated from the fusion resource information of the superimposed video animation and inserted into the video frame sequence. An MP4 file of the target video animation is thereby generated, and a play configuration is created and written into the MP4 file. Because the mask layer of the user-defined content is inserted before the mask layers of the superimposed video animation, the superimposed video animation covers the user-defined content when the target video animation is played, and can therefore be presented in full.
During playback, the user-defined content must be obtained externally, so that different users' personalized content is displayed in the target video animation; the superimposed video animation, by contrast, does not need to be obtained externally but is taken directly from the video frames, so no additional resource files are needed and file size is saved. By adding multiple superimposed video animations in this way, the scheme can be extended without limit, enabling more complex animation effects and improving the diversity of video animations.
Drawings
In order to more clearly illustrate the technical solutions of the invention or of the prior art, the drawings used in the description of the embodiments or of the prior art are briefly described below. It is apparent that the drawings described below show some embodiments of the invention; a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of the effect of a prior-art video animation;
FIG. 2 is a first schematic diagram of the effect of a video animation generated by the video animation generation method provided by the invention;
FIG. 3 is a flow chart of the video animation generation method provided by the invention;
FIG. 4 is a first schematic diagram of a video frame of a video animation generated by the video animation generation method provided by the invention;
FIG. 5 is a second schematic diagram of the effect of a video animation generated by the video animation generation method provided by the invention;
FIG. 6 is a second schematic diagram of a video frame of a video animation generated by the video animation generation method provided by the invention;
FIG. 7 is a flow chart of the video animation playing method provided by the invention;
FIG. 8 is a schematic structural diagram of the video animation generation device provided by the invention;
FIG. 9 is a schematic structural diagram of the video animation playing device provided by the invention;
FIG. 10 is a schematic structural diagram of an electronic device provided by the invention.
Detailed Description
To make the objects, technical solutions and advantages of the invention clearer, the technical solutions of the invention are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are some, but not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of protection of the invention.
It should be noted that, in the description of the embodiments of the invention, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element preceded by "comprising a" does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element. Orientation or positional terms such as "upper" and "lower" are based on the orientations or positional relationships shown in the drawings, are used merely for convenience and simplicity of description, and do not indicate or imply that the apparatus or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the invention. The specific meaning of the above terms in the invention can be understood by those of ordinary skill in the art according to the specific circumstances.
The terms "first", "second" and the like in this specification distinguish between similar objects and do not describe a particular order or sequence. It is to be understood that data so used may be interchanged where appropriate, so that embodiments of the invention can be implemented in orders other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
In the video animation generation and playing method and device, electronic device and storage medium provided by the invention, the video animation may be a virtual gift animation, an account level-up animation, a permission-acquisition animation, and the like, and may be applied in scenarios such as real-time voice chat, online games, stand-alone games, video platforms, live video streaming, live audio streaming, and online shopping.
In each of the above application scenarios there is at least one network architecture, which may include a server side and a plurality of clients. The server side provides the background service for the application scenario, and may be a server, a server cluster, a cloud platform, or the like. A client serves users and may be an application (APP) or software installed on a terminal, a web page running in a terminal browser, or the like. The terminal may be a smartphone, a computer, a tablet, a PDA (Personal Digital Assistant), a wearable device, etc.
Taking real-time voice chat as an example: user A gives a virtual gift to user B through a client; upon receiving the gift-giving event, the server pushes a virtual gift animation play event to the clients in the voice room shared by users A and B, or to all clients currently online; upon receiving the play event, each client triggers its play interface to play the virtual gift animation.
To better understand the video animation generation and playing method, device, electronic device and storage medium provided by the invention, the visual effect presented after user-defined content is inserted into a video animation in the prior art is first described.
FIG. 1 shows the effect of a prior-art video animation, specifically one frame of a video animation into which user-defined content has been fused; the animation may be a virtual gift animation. As shown in FIG. 1, this is a themed gift: the intention is that after user A gives it to user B, a "520" stamp (in Chinese internet slang, "520" sounds like "I love you") is stamped on the avatar of the gift recipient (user B). The paper, the thumbtack and the "520" stamp are preset material of the virtual gift's animation effect; the portrait inside the circular frame on the paper is the user's avatar, which is the user-defined content.
In the prior art, however, the inserted user-defined content is necessarily on the top layer of every video frame of the video animation, so any part of the video animation that overlaps the user-defined content is occluded by it, which affects the presentation of that part. In FIG. 1, the circular frame into which the user avatar is inserted overlaps the "520" stamp; when the gift animation is played, the stamp cannot be fully displayed because the avatar occludes it, and the visual effect of the "520" stamp being pressed onto the avatar cannot be conveyed. With the video animation generation and playing method provided by the invention, the visual effect of the "520" stamp covering user B's avatar can be presented, as shown in FIG. 2.
As shown in fig. 3, the method for generating video animation provided by the invention specifically includes the following steps:
step 310, a sequence of video frames of a background video animation is acquired.
Specifically, the background video animation is the background animation of the final target video animation, for example the video animation in FIG. 1 containing the paper, the thumbtack, the background color and other material. The background video animation achieves its animation effect through multiple video frames, which are arranged in order to form a video frame sequence. The background video animation can also be regarded as the original video animation.
Step 320, acquiring first fusion resource information and second fusion resource information of the background video animation, where the first fusion resource information includes a placeholder, mask information, transparency information, size information and position information of user-defined content, the second fusion resource information includes a placeholder, mask information, RGB color information, transparency information, size information and position information of a superimposed video animation, and the superimposed video animation is set as an animation effect superimposed on the user-defined content.
Specifically, in a video animation development tool such as VAP, other resources can be incorporated into a video animation; these are called fusion resources, that is, resources that need to be inserted into the original video animation, such as user-defined content.
Taking VAP as an example, when a video animation is developed, the information of the resources to be fused into it can be added in a "fusion information" module, so that the generated MP4 file of the target video animation includes the fusion resources and the video animation displayed when the MP4 file is played presents them.
Currently, the "fusion information" module of VAP by default supports adding resources with custom properties, such as the user-defined content described in step 320. In some embodiments, the user-defined content may be a user avatar, a user-uploaded image, a user nickname, text entered by the user, or the like. When the placeholder, transparency information, size information, position information and the like of the user-defined content are entered, a transparency-channel mask layer can be generated and inserted into the background video animation (i.e., the original video animation). The user-defined content is then obtained and rendered before playback starts, so that it is fused into the video animation and presented when the video animation is played.
The placeholder marks the location of the fusion resource; the transparency information is the transparency value of the fusion resource; the size information may be the width and height of the fusion resource; and the position information is the specific coordinates at which the fusion resource is inserted into the video frame. The first mask layer of the user-defined content, i.e. the transparency-channel mask layer, can be determined from this information.
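As a simplified illustration of how a transparency-channel mask layer could be derived from the position, size and transparency information above (a sketch only; in the actual scheme the mask layer is drawn into the MP4's video frames, not kept as a Python structure):

```python
def build_alpha_mask(frame_w, frame_h, position, size, alpha):
    """Derive a transparency-channel mask layer from fusion resource info:
    a row-major grid holding the transparency value inside the resource's
    rectangle and 0.0 (fully transparent) everywhere else."""
    x0, y0 = position
    w, h = size
    return [
        [alpha if (x0 <= x < x0 + w and y0 <= y < y0 + h) else 0.0
         for x in range(frame_w)]
        for y in range(frame_h)
    ]
```

At playback time, the pixels of the externally obtained user-defined content would be weighted by this grid when composited onto the background frame.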
According to the invention, the "fusion information" module is modified to add mask layer type classification for fusion resources, so that the module supports adding not only user-defined content but also preset video animations, such as the superimposed video animation. The superimposed video animation is a video animation that is likewise inserted into the video frame sequence of the background video animation; the superimposed video animation, the background video animation and the user-defined content thus belong to different layers, and the superimposed video animation can be overlaid on the user-defined content to cover it.
Adding the superimposed video animation requires obtaining its placeholder, mask information, RGB color information, transparency information, size information and position information. The RGB color information refers to the RGB values of the fusion resource.
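The fields enumerated above for the two kinds of fusion resource information can be modeled as follows (a sketch; the type and field names are ours, not the patent's):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class UserContentInfo:
    """First fusion resource information: user-defined content."""
    placeholder: str           # marks the insertion point of the resource
    mask_info: str             # mask type, e.g. "black-and-white mask"
    alpha: float               # transparency value
    size: Tuple[int, int]      # (width, height)
    position: Tuple[int, int]  # (x, y) coordinates in the video frame

@dataclass
class OverlayAnimationInfo:
    """Second fusion resource information: the superimposed video animation.
    Carries RGB color information in addition to the common fields."""
    placeholder: str
    mask_info: str             # mask type, e.g. "color mask"
    rgb: Tuple[int, int, int]  # RGB color information
    alpha: float
    size: Tuple[int, int]
    position: Tuple[int, int]
```

The only structural difference between the two is the RGB color information, which is why the superimposed video animation needs an extra RGB-channel mask layer in step 330.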
In addition, the mask information of the fusion resource is acquired, and the mask layer type of the fusion resource is determined from it. The invention provides two mask layer types: one for user-defined content and one for superimposed video animation. The mask layers of the user-defined content and of the superimposed video animation are then generated in step 330.
Step 330, generating a first mask layer of the user-defined content according to the first fusion resource information, and generating a second mask layer of the superimposed video animation according to the second fusion resource information, wherein the first mask layer comprises a transparency-channel mask layer, and the second mask layer comprises a transparency-channel mask layer and an RGB-channel mask layer.
Specifically, the first mask layer and the second mask layer are of different mask layer types. The first mask layer is a transparency-channel mask layer; this type is used for inserting user-defined content, which must be obtained externally at rendering time. The second mask layer comprises a transparency-channel mask layer and an RGB-channel mask layer; this type is used for inserting the superimposed video animation, so that the resources of the superimposed video animation are written into the video frame sequence of the background video animation and, at rendering time, are obtained from the video frames rather than externally, which improves playback efficiency.
In some embodiments, the mask layer used to insert user-defined content may be named "black-and-white mask". Selecting this mask layer type generates a transparency-channel mask layer that stores the transparency values of the user-defined content. When the MP4 file is played, the user-defined content is obtained externally and composited and rendered with the transparency-channel mask layer, so that the user-defined content is fused into the target video animation.
Accordingly, the mask layer used to insert the superimposed video animation may be named "color mask". Selecting this mask layer type generates two mask layers, one transparency-channel mask layer and one RGB-channel mask layer, storing the transparency values and RGB values of the superimposed video animation, respectively. The specific animation effect and content of the superimposed video animation are inserted into the video frame sequence; when the MP4 file is played, they are extracted directly from the video frames for rendering, without external acquisition, so that the superimposed video animation is fused into the target video animation and its layer relationship with the background video animation and the user-defined content is embodied.
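Using the exemplary mask type names above, the mapping from a fusion resource's mask information to the mask layers generated in step 330 can be sketched as:

```python
def mask_layers_for(mask_info):
    """Map a fusion resource's mask information to the mask layers generated
    in step 330, using the exemplary mask type names from the description."""
    if mask_info == "black-and-white mask":   # user-defined content
        return ["alpha"]                      # one transparency-channel mask layer
    if mask_info == "color mask":             # superimposed video animation
        return ["alpha", "rgb"]               # transparency-channel + RGB-channel layers
    raise ValueError("unknown mask type: " + mask_info)
```

The asymmetry reflects the design choice: user-defined content brings its own RGB pixels at playback time, so only its transparency needs to be baked into the frames, while the superimposed animation must carry both color and transparency in the frames.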
It should be noted that the above naming of the two mask layers is merely exemplary, and the invention is not limited thereto.
FIG. 4 is a schematic diagram of a video frame of a video animation generated by the video animation generation method provided by the invention. It can be seen that the transparency-channel mask layer of the user-defined content (the circular frame in FIG. 4) and the transparency-channel and RGB-channel mask layers of the superimposed video animation (the "520" stamp in FIG. 4) are inserted into the video frame of the background video animation (the paper and thumbtack in FIG. 4). It can also be seen that, as with the background video animation, the specific resources, RGB color information and transparency information of the superimposed video animation are written into the video frame and are obtained directly from the video frame during playback.
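Since each mask layer occupies a known rectangle of the frame, a renderer can recover its pixels at playback time from the recorded position and size information; a minimal sketch (the frame is modeled as a row-major grid of pixel values, not real decoded video):

```python
def crop_region(frame, position, size):
    """Extract the rectangle of a mask layer from a video frame, given the
    position and size information recorded in the fusion resource info."""
    x0, y0 = position
    w, h = size
    return [row[x0:x0 + w] for row in frame[y0:y0 + h]]
```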
Step 340, sequentially inserting the first mask layer and the second mask layer to corresponding positions in the video frame sequence, and generating an MP4 file of the target video animation according to the video frame sequence in which the first mask layer and the second mask layer are inserted.
Specifically, the generated transparency channel mask layer of the user-defined content, together with the transparency channel mask layer and the RGB channel mask layer of the superimposed video animation, is sequentially inserted into the corresponding positions in the video frame sequence, and the MP4 file of the target video animation is generated.
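A toy sketch of this insertion step (assuming, as in VAP-style layouts, that mask layers are packed into regions beside the background content within each frame; the layout is an assumption made for illustration):

```python
import numpy as np

def insert_masks(background_frame: np.ndarray, mask_layers: list) -> np.ndarray:
    """Pack mask layers to the right of the background frame, in order:
    first the user-defined content's transparency mask layer, then the
    superimposed animation's transparency and RGB mask layers."""
    return np.concatenate([background_frame] + mask_layers, axis=1)

h = 2
background = np.zeros((h, 4, 3), dtype=np.uint8)         # background region
user_alpha = np.full((h, 2, 3), 255, dtype=np.uint8)     # first mask layer
overlay_alpha = np.full((h, 2, 3), 128, dtype=np.uint8)  # second mask layer, alpha part
overlay_rgb = np.full((h, 2, 3), 64, dtype=np.uint8)     # second mask layer, RGB part
packed = insert_masks(background, [user_alpha, overlay_alpha, overlay_rgb])
```

Because every mask layer lives inside the same frame, the player can recover them with plain array slicing at the positions recorded in the play configuration.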
In some embodiments, the MP4 file of the target video animation can be generated using the key frame technique, thereby effectively compressing the size of the MP4 file.
In some embodiments, the target video animation may be a virtual gift animation.
Step 350, writing the first fusion resource information and the second fusion resource information into the play configuration in the MP4 file.
Specifically, the MP4 file includes a play configuration, which stores the configuration required for playing the target video animation. In addition to information related to the background video animation, the first fusion resource information related to the user-defined content and the second fusion resource information related to the superimposed video animation need to be written into the play configuration, so that when the target video animation is played, the user-defined content and the superimposed video animation can be rendered and played according to the play configuration, and the target video animation presents the combination of the background video animation, the user-defined content, and the superimposed video animation.
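A hedged sketch of what such a play configuration might contain (every field name here is an illustrative assumption, not the actual VAP or MP4 schema):

```python
import json

# Illustrative play configuration; all field names are assumptions for this sketch.
play_config = {
    "background": {"width": 720, "height": 1280, "fps": 25},
    "fusion_resources": [
        {   # first fusion resource information: user-defined content
            "tag": "user_avatar",
            "mask_type": "black_and_white_mask",  # transparency channel mask layer only
            "fetch": "external",                  # obtained from outside at play time
            "frames": [{"index": 0, "rect": [100, 200, 300, 300]}],
        },
        {   # second fusion resource information: superimposed video animation
            "tag": "stamp_520",
            "mask_type": "color_mask",            # transparency + RGB channel mask layers
            "fetch": "in_frame",                  # read directly from the video frames
            "frames": [{"index": 0, "rect": [100, 200, 300, 300]}],
        },
    ],
}
config_json = json.dumps(play_config)  # serialized form written into the MP4 file
```

The list order of `fusion_resources` mirrors the setting order of the fusion resource information, which, as described below, determines the layering and rendering order.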
According to the method for generating a video animation of the present invention, the superimposed video animation to be overlaid on the user-defined content is, like the user-defined content, added to the video animation as a fusion resource. Unlike the user-defined content, whose transparency information is stored in a single mask layer, the superimposed video animation uses two mask layers to store its RGB color information and transparency information respectively. A transparency mask layer is first generated from the fusion resource information of the user-defined content and inserted into the video frame sequence; a transparency mask layer and an RGB channel mask layer are then generated from the fusion resource information of the superimposed video animation and inserted into the video frame sequence, after which the MP4 file of the target video animation is generated, and the play configuration is created and written into the MP4 file. Because the mask layer of the user-defined content is inserted before the mask layers of the superimposed video animation, the superimposed video animation covers the user-defined content when the target video animation is played, so that the superimposed video animation is presented in full.
During playback, the user-defined content needs to be obtained externally, so that personalized content of different users is displayed in the target video animation; the superimposed video animation, by contrast, does not need to be obtained externally and is read directly from the video frames, so no additional resource files are required and file size is saved. Moreover, the method can add multiple superimposed video animations and thus be extended without limit, achieving more complex animation effects and increasing the diversity of video animations.
In some embodiments, the hierarchical relationship between the user-defined content and the superimposed video animation in the target video animation is determined by the setting sequence of the first fusion resource information and the second fusion resource information.
Specifically, in the target video animation, the layer of a fusion resource that is set earlier lies below the layer of a fusion resource that is set later. That is, if the superimposed video animation needs to "cover" the user-defined content, the user-defined content is set first and the superimposed video animation is set afterwards; and vice versa.
Moreover, the setting order also determines the rendering order of the fusion resources.
In some embodiments, the method further comprises: determining the rendering sequence of the user-defined content and the superimposed video animation according to the setting sequence; the rendering order is written into the play configuration.
When the target video animation is played, the fusion resource set earlier is rendered first, and the fusion resource set later is rendered afterwards. Thus, when the target video animation is rendered, the user-defined content and the superimposed video animation are rendered in this order, and the superimposed video animation, being rendered later than the user-defined content, "covers" the user-defined content.
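The ordering rule can be sketched as follows (a toy compositor; the `set_order` field is an assumption standing in for the order in which the fusion resources were set):

```python
def draw_order(background: str, fusion_resources: list) -> list:
    """Return the bottom-to-top draw order: the background first, then
    the fusion resources in the order they were set, so that a later-set
    resource (the superimposed animation) covers an earlier-set one
    (the user-defined content)."""
    ordered = sorted(fusion_resources, key=lambda r: r["set_order"])
    return [background] + [r["name"] for r in ordered]

resources = [
    {"name": "superimposed_animation", "set_order": 2},
    {"name": "user_defined_content", "set_order": 1},
]
order = draw_order("background", resources)
```

Rendering in this order makes layering an emergent property of the configuration, so no explicit z-index needs to be stored.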
It should be noted that, with the method for generating a video animation of the present invention, any number of other video animations can be superimposed on the original video animation to achieve more complex animation effects; the resource information of these video animations is inserted into the video frame sequence of the original video animation to generate the MP4 file, without adding extra resource files. Moreover, since the MP4 file is generated in combination with the key frame technique, even when the resource information of other video animations is inserted into the video frame sequence of the original video animation, the size of the MP4 file can be effectively controlled, so that operating efficiency is not significantly affected.
Taking a video animation in the VAP format as an example, the "fusion information" module of the VAP video animation development tool adds user-defined content or superimposed video animations in sequence according to the presentation requirements of the animation effect.
As shown in fig. 5, a "small circle" is superimposed on the "520 stamp"; accordingly, the fusion resource information of the "small circle" is set in the "fusion information" module only after that of the "520 stamp". This information specifically includes placeholders, mask information, RGB color information, transparency information, size information, position information, and the like. The mask information includes the mask layer type, here a "color mask", and the coordinate range of the "small circle" position information coincides with that of the "520 stamp", so that the effect of the "small circle" covering the "520 stamp" can be presented.
As shown in fig. 5, a "large circle" is superimposed on the background video animation after the "small circle" appears; similarly, the fusion resource information of the "large circle" is set after that of the "small circle" in the "fusion information" module, and the "color mask" mask layer type is selected. The animation effects and content of both the "small circle" and the "large circle" are thereby inserted into the video frames, as shown in fig. 6.
It can be understood that if the visual effect of the superimposed video animation "covering" the user-defined content is to be presented, the fusion resource information of the user-defined content is set first, and then the fusion resource information of the superimposed video animation is set. Specifically, the mask layer type of the user-defined content is set to "black-and-white mask", i.e., it includes only one transparency channel mask layer, and the mask layer type of the superimposed video animation is set to "color mask", i.e., it includes one transparency channel mask layer and one RGB channel mask layer.
Conversely, if the visual effect of the user-defined content "covering" the superimposed video animation is to be presented, the fusion resource information of the superimposed video animation is set first, and then the fusion resource information of the user-defined content is set.
Similarly, the hierarchical relationship between superimposed video animations themselves is determined by the order in which their fusion resources are set, as is their rendering order.
Correspondingly, the present invention further provides a method for playing a video animation, which can be applied to a client. As shown in fig. 7, the method specifically comprises the following steps:
Step 710, obtaining the target user-defined content and the MP4 file of the target video animation.
Step 720, synthesizing the target user-defined content and the first mask layer to generate a target user-defined content image.
Step 730, sequentially rendering the video frame sequence, the target user-defined content image, and the superimposed video animation according to the play configuration, and playing the target video animation.
Specifically, the target user-defined content refers to the user-defined content that is actually to be fused into the target video animation, and it can be determined according to the target video animation playing event triggered by the user. For example, in an application scenario of virtual gift gifting, when user A gives user B a virtual gift, the triggered virtual gift animation playing event causes user B's avatar to be fused into the played virtual gift animation; when user A gives user C a virtual gift, the triggered playing event causes user C's avatar to be fused into the played virtual gift animation.
In some embodiments, the step of obtaining the target user-defined content may include: receiving a download address of the target user-defined content sent by a server; and downloading the target user-defined content according to the download address.
Taking as an example the scenario of presenting a virtual gift animation in a real-time voice chat room: when user A in the voice room gives user B a virtual gift, a virtual gift gifting event is sent to the server. The server obtains the user information (such as the user ID) and virtual gift information (such as the virtual gift ID) of the gift receiver (i.e., user B), and obtains the download address of user B's avatar. The server then pushes a virtual gift animation playing event, specifically the virtual gift ID and user B's avatar download address, to all clients in the voice room. In response to this event, each client determines and obtains the MP4 file of the virtual gift animation from the virtual gift ID, downloads user B's avatar via the download address, and, according to the play configuration in the MP4 file, synthesizes the avatar with the corresponding mask layer in the MP4 file to generate the user avatar image to be fused into the virtual gift animation. The client then sequentially renders the background animation effect, user B's avatar image, and the superimposed animation effect in the virtual gift animation according to the play configuration, and plays the complete virtual gift animation.
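The gifting flow described above can be sketched as follows (field names such as `gift_id` and `avatar_url` are assumptions made for illustration, not an actual protocol):

```python
def build_gift_event(sender_id: str, receiver_id: str,
                     gift_id: str, avatar_url: str) -> dict:
    """Server side: package the virtual gift animation playing event
    that is pushed to all clients in the voice room."""
    return {"type": "play_virtual_gift", "sender": sender_id,
            "receiver": receiver_id, "gift_id": gift_id,
            "avatar_url": avatar_url}

def handle_gift_event(event: dict, mp4_lookup: dict, download) -> tuple:
    """Client side: resolve the MP4 file by the virtual gift ID and
    fetch the receiver's avatar from the pushed download address."""
    mp4_file = mp4_lookup[event["gift_id"]]
    avatar = download(event["avatar_url"])
    return mp4_file, avatar

event = build_gift_event("userA", "userB", "gift_520",
                         "https://cdn.example.com/avatars/userB.png")
mp4, avatar = handle_gift_event(event, {"gift_520": "gift_520.mp4"},
                                lambda url: f"bytes_of({url})")
```

Pushing only the gift ID and the avatar download address keeps the event payload small; the heavy MP4 resource is resolved locally by each client.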
It can be understood that the originally downloaded user avatar image is square, while the mask set in the virtual gift animation is circular; through synthesis, a circular user avatar image can therefore be generated.
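The synthesis can be sketched with numpy (the circular mask here is generated locally for illustration; in practice it would come from the first mask layer in the MP4 file):

```python
import numpy as np

def synthesize_avatar(avatar_rgb: np.ndarray, alpha_mask: np.ndarray) -> np.ndarray:
    """Combine a square avatar of shape (H, W, 3) with a transparency
    channel mask layer of shape (H, W) to produce an RGBA image whose
    visible region follows the mask, e.g. a circular user avatar."""
    return np.dstack([avatar_rgb, alpha_mask]).astype(np.uint8)

# Build a toy circular mask for a 4x4 avatar.
size = 4
yy, xx = np.mgrid[:size, :size]
center = (size - 1) / 2
circle = (((yy - center) ** 2 + (xx - center) ** 2) <= center ** 2) * 255
avatar = np.full((size, size, 3), 200, dtype=np.uint8)  # flat grey "avatar"
result = synthesize_avatar(avatar, circle)
```

The mask simply becomes the alpha channel of the composed image, so any mask shape stored in the animation (circle, star, etc.) works without changing the synthesis code.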
The video animation generation device and the video animation playing device provided by the present invention are described below. The video animation generation device described below and the video animation generation method described above may be referred to in correspondence with each other, as may the video animation playing device described below and the video animation playing method described above.
As shown in fig. 8, the present invention further provides a device for generating video animation, including:
a first obtaining module 810, configured to obtain a video frame sequence of a background video animation;
a second obtaining module 820, configured to obtain first fusion resource information and second fusion resource information of the background video animation, where the first fusion resource information includes placeholders, mask information, transparency information, size information, and position information of user-defined content, the second fusion resource information includes placeholders, mask information, RGB color information, transparency information, size information, and position information of a superimposed video animation, and the superimposed video animation is set as an animation effect superimposed on the user-defined content;
a first generating module 830, configured to generate a first mask layer of the user-defined content according to the first fusion resource information, and generate a second mask layer of the superimposed video animation according to the second fusion resource information, where the first mask layer includes a transparency channel mask layer, and the second mask layer includes a transparency channel mask layer and an RGB channel mask layer;
a second generating module 840, configured to sequentially insert the first mask layer and the second mask layer into corresponding positions in the video frame sequence, and generate an MP4 file of the target video animation according to the video frame sequence into which the first mask layer and the second mask layer have been inserted; and
a writing module 850, configured to write the first fusion resource information and the second fusion resource information into the play configuration in the MP4 file.
As shown in fig. 9, the present invention further provides a playing device for video animation, including:
an obtaining module 910, configured to obtain the target user-defined content and the MP4 file;
a generating module 920, configured to synthesize the target user-defined content and the first mask layer to generate a target user-defined content image; and
a playing module 930, configured to sequentially render the video frame sequence, the target user-defined content image, and the superimposed video animation according to the play configuration, and play the target video animation.
Fig. 10 illustrates a physical structure diagram of an electronic device, as shown in fig. 10, which may include: a processor 1010, a communication interface (Communications Interface) 1020, a memory 1030, and a communication bus 1040, wherein the processor 1010, the communication interface 1020, and the memory 1030 communicate with each other via the communication bus 1040. Processor 1010 may invoke logic instructions in memory 1030 to perform a method of generating video animation, the method comprising:
acquiring a video frame sequence of a background video animation;
acquiring first fusion resource information and second fusion resource information of the background video animation, wherein the first fusion resource information comprises placeholders, mask information, transparency information, size information, and position information of user-defined content, the second fusion resource information comprises placeholders, mask information, RGB color information, transparency information, size information, and position information of a superimposed video animation, and the superimposed video animation is set as an animation effect superimposed on the user-defined content;
generating a first mask layer of the user-defined content according to the first fusion resource information, and generating a second mask layer of the superimposed video animation according to the second fusion resource information, wherein the first mask layer comprises a transparency channel mask layer, and the second mask layer comprises a transparency channel mask layer and an RGB channel mask layer;
sequentially inserting the first mask layer and the second mask layer into corresponding positions in the video frame sequence, and generating an MP4 file of the target video animation according to the video frame sequence into which the first mask layer and the second mask layer have been inserted;
writing the first fusion resource information and the second fusion resource information into play configuration in the MP4 file;
or to perform a method for playing a video animation, the method comprising:
acquiring target user-defined content and the MP4 file generated according to any one of the above methods;
synthesizing the target user-defined content and the first mask layer to generate a target user-defined content image;
and according to the playing configuration, sequentially rendering the video frame sequence, the target user-defined content image and the superimposed video animation, and playing the target video animation.
Furthermore, the logic instructions in the memory 1030 described above may be implemented in the form of software functional units and, when sold or used as a stand-alone product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and comprises several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present invention further provides a computer program product, the computer program product comprising a computer program that can be stored on a non-transitory computer-readable storage medium; when the computer program is executed by a processor, the computer can execute the method for generating a video animation provided by the above methods, the method comprising:
Acquiring a video frame sequence of a background video animation;
acquiring first fusion resource information and second fusion resource information of the background video animation, wherein the first fusion resource information comprises placeholders, mask information, transparency information, size information, and position information of user-defined content, the second fusion resource information comprises placeholders, mask information, RGB color information, transparency information, size information, and position information of a superimposed video animation, and the superimposed video animation is set as an animation effect superimposed on the user-defined content;
generating a first mask layer of the user-defined content according to the first fusion resource information, and generating a second mask layer of the superimposed video animation according to the second fusion resource information, wherein the first mask layer comprises a transparency channel mask layer, and the second mask layer comprises a transparency channel mask layer and an RGB channel mask layer;
sequentially inserting the first mask layer and the second mask layer into corresponding positions in the video frame sequence, and generating an MP4 file of the target video animation according to the video frame sequence into which the first mask layer and the second mask layer have been inserted;
Writing the first fusion resource information and the second fusion resource information into play configuration in the MP4 file;
or execute a method for playing a video animation, the method comprising:
acquiring target user-defined content and the MP4 file generated according to any one of the above methods;
synthesizing the target user-defined content and the first mask layer to generate a target user-defined content image;
and according to the playing configuration, sequentially rendering the video frame sequence, the target user-defined content image and the superimposed video animation, and playing the target video animation.
In yet another aspect, the present invention further provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for generating a video animation provided by the above methods, the method comprising:
acquiring a video frame sequence of a background video animation;
acquiring first fusion resource information and second fusion resource information of the background video animation, wherein the first fusion resource information comprises placeholders, mask information, transparency information, size information, and position information of user-defined content, the second fusion resource information comprises placeholders, mask information, RGB color information, transparency information, size information, and position information of a superimposed video animation, and the superimposed video animation is set as an animation effect superimposed on the user-defined content;
generating a first mask layer of the user-defined content according to the first fusion resource information, and generating a second mask layer of the superimposed video animation according to the second fusion resource information, wherein the first mask layer comprises a transparency channel mask layer, and the second mask layer comprises a transparency channel mask layer and an RGB channel mask layer;
sequentially inserting the first mask layer and the second mask layer into corresponding positions in the video frame sequence, and generating an MP4 file of the target video animation according to the video frame sequence into which the first mask layer and the second mask layer have been inserted;
writing the first fusion resource information and the second fusion resource information into play configuration in the MP4 file;
or implements a method for playing a video animation, the method comprising:
acquiring target user-defined content and the MP4 file generated according to any one of the above methods;
synthesizing the target user-defined content and the first mask layer to generate a target user-defined content image;
and according to the playing configuration, sequentially rendering the video frame sequence, the target user-defined content image and the superimposed video animation, and playing the target video animation.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement it without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or, of course, by means of hardware. Based on this understanding, the above technical solution, in essence, or the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk, or an optical disk, and which comprises several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the method described in each embodiment or in some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for generating video animation, comprising:
acquiring a video frame sequence of a background video animation;
acquiring first fusion resource information and second fusion resource information of the background video animation, wherein the first fusion resource information comprises placeholders, mask information, transparency information, size information, position information, and time frame information of user-defined content, the second fusion resource information comprises placeholders, mask information, RGB color information, transparency information, size information, position information, and time frame information of a superimposed video animation, and the superimposed video animation is set as an animation effect superimposed on the user-defined content;
generating a first mask layer of the user-defined content according to the first fusion resource information, and generating a second mask layer of the superimposed video animation according to the second fusion resource information, wherein the first mask layer comprises a transparency channel mask layer, and the second mask layer comprises a transparency channel mask layer and an RGB channel mask layer;
sequentially inserting the first mask layer and the second mask layer into corresponding positions in the video frame sequence, and generating an MP4 file of a target video animation according to the video frame sequence into which the first mask layer and the second mask layer have been inserted;
and writing the first fusion resource information and the second fusion resource information into play configuration in the MP4 file.
2. The method for generating a video animation according to claim 1, wherein the hierarchical relationship between the user-defined content and the superimposed video animation in the target video animation is determined by the setting order of the first fusion resource information and the second fusion resource information;
the method further comprises the steps of:
determining the rendering sequence of the user-defined content and the superimposed video animation according to the setting sequence;
Writing the rendering order into the play configuration.
3. The method of claim 1, wherein the user-defined content comprises at least one of a user avatar and a user-uploaded image.
4. The method of claim 1, wherein the target video animation is a virtual gift animation.
5. A method for playing video animation, comprising:
acquiring target user-defined content and the MP4 file generated by the method for generating a video animation according to any one of claims 1 to 4;
synthesizing the target user-defined content and the first mask layer to generate a target user-defined content image;
and according to the playing configuration, sequentially rendering the video frame sequence, the target user-defined content image and the superimposed video animation, and playing the target video animation.
6. The method for playing a video animation according to claim 5, wherein the step of obtaining the target user-defined content comprises:
receiving a download address of the target user-defined content sent by a server;
and downloading the target user-defined content according to the download address.
7. A video animation generation apparatus, comprising:
the first acquisition module is used for acquiring a video frame sequence of the background video animation;
the second acquisition module is used for acquiring first fusion resource information and second fusion resource information of the background video animation, wherein the first fusion resource information comprises placeholders, mask information, transparency information, size information, and position information of user-defined content, the second fusion resource information comprises placeholders, mask information, RGB color information, transparency information, size information, and position information of a superimposed video animation, and the superimposed video animation is set as an animation effect superimposed on the user-defined content;
the first generation module is used for generating a first mask layer of the user-defined content according to the first fusion resource information and generating a second mask layer of the superimposed video animation according to the second fusion resource information, wherein the first mask layer comprises a transparency channel mask layer and the second mask layer comprises a transparency channel mask layer and an RGB channel mask layer;
the second generation module is used for sequentially inserting the first mask layer and the second mask layer into corresponding positions in the video frame sequence, and generating an MP4 file of the target video animation according to the video frame sequence into which the first mask layer and the second mask layer have been inserted;
and the writing module is used for writing the first fusion resource information and the second fusion resource information into the playing configuration in the MP4 file.
8. A video animation playback apparatus, comprising:
an acquisition module, configured to acquire target user-defined content and the MP4 file generated by the method for generating a video animation according to any one of claims 1 to 4;
a generation module, configured to synthesize the target user-defined content and the first mask layer to generate a target user-defined content image; and
a playing module, configured to sequentially render the video frame sequence, the target user-defined content image, and the superimposed video animation according to the play configuration, and play the target video animation.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the method for generating a video animation according to any one of claims 1 to 4 or the method for playing a video animation according to claim 5 or 6.
10. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method for generating a video animation according to any one of claims 1 to 4 or the method for playing a video animation according to claim 5 or 6.
CN202310824531.1A 2023-07-06 2023-07-06 Video animation generation and playing method and device, electronic equipment and storage medium Pending CN116781993A (en)

Priority Applications (1)

- CN202310824531.1A (priority date 2023-07-06, filed 2023-07-06): Video animation generation and playing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

- CN202310824531.1A (priority date 2023-07-06, filed 2023-07-06): Video animation generation and playing method and device, electronic equipment and storage medium

Publications (1)

- CN116781993A, published 2023-09-19

Family

- ID=87992886

Family Applications (1)

- CN202310824531.1A: Video animation generation and playing method and device, electronic equipment and storage medium

Country Status (1)

- CN: CN116781993A


Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination