CN115988255A - Special effect generation method and device, electronic equipment and storage medium


Info

Publication number
CN115988255A
Authority
CN
China
Prior art keywords
target
special effect
image
user
parameter
Prior art date
Legal status
Pending
Application number
CN202211668454.7A
Other languages
Chinese (zh)
Inventor
李贝
覃裕文
刘高
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202211668454.7A
Publication of CN115988255A
Legal status: Pending

Abstract

The embodiments of the disclosure provide a special effect generation method and apparatus, an electronic device, and a storage medium. A target special effect template is loaded, where the target special effect template includes a special effect material placeholder map and is used for representing the special effect style of a special effect image based on the placeholder map. Associated data of a target user is acquired, and a corresponding target image is generated from the associated data, where the associated data at least indicates an associated user having an association relationship with the target user, and the target image at least represents identification information of the associated user. The placeholder map is then replaced with the target image to generate a target special effect image. By combining the special effect template with the user's associated data, an image is generated for the associated user having an association relationship with the target user, and loading this image into the target special effect template yields a special effect image with an interactive attribute, so that different special effect images are generated for different users' association relationships.

Description

Special effect generation method and device, electronic equipment and storage medium
Technical Field
The embodiments of the disclosure relate to the field of Internet technologies, and in particular, to a special effect generation method and device, an electronic device, and a storage medium.
Background
Current video applications (APPs) usually provide users with various video special effects; with these effects, users add special effect images to their videos, improving the videos' visual expressiveness and creative quality.
However, in actual application, the video special effects provided in the prior art usually must be selected manually by the user based on personal preference, so the generated video special effects have problems such as a fixed style and a lack of interactive attributes, which affects the propagation effect of the video.
Disclosure of Invention
The embodiments of the disclosure provide a special effect generation method and apparatus, an electronic device, and a storage medium, so as to solve the problems that video special effects have a fixed style and lack interactive attributes.
In a first aspect, an embodiment of the present disclosure provides a special effect generating method, including:
loading a target special effect template, where the target special effect template includes a special effect material placeholder map, and the target special effect template is used for representing the special effect style of a special effect image based on the placeholder map; acquiring associated data of a target user, and generating a corresponding target image according to the associated data, where the associated data at least indicates an associated user having an association relationship with the target user, and the target image at least represents identification information of the associated user; and replacing the special effect material placeholder map with the target image to generate a target special effect image.
In a second aspect, an embodiment of the present disclosure provides a special effect generating apparatus, including:
an acquisition module, configured to load a target special effect template, where the target special effect template includes a special effect material placeholder map, and the target special effect template is used for representing the special effect style of a special effect image based on the placeholder map;
a processing module, configured to acquire associated data of a target user and generate a corresponding target image according to the associated data, where the associated data at least indicates an associated user having an association relationship with the target user, and the target image at least represents identification information of the associated user;
and a generating module, configured to replace the special effect material placeholder map with the target image to generate a target special effect image.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory to implement the special effects generation method as described above in the first aspect and various possible designs of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium, where computer-executable instructions are stored, and when a processor executes the computer-executable instructions, the special effect generation method according to the first aspect and various possible designs of the first aspect are implemented.
In a fifth aspect, embodiments of the present disclosure provide a computer program product comprising a computer program that, when executed by a processor, implements the special effects generation method as described above in the first aspect and in various possible designs of the first aspect.
According to the special effect generation method and apparatus, electronic device, and storage medium provided by the embodiments, a target special effect template is loaded, where the target special effect template includes a special effect material placeholder map and is used for representing the special effect style of a special effect image based on the placeholder map; associated data of a target user is acquired, and a corresponding target image is generated from the associated data, where the associated data at least indicates an associated user having an association relationship with the target user, and the target image at least represents identification information of the associated user; and the placeholder map is replaced with the target image to generate a target special effect image. By combining the special effect template with the user's associated data, an image is generated for the associated user having an association relationship with the target user and loaded into the target special effect template, so that a special effect image with an interactive attribute is obtained, different special effect images are generated for different users' association relationships, and the interactive attribute of videos using the special effect image is improved.
Drawings
To more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are some embodiments of the present disclosure, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is an application scenario diagram of a special effect generation method according to an embodiment of the present disclosure;
Fig. 2 is a first flowchart of a special effect generation method according to an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of a target special effect template provided by an embodiment of the present disclosure;
Fig. 4 is a flowchart of the implementation steps of acquiring the associated data of the target user in step S102;
Fig. 5 is a flowchart of a specific implementation of step S1022 in the embodiment shown in fig. 4;
Fig. 6 is a schematic diagram of a process for generating a target image according to an embodiment of the present disclosure;
Fig. 7 is a second flowchart of a special effect generation method according to an embodiment of the disclosure;
Fig. 8 is a flowchart of a specific implementation of step S204 in the embodiment shown in fig. 7;
Fig. 9 is a schematic diagram of a target image provided by an embodiment of the present disclosure;
Fig. 10 is a flowchart of a specific implementation of step S206 in the embodiment shown in fig. 7;
Fig. 11 is a diagram illustrating a rotated image sequence frame according to an embodiment of the disclosure;
Fig. 12 is a block diagram of a special effect generation apparatus according to an embodiment of the present disclosure;
Fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
Fig. 14 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) referred to in the present application are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of the related data need to comply with the relevant laws and regulations and standards of the relevant country and region, and are provided with corresponding operation entrances for the user to choose authorization or denial.
The following explains an application scenario of the embodiment of the present disclosure:
Fig. 1 is an application scenario diagram of a special effect generation method provided in an embodiment of the present disclosure. The special effect generation method may be applied to scenarios such as video editing and live video streaming. Specifically, as shown in fig. 1, the method may be applied to a terminal device or a server. Taking a terminal device as an example, the terminal device may be a smartphone running an application client for video editing; the client includes a video editing page, and different special effect props are provided in the video editing page, for example the special effect prop #1 shown in the figure.
In the prior art, the special effect props provided to a user in an application client are usually maps with a fixed style that the user must select manually based on personal preference, for example the "cloud" map selected by the user in fig. 1. Because the pattern is fixed and uniform, a special effect image generated by such a prop cannot resonate with viewers, so the resulting video work lacks interactive attributes and its propagation effect suffers. The embodiments of the disclosure provide a special effect generation method to solve this problem.
Referring to fig. 2, fig. 2 is a first flowchart of a special effect generation method according to an embodiment of the disclosure. The method of this embodiment may be applied to a terminal device or a server; this embodiment takes the terminal device as the execution body. The special effect generation method includes:
step S101: and loading a target special effect template, wherein the target special effect template comprises a special effect material occupation bitmap, and the target special effect template is used for representing a special effect pattern of a special effect image based on the special effect material occupation bitmap.
Exemplarily, referring to the application scenario shown in fig. 1, the terminal device runs an application client that displays a video editing interface; in response to a user's trigger operation on a special effect component in the interface, the device acquires the data for implementing that special effect style, namely the target special effect template. Illustratively, the target special effect template may include description information of the special effect style and a special effect material placeholder map, and the template generates a corresponding special effect image based on the special effect material corresponding to the placeholder map. Fig. 3 is a schematic diagram of a target special effect template provided by an embodiment of the present disclosure; as shown in fig. 3, its content is a number of "planets" running on the same orbit. The placeholder map only occupies a position; in subsequent steps it is replaced with other pictures, realizing dynamic generation of special effect material. Referring to fig. 3, the template contains four "planet" images; the special effect generated by the target special effect template is the special effect image. The position, size, orbit, motion rule, and so on of the "planets" in the template are collectively referred to as the special effect style of the special effect image, that is, the special effect style can be represented by the target special effect template.
Further, the target special effect template is pre-generated special effect data, which may be stored in advance on the local terminal device or on a server; the terminal device may obtain various special effect templates, including the target special effect template, by downloading the client program or by accessing the corresponding server, which is not described in detail here.
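To make the template structure concrete, the following is a minimal Python sketch of the data a target special effect template might carry, collecting the four parameters described later in this disclosure; all field names and types are illustrative assumptions, not the actual template format.

```python
# Hypothetical shape of a target special effect template; field names are
# assumptions based on the four parameters described in this disclosure.
from dataclasses import dataclass

@dataclass
class EffectTemplate:
    style_description: str               # e.g. "planets running on a shared orbit"
    placeholder_paths: list[str]         # the special effect material placeholder maps
    relation_category: str               # first parameter: category of association relationship
    target_image_size: tuple[int, int]   # second parameter: (width, height) of each material
    sequence_frame_count: int            # third parameter: number of frames per material
    frame_interval_ms: int               # frequency sub-parameter of the third parameter
    display_angle_deg: float             # fourth parameter: display angle of the placeholder
```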
Step S102: acquiring the associated data of the target user, and generating a corresponding target image according to the associated data, wherein the associated data is at least used for indicating the associated user having an association relation with the target user, and the target image is at least used for representing the identification information of the associated user.
Exemplarily, after the target special effect template is loaded, the associated data of the target user is acquired. The target user is a user identity registered on the virtual platform corresponding to the application; in one possible implementation, the login user of the application client running on the terminal device is the target user, which is simply the user identity of the terminal device's user. The associated data of the target user is the data of associated users who have an association relationship with the target user on the platform corresponding to the application. An association relationship is a virtual relationship constructed on that platform. For example, if user #1 and user #2 are friends on a short-video platform, one type of association relationship exists between them; if user #3 and user #4 follow each other on a social platform, another type of association relationship exists between them. Therefore, by acquiring the target user's associated data, a user having a certain association relationship with the target user, that is, an associated user, can be determined.
More specifically, the associated data may be, for example, the avatar, nickname, or signature of the target user's associated users, and may be set as needed. The associated data may be stored on the server; with the necessary user authorization, the terminal device may obtain it based on a user instruction. The specific implementation is known in the prior art and is not repeated here.
In a possible implementation, the target special effect template further includes a first parameter, and the first parameter represents the category of the association relationship. As shown in fig. 4, the specific implementation of acquiring the associated data of the target user includes:
step S1021: and acquiring a first interaction list from the server according to the first parameter, wherein the first interaction list is used for indicating the associated users having the association relation of the target category with the target user.
Step S1022: and obtaining the associated data according to the first interaction list.
Illustratively, the first interaction list is list information generated from interaction behaviors between users. Depending on the category of the association relationship, the first interaction list may be, for example, a "friend list", a "follow list", or a "like list". A "friend list" corresponds to a bidirectional association relationship, while a "follow list" or "like list" corresponds to a unidirectional one. Taking the "follow list" as an example, the first interaction list may indicate the associated users whom the target user follows, or it may indicate the associated users who follow the target user.
Further, the first interaction list has a determined mapping relationship with the first parameter. In one possible implementation, the first parameter includes a category identifier representing the category of the association relationship, and the corresponding first interaction list is obtained from the server according to that identifier. Specifically, for example, when field A in the first parameter is S1, the target user's "friend list" is acquired from the server; when field A is S2, the "follow list" is acquired from the server; and when field A is S3, the "fan list" (follower list) is acquired from the server.
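As a minimal sketch of the mapping just described, the snippet below routes the hypothetical field A of the first parameter to a server-side list; the endpoint layout and the client object are assumptions for illustration only.

```python
# Map the "A field" of the first parameter to an interaction-list endpoint.
# The S1/S2/S3 values follow the example above; endpoint names are assumed.
LIST_BY_CATEGORY = {
    "S1": "friends",    # bidirectional association relationship
    "S2": "following",  # unidirectional: users the target user follows
    "S3": "followers",  # unidirectional: users who follow the target user
}

def fetch_first_interaction_list(client, user_id: str, first_parameter: dict) -> list[dict]:
    category = LIST_BY_CATEGORY[first_parameter["A"]]
    # `client.get` stands in for whatever transport the application actually uses.
    return client.get(f"/users/{user_id}/{category}")
```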
In this embodiment, the first interaction list is obtained through the first parameter that characterizes the category of the association relationship. Because the list corresponds to a specific category of virtual relationship, interactive users of that category are screened out, which enables content control over the special effect image, improves its content flexibility, and strengthens its interactive attribute.
Further, according to the first interaction list acquired from the server, data such as nicknames and avatars of the associated users indicated by the list, that is, the associated data, are acquired. In one possible implementation, the associated data may be obtained according to interaction information between the target user and the candidate associated users, where the interaction information represents the historical interaction records between the target user and the candidate associated users in the first interaction list. As shown in fig. 5, a specific implementation of step S1022 includes:
step S1022A: and determining at least one to-be-selected associated user according to the first interaction list.
Step S1022B: and acquiring the interaction information of each to-be-selected associated user, wherein the interaction information represents the historical interaction record between the to-be-selected associated user and the target user.
Step S1022C: and obtaining the associated data according to the interactive information.
For example, after the first interaction list is determined, if it contains many users, they may be further filtered to select a few high-quality associated users to form the associated data. Specifically, the historical interaction record between each candidate associated user in the first interaction list and the target user is obtained; it includes, for example, indexes such as the number of comments, likes, and interactions between the target user and the candidate, from which the strength of the candidate's interaction with the target user can be determined. The associated data is then built from the candidates with higher interaction strength. The specific implementation may be set as needed: for example, use the candidates whose like count in the historical record exceeds a preset threshold, or the N (N is a positive integer) candidates with the highest like counts, to form the associated data.
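A sketch of this screening might look as follows, assuming each candidate record carries a like count from the historical interaction record; the field name and defaults are illustrative.

```python
# Keep the N candidate associated users whose historical like count with the
# target user is highest, optionally requiring a minimum threshold.
def select_associated_users(candidates: list[dict], n: int = 3,
                            min_likes: int = 0) -> list[dict]:
    eligible = [c for c in candidates if c.get("like_count", 0) > min_likes]
    eligible.sort(key=lambda c: c["like_count"], reverse=True)
    return eligible[:n]
```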
In this step of the embodiment, after the first interaction list of the specific category is obtained, it is further screened based on the interaction information, so that the resulting associated data, and the special effect image generated from it, have better interactive properties, improving the interaction and propagation effects of videos containing the special effect.
It should be noted that, in another possible implementation, the associated data of the target user may be input manually into the terminal device by the (target) user, for example a user nickname or avatar indicating the associated user. The acquisition of associated data in the above embodiment (steps S1021-S1022) is therefore optional, is performed only after the user grants authorization, and a prompt message is displayed to inform the user before authorization is requested.
Further, after the associated data is obtained, it is converted to generate the corresponding target image, which may display identification information of the associated user; for example, the target image may contain the associated user's avatar and nickname, so that viewers can identify the corresponding associated user through the target image. Where the associated data includes a user nickname, in one possible implementation the textual nickname is converted into a corresponding image and the font is beautified, e.g., by adding a gradient or changing the typeface, to obtain the target image. In another possible implementation, semantic analysis is performed on the nickname to obtain a background image matching its semantics, and the nickname and the background image are then combined into one complete image as the target image. The specific implementation may be set as needed and is not detailed here.
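One way to realize the text-to-image conversion described above is sketched below with Pillow; the font, canvas size, and colors are assumptions, and the beautification (gradients, typeface changes) is left out for brevity.

```python
# Render a textual nickname onto a transparent canvas, as a first step of
# converting associated data into a target image.
from PIL import Image, ImageDraw, ImageFont

def nickname_to_image(nickname: str, size: tuple[int, int] = (256, 64)) -> Image.Image:
    img = Image.new("RGBA", size, (0, 0, 0, 0))       # transparent background
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype("DejaVuSans.ttf", 32)   # any available font file
    draw.text((8, 8), nickname, font=font, fill="white")
    return img
```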
Step S103: replacing the special effect material placeholder map with the target image to generate a target special effect image.
Illustratively, after the target image is obtained, the special effect material placeholder map in the target special effect template is replaced with the target image, realizing the replacement of the special effect material and forming a dynamic special effect that differs from person to person. The target special effect image is then added to the video to be processed, yielding video content containing the interactive special effect. Fig. 6 is a schematic diagram of the process of generating a target image according to an embodiment of the present disclosure. As shown in fig. 6, after the target special effect template is loaded (see the description of fig. 3), the associated data of the target user is obtained, including for example the avatars of the target user's friend users (User_1, User_2, and User_3); the associated data is then processed and converted into the corresponding target images P1, P2, and P3. Next, the special effect material placeholder maps in the template are replaced with target images P1, P2, and P3 to obtain the target special effect image. As shown in the figure, the overall special effect style of the target special effect image is unchanged relative to the template, but the special effect material corresponding to the placeholder maps has been replaced with target images carrying interactive attributes, so the generated target special effect image also has a certain interactive attribute. Finally, the target special effect image is rendered into the video to be processed, combining the two to obtain video content with an interactive attribute.
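As a minimal sketch of the replacement step of fig. 6, the snippet below pastes each target image at the position its placeholder map occupied in a template frame; how the template actually stores placeholder positions is an assumption of this sketch.

```python
# Replace placeholder maps with target images by pasting each target image
# at the (x, y) position recorded for its placeholder in the template frame.
from PIL import Image

def replace_placeholders(canvas: Image.Image,
                         placements: list[tuple[int, int]],
                         target_images: list[Image.Image]) -> Image.Image:
    out = canvas.copy()
    for (x, y), img in zip(placements, target_images):
        out.paste(img, (x, y), img)  # third argument uses the RGBA alpha channel as mask
    return out
```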
In this embodiment, a target special effect template is loaded, where the target special effect template includes a special effect material placeholder map and is used for representing the special effect style of a special effect image based on the placeholder map; associated data of the target user is acquired, and a corresponding target image is generated from it, where the associated data at least indicates the associated user having an association relationship with the target user, and the target image at least represents identification information of the associated user; and the placeholder map is replaced with the target image to generate a target special effect image. By combining the special effect template with the user's associated data, an image is generated for the associated user and loaded into the target special effect template, so that a special effect image with an interactive attribute is obtained, different special effect images are generated for different users' association relationships, and the interactive attribute of videos using the special effect image is improved.
Referring to fig. 7, fig. 7 is a second flowchart of a special effect generation method according to an embodiment of the disclosure. In this embodiment, step S102 is further refined on the basis of the embodiment shown in fig. 2. The special effect generation method includes:
step S201: and loading a target special effect template, wherein the target special effect template comprises a special effect material occupation bitmap, and the target special effect template is used for representing a special effect pattern of a special effect image based on the special effect material occupation bitmap.
Step S202: obtaining associated data through the first parameter in the target special effect template, where the associated data includes an interactive image corresponding to an associated user.
For example, steps S201 and S202, namely loading the target special effect template, obtaining the first interaction list according to the first parameter, and further obtaining the associated data, are described in detail in the embodiment of fig. 2 and are not repeated here. In this embodiment, the associated data includes an interactive image corresponding to the associated user, for example the associated user's avatar picture or homepage cover picture.
Step S203: obtaining the target image size corresponding to the special effect material placeholder map through the second parameter in the target special effect template.
Step S204: processing the interactive image based on the target image size to obtain the target image.
Exemplarily, the target special effect template further includes a second parameter, which represents image size information corresponding to the special effect material placeholder map; specifically, the second parameter may include parameters such as the picture dimensions and resolution of the placeholder map. After the second parameter is obtained, the candidate image in the associated data, such as the user avatar image, is resized based on the second parameter, producing an image that meets the loading-size requirement of the target special effect template, namely the target image. In this embodiment, processing the interactive image through the second parameter makes its size meet the size requirement of the template, improving the size consistency of the material images in the produced target special effect image and the attractiveness of the special effect.
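A sketch of this resizing step, using Pillow; center-cropping to the target aspect ratio before scaling is a design choice of this sketch, not something mandated by the disclosure.

```python
# Fit an interactive image (e.g. an avatar) to the size given by the second
# parameter, cropping to the target aspect ratio and resampling smoothly.
from PIL import Image, ImageOps

def fit_to_placeholder(avatar: Image.Image, target_size: tuple[int, int]) -> Image.Image:
    return ImageOps.fit(avatar, target_size, Image.LANCZOS)
```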
Optionally, the associated data further includes interactive text corresponding to the associated user. As shown in fig. 8, step S204 includes steps S2041, S2042, and S2043:
step S2041: and processing the interactive image based on the size of the target image to obtain a first image.
Step S2042: and generating a second image containing the interactive text according to the interactive text.
Step S2043: and splicing the first image and the second image to obtain a target image.
Illustratively, besides the interactive image, the associated data includes interactive text, such as a user nickname or status signature, and each associated user in the associated data has a correspondence between its interactive text and its interactive image. After the interactive image is obtained from the associated data, it is processed based on the target image size given by the second parameter to obtain the first image, and the interactive text is converted into a corresponding picture, namely the second image. The first and second images are then spliced to obtain the target image. Fig. 9 is a schematic diagram of a target image according to an embodiment of the disclosure. As shown in fig. 9, the first image is a picture generated from an associated user's avatar in the interactive image; the second image is a picture generated from that associated user's nickname (User_1) in the interactive text; splicing the two yields the target image.
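The splicing of fig. 9 could be realized as below; vertical stacking with the avatar above the nickname is an assumption about the layout.

```python
# Stack the first image (avatar) above the second image (nickname picture),
# centering both horizontally on a shared transparent canvas.
from PIL import Image

def splice(first: Image.Image, second: Image.Image) -> Image.Image:
    width = max(first.width, second.width)
    out = Image.new("RGBA", (width, first.height + second.height), (0, 0, 0, 0))
    out.paste(first, ((width - first.width) // 2, 0))
    out.paste(second, ((width - second.width) // 2, first.height))
    return out
```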
Step S205: determining the sequence frame count through the third parameter in the target special effect template, and acquiring target images of the sequence frame count.
Step S206: generating image sequence frames from the target images of the sequence frame count, where the image sequence frames dynamically display those target images frame by frame.
Further, the target special effect template includes a third parameter, the sequence frame count, which is the number of pictures that make up the image sequence frames. Image sequence frames form a dynamic image, i.e., a moving picture, that displays multiple frame images frame by frame. In one possible implementation, multiple frames of target images are obtained from the associated data; the third parameter selects the required number of target images from them, and these are combined into image sequence frames, namely a dynamic image displaying the target images frame by frame. Synthesizing a moving picture from multiple frames is known in the prior art and is not detailed here.
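Still, as one concrete illustration, the frames could be combined with Pillow as below; saving as an animated GIF is just one way to realize "image sequence frames" and is an assumption of this sketch.

```python
# Combine the selected target images into an animated image that displays
# them frame by frame at a fixed interval.
from PIL import Image

def make_sequence(frames: list[Image.Image], path: str, interval_ms: int = 100) -> None:
    frames[0].save(path, save_all=True, append_images=frames[1:],
                   duration=interval_ms, loop=0)
```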
Optionally, the third parameter includes a frequency sub-parameter, which characterizes the time interval between image frames in the image sequence frames. As shown in fig. 10, step S206 includes steps S2061 and S2062:
step S2061: and determining time stamp information based on the frequency sub-parameters, wherein the time stamp information represents the playing time stamp of each target image in the target images with the number of the sequence frames.
Step S2062: and generating image sequence frames according to the time stamp information.
Illustratively, when synthesizing the image sequence frames, the interval between successive target images is determined by the frequency sub-parameter of the third parameter. From the frequency sub-parameter, the play timestamp corresponding to each target image can be determined, yielding the play order of the image sequence frames, that is, the timestamp information, from which the image sequence frames are then generated. In this embodiment, the frequency sub-parameter precisely controls the play rate of the image sequence frames, avoiding a display spoiled by playback that is too fast or too slow, and improving the special effect.
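Deriving the timestamp information from the frequency sub-parameter reduces to simple arithmetic, as this sketch shows.

```python
# Play timestamp (in milliseconds) of each frame, given the frame count and
# the per-frame interval from the frequency sub-parameter.
# Example: frame_timestamps(4, 100) -> [0, 100, 200, 300]
def frame_timestamps(frame_count: int, interval_ms: int) -> list[int]:
    return [i * interval_ms for i in range(frame_count)]
```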
Step S207: rotating the image sequence frames to the corresponding target display angle through the fourth parameter in the target special effect template.
Step S208: replacing the special effect material placeholder map with the rotated image sequence frames to generate a target special effect image.
Exemplarily, the target special effect template further includes a fourth parameter, which represents the display angle of the special effect material placeholder map. The display angle may lie in the two-dimensional plane of the special effect image, or extend from that plane into the three-dimensional (camera) space corresponding to the video to be processed. Fig. 11 is a schematic diagram of a rotated image sequence frame according to an embodiment of the present disclosure. As shown in fig. 11, according to the fourth parameter, the image sequence frame obtained in the previous step is rotated by an angle Phi in the depth direction so that it matches the display angle of the placeholder map; the material is then replaced, substituting the image sequence frame for the placeholder map, and a target special effect image with a visual rotation effect is obtained. In this embodiment, the fourth parameter further adjusts the visual angle of the image sequence frames, producing more diverse target special effect images and improving their display effect.
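An in-plane rotation per the fourth parameter could be sketched as below; note that a depth-direction rotation like the angle Phi of fig. 11 would instead require a perspective (3D) transform, which this sketch does not attempt.

```python
# Rotate every frame of the image sequence by the display angle from the
# fourth parameter (in-plane rotation only).
from PIL import Image

def rotate_frames(frames: list[Image.Image], angle_deg: float) -> list[Image.Image]:
    return [f.rotate(angle_deg, expand=True, resample=Image.BICUBIC) for f in frames]
```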
Fig. 12 is a block diagram of a special effect generating apparatus according to an embodiment of the present disclosure, which corresponds to the special effect generating method according to the foregoing embodiment. For ease of illustration, only portions that are relevant to embodiments of the present disclosure are shown.
Referring to fig. 12, the special effect generation apparatus 3 includes:
the obtaining module 31, configured to load a target special effect template, where the target special effect template includes a special effect material placeholder map, and the target special effect template is used to represent the special effect style of a special effect image based on the placeholder map;
the processing module 32, configured to acquire associated data of the target user and generate a corresponding target image according to the associated data, where the associated data at least indicates an associated user having an association relationship with the target user, and the target image at least represents identification information of the associated user;
and the generating module 33, configured to replace the special effect material placeholder map with the target image to generate a target special effect image.
In one embodiment of the present disclosure, the target special effect template further includes a first parameter, and the first parameter represents the category of the association relationship; when acquiring the associated data of the target user, the processing module 32 is specifically configured to: acquire a first interaction list according to the first parameter, where the first interaction list indicates associated users having an association relationship of the target category with the target user; and obtain the associated data according to the first interaction list.
In one embodiment of the present disclosure, when obtaining the associated data according to the first interaction list, the processing module 32 is specifically configured to: determine at least one candidate associated user according to the first interaction list; acquire interaction information of each candidate associated user, where the interaction information represents the historical interaction records between the candidate associated user and the target user; and obtain the associated data according to the interaction information.
In one embodiment of the present disclosure, the associated data includes an interactive image corresponding to the associated user, the target special effect template further includes a second parameter, and the second parameter represents image size information corresponding to the special effect material placeholder map; when generating the corresponding target image according to the associated data, the processing module 32 is specifically configured to: obtain the target image size corresponding to the placeholder map through the second parameter; and process the interactive image based on the target image size to obtain the target image.
In one embodiment of the present disclosure, the associated data further includes interactive text corresponding to the associated user; when processing the interactive image based on the target image size to obtain the target image, the processing module 32 is specifically configured to: process the interactive image based on the target image size to obtain a first image; generate a second image containing the interactive text; and splice the first image and the second image to obtain the target image.
In one embodiment of the present disclosure, the target special effect template further includes a third parameter, where the third parameter represents the sequence frame count; before replacing the placeholder map with the target image to generate the target special effect image, the generating module 33 is further configured to: acquire target images of the sequence frame count; and generate image sequence frames from them, where the image sequence frames dynamically display those target images frame by frame. The generating module 33 is then specifically configured to replace the placeholder map with the image sequence frames to generate the target special effect image.
In one embodiment of the present disclosure, the third parameter includes a frequency sub-parameter, and the frequency sub-parameter characterizes the time interval between image frames in the image sequence frames; when generating the image sequence frames from the target images of the sequence frame count, the generating module 33 is specifically configured to: determine timestamp information based on the frequency sub-parameter, where the timestamp information represents the play timestamp of each target image; and generate the image sequence frames according to the timestamp information.
In one embodiment of the present disclosure, the target special effect template further includes a fourth parameter, and the fourth parameter represents the display angle of the special effect material placeholder map; the generating module 33 is specifically configured to: rotate the target image to the corresponding target display angle based on the fourth parameter, replace the placeholder map with the rotated target image, and generate the target special effect image.
The obtaining module 31, the processing module 32 and the generating module 33 are connected in sequence. The special effect generating device 3 provided in this embodiment may execute the technical solution of the above method embodiment, and the implementation principle and the technical effect are similar, which are not described herein again.
Fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure, and as shown in fig. 13, the electronic device 4 includes:
a processor 41, and a memory 42 communicatively coupled to the processor 41;
the memory 42 stores computer-executable instructions;
processor 41 executes computer-executable instructions stored by memory 42 to implement the special effects generation method in the embodiment shown in fig. 2-11.
Wherein optionally the processor 41 and the memory 42 are connected by a bus 43.
For the relevant descriptions and effects of the steps, reference may be made to the embodiments corresponding to fig. 2 to fig. 11; details are not repeated here.
The embodiment of the present disclosure provides a computer-readable storage medium, in which computer-executable instructions are stored, and when the computer-executable instructions are executed by a processor, the computer-executable instructions are used to implement the special effect generating method provided in any one of the embodiments corresponding to fig. 2 to 11 of the present application.
Referring to fig. 14, a schematic structural diagram of an electronic device 900 suitable for implementing the embodiments of the present disclosure is shown; the electronic device 900 may be a terminal device or a server. The terminal device may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a Personal Digital Assistant (PDA), a tablet computer (PAD), a Portable Multimedia Player (PMP), and a vehicle-mounted terminal (e.g., a car navigation terminal), and fixed terminals such as a digital TV and a desktop computer. The electronic device shown in fig. 14 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 14, the electronic device 900 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 901, which may perform various suitable actions and processes according to a program stored in a Read Only Memory (ROM) 902 or a program loaded from a storage device 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data necessary for the operation of the electronic apparatus 900 are also stored. The processing apparatus 901, the ROM902, and the RAM 903 are connected to each other through a bus 904. An input/output (I/O) interface 905 is also connected to bus 904.
Generally, the following devices may be connected to the I/O interface 905: input devices 906 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 907 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 908 including, for example, magnetic tape, hard disk, etc.; and a communication device 909. The communication device 909 may allow the electronic apparatus 900 to perform wireless or wired communication with other apparatuses to exchange data. While fig. 14 illustrates an electronic device 900 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 909, or installed from the storage device 908, or installed from the ROM 902. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing apparatus 901.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the method shown in the above embodiments.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), application Specific Integrated Circuits (ASICs), application Specific Standard Products (ASSPs), system on a chip (SOCs), complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a first aspect, according to one or more embodiments of the present disclosure, there is provided a special effect generation method, including:
loading a target special effect template, wherein the target special effect template comprises a special effect material occupation bitmap, and the target special effect template is used for representing a special effect pattern of a special effect image based on the special effect material occupation bitmap; acquiring association data of a target user, and generating a corresponding target image according to the association data, wherein the association data is at least used for indicating an associated user having an association relation with the target user, and the target image is at least used for representing identification information of the associated user; and replacing the special effect material placeholder map with the target image to generate a target special effect image.
According to one or more embodiments of the present disclosure, the target special effect template further includes a first parameter, and the first parameter represents a category of the association relationship; the acquiring of the associated data of the target user includes: acquiring a first interaction list according to the first parameter, wherein the first interaction list is used for indicating associated users having an association relationship of a target category with the target user; and obtaining associated data according to the first interaction list.
According to one or more embodiments of the present disclosure, the obtaining of the association data according to the first interaction list includes: determining at least one to-be-selected associated user according to the first interactive list; acquiring interaction information of each to-be-selected associated user, wherein the interaction information represents a historical interaction record between the to-be-selected associated user and the target user; and obtaining the associated data according to the interaction information.
According to one or more embodiments of the present disclosure, the associated data includes an interactive image corresponding to the associated user, the target special effect template further includes a second parameter, and the second parameter represents image size information corresponding to the special effect material placeholder map; the generating of the corresponding target image according to the associated data includes: obtaining the size of a target image corresponding to the special effect material occupation bitmap according to the second parameter; and processing the interactive image based on the size of the target image to obtain the target image.
According to one or more embodiments of the present disclosure, the association data further includes an interactive text corresponding to the associated user; the processing of the interactive image based on the size of the target image to obtain the target image includes: processing the interactive image based on the size of the target image to obtain a first image; generating a second image containing the interactive text according to the interactive text; and splicing the first image and the second image to obtain the target image.
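One way the splicing could be realized, again as a hedged sketch: the layout (text strip below the image), the strip height, and the Pillow calls are assumptions, not the disclosed method.

```python
from PIL import Image, ImageDraw, ImageFont

def splice_image_and_text(first_image: Image.Image, interactive_text: str) -> Image.Image:
    """Stack the resized image (first image) above a rendered text strip (second image)."""
    w, h = first_image.size
    strip_h = max(24, h // 4)

    # Second image: a transparent strip carrying the interactive text.
    second_image = Image.new("RGBA", (w, strip_h), (0, 0, 0, 0))
    draw = ImageDraw.Draw(second_image)
    draw.text((4, 4), interactive_text, fill="white", font=ImageFont.load_default())

    # Splice the two vertically onto one canvas: the target image.
    target = Image.new("RGBA", (w, h + strip_h), (0, 0, 0, 0))
    target.paste(first_image.convert("RGBA"), (0, 0))
    target.paste(second_image, (0, h), second_image)
    return target
```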
According to one or more embodiments of the present disclosure, the target special effect template further includes a third parameter, wherein the third parameter characterizes the number of sequence frames; before the replacing of the special effect material placeholder map with the target image to generate the target special effect image, the method further includes: acquiring a number of target images equal to the number of sequence frames; and generating image sequence frames according to the acquired target images, wherein the image sequence frames are used for dynamically displaying the target images frame by frame; the replacing of the special effect material placeholder map with the target image to generate the target special effect image includes: replacing the special effect material placeholder map with the image sequence frames to generate the target special effect image.
According to one or more embodiments of the present disclosure, the third parameter includes a frequency sub-parameter, wherein the frequency sub-parameter characterizes the time interval between image frames in the image sequence frames; the generating of the image sequence frames according to the acquired target images includes: determining time stamp information based on the frequency sub-parameter, wherein the time stamp information represents the playing time stamp of each of the target images; and generating the image sequence frames according to the time stamp information.
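A sketch of how the frequency sub-parameter might drive the time stamps, under the assumption that frames are evenly spaced; the (timestamp, image) pairing and the optional GIF export are illustrative choices, not taken from the disclosure.

```python
from PIL import Image

def build_sequence_frames(targets: list[Image.Image], interval_ms: int) -> list[tuple[int, Image.Image]]:
    """Pair each target image with a play time stamp spaced by the frequency sub-parameter."""
    # Frame i is shown at i * interval_ms; a renderer pages through these frame by frame.
    return [(i * interval_ms, img) for i, img in enumerate(targets)]

def save_as_gif(targets: list[Image.Image], interval_ms: int, path: str) -> None:
    """Alternative: bake the sequence into an animated GIF at the same interval."""
    targets[0].save(path, save_all=True, append_images=targets[1:],
                    duration=interval_ms, loop=0)
```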
According to one or more embodiments of the present disclosure, the target special effect template further includes a fourth parameter, wherein the fourth parameter represents a display angle of the special effect material placeholder map; the replacing of the special effect material placeholder map with the target image to generate the target special effect image includes: rotating the target image by the corresponding target display angle based on the fourth parameter, and replacing the special effect material placeholder map with the rotated target image to generate the target special effect image.
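The rotation step might reduce to a single Pillow call, as in this sketch; the expand-and-transparent-fill behavior is an illustrative choice rather than the disclosed method.

```python
from PIL import Image

def rotate_for_placeholder(target_image: Image.Image, display_angle: float) -> Image.Image:
    """Rotate the target image to the display angle carried by the fourth parameter."""
    # expand=True enlarges the canvas so rotated corners are not clipped;
    # a fully transparent fill keeps the surrounding effect visible (RGBA assumed).
    return target_image.convert("RGBA").rotate(display_angle, expand=True,
                                               fillcolor=(0, 0, 0, 0))
```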
In a second aspect, according to one or more embodiments of the present disclosure, there is provided an effect generating apparatus including:
an acquisition module, configured to load a target special effect template, wherein the target special effect template includes a special effect material placeholder map, and the target special effect template is used for representing the special effect pattern of a special effect image based on the special effect material placeholder map;
a processing module, configured to acquire association data of a target user and generate a corresponding target image according to the association data, wherein the association data is at least used for indicating an associated user having an association relationship with the target user, and the target image is at least used for representing identification information of the associated user;
and a generating module, configured to replace the special effect material placeholder map with the target image to generate a target special effect image.
In an embodiment of the present disclosure, the target special effect template further includes a first parameter, and the first parameter represents the category of the association relationship; when acquiring the association data of the target user, the processing module is specifically configured to: acquire a first interaction list according to the first parameter, wherein the first interaction list is used for indicating associated users having an association relationship of a target category with the target user; and obtain the association data according to the first interaction list.
In an embodiment of the disclosure, when obtaining the association data according to the first interaction list, the processing module is specifically configured to: determine at least one candidate associated user according to the first interaction list; acquire interaction information of each candidate associated user, wherein the interaction information represents a historical interaction record between the candidate associated user and the target user; and obtain the association data according to the interaction information.
In an embodiment of the present disclosure, the association data includes an interactive image corresponding to the associated user, and the target special effect template further includes a second parameter, wherein the second parameter represents image size information corresponding to the special effect material placeholder map; when generating the corresponding target image according to the association data, the processing module is specifically configured to: obtain the size of the target image corresponding to the special effect material placeholder map according to the second parameter; and process the interactive image based on the size of the target image to obtain the target image.
In an embodiment of the present disclosure, the association data further includes an interactive text corresponding to the associated user; when processing the interactive image based on the size of the target image to obtain the target image, the processing module is specifically configured to: process the interactive image based on the size of the target image to obtain a first image; generate a second image containing the interactive text according to the interactive text; and splice the first image and the second image to obtain the target image.
In an embodiment of the present disclosure, the target special effect template further includes a third parameter, wherein the third parameter characterizes the number of sequence frames; before replacing the special effect material placeholder map with the target image to generate the target special effect image, the generating module is further configured to: acquire a number of target images equal to the number of sequence frames; and generate image sequence frames according to the acquired target images, wherein the image sequence frames are used for dynamically displaying the target images frame by frame; the generating module is specifically configured to: replace the special effect material placeholder map with the image sequence frames to generate the target special effect image.
In an embodiment of the present disclosure, the third parameter includes a frequency sub-parameter, wherein the frequency sub-parameter characterizes the time interval between image frames in the image sequence frames; when generating the image sequence frames according to the acquired target images, the generating module is specifically configured to: determine time stamp information based on the frequency sub-parameter, wherein the time stamp information represents the playing time stamp of each of the target images; and generate the image sequence frames according to the time stamp information.
In an embodiment of the present disclosure, the target special effect template further includes a fourth parameter, wherein the fourth parameter represents a display angle of the special effect material placeholder map; the generating module is specifically configured to: rotate the target image by the corresponding target display angle based on the fourth parameter, and replace the special effect material placeholder map with the rotated target image to generate the target special effect image.
In a third aspect, according to one or more embodiments of the present disclosure, there is provided an electronic device including: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory to implement the special effects generation method as described above in the first aspect and various possible designs of the first aspect.
In a fourth aspect, according to one or more embodiments of the present disclosure, a computer-readable storage medium is provided, in which computer-executable instructions are stored, and when the computer-executable instructions are executed by a processor, the special effect generation method according to the first aspect and various possible designs of the first aspect are implemented.
In a fifth aspect, the embodiments of the present disclosure provide a computer program product, which includes a computer program that, when executed by a processor, implements the special effect generation method according to the first aspect and various possible designs of the first aspect.
The foregoing description is merely an illustration of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combinations of features described above, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, a technical solution formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (12)

1. A special effect generation method, comprising:
loading a target special effect template, wherein the target special effect template comprises a special effect material placeholder map and is used for representing a special effect pattern of a special effect image based on the special effect material placeholder map;
acquiring association data of a target user, and generating a corresponding target image according to the association data, wherein the association data is at least used for indicating an associated user having an association relationship with the target user, and the target image is at least used for representing identification information of the associated user;
and replacing the special effect material placeholder map with the target image to generate a target special effect image.
2. The method according to claim 1, wherein the target special effect template further comprises a first parameter, and the first parameter characterizes the category of the association relationship;
the acquiring of the association data of the target user comprises:
acquiring a first interaction list according to the first parameter, wherein the first interaction list is used for indicating associated users having an association relationship of a target category with the target user;
and obtaining the association data according to the first interaction list.
3. The method of claim 2, wherein the obtaining of the association data according to the first interaction list comprises:
determining at least one candidate associated user according to the first interaction list;
acquiring interaction information of each candidate associated user, wherein the interaction information represents a historical interaction record between the candidate associated user and the target user;
and obtaining the association data according to the interaction information.
4. The method according to claim 1, wherein the association data comprises an interactive image corresponding to the associated user, and the target special effect template further comprises a second parameter, wherein the second parameter represents image size information corresponding to the special effect material placeholder map;
the generating of the corresponding target image according to the association data comprises:
obtaining the size of the target image corresponding to the special effect material placeholder map according to the second parameter;
and processing the interactive image based on the size of the target image to obtain the target image.
5. The method according to claim 4, wherein the association data further comprises an interactive text corresponding to the associated user;
the processing of the interactive image based on the size of the target image to obtain the target image comprises:
processing the interactive image based on the size of the target image to obtain a first image;
generating a second image containing the interactive text according to the interactive text;
and splicing the first image and the second image to obtain the target image.
6. The method according to claim 1, wherein the target special effect template further comprises a third parameter, wherein the third parameter characterizes the number of sequence frames;
before the replacing of the special effect material placeholder map with the target image to generate the target special effect image, the method further comprises:
acquiring a number of target images equal to the number of sequence frames;
generating image sequence frames according to the acquired target images, wherein the image sequence frames are used for dynamically displaying the target images frame by frame;
replacing the special effect material placeholder map with the target image to generate a target special effect image, comprising:
and replacing the special effect material placeholder map with the image sequence frame to generate a target special effect image.
7. The method according to claim 6, wherein the third parameter comprises a frequency sub-parameter, and the frequency sub-parameter characterizes the time interval between image frames in the image sequence frames;
the generating of the image sequence frames according to the acquired target images comprises:
determining time stamp information based on the frequency sub-parameter, wherein the time stamp information represents the playing time stamp of each of the target images;
and generating the image sequence frames according to the time stamp information.
8. The method of claim 1, wherein the target special effect template further comprises a fourth parameter, wherein the fourth parameter characterizes a display angle of the special effect material placeholder map;
the replacing of the special effect material placeholder map with the target image to generate the target special effect image comprises:
rotating the target image by the corresponding target display angle based on the fourth parameter, and replacing the special effect material placeholder map with the rotated target image to generate the target special effect image.
9. A special effect generation apparatus, comprising:
an acquisition module, configured to load a target special effect template, wherein the target special effect template comprises a special effect material placeholder map, and the target special effect template is used for representing a special effect pattern of a special effect image based on the special effect material placeholder map;
a processing module, configured to acquire association data of a target user and generate a corresponding target image according to the association data, wherein the association data is at least used for indicating an associated user having an association relationship with the target user, and the target image is at least used for representing identification information of the associated user;
and a generating module, configured to replace the special effect material placeholder map with the target image to generate a target special effect image.
10. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory to implement the special effects generation method of any of claims 1 to 8.
11. A computer-readable storage medium having stored thereon computer-executable instructions which, when executed by a processor, implement the special effect generation method of any one of claims 1 to 8.
12. A computer program product, characterized in that it comprises a computer program which, when executed by a processor, implements the special effect generation method of any one of claims 1 to 8.
Priority Applications (1)

CN202211668454.7A, filed 2022-12-23: Special effect generation method and device, electronic equipment and storage medium

Publications (1)

CN115988255A, published 2023-04-18

Family ID: 85969508

Legal status: Pending (CN)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination