CN113421214A - Special effect character generation method and device, storage medium and electronic equipment - Google Patents
- Publication number
- CN113421214A (application CN202110802878.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- character
- dynamic
- firework
- text
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G06—COMPUTING, CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T11/60—Editing figures and text; Combining figures or text
- G06T5/70—Denoising; Smoothing
- G06T5/73—Deblurring; Sharpening
- G06T2207/20221—Image fusion; Image merging (indexing scheme G06T2207/20—Special algorithmic details; G06T2207/20212—Image combination)
Abstract
The method comprises the steps of obtaining text to be processed, converting the text to be processed into a text image, mapping the dynamic change process of firework particles at the corresponding pixel positions in a firework particle dynamic image onto the text pixel positions in the text image to generate target dynamic text with a firework effect, and fusing the target dynamic text with firework dynamic material to generate the firework special effect text. This method of generating special effect text requires no complex video or image editing operations: the user only needs to input text, and professional, attractive firework special effect text can be generated with one tap, making the user's life more colorful.
Description
Technical Field
The present disclosure relates to the field of image technologies, and in particular, to a method and an apparatus for generating special effect text, a storage medium, and an electronic device.
Background
In the related art, firework text particle effects are mainly produced by editing images with AE (Adobe After Effects, non-linear effects software) or PS (Adobe Photoshop). Producing an effect as complex as firework special effect text places high demands on the user's professional skill: it requires not only a solid theoretical grounding in video and image processing but also in-depth editing work. As a result, ordinary users cannot produce firework special effect text, and even those who can must spend a great deal of time and effort.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a method and an apparatus for generating special effect text, a storage medium, and an electronic device.
According to a first aspect of the embodiments of the present disclosure, a method for generating special effect words is provided, including:
acquiring characters to be processed;
converting the characters to be processed into character images;
mapping the dynamic change process of the firework particles on the corresponding pixel position in the firework particle dynamic image to the character pixel position according to the character pixel position in the character image to obtain a target dynamic character;
and fusing the target dynamic characters and the firework dynamic materials to obtain the firework special effect characters.
In some embodiments, the method further comprises:
hollowing out the character image to obtain a hollowed-out character image;
the method for mapping the dynamic change process of the firework particles on the corresponding pixel position in the firework particle dynamic image to the character pixel position according to the character pixel position in the character image to obtain the target dynamic character comprises the following steps:
and mapping the dynamic change process of the firework particles on the corresponding pixel positions in the firework particle dynamic image to the character pixel positions according to the character pixel positions in the hollowed character image to obtain the target dynamic characters.
In some embodiments, the performing the hollow processing on the text image to obtain the hollow text image includes:
performing expansion processing on the character image to obtain an expanded character image;
carrying out pixel value negation processing on the character image to obtain a negated character image;
and obtaining the hollowed character image based on the expanded character image and the inverted character image.
In some embodiments, the method further comprises:
performing Gaussian blur processing with different degrees of blur on a plurality of frames of the hollowed-out text image to obtain a plurality of frames of blurred text images, wherein the degree of blur of each frame of blurred text image is related to the particle life cycle of the corresponding image frame in the firework particle dynamic image;
according to the character pixel position in the character image after the hollowing out, mapping the dynamic change process of the firework particles on the corresponding pixel position in the firework particle dynamic image to the character pixel position to obtain the target dynamic character, and the method comprises the following steps:
mapping the dynamic change process of the firework particles on the corresponding pixel positions in the firework particle dynamic image to the character pixel positions according to the character pixel positions in the hollowed character image to obtain first dynamic characters;
and for each frame of the blurred text image, fusing the blurred text image with the corresponding image frame in the first dynamic text to obtain a second dynamic text, and taking the second dynamic text as the target dynamic text.
In some embodiments, the fusing the blurred text image with the corresponding image frame in the first dynamic text comprises:
determining the transparency of each pixel point in the fuzzy character image;
filling pixel points with the transparency value of 0 in the fuzzy character image based on a preset pixel value, so that the transparency of the filled pixel points is not 0;
and replacing the pixel point with the transparency value of 1 in the blurred text image based on the target pixel point in the first dynamic text, wherein the target pixel point is a pixel point which is matched with the coordinates of the pixel point with the transparency value of 1 in the blurred text image on the image frame of the first dynamic text corresponding to the blurred text image.
In some embodiments, the firework particle dynamic image is obtained by:
determining attribute information of the firework particles selected by the user;
and generating the firework particle dynamic image by combining a particle system based on the attribute information of the firework particle.
In some embodiments, the fusing the target dynamic text with the firework dynamic material to obtain the firework special effect text includes:
and aiming at each frame of image in the target dynamic characters, fusing the frame of image into an image frame of the firework display process matched with the particle life cycle in the firework dynamic material according to the particle life cycle of the firework particles in the frame of image, and obtaining the firework special effect characters.
According to a second aspect of the embodiments of the present disclosure, there is provided a special effect character generation apparatus including:
the acquisition module is configured to acquire characters to be processed;
the conversion module is configured to convert the characters to be processed into character images;
the first fusion module is configured to map the dynamic change process of the firework particles on the corresponding pixel position in the firework particle dynamic image to the character pixel position according to the character pixel position in the character image to obtain a target dynamic character;
and the second fusion module is configured to fuse the target dynamic characters and the firework dynamic materials to obtain the firework special effect characters.
According to a third aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which computer program instructions are stored, which program instructions, when executed by a processor, implement the steps of the special effect text generation method provided by the first aspect of the present disclosure.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute instructions stored in the memory to implement the steps of the method for generating the special effect text provided by the first aspect of the disclosure.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects: by mapping the dynamic change process of the firework particles at the corresponding pixel positions in the firework particle dynamic image onto the text pixel positions in the text image, target dynamic text with a firework effect can be generated; the target dynamic text is then fused with firework dynamic material to generate the firework special effect text. This method of generating special effect text requires no complex video or image editing operations: the user only needs to input text, and professional, attractive firework special effect text can be generated with one tap, making the user's life more colorful.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating a method for special effect text generation in accordance with an illustrative embodiment;
FIG. 2 is a schematic diagram of a firework effect text shown in accordance with an exemplary embodiment;
FIG. 3 is a flow diagram illustrating a method for special effect text generation in accordance with another illustrative embodiment;
FIG. 4 is a flow diagram illustrating a process for hollowing out text images in accordance with an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating an expanded text image in accordance with an exemplary embodiment;
FIG. 6 is a schematic diagram illustrating a hollowed-out text image in accordance with an exemplary embodiment;
FIG. 7 is a diagram illustrating a target dynamic text, according to an illustrative embodiment;
FIG. 8 is a flow diagram illustrating a method for special effect text generation in accordance with yet another illustrative embodiment;
FIG. 9 is a diagram illustrating a second dynamic text, according to an example embodiment;
FIG. 10 is a flowchart illustrating the generation of a fireworks particle dynamic image according to an exemplary embodiment;
FIG. 11 is a block diagram illustrating an apparatus for special effect text generation in accordance with an illustrative embodiment;
FIG. 12 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flow chart illustrating a method for special effect text generation according to an example embodiment. As shown in fig. 1, the method for generating special effect text can be applied to a terminal and includes the following steps.
In step 110, the word to be processed is obtained.
Here, the text to be processed may be text input by the user through the terminal. For example, if the user wants to make "forward all the time" firework special effect text, the user inputs the text "forward all the time" on the terminal.
In step 120, the word to be processed is converted into a word image.
Here, after receiving the text to be processed input by the user, the terminal converts it into a text image. In some embodiments, the text to be processed may be converted into a text image based on the libass library. libass is a lightweight open-source library for rendering subtitles in formats such as ASS (Advanced SubStation Alpha) and SSA (SubStation Alpha), and can perform text-to-image conversion with extremely low power consumption.
It should be noted that the alpha (transparency) of the text portion in the converted text image is 1, while the alpha of the other portions is 0. It should be understood that an alpha of 1 means completely opaque and an alpha of 0 means completely transparent.
In step 130, according to the text pixel position in the text image, mapping the dynamic change process of the firework particles at the corresponding pixel position in the firework particle dynamic image to the text pixel position to obtain the target dynamic text.
Here, the firework particle dynamic image is a dynamic image of a simulated firework generated by a particle system. The firework effect is a system built from individual particles: each particle is a component of the particle system and, as an object of the system, has attributes such as coordinates, color, velocity, and life cycle. By controlling the color, path, and life cycle of a large number of individual particles, an effect resembling a firework can be obtained.
The target dynamic text is dynamic text that changes along with the changes of the firework particles in the firework particle dynamic image. It is generated, on the basis of the text image, by mapping the dynamic change process of the firework particles at the pixel positions in the firework particle dynamic image that correspond to the text pixel positions in the text image onto those text pixel positions. In effect, the text pixel positions in the text image serve as a display area, and the dynamic change process of the firework particles at the corresponding positions in the firework particle dynamic image is shown on that display area.
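As a sketch of this mapping step (the patent gives no implementation; the function name, array layout, and NumPy choice below are assumptions), the text pixel positions can be treated as a binary mask that gates every frame of the particle animation:

```python
import numpy as np

def map_particles_to_text(particle_frames, text_mask):
    """Restrict each frame of a particle animation to the text region.

    particle_frames: (T, H, W) float array of per-frame particle intensities.
    text_mask: (H, W) binary array, 1 at text pixel positions.
    Returns (T, H, W): particles remain visible only where the mask is 1.
    """
    return particle_frames * text_mask[None, :, :]

# Toy example: 2 frames of a 3x3 animation; the "text" is the centre column.
frames = np.ones((2, 3, 3))
mask = np.zeros((3, 3))
mask[:, 1] = 1
dynamic_text = map_particles_to_text(frames, mask)
```

In a real renderer the mask would come from the rasterized text image and the frames from the particle system, but the gating idea is the same.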
In step 140, the target dynamic characters and the firework dynamic materials are fused to obtain the firework special effect characters.
Here, the firework dynamic material may be a firework video, i.e., footage of a firework exploding. It should be understood that the firework dynamic material may be determined based on the user's selection.
Both the target dynamic text and the firework dynamic material are video images comprising multiple image frames; the specific fusion process is described in detail below. The firework special effect text obtained by fusion includes the firework bursting process and the text displayed during that process, with the bursting of the text kept consistent with the bursting of the firework.
In some embodiments, fusing the target dynamic text with the firework dynamic material to obtain a firework special effect text, including:
and aiming at each frame of image in the target dynamic characters, fusing the frame of image into an image frame of the firework display process matched with the particle life cycle in the firework dynamic material according to the particle life cycle of the firework particles in the frame of image, and obtaining the firework special effect characters.
The target dynamic characters comprise multi-frame images, in the fusion process, the image frames in the target dynamic characters are fused with the target image frames in the firework dynamic material, and the target image frames are image frames matched with the particle life cycle of the image frames in the target dynamic characters in the firework bursting process of the firework dynamic material. For example, if the particle life cycle of the nth frame image in the target dynamic text matches with the firework bursting process of the mth frame in the firework dynamic material, the nth frame image and the mth frame image are fused.
It should be understood that the target dynamic text is fused with the firework dynamic material frame by frame. For example, if the firework dynamic material is a 10-second video in which the whole firework explosion occurs between the 6th and 9th seconds, then from the 6th second onward the target dynamic text is fused frame by frame into the firework dynamic material at the position of the explosion, gradually rendered from small to large in step with the explosion speed of the firework. The fusion may be performed by screen blending ("color filtering"): the inverse of the base color is multiplied by the inverse of the blend color, and the product is then inverted.
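The screen ("color filtering") blend described above can be sketched in a few lines; this is a minimal NumPy version for colors normalized to [0, 1], not the patent's actual implementation:

```python
import numpy as np

def screen_blend(base, blend):
    """Screen blend: invert both layers, multiply the inverses,
    then invert the product. Works on scalars or arrays in [0, 1]."""
    return 1.0 - (1.0 - base) * (1.0 - blend)
```

A useful property, visible from the formula, is that screening with black (0) leaves the base unchanged and screening with white (1) always yields white, which is why the mode brightens rather than darkens, a good fit for luminous firework particles on a dark background.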
FIG. 2 is a schematic diagram of a firework effect text shown in accordance with an exemplary embodiment. As shown in fig. 2, in the process of displaying fireworks, the target dynamic text is displayed simultaneously. It should be understood that in fig. 2, the image frame in one frame of the firework special effect text is illustrated, and in a practical case, the firework special effect text comprises a plurality of frames of image frames. The explosion process of the fireworks is a gradually attenuating process, so that the target dynamic characters are also gradually attenuated with the explosion process of the fireworks.
In this way, target dynamic text with a firework effect can be generated by mapping the dynamic change process of the firework particles at the corresponding pixel positions in the firework particle dynamic image onto the text pixel positions in the text image, and the target dynamic text is then fused with the firework dynamic material to generate the firework special effect text. This method of generating special effect text requires no complex video or image editing operations: the user only needs to input text, and professional, attractive firework special effect text can be generated with one tap, making the user's life more colorful.
Fig. 3 is a flowchart illustrating a method of special effect text generation according to another example embodiment. As shown in fig. 3, in some embodiments, the special effect text generation method may include the following steps.
In step 210, the word to be processed is obtained.
Here, the process of acquiring the to-be-processed text has been described in detail in the above embodiment, and is not described herein again.
In step 220, the word to be processed is converted into a word image.
Here, the process of converting text into image has been described in detail in the above embodiments, and is not described herein again.
In step 230, the text image is hollowed out to obtain a hollowed text image.
Here, the hollowing-out processing hollows out the middle region of the text in the text image and retains the text edge region, thereby obtaining the hollowed-out text image.
FIG. 4 is a flow diagram illustrating a process for hollowing out text images in accordance with an exemplary embodiment. As shown in fig. 4, in some implementation embodiments, in step 230, performing a hollow-out process on the text image to obtain a hollow-out text image may include the following steps:
in step 231, the text image is expanded to obtain an expanded text image.
Here, the expansion (dilation) processing in effect widens the text boundary pixels of the text image, thereby obtaining an expanded text image. Dilation is performed on the thresholded image, in which the text portion is 1 and the non-text portion is 0. The principle is to slide a kernel over the text image and compare the pixel values under the kernel, in one-to-one correspondence, with a custom convolution kernel, which is a two-dimensional matrix whose elements are all 1: if any element under the kernel corresponds to a pixel value of 1, the pixel at the kernel's center is set to 1; if all corresponding values are 0, the pixel at the kernel's center is set to 0. FIG. 5 is a schematic diagram illustrating an expanded text image according to an exemplary embodiment. As shown in fig. 5, the expanded text image is an image obtained by widening the text boundary pixels of the text image.
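A minimal sketch of this dilation on a binary NumPy mask, using a k x k all-ones kernel as described (a production system would more likely call a library routine such as `cv2.dilate`, which the patent does not name):

```python
import numpy as np

def dilate(binary, k=3):
    """Naive binary dilation with a k x k all-ones kernel: an output pixel
    becomes 1 if ANY input pixel under the kernel window is 1."""
    pad = k // 2
    padded = np.pad(binary, pad, mode="constant")
    h, w = binary.shape
    out = np.zeros_like(binary)
    for dy in range(k):          # OR together all k*k shifted copies
        for dx in range(k):
            out |= padded[dy:dy + h, dx:dx + w]
    return out

# A single lit pixel grows into a 3x3 block: the boundary is widened.
mask = np.zeros((5, 5), dtype=int)
mask[2, 2] = 1
widened = dilate(mask)
```

Expressing dilation as an OR over shifted copies makes the "any element is 1" rule explicit, at the cost of k*k passes over the image.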
In step 232, the text image is subjected to pixel value inversion processing to obtain an inverted text image.
Here, the alpha of the text portion in the text image is 1 and the alpha of the non-text portion is 0; after the pixel-value inversion processing, the inverted text image has an alpha of 0 for the text portion and an alpha of 1 for the non-text portion.
In step 233, the hollowed text image is obtained based on the expanded text image and the inverted text image.
Here, in the expanded text image the alpha of the text portion is 1 and the alpha of the non-text portion is 0. By performing a bitwise AND operation on the expanded text image and the inverted text image, the intersection of the two becomes the text portion of the hollowed-out text image. FIG. 6 is a schematic diagram illustrating a hollowed-out text image according to an exemplary embodiment. As shown in fig. 6, the hollowed-out text image includes the text edges of the original text image.
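Putting steps 231 to 233 together, a sketch of the whole hollowing-out pipeline on a binary alpha mask (the function name and 3x3 kernel size are assumptions) could look like:

```python
import numpy as np

def hollow(text_mask):
    """Steps 231-233 on a binary alpha mask (text = 1, background = 0):
    dilate the mask, invert the original, and bitwise-AND the two.
    The intersection is a thin ring along the glyph boundary."""
    h, w = text_mask.shape
    padded = np.pad(text_mask, 1, mode="constant")
    dilated = np.zeros_like(text_mask)
    for dy in range(3):                      # step 231: 3x3 all-ones dilation
        for dx in range(3):
            dilated |= padded[dy:dy + h, dx:dx + w]
    inverted = 1 - text_mask                 # step 232: text -> 0, background -> 1
    return dilated & inverted                # step 233: keep only the widened edge

# A solid 3x3 "glyph" in a 5x5 image is reduced to its 16-pixel outline.
glyph = np.zeros((5, 5), dtype=int)
glyph[1:4, 1:4] = 1
outline = hollow(glyph)
```

The dilated mask covers the glyph plus a one-pixel border; ANDing with the inversion removes the glyph interior and leaves exactly that border ring.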
In step 240, according to the character pixel position in the hollowed character image, mapping the dynamic change process of the firework particle at the corresponding pixel position in the firework particle dynamic image to the character pixel position to obtain the target dynamic character.
The target dynamic text is generated, on the basis of the hollowed-out text image, by mapping the dynamic change process of the firework particles at the pixel positions in the firework particle dynamic image that correspond to the text pixel positions in the hollowed-out text image onto those text pixel positions. Because the text image has been hollowed out, the firework particles in the firework particle dynamic image can attach better to the text edges of the hollowed-out text image.
FIG. 7 is a diagram illustrating a target dynamic text, according to an example embodiment. As shown in fig. 7, the dynamic change process of the firework particles of the firework particle dynamic image is mapped to the corresponding character pixel position of the hollowed-out character image, so as to obtain the target dynamic character. It should be understood that in fig. 7, a frame image is illustrated, and in a practical case, the firework particle dynamic image is an image frame including a plurality of frames of firework particle dynamic changes.
In step 250, the target dynamic characters and the firework dynamic materials are fused to obtain the firework special effect characters.
Here, the fusion process of the target dynamic text and the firework dynamic material is described in detail in the above embodiment, and is not described herein again.
Fig. 8 is a flowchart illustrating a special effect text generation method according to yet another exemplary embodiment. As shown in fig. 8, the special effect text generation method may include the following steps.
In step 310, the word to be processed is obtained.
Here, the process of acquiring the to-be-processed text has been described in detail in the above embodiment, and is not described herein again.
In step 320, the word to be processed is converted into a word image.
Here, the process of converting text into image has been described in detail in the above embodiments, and is not described herein again.
In step 330, the text image is hollowed out to obtain a hollowed text image.
Here, in the above embodiment, the process of the hollowing process has been described in detail, and is not described herein again.
In step 340, performing gaussian blurring processing with different degrees of blurring on a plurality of frames of the hollowed text images to obtain a plurality of frames of blurred text images, wherein the degree of blurring of each frame of the blurred text images is related to the particle life cycle of the image frame of the blurred text image corresponding to the firework particle dynamic image.
Here, the number of frames of hollowed-out text images may be the same as the number of image frames in the firework dynamic material, and each frame of hollowed-out text image is given a different degree of Gaussian blur. The firework particle dynamic image presents the dynamic change process of the firework particles through multiple image frames, thereby simulating the firework effect, so the degree of blur of a blurred text image is related to the particle life cycle of the corresponding image frame in the firework particle dynamic image. It should be understood that the particle life cycle in an image frame refers to the phase of the life cycle the firework particles are in. For example, if the K-th frame of the firework particle dynamic image contains the most particles, the K-th blurred text image is obtained by Gaussian-blurring the hollowed-out text image with the degree of blur corresponding to that moment.
It is worth noting that Gaussian blur is typically used to reduce image noise and lower the level of detail of an image. Gaussian blurring convolves the image with a normal (Gaussian) distribution; as a mature and well-documented technique, it is not described in detail here.
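For completeness, a separable Gaussian blur can be sketched in a few lines of NumPy (a stand-in for a library routine such as `cv2.GaussianBlur`; the 3-sigma kernel radius is a common heuristic, not something the patent specifies):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur: convolve rows, then columns, with a
    1-D normalised Gaussian kernel of radius ~3*sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()                     # normalise: blur preserves mass
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, rows)

# Blurring an impulse spreads its energy but (away from borders) keeps the total.
impulse = np.zeros((11, 11))
impulse[5, 5] = 1.0
blurred = gaussian_blur(impulse, 1.0)
```

Varying `sigma` per frame is one way to realise the "different degrees of blur keyed to the particle life cycle" described in step 340.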
In step 350, according to the character pixel position in the hollowed character image, mapping the dynamic change process of the firework particle at the corresponding pixel position in the firework particle dynamic image to the character pixel position to obtain a first dynamic character.
Here, the first dynamic text is generated, on the basis of the hollowed-out text image, by mapping the dynamic change process of the firework particles at the pixel positions in the firework particle dynamic image that correspond to the text pixel positions in the hollowed-out text image onto those text pixel positions. Because the text image has been hollowed out, the firework particles in the firework particle dynamic image can attach better to the text edges of the hollowed-out text image.
In step 360, for each frame of the blurred text image, the blurred text image is fused with the corresponding image frame in the first dynamic text to obtain a second dynamic text, and the second dynamic text is used as the target dynamic text.
Here, fusing the blurred text images with the image frames of the first dynamic text is a multi-frame-to-multi-frame process: each frame of the blurred text image is fused with its corresponding image frame in the first dynamic text. For example, the M-th blurred text image frame is fused with the M-th image frame of the first dynamic text.
In some embodiments, fusing the blurred text image with the corresponding image frame in the first dynamic text may include:
determining the transparency of each pixel point in the blurred text image;
filling the pixel points whose transparency value is 0 in the blurred text image with a preset pixel value, so that the transparency of the filled pixel points is no longer 0;
and replacing the pixel points whose transparency value is 1 in the blurred text image with target pixel points in the first dynamic text, where a target pixel point is the pixel point, on the image frame of the first dynamic text corresponding to the blurred text image, whose coordinates match those of the transparency-1 pixel point in the blurred text image.
Here, when the blurred text image is fused with an image frame of the first dynamic text, the logical decision may be made based on the transparency (alpha) of the blurred text image. During fusion, the blurred text image is taken as the reference: the transparency of each of its pixel points is determined, and the pixel points whose transparency value is 0 are filled with a preset pixel value so that their transparency is no longer 0. The preset pixel value may be the pixel value of the non-text portion of the blurred text image; for example, if the non-text portion consists of black pixels, black pixels are used to fill the transparency-0 pixel points.
The pixel points whose transparency value is 1 in the blurred text image are replaced with the target pixel points in the first dynamic text. A target pixel point is the pixel point, on the image frame of the first dynamic text corresponding to the blurred text image, whose coordinates match those of the transparency-1 pixel point. In other words, at pixel points with a transparency value of 1 in the blurred text image, the firework particles of the first dynamic text are used instead.
For pixel points whose transparency value is greater than 0 and less than 1 in the blurred text image, the original pixel value is retained; that is, these pixel points are not replaced and keep the values at the corresponding positions in the blurred text image.
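The three transparency cases can be sketched as follows. This is a simplified NumPy illustration; the RGBA array layout, the function name, and the per-frame calling convention are assumptions, not the claimed implementation:

```python
import numpy as np

def fuse_frame(blurred_rgba, dynamic_rgb, fill_rgb):
    """Fuse one blurred-text frame with the matching first-dynamic-text frame.

    blurred_rgba: (H, W, 4) float array, alpha in [0, 1] in channel 3.
    dynamic_rgb:  (H, W, 3) firework particles already mapped onto the text.
    fill_rgb:     preset pixel value used where alpha == 0 (e.g. black background).
    """
    alpha = blurred_rgba[..., 3]
    out = blurred_rgba[..., :3].copy()          # 0 < alpha < 1: keep the blurred pixel
    out[alpha == 0] = fill_rgb                  # alpha == 0: fill with the preset value
    out[alpha == 1] = dynamic_rgb[alpha == 1]   # alpha == 1: take the firework particle
    return out
```

The half-transparent ring produced by the Gaussian blur is the part that survives unchanged, which is what gives the character edges their glow.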
FIG. 9 is a diagram illustrating a second dynamic text, according to an exemplary embodiment. As shown in FIG. 9, a blurred text image is fused with an image frame of the first dynamic text to obtain an image frame of the second dynamic text. Fusing the blurred text images with the image frames of the first dynamic text makes the glow at the character edges more pronounced and optimizes the display of the firework special effect text.
In step 370, the target dynamic characters are fused with the firework dynamic materials to obtain the firework special effect characters.
Here, the fusion process of the target dynamic text and the firework dynamic material is described in detail in the above embodiment, and is not described herein again.
FIG. 10 is a flowchart illustrating the generation of a fireworks particle dynamic image according to an exemplary embodiment.
As shown in fig. 10, in some embodiments, the firework particle dynamic image may be generated by:
in step 410, attribute information of the firework particle selected by the user is determined.
Here, the attribute information of the firework particles includes the color, number, size, and similar properties of the firework particles. When making firework special effect text, the user can input the attribute information of the firework particles through the terminal. It is worth noting that the attribute information of the firework particles can be matched with that of the particles in the firework dynamic material, so that the target dynamic text matches the firework dynamic material. For example, if the particles in the firework dynamic material are blue, the particles in the generated firework particle dynamic image are also blue.
In step 420, based on the attribute information of the firework particles, in combination with a particle system, the firework particle dynamic image is generated.
Here, based on the attribute information of the firework particles selected by the user, a particle system is used to control the color, path, and life cycle of a large number of independent particles, from which the firework particle dynamic image can be generated.
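A production particle system is beyond the scope here, but the idea of controlling the color, path, and life cycle of many independent particles can be sketched in NumPy. The motion model (radial burst plus a gravity term), canvas size, and all parameter names below are illustrative assumptions:

```python
import numpy as np

def firework_particle_frames(color, count, size, frames=30, rng=None):
    """Render a minimal firework burst as a sequence of RGB frames.

    color: per-channel RGB intensity of the particles (from the user's attributes).
    count: number of particles; size: integer canvas scale factor.
    Each particle gets a random launch direction and speed; over its life cycle
    it moves outward, sinks under gravity, and fades to black.
    """
    rng = np.random.default_rng(rng)
    h = w = 64 * size
    angle = rng.uniform(0, 2 * np.pi, count)
    speed = rng.uniform(0.5, 2.0, count)
    vx, vy = speed * np.cos(angle), speed * np.sin(angle)
    out = np.zeros((frames, h, w, 3))
    for t in range(frames):
        life = 1.0 - t / frames                          # remaining life -> brightness
        x = (w / 2 + vx * t).astype(int)
        y = (h / 2 + vy * t + 0.05 * t**2).astype(int)   # gravity pulls particles down
        ok = (x >= 0) & (x < w) & (y >= 0) & (y < h)     # drop off-canvas particles
        out[t, y[ok], x[ok]] = np.asarray(color) * life
    return out
```

The life-cycle phase of each frame (here simply `t / frames`) is the quantity the method above uses both to pick the degree of Gaussian blur and to align frames with the firework dynamic material.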
Fig. 11 is a block diagram illustrating an apparatus for special effect text generation according to an example embodiment. Referring to fig. 11, the apparatus includes an obtaining module 121, a converting module 122, a first fusing module 123 and a second fusing module 124.
The obtaining module 121 is configured to obtain a word to be processed;
the conversion module 122 is configured to convert the text to be processed into a text image;
the first fusion module 123 is configured to map a dynamic change process of the firework particles at a corresponding pixel position in the firework particle dynamic image to the character pixel position according to the character pixel position in the character image, so as to obtain a target dynamic character;
the second fusion module 124 is configured to fuse the target dynamic text with the firework dynamic material to obtain the firework special effect text.
In some embodiments, the apparatus further comprises:
the hollow module is configured to perform hollow processing on the character image to obtain a hollow character image;
the first fusion module 123 is specifically configured to:
and mapping the dynamic change process of the firework particles on the corresponding pixel positions in the firework particle dynamic image to the character pixel positions according to the character pixel positions in the hollowed character image to obtain the target dynamic characters.
In some embodiments, the hollowing module comprises:
the expansion unit is configured to perform expansion processing on the character image to obtain an expanded character image;
the negation unit is configured to perform pixel value negation processing on the character image to obtain a negated character image;
and the hollow unit is configured to obtain the hollowed text image based on the expanded text image and the inverted text image.
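The dilation/inversion/intersection pipeline that the hollowing module implements can be sketched in a few lines of NumPy. A one-pixel 4-neighbour dilation is assumed for simplicity; the actual expansion processing may use a different structuring element or more iterations:

```python
import numpy as np

def hollow_out(text_image):
    """Hollow out a binary text image: dilate, invert, then intersect.

    text_image: (H, W) array with nonzero values at text pixels.
    Returns a same-sized array that is 1 only on a thin ring around each glyph.
    """
    text = text_image > 0
    # expansion processing: dilate by one pixel using 4-neighbour shifts
    dilated = text.copy()
    dilated[1:, :]  |= text[:-1, :]
    dilated[:-1, :] |= text[1:, :]
    dilated[:, 1:]  |= text[:, :-1]
    dilated[:, :-1] |= text[:, 1:]
    inverted = ~text                                   # pixel-value negation
    return (dilated & inverted).astype(text_image.dtype)  # ring = dilated minus text
```

Intersecting the expanded image with the inverted image keeps exactly the pixels that the dilation added, i.e. a ring along the character edges where the firework particles will attach.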
In some embodiments, the apparatus further comprises:
the Gaussian blur module is configured to perform Gaussian blur processing with different degrees of blur on the plurality of frames of hollowed-out text images to obtain a plurality of blurred text image frames, wherein the degree of blur of each blurred text image frame is related to the particle life cycle of its corresponding image frame in the firework particle dynamic image;
the first fusion module is specifically configured to:
mapping the dynamic change process of the firework particles on the corresponding pixel positions in the firework particle dynamic image to the character pixel positions according to the character pixel positions in the hollowed character image to obtain first dynamic characters;
and for each frame of the blurred text image, fusing the blurred text image with the corresponding image frame in the first dynamic text to obtain a second dynamic text, and taking the second dynamic text as the target dynamic text.
In some embodiments, the first fusion module 123 comprises:
the determining unit is configured to determine the transparency of each pixel point in the fuzzy character image;
the fusion unit is configured to fill the pixel points with the transparency values of 0 in the fuzzy character image based on preset pixel values, so that the transparency of the filled pixel points is not 0;
and replacing the pixel point with the transparency value of 1 in the blurred text image based on the target pixel point in the first dynamic text, wherein the target pixel point is a pixel point which is matched with the coordinates of the pixel point with the transparency value of 1 in the blurred text image on the image frame of the first dynamic text corresponding to the blurred text image.
In some embodiments, the apparatus further comprises:
the determining module is configured to determine attribute information of the firework particles selected by the user;
and the firework particle generating module is configured to generate the firework particle dynamic image based on the attribute information of the firework particles and in combination with a particle system.
In some embodiments, the second fusion module 124 is specifically configured to:
and for each image frame in the target dynamic text, fusing that frame, according to the particle life cycle of the firework particles in the frame, into the image frame of the firework display process in the firework dynamic material whose particle life cycle matches, thereby obtaining the firework special effect text.
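One plausible way to pair frames by particle life cycle when the two sequences have different frame counts is proportional phase alignment. The sketch below, including the simple brightness-max compositing, is an illustrative assumption rather than the claimed fusion:

```python
import numpy as np

def match_material_frame(text_frame_idx, n_text_frames, n_material_frames):
    """Pick the firework-material frame whose particle life cycle matches
    the given target-dynamic-text frame (proportional phase alignment)."""
    phase = text_frame_idx / max(1, n_text_frames - 1)   # 0.0 .. 1.0 through the life cycle
    return round(phase * (n_material_frames - 1))

def fuse_sequences(text_frames, material_frames):
    """Overlay each dynamic-text frame onto its life-cycle-matched material frame."""
    fused = []
    for i, tf in enumerate(text_frames):
        mf = material_frames[match_material_frame(i, len(text_frames), len(material_frames))]
        fused.append(np.maximum(mf, tf))  # keep the brighter pixel: glow over fireworks
    return fused
```

With this alignment, the text's burst phase stays synchronized with the bursting of the fireworks in the material, which is the matching condition the step describes.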
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the special effect text generation method provided by the present disclosure.
FIG. 12 is a block diagram illustrating an electronic device in accordance with an example embodiment. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 12, electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the special effect text generation method described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, algorithm libraries, particle materials, firework dynamic materials, pictures, videos, and the like. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power components 806 provide power to the various components of the electronic device 800. Power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; it may also detect a change in the position of the electronic device 800 or of one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in its temperature. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described special effect text generation method.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the electronic device 800 to perform the special effect word generation method described above is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-mentioned special effect word generation method when executed by the programmable apparatus.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. A method for generating special effect characters is characterized by comprising the following steps:
acquiring characters to be processed;
converting the characters to be processed into character images;
mapping the dynamic change process of the firework particles on the corresponding pixel position in the firework particle dynamic image to the character pixel position according to the character pixel position in the character image to obtain a target dynamic character;
and fusing the target dynamic characters and the firework dynamic materials to obtain the firework special effect characters.
2. The method for generating special effect text according to claim 1, further comprising:
hollowing out the character image to obtain a hollowed-out character image;
the method for mapping the dynamic change process of the firework particles on the corresponding pixel position in the firework particle dynamic image to the character pixel position according to the character pixel position in the character image to obtain the target dynamic character comprises the following steps:
and mapping the dynamic change process of the firework particles on the corresponding pixel positions in the firework particle dynamic image to the character pixel positions according to the character pixel positions in the hollowed character image to obtain the target dynamic characters.
3. The method for generating a special-effect character according to claim 2, wherein the step of performing a hollow-out process on the character image to obtain a hollow-out character image comprises:
performing expansion processing on the character image to obtain an expanded character image;
carrying out pixel value negation processing on the character image to obtain a negated character image;
and obtaining the hollowed character image based on the expanded character image and the inverted character image.
4. The method for generating special effect text according to claim 2, further comprising:
performing Gaussian blur processing with different blurriness on a plurality of frames of the hollowed character images to obtain a plurality of frames of blurred character images, wherein the blurriness of each frame of the blurred character images is related to the particle life cycle of the image frame of the blurred character image corresponding to the firework particle dynamic image;
according to the character pixel position in the character image after the hollowing out, mapping the dynamic change process of the firework particles on the corresponding pixel position in the firework particle dynamic image to the character pixel position to obtain the target dynamic character, and the method comprises the following steps:
mapping the dynamic change process of the firework particles on the corresponding pixel positions in the firework particle dynamic image to the character pixel positions according to the character pixel positions in the hollowed character image to obtain first dynamic characters;
and for each frame of the blurred text image, fusing the blurred text image with the corresponding image frame in the first dynamic text to obtain a second dynamic text, and taking the second dynamic text as the target dynamic text.
5. The method of claim 4, wherein fusing the blurred text image with the corresponding image frame of the first dynamic text comprises:
determining the transparency of each pixel point in the fuzzy character image;
filling pixel points with the transparency value of 0 in the fuzzy character image based on a preset pixel value, so that the transparency of the filled pixel points is not 0;
and replacing the pixel point with the transparency value of 1 in the blurred text image based on the target pixel point in the first dynamic text, wherein the target pixel point is a pixel point which is matched with the coordinates of the pixel point with the transparency value of 1 in the blurred text image on the image frame of the first dynamic text corresponding to the blurred text image.
6. The special effect text generation method according to any one of claims 1 to 5, wherein the firework particle dynamic image is obtained by:
determining attribute information of the firework particles selected by the user;
and generating the firework particle dynamic image by combining a particle system based on the attribute information of the firework particle.
7. The method for generating special effect characters according to any one of claims 1 to 5, wherein the fusing the target dynamic characters with a firework dynamic material to obtain firework special effect characters comprises:
and aiming at each frame of image in the target dynamic characters, fusing the frame of image into an image frame of the firework display process matched with the particle life cycle in the firework dynamic material according to the particle life cycle of the firework particles in the frame of image, and obtaining the firework special effect characters.
8. A special effect character generation device, comprising:
the acquisition module is configured to acquire characters to be processed;
the conversion module is configured to convert the characters to be processed into character images;
the first fusion module is configured to map the dynamic change process of the firework particles on the corresponding pixel position in the firework particle dynamic image to the character pixel position according to the character pixel position in the character image to obtain a target dynamic character;
and the second fusion module is configured to fuse the target dynamic characters and the firework dynamic materials to obtain the firework special effect characters.
9. A computer-readable storage medium, on which computer program instructions are stored, which program instructions, when executed by a processor, carry out the steps of the method of any one of claims 1 to 7.
10. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute instructions stored in the memory to implement the steps of the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110802878.7A CN113421214A (en) | 2021-07-15 | 2021-07-15 | Special effect character generation method and device, storage medium and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113421214A true CN113421214A (en) | 2021-09-21 |
Family
ID=77721144
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110802878.7A Pending CN113421214A (en) | 2021-07-15 | 2021-07-15 | Special effect character generation method and device, storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113421214A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114677461A (en) * | 2022-02-25 | 2022-06-28 | 北京字跳网络技术有限公司 | Method, device and equipment for generating special effect characters and storage medium |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105825490A (en) * | 2016-03-16 | 2016-08-03 | 北京小米移动软件有限公司 | Gaussian blur method and device of image |
US20170193280A1 (en) * | 2015-09-22 | 2017-07-06 | Tenor, Inc. | Automated effects generation for animated content |
CN108055191A (en) * | 2017-11-17 | 2018-05-18 | 深圳市金立通信设备有限公司 | Information processing method, terminal and computer readable storage medium |
CN108337547A (en) * | 2017-11-27 | 2018-07-27 | 腾讯科技(深圳)有限公司 | A kind of word cartoon implementing method, device, terminal and storage medium |
US20180336712A1 (en) * | 2017-05-19 | 2018-11-22 | Mana AKAIKE | Display control apparatus, display control method, and computer program product |
CN109672832A (en) * | 2018-12-20 | 2019-04-23 | 四川湖山电器股份有限公司 | The processing method of digital movie interlude lyrics subtitle realization dynamic Special display effect |
CN110213638A (en) * | 2019-06-05 | 2019-09-06 | 北京达佳互联信息技术有限公司 | Cartoon display method, device, terminal and storage medium |
CN110544218A (en) * | 2019-09-03 | 2019-12-06 | 腾讯科技(深圳)有限公司 | Image processing method, device and storage medium |
CN110704059A (en) * | 2019-10-16 | 2020-01-17 | 北京达佳互联信息技术有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN110738715A (en) * | 2018-07-19 | 2020-01-31 | 北京大学 | automatic migration method of dynamic text special effect based on sample |
CN112258611A (en) * | 2020-10-23 | 2021-01-22 | 北京字节跳动网络技术有限公司 | Image processing method and device |
CN112700517A (en) * | 2020-12-28 | 2021-04-23 | 北京字跳网络技术有限公司 | Method for generating visual effect of fireworks, electronic equipment and storage medium |
Non-Patent Citations (2)
Title |
---|
熊耀 (Xiong Yao): "Research on the Development of 3D Film and Television Special Effects Based on the Unity3D Particle System", 《软件导刊》 (Software Guide), vol. 11, no. 11, 30 November 2012 (2012-11-30), pages 134-136 *
陈训威 (Chen Xunwei): "Implementation of TV Caption Explosion Special Effects Based on a Particle System", 《科技资讯》 (Science & Technology Information), no. 06, 23 February 2007 (2007-02-23), page 9 *
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||