CN113240779A - Method and device for generating special character effect, electronic equipment and storage medium - Google Patents

Method and device for generating special character effect, electronic equipment and storage medium Download PDF

Info

Publication number
CN113240779A
Authority
CN
China
Prior art keywords
text
animation
progress information
stroke
generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110560049.2A
Other languages
Chinese (zh)
Other versions
CN113240779B (en)
Inventor
胡俊霄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202110560049.2A
Publication of CN113240779A
Application granted
Publication of CN113240779B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/60 - Editing figures and text; Combining figures or text
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G06T13/80 - 2D [Two Dimensional] animation, e.g. using sprites

Abstract

The disclosure relates to a method and device for generating a text special effect, an electronic device, and a storage medium, in the technical field of image processing. The method comprises: acquiring animation information and a text to be stroked, the animation information comprising an animation duration and pre-stored stroke progress information; determining drawing progress information for each animation frame to be displayed according to the text to be stroked, the animation duration, and the stroke progress information; and generating the stroke animation of the text to be stroked according to the drawing progress information of each animation frame. Because the drawing progress of each frame is determined from the pre-stored stroke progress information, it does not need to be computed from a pre-stored closed contour line of the character glyph during stroking. This reduces CPU and memory occupancy when generating the text special effect, lowers the performance demanded of the electronic device when generating the stroke effect, and mitigates the poor results otherwise obtained when many characters are to be stroked.

Description

Method and device for generating special character effect, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for generating a special character effect, an electronic device, and a storage medium.
Background
Text stroke special effects with a handwriting look (i.e., hand-drawn text effects) are currently in wide use.
In the prior art, the hand-drawn text effect is implemented as follows: during stroking, the position of the stroking brush at the current moment is determined from a pre-stored closed contour line of the character glyph; the brush is displayed at that position together with the content already stroked before the current moment. In this way, the effect of handwritten text is presented.
However, this method places high performance requirements (e.g., computational power and processing efficiency) on the electronic device. As a result, in scenes with many characters, the hand-drawn text effect produced in this way may be poor.
Disclosure of Invention
The disclosure provides a method and device for generating a text special effect, an electronic device, and a storage medium, to at least solve the problem in the related art that generating a text stroke special effect places high performance requirements on the electronic device. The technical scheme of the disclosure is as follows:
According to a first aspect of the embodiments of the present disclosure, a method for generating a text special effect is provided, including: acquiring animation information and a text to be stroked, the animation information including an animation duration and pre-stored stroke progress information, wherein the stroke progress information represents, for each pixel point on a target area, the ratio of the stroke length from that pixel point to the stroke starting point to the total length of the target area, and the target area represents a closed contour line of the text to be stroked with a preset pixel width; determining drawing progress information for each animation frame to be displayed according to the text to be stroked, the animation duration, and the stroke progress information; and generating the stroke animation of the text to be stroked according to the drawing progress information of each animation frame to be displayed.
In one possible implementation manner, the method further includes: determining a directed distance field map of the text to be stroked, the map representing a closed contour line of the text to be stroked; determining a target area based on the directed distance field map and a preset stroke width; determining the stroke progress information of each pixel point based on the stroke length from that pixel point to the stroke starting point on the target area and the length of the target area; and storing the stroke progress information.
In one possible implementation, determining the target area based on the directed distance field map of the text to be stroked and the preset stroke width includes: determining the target pixel width corresponding to the preset stroke width on the directed distance field map; determining the intersection of the directed distance field map and the target pixel width; and taking the area characterized by the intersection as the target area.
In one possible implementation, storing the stroke progress information includes: adding the stroke progress information to the directed distance field map of the text to be stroked to obtain a directed distance field extended map of the text to be stroked, and storing the directed distance field extended map.
In one possible implementation, adding the stroke progress information to the directed distance field map to obtain the directed distance field extended map includes: storing the directed distance field information of the text to be stroked, the contour line index information of the text to be stroked, and the stroke progress information each in one channel of the directed distance field extended map.
In one possible implementation manner, the drawing progress information includes a target pixel point on the target area corresponding to each animation frame to be displayed, and generating the stroke animation of the text to be stroked according to the drawing progress information includes: drawing a stroke between the target pixel point corresponding to the currently played animation frame and the target pixel point corresponding to the previous animation frame to obtain a current stroke result; obtaining a stroke result to be displayed based on the current stroke result and the stroke result displayed in the previous animation frame; and displaying the stroke result to be displayed.
According to a second aspect of the embodiments of the present disclosure, a device for generating a text special effect is provided, including: an acquisition module configured to acquire animation information and a text to be stroked, the animation information including an animation duration and pre-stored stroke progress information, wherein the stroke progress information represents, for each pixel point on a target area, the ratio of the stroke length from that pixel point to the stroke starting point to the total length of the target area, and the target area represents a closed contour line of the text to be stroked with a preset pixel width; a determining module configured to determine drawing progress information for each animation frame to be displayed according to the text to be stroked, the animation duration, and the pre-stored stroke progress information; and an animation module configured to generate the stroke animation of the text to be stroked according to the drawing progress information of each animation frame to be displayed.
In one possible implementation manner, the device further includes a storage module configured to: determine a directed distance field map of the text to be stroked, the map representing a closed contour line of the text to be stroked; determine a target area based on the directed distance field map and a preset stroke width; determine the stroke progress information of each pixel point based on the stroke length from that pixel point to the stroke starting point on the target area and the length of the target area; and store the stroke progress information.
In one possible implementation, the storage module is specifically configured to: determine the target pixel width corresponding to the preset stroke width on the directed distance field map; determine the intersection of the directed distance field map and the target pixel width; and take the area characterized by the intersection as the target area.
In one possible implementation, the storage module is specifically configured to: add the stroke progress information to the directed distance field map of the text to be stroked to obtain a directed distance field extended map of the text to be stroked, and store the directed distance field extended map.
In one possible implementation, the storage module is specifically configured to: store the directed distance field information of the text to be stroked, the contour line index information of the text to be stroked, and the stroke progress information each in one channel of the directed distance field extended map.
In one possible implementation manner, the drawing progress information includes a target pixel point on the target area corresponding to each animation frame to be displayed, and the animation module is specifically configured to: draw a stroke between the target pixel point corresponding to the currently played animation frame and the target pixel point corresponding to the previous animation frame to obtain a current stroke result; obtain a stroke result to be displayed based on the current stroke result and the stroke result displayed in the previous animation frame; and display the stroke result to be displayed.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to execute the instructions to implement the method for generating a text effect according to any one of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions of the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method for generating a text effect according to any one of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method for generating a text special effect according to any one of the first aspect.
The technical scheme provided by the embodiments of the present disclosure brings at least the following beneficial effects: the drawing progress information of each animation frame to be displayed in the stroke animation is determined from the text to be stroked, the animation duration, and pre-stored stroke progress information, where the stroke progress information represents, for each pixel point on the target contour line of the preset pixel width, the ratio of the length from that pixel point to the stroke starting point to the length of the target area. Consequently, the drawing progress of each frame need not be determined from a pre-stored closed contour line of the character glyph during stroking. This reduces CPU and memory occupancy when generating the text special effect, lowers the performance required of the electronic device when generating the stroke effect, and mitigates the poor results otherwise obtained when many characters are to be stroked.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a flow diagram illustrating a method for generating text effects in accordance with one illustrative embodiment;
FIG. 2 is a flow diagram illustrating yet another method for generating text effects in accordance with an illustrative embodiment;
FIG. 3 is a flow diagram illustrating yet another method for generating text effects in accordance with an illustrative embodiment;
FIG. 4 is a flow diagram illustrating yet another method for generating text effects in accordance with an illustrative embodiment;
FIG. 5 is a block diagram illustrating an apparatus for generating text effects in accordance with one illustrative embodiment;
FIG. 6 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Before the detailed description of the method for generating the special character effect provided by the disclosure, the application scenario and the implementation environment related to the disclosure are briefly described.
First, terms related to the present disclosure are explained.
Signed Distance Field (SDF), rendered in this text as "directed distance field": a field that stores, for each point, its distance to a contour, and whose sign indicates whether the point lies inside or outside the enclosed region. "Signed" refers to the positive or negative sign, "Distance" to the distance from a point to the contour, and "Field" to the region over which the values are defined.
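As a toy illustration of these terms (not taken from the patent; the circular contour and function names are illustrative), a signed distance field can be sampled over a grid, with negative values inside the contour, zero on it, and positive values outside:

```python
import math

def signed_distance_circle(px, py, cx, cy, radius):
    """Signed distance from point (px, py) to a circular contour:
    negative inside the contour, zero on it, positive outside."""
    return math.hypot(px - cx, py - cy) - radius

# Sample the field on a small grid to build a toy "distance field map".
sdf_map = {
    (x, y): signed_distance_circle(x, y, cx=2.0, cy=2.0, radius=1.5)
    for x in range(5)
    for y in range(5)
}
```

A real implementation would bake such values into a 2D texture, as the description below does for glyph contours.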
Secondly, the application scenario related to the present disclosure is briefly introduced.
Short videos have become an important medium on the Internet. Users record their daily life, opinions, and skills as videos, edit them appropriately, and publish them. In video production, text special effects are highly expressive elements and an indispensable part of video editing software. Because editing text effects in professional design software involves a large workload, in the prior art text effects are added to videos by integrating them into the video editing software.
When adding a text special effect to a video, users often use the text stroke effect. In the prior art, when a character is stroked, the position of the stroking brush is determined from a pre-stored closed contour line of the character glyph. Determining the brush position from the glyph contour requires computing the brush position throughout the stroking process, which occupies a large share of the processor. Especially when there are many characters, the performance requirement on the electronic device is high and applicability is poor.
To solve these problems, the method for generating a text special effect provided by the present disclosure determines the drawing progress information of each animation frame to be displayed in the stroke animation from the text to be stroked, the animation duration, and pre-stored stroke progress information. The stroke progress information represents, for each pixel point on the target contour line of the preset pixel width, the ratio of the length from that pixel point to the stroke starting point to the length of the target area. Consequently, the drawing progress of each frame need not be determined from a pre-stored closed contour line of the character glyph during stroking. This reduces CPU and memory occupancy when generating the text special effect, lowers the performance required of the electronic device when generating the stroke effect, and mitigates the poor results otherwise obtained when many characters are to be stroked.
The following briefly introduces an implementation environment (implementation architecture) to which the present disclosure relates.
The method for generating the text special effect provided by the present disclosure can be applied to an electronic device. The electronic device may be a terminal device or a server. The terminal device may be a smart phone, a tablet computer, a palmtop computer, a vehicle-mounted terminal, a desktop computer, a notebook computer, or the like. The server may be a single server or a server cluster; the disclosure is not limited thereto.
Fig. 1 is a flowchart illustrating a text effect generation method for an electronic device according to an exemplary embodiment, which includes the following steps S101 to S103.
In S101, animation information and a text to be stroked are acquired.
The animation information comprises animation duration and pre-stored stroke progress information. The stroke progress information is used for representing the stroke length from each pixel point on the target area to the stroke starting point and the proportion of the stroke length in the target area. The target area is used for representing a closed contour line of the text to be described with a preset pixel width.
It should be noted that the animation duration may be the total duration of the stroke animation input by the user.
In S102, the drawing progress information of each animation frame to be displayed is determined according to the text to be stroked, the animation duration, and the pre-stored stroking progress information.
Illustratively, the animation duration is 10 s and the text to be stroked is the single horizontal stroke "一" ("one"). The stroke progress of the pixel point at the left end of "一" is 0, that at the right end is 1, and that at the midpoint is 0.5. Accordingly, when the animation has played for 5 s, the stroking brush is at the midpoint pixel of "一", and the animation frame displayed at that moment shows the pixels from the stroke starting point to the midpoint.
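The mapping from elapsed animation time to drawn pixels can be sketched as follows; this is a minimal Python illustration of the example above, with hypothetical pixel labels in place of real coordinates:

```python
def brush_progress(elapsed_s, duration_s):
    """Fraction of the contour that should be drawn after `elapsed_s`
    seconds of a stroke animation lasting `duration_s` seconds."""
    if duration_s <= 0:
        raise ValueError("animation duration must be positive")
    return max(0.0, min(1.0, elapsed_s / duration_s))

# Per-pixel stroke progress for three reference points of the stroke.
progress_map = {"left_end": 0.0, "middle": 0.5, "right_end": 1.0}

def drawn_pixels(progress_map, elapsed_s, duration_s):
    """Pixels whose stored stroke progress the brush has already reached."""
    k = brush_progress(elapsed_s, duration_s)
    return {p for p, prog in progress_map.items() if prog <= k}
```

At 5 s of a 10 s animation, the brush progress is 0.5, so the left end and midpoint are visible but the right end is not yet drawn.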
In S103, a stroke animation of the text to be stroked is generated according to the drawing progress information of each animation frame to be displayed.
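Assuming evenly spaced frames, the per-frame generation of S103 can be sketched as below; `render_stroke_frames` and its incremental compositing are an illustrative reading, not the patent's implementation:

```python
def render_stroke_frames(progress_map, n_frames):
    """Yield (newly drawn pixels, all visible pixels) for each frame.

    Each frame strokes only the span between the previous frame's target
    progress and the current one, then composites it over the result
    already shown in the previous frame."""
    shown = set()
    for i in range(1, n_frames + 1):
        k = i / n_frames  # target drawing progress of this frame
        new = {p for p, prog in progress_map.items() if prog <= k} - shown
        shown = shown | new  # current stroke result plus previous result
        yield new, set(shown)

# Toy progress map: three labeled pixels with stored stroke progress.
frames = list(render_stroke_frames({"a": 0.2, "b": 0.6, "c": 1.0}, n_frames=2))
```

Because each frame only draws the newly reached span and reuses the previous frame's result, the full contour never needs to be re-traversed per frame.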
The technical scheme provided by the embodiments of the present disclosure brings at least the following beneficial effects: the drawing progress information of each animation frame to be displayed is determined from the text to be stroked, the animation duration, and pre-stored stroke progress information, where the stroke progress information represents, for each pixel point on the target contour line of the preset pixel width, the ratio of the length from that pixel point to the stroke starting point to the length of the target area. Consequently, the drawing progress of each frame need not be determined from a pre-stored closed contour line of the character glyph during stroking. This reduces CPU and memory occupancy when generating the text special effect, lowers the performance required of the electronic device when generating the stroke effect, and mitigates the poor results otherwise obtained when many characters are to be stroked.
In one possible implementation, as shown in fig. 2 in conjunction with fig. 1, the text special effect method further includes S104-S107. S104-S107 are applicable to electronic devices.
In S104, a directed distance field map of the text to be traced is determined.
The directional distance field map of the text to be traced is used for representing a closed contour line of the text to be traced.
In one example, a directed distance field map of text to be traced can be determined from user-input text to be traced.
It can be appreciated that the directed distance field map of the text to be stroked displays a closed outline of the text; here, a closed contour line of the text to be stroked refers to its glyph contour.
For example, the directed distance field map of the text to be stroked can be generated by projecting the closed outline of the text onto an SDF texture.
It is to be appreciated that the directed distance field maps of the text to be traced can be pre-stored in the electronic device.
In another example, the text to be stroked includes at least one closed contour line. When the text is a single character, its closed contour line may be the closed contour of the whole character or the closed contours of its several components. For example, when the text to be stroked is "一" ("one"), its closed contour line is that of "一", and the text includes one closed contour line. When the text to be stroked is "机" ("machine"), its closed contour lines are those of the components "木" and "几", and the text includes two closed contour lines.
In another example, when the text to be stroked consists of several characters, its closed contour lines are those of the individual characters. For example, when the text to be stroked is "今天" ("today"), its closed contour lines are the closed contour of "今" and the closed contour of "天".
Illustratively, the directed distance field map of the text to be traced is a 2D texture map.
It should be noted that the distance from any pixel point of the directed distance field map to the closed contour line of the text to be stroked can be obtained by calculation, and whether any such pixel point lies inside or outside the closed contour line can likewise be determined by calculation.
In S105, a target region is determined based on the directional distance field map and the preset stroking width of the text to be stroked.
Note that the value of the preset pixel width is determined based on the value of the stroke width. The target area is the area to be stroked.
In S106, based on the length of the stroke from each pixel point on the target area to the stroke start point and the length of the target area, stroke progress information of each pixel point is determined.
For example, the stroke length L1 from each pixel point on the target area to the stroke starting point and the total length L2 of the target area are determined, and the stroke progress of each pixel point is K = L1 / L2.
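For a contour represented as a polyline, the computation K = L1 / L2 can be sketched as follows (an illustrative Python approximation; the patent operates on map pixels rather than polyline vertices):

```python
import math

def contour_progress(points):
    """Stroke progress K = L1 / L2 for each vertex of a polyline contour,
    where L1 is the arc length from the stroke start to the vertex and
    L2 is the total contour length."""
    cum = [0.0]  # cumulative arc length from the stroke starting point
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    total = cum[-1]
    if total == 0:
        raise ValueError("contour has zero length")
    return [c / total for c in cum]
```

The resulting per-point values are exactly the ratios that the patent proposes to pre-compute and store, so that no contour traversal is needed at animation time.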
In S107, the stroke progress information is stored.
For example, the stroke progress information may be stored in a frame buffer in the memory of the graphics card.
The technical scheme provided by this embodiment has at least the following beneficial effects: the stroke progress information of each pixel point in the target area is determined and stored before the stroke animation is generated. Compared with storing the closed contour line of the character glyph, this occupies less memory, thereby reducing the performance required of the electronic device when generating the stroke effect for many characters.
In one possible implementation, in conjunction with FIG. 2, as shown in FIG. 3, S105 includes S105a-S105 b.
At S105a, a corresponding target pixel width of a preset stroke width on the directed distance field map of the text to be stroked is determined.
In one example, the preset stroke width is mapped onto the directed distance field map of the text to be stroked, and the corresponding target pixel width is determined.
At S105b, an intersection of the directed distance field map of the text to be traced and the width of the target pixel is determined, and a region characterized by the intersection is taken as the target region.
It can be understood that the target pixel width corresponds to the preset stroke width; by intersecting the target pixel width with the directed distance field map of the text to be stroked, a stroke contour of the preset pixel width, i.e., the stroke contour of the text to be stroked, can be determined.
Alternatively, the region of the target pixel width may lie entirely inside the closed contour line of the text to be stroked while intersecting it; or lie entirely outside the closed contour line while intersecting it; or lie partly outside and partly inside the closed contour line while intersecting it.
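One plausible reading of the intersection in S105a and S105b, sketched in Python over a toy distance-field dictionary (the half-width threshold and the data layout are assumptions for illustration):

```python
def target_region(sdf_map, half_width_px):
    """Pixels whose absolute signed distance to the closed contour is at
    most half the target pixel width, i.e. the band to be stroked."""
    return {p for p, d in sdf_map.items() if abs(d) <= half_width_px}

# Toy 1-D slice of a distance-field map across a contour:
# negative values inside, zero on the contour, positive outside.
sdf = {(0, 0): -2.0, (1, 0): -0.5, (2, 0): 0.0, (3, 0): 0.5, (4, 0): 2.0}
band = target_region(sdf, half_width_px=1.0)
```

Selecting by absolute distance naturally produces a band straddling the contour; shifting the threshold interval would place the band wholly inside or wholly outside, matching the alternatives described above.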
The technical scheme provided by this embodiment has at least the following beneficial effects: the target pixel width corresponding to the preset stroke width is determined on the directed distance field map of the text to be stroked, the intersection of the map and that target pixel width is determined, and the area characterized by the intersection is taken as the target area. In this way, the target area, i.e., the contour band for stroking, is generated directly on the directed distance field map using the preset stroke width. This improves the pixel precision of stroking on the map and allows anti-aliasing to be performed based on the directed distance field, ensuring smooth strokes and a better anti-aliasing effect when stroking the target contour line.
In one possible implementation, storing the stroke progress information includes: adding the stroke progress information to the directed distance field map of the text to be stroked to obtain a directed distance field extended map of the text to be stroked, and storing the directed distance field extended map.
In one example, the directed distance field map of the text to be traced is a 2D texture map.
In another example, the directed distance field extension map of the text to be traced is a 2D texture map of an RGB channel.
The technical scheme provided by this embodiment has at least the following beneficial effects: by storing the stroke progress information in the directed distance field extended map of the text to be stroked and storing that extended map, the storage space occupied by the stroke progress information can be reduced, further lowering the performance required of the electronic device when generating the stroke effect.
For example, a directed distance field extension map of text to be traced can be stored in a frame buffer in the graphics card memory.
In one possible implementation, adding the stroke progress information to the directed distance field map of the text to be stroked to obtain the directed distance field expansion map of the text to be stroked includes: storing the distance field information of the text to be stroked, the contour line index information of the text to be stroked, and the stroke progress information each in its own channel of the directed distance field expansion map of the text to be stroked.
In one example, the contour line index information of the text to be stroked is determined from the directed distance field map of the text to be stroked. For example, when the text to be stroked is "scary," its directed distance field map contains three closed contour lines, "mouth," "down," and "person," where the index number of "mouth" is 1, the index number of "down" is 2, and the index number of "person" is 3.
It can be understood that the contour line index information refers to the index number of each closed contour line when the text to be stroked has a plurality of closed contour lines.
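One way such index numbers could be assigned is by labelling the connected components of a binary contour mask. The sketch below is only an illustration under assumed conventions (a nested-list mask, 4-connectivity, index 0 meaning "no contour"); the patent does not fix these details.

```python
from collections import deque

def label_contours(mask):
    """Return a same-shaped grid of index numbers; 0 means no contour."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    next_index = 1
    for y in range(h):
        for x in range(w):
            if mask[y][x] and labels[y][x] == 0:
                # Breadth-first flood fill of one closed contour line.
                queue = deque([(y, x)])
                labels[y][x] = next_index
                while queue:
                    cy, cx = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and labels[ny][nx] == 0):
                            labels[ny][nx] = next_index
                            queue.append((ny, nx))
                next_index += 1
    return labels

# Two separate contour components receive index numbers 1 and 2.
mask = [[1, 1, 0, 1],
        [0, 0, 0, 1]]
labels = label_contours(mask)
```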
Illustratively, the R channel stores the distance field information of the text to be stroked, the G channel stores the contour line index information of the text to be stroked, and the B channel stores the stroke progress information.
It should be noted that a value of 0 is stored for pixels that are farther from the contour line than the preset pixel width, that is, for pixels that are not in the target area. The preset pixel width is determined based on the preset stroke width information. It can be understood that pixel points outside the target area do not need to be stroked, and therefore have no stroke progress information.
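The channel layout above can be sketched as follows. This is a hedged illustration only: the `(H, W, 3)` float array layout and the function name `pack_extension_map` are assumptions, not details taken from the patent.

```python
import numpy as np

def pack_extension_map(sdf, contour_index, progress, target_mask):
    # R = directed distance field value, G = contour line index,
    # B = stroke progress; B is forced to 0 outside the target area,
    # since those pixels carry no stroke progress information.
    h, w = sdf.shape
    tex = np.zeros((h, w, 3), dtype=np.float32)
    tex[..., 0] = sdf
    tex[..., 1] = contour_index
    tex[..., 2] = np.where(target_mask, progress, 0.0)
    return tex

# One pixel inside the target area, one outside.
sdf = np.array([[0.5, 2.0]], dtype=np.float32)
idx = np.array([[1.0, 0.0]], dtype=np.float32)
progress = np.array([[0.25, 0.9]], dtype=np.float32)
mask = np.array([[True, False]])
tex = pack_extension_map(sdf, idx, progress, mask)
```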
The technical scheme provided by the embodiment at least has the following beneficial effects: by adding the stroke progress information to the directional distance field map of the text to be stroked and storing the stroke progress information, the occupied space during storage can be reduced, and the problem that when too many characters need to be stroked, the requirement on the performance of electronic equipment is high, and the application range of the method for generating the special character effect is influenced is avoided.
In one possible implementation, the drawing progress information includes the target pixel point on the target area corresponding to each animation frame to be displayed. In conjunction with FIG. 1, as shown in FIG. 4, S103 includes S103a-S103c.
In S103a, a stroking is performed between a target pixel point corresponding to the currently played animation frame and a target pixel point corresponding to the previous animation frame, so as to obtain a current stroking result.
Illustratively, a stroke between a target pixel point corresponding to a currently played animation frame and a target pixel point corresponding to a previous animation frame may be drawn by a brush.
In one example, when drawing is performed with a brush, the brush tip is anchored at the corresponding position between the target pixel point corresponding to the currently played animation frame and the target pixel point corresponding to the previous animation frame.
In S103b, a stroking result to be displayed is obtained based on the current stroking result and the stroking result displayed in the previous animation frame.
It can be understood that the stroking result displayed in the previous animation frame is the stroking result between the target pixel point corresponding to the previous animation frame and the stroking starting point.
In S103c, the stroking result to be displayed is displayed.
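The frame-by-frame compositing of S103a-S103c can be sketched as below. The ordered list-of-pixels representation of the target area and the function name `frame_stroke` are illustrative assumptions; a real implementation would typically do this per fragment on the GPU.

```python
def frame_stroke(path, prev_idx, cur_idx, shown):
    """`path` is the ordered list of target-area pixels; `shown` is the
    set of pixels already displayed (the previous frame's result)."""
    current_segment = set(path[prev_idx:cur_idx + 1])  # S103a: new stroke
    to_display = shown | current_segment               # S103b: composite
    return to_display                                  # S103c: display

path = [(0, 0), (0, 1), (0, 2), (0, 3)]
shown = frame_stroke(path, 0, 1, set())   # first frame draws two pixels
shown = frame_stroke(path, 1, 3, shown)   # next frame extends the stroke
```

Only pixels the brush has already passed are in `shown`, which matches the behavior described below of hiding the not-yet-stroked portion of the target area.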
In one example, the animation information further includes a stroke color configuration. For example, the stroke color configuration may specify the color in which the stroke is displayed after the stroking of the text to be stroked is completed.
In one example, the animation information further includes a text base color effect configuration, where the text base color effect may be basic SDF-based base color text rendering.

For example, the text base color effect may be configured so that the base color is not displayed before the text is stroked and is fully displayed once the text stroking special effect is completed. The configuration may instead gradually reveal the base color in step with the stroke progress, or fade the base color in over the course of generating the text special effect.

In one example, the animation information further includes a time point for displaying the text base color and a time point for displaying the stroke color.

For example, the text base color may be configured to appear later than the stroke color; in that case, the display progress of the text base color follows the stroking progress of the brush.
In another example, the animation information further includes a brush position scaling. By configuring the brush position scaling, the size of the brush can be set as needed so that it matches the electronic device.
The technical scheme provided by the embodiment at least has the following beneficial effects: the method comprises the steps of obtaining a current delineation result by delineating a target pixel point corresponding to a currently played animation frame and a target pixel point corresponding to a previous animation frame, obtaining a delineation result to be displayed according to the current delineation result and the delineation result displayed by the previous animation frame, and displaying the delineation result to be displayed, so that the method only displays the pixel points of which the delineation of a delineation brush on a target area is completed, namely only displays the delineation result within a drawing range, hides the delineation result of the pixel points of which the delineation brush is not performed on the target area, and further realizes the synchronous gradual change display of the delineation result in the delineation animation along with the drawing progress of the delineation brush.
The foregoing describes the scheme provided by the embodiments of the present disclosure, primarily from a methodological perspective. Implementing the above functions requires corresponding hardware structures and/or software modules. Those of skill in the art will readily appreciate that the example modules and algorithm steps described in connection with the embodiments disclosed herein can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The embodiments of the disclosure also provide an apparatus for generating the text special effect.
Fig. 5 is a block diagram illustrating an apparatus for generating text effects according to an exemplary embodiment. Referring to fig. 5, the apparatus 500 for generating a text special effect includes an obtaining module 501, a determining module 502 and an animation module 503.
The obtaining module 501 is configured to obtain the animation information and the text to be stroked. The animation information includes the animation duration and pre-stored stroke progress information; the stroke progress information represents, for each pixel point on the target area, the stroke length from that pixel point to the stroke starting point and the ratio of that stroke length to the total length of the target area; the target area represents a closed contour line of the text to be stroked with a preset pixel width. For example, in conjunction with FIG. 1, the obtaining module 501 may be configured to perform S101.
The determining module 502 is configured to determine the drawing progress information of each animation frame to be displayed according to the text to be stroked, the animation duration, and the pre-stored stroke progress information. For example, in conjunction with FIG. 1, the determining module 502 may be configured to perform S102.
The animation module 503 is configured to generate the stroking animation of the text to be stroked according to the drawing progress information of each animation frame to be displayed. For example, in conjunction with FIG. 1, the animation module 503 may be used to perform S103.
In one possible implementation, the apparatus for generating the text special effect further includes a storage module configured to: determine a directed distance field map of the text to be stroked, where the directed distance field map represents a closed contour line of the text to be stroked; determine the target area based on the directed distance field map and a preset stroke width; determine the stroke progress information of each pixel point based on the stroke length from that pixel point to the stroke starting point on the target area and the total length of the target area; and store the stroke progress information. For example, in conjunction with FIG. 2, the storage module may be used to perform S104-S107.
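The progress computation handled by the storage module (S106) can be sketched as a cumulative arc-length ratio. This is a minimal sketch under assumptions: the target area is represented as an ordered list of at least two pixel coordinates starting at the stroke starting point, and the function name `stroke_progress` is hypothetical.

```python
import numpy as np

def stroke_progress(points):
    """Per-point stroke progress: arc length from the stroke starting
    point to each point, divided by the total length of the path."""
    pts = np.asarray(points, dtype=np.float64)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)  # segment lengths
    cum = np.concatenate([[0.0], np.cumsum(seg)])       # arc length so far
    return cum / cum[-1]                                # ratio in [0, 1]

# Three points along one row: progress 0 at the start, 1 at the end.
ratios = stroke_progress([(0, 0), (0, 3), (0, 4)])
```

Each ratio is exactly the quantity stored per pixel in the B channel of the expansion map described earlier.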
In one possible implementation, the storage module is specifically configured to: determine the target pixel width corresponding to the preset stroke width on the directed distance field map of the text to be stroked; and determine the intersection of the directed distance field map and the target pixel width, taking the area represented by the intersection as the target area. For example, in conjunction with FIG. 3, the storage module may be used to perform S105a-S105b.
In one possible implementation, the storage module is specifically configured to: add the stroke progress information to the directed distance field map of the text to be stroked to obtain a directed distance field expansion map of the text to be stroked, and store the directed distance field expansion map of the text to be stroked.
In one possible implementation, the storage module is specifically configured to store the distance field information of the text to be stroked, the contour line index information of the text to be stroked, and the stroke progress information each in its own channel of the directed distance field expansion map of the text to be stroked.
In one possible implementation, the drawing progress information includes the target pixel point on the target area corresponding to each animation frame to be displayed. The animation module 503 is specifically configured to: perform stroking between the target pixel point corresponding to the currently played animation frame and the target pixel point corresponding to the previous animation frame to obtain a current stroking result; obtain the stroking result to be displayed based on the current stroking result and the stroking result displayed in the previous animation frame; and display the stroking result to be displayed. For example, in conjunction with FIG. 4, the animation module 503 may be used to perform S103a-S103c.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 6 is a block diagram illustrating an electronic device in accordance with an example embodiment. The electronic device 600 may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a desktop computer, a portable computer, a vehicle-mounted terminal, a wearable device, or the like. As shown in fig. 6, the electronic device 600 includes, but is not limited to, a processor 601, a memory 602, a display 603, an input unit 604, an output unit 605, a network unit 606, an interface unit 607, a radio frequency unit 608, a sensor 609, a power supply 610, and the like.

It should be noted that the structure shown in fig. 6 does not limit the electronic device 600; as those skilled in the art will appreciate, the electronic device 600 may include more or fewer components than shown in fig. 6, combine some components, or arrange them differently.
In the embodiment of the present disclosure, the processor 601 is configured to execute a text special effect task when a user triggers a text special effect control.
It should be noted that the electronic device 600 can implement each process implemented by the electronic device in the foregoing method embodiments, and can achieve the same technical effect, and for avoiding repetition, detailed descriptions are not repeated here.
The processor 601 is the control center of the electronic device 600. It connects the various parts of the electronic device 600 through various interfaces and lines, and performs the functions of the electronic device 600 and processes data by running or executing software programs and/or modules stored in the memory 602 and calling data stored in the memory 602, thereby monitoring the electronic device 600 as a whole. The processor 601 may include one or more processing units. In one embodiment, the processor 601 may integrate an application processor, which mainly handles the operating system, user interfaces, applications, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 601.
The memory 602 may be used to store software programs as well as various data. The memory 602 may mainly include a program storage area and a data storage area. The program storage area stores the operating system and the application programs required by at least one function (such as a sound playing function or an image playing function); the data storage area may store data created according to the use of the device (such as audio data or a phonebook). Further, the memory 602 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The display 603 is used for displaying the process of the text special effect; that is, the display 603 may display the stroking animation of the text special effect. The display 603 may include a display panel, which may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
The input unit 604 may be used to receive audio or video signals. The input unit 604 may include a graphics processing unit (GPU) and a microphone; the GPU processes image data of still pictures or video obtained by an image capturing device (such as a camera) in video capturing mode or image capturing mode. The processed image frames may be displayed on the display 603, stored in the memory 602 (or another storage medium), or transmitted via the radio frequency unit 608 or the network unit 606. The microphone may receive sound and process it into audio data. In phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 608.
The input unit 604 may also serve as a user input unit for receiving input numeric or character information and generating key signal inputs related to user settings and function control of the electronic device 600. Specifically, the user input unit includes a touch panel and other input devices. The touch panel, also referred to as a touch screen, can collect the user's touch operations on or near it (e.g., operations performed on or near the touch panel with a finger, a stylus, or any other suitable object or attachment). The touch panel may include two components: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 601, and receives and executes commands from the processor 601. The touch panel may be implemented as a resistive, capacitive, infrared, or surface-acoustic-wave panel, among other types. The other input devices may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and a power switch key), a trackball, a mouse, and a joystick, which are not described further here.
Further, the touch panel may be overlaid on the display panel. When the touch panel detects a touch operation on or near it, it transmits the operation to the processor 601 to determine the type of the touch event, after which the processor 601 provides the corresponding output on the display panel according to the type of the touch event. The touch panel and the display panel may be used as two independent components to implement the input and output functions of the electronic device 600, or they may be integrated to implement those functions; this is not limited here.
The output unit 605 may be an audio output unit, and may convert audio data received by the radio frequency unit 608 or the network unit 606 or stored in the memory 602 into an audio signal and output sound. Also, the audio output unit may also provide audio output related to a specific function performed by the electronic device 600 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit comprises a loudspeaker, a buzzer, a receiver and the like.
The electronic device 600 provides the user with wireless broadband internet access, such as helping the user send and receive e-mails, browse web pages, access streaming media, etc., through the network unit 606.
The interface unit 607 interfaces an external device to the electronic apparatus 600. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 607 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the electronic apparatus 600 or may be used to transmit data between devices external to the electronic apparatus 600.
The radio frequency unit 608 may be configured to receive and transmit signals during information transmission and reception or during a call. Specifically, it receives downlink data from a base station and forwards it to the processor 601 for processing, and transmits uplink data to the base station. In general, the radio frequency unit 608 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 608 may also communicate with a network and other devices through a wireless communication system.
The sensor 609 may include at least one of a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor, which adjusts the brightness of the display panel according to the ambient light, and a proximity sensor, which turns off the display panel and/or the backlight when the electronic device 600 is moved to the ear. As one type of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes) and the magnitude and direction of gravity when stationary, and can be used to identify the posture of the electronic device (e.g., landscape/portrait switching, related games, magnetometer posture calibration) and vibration-related functions (e.g., pedometer, tapping). The sensor 609 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail here.
A power supply 610 (e.g., a battery) may be used to power the various components, and in one embodiment, the power supply 610 may be logically coupled to the processor 601 via a power management system to manage charging, discharging, and power consumption management functions via the power management system.
In addition, the electronic device 600 further includes some functional modules (e.g., a camera) not shown, which are not described herein again.
In one example, referring to fig. 5, the processing functions of the above-mentioned obtaining module 501, determining module 502 and animation module 503 can be implemented by the processor 601 in fig. 6 calling a computer program stored in the memory 602.
In an exemplary embodiment, the disclosed embodiment further provides a computer-readable storage medium including instructions, for example, a memory 602 including instructions, which are executable by a processor 601 of the electronic device 600 to perform the text special effect method in S101-S107 described above.
Alternatively, the computer-readable storage medium may be a non-transitory computer-readable storage medium, which may be, for example, a Read-Only Memory (ROM), a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, the disclosed embodiment also provides a computer program product including one or more instructions executable by the processor 601 of the electronic device 600 to perform the text special effect method in S101-S107 described above.
It should be noted that the instructions in the computer-readable storage medium or one or more instructions in the computer program product are executed by the processor 601 of the electronic device 600 to implement the processes of the embodiment of the text special effect generating method, and the technical effects same as those of the text special effect methods S101 to S107 can be achieved, and are not repeated here to avoid repetition.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method for generating a text special effect, characterized by comprising the following steps:
acquiring animation information and a text to be stroked; the animation information comprises an animation duration and pre-stored stroke progress information; the stroke progress information is used for representing, for each pixel point on a target area, the stroke length from the pixel point to a stroke starting point and the ratio of that stroke length to the total length of the target area; the target area is used for representing a closed contour line of the text to be stroked with a preset pixel width;
determining drawing progress information of each animation frame to be displayed according to the text to be stroked, the animation duration and the stroke progress information;
and generating the stroking animation of the text to be stroked according to the drawing progress information of each animation frame to be displayed.
2. The method for generating a text special effect according to claim 1, further comprising:
determining a directed distance field map of the text to be stroked; the directed distance field map of the text to be stroked is used for representing a closed contour line of the text to be stroked;
determining the target area based on the directed distance field map of the text to be stroked and a preset stroke width;
determining the stroke progress information of each pixel point based on the stroke length from each pixel point to the stroke starting point on the target area and the length of the target area;
and storing the stroke progress information.
3. The method for generating a text special effect according to claim 2, wherein the determining the target area based on the directed distance field map of the text to be stroked and a preset stroke width comprises:

determining the target pixel width corresponding to the preset stroke width on the directed distance field map of the text to be stroked;

and determining the intersection of the directed distance field map of the text to be stroked and the target pixel width, and taking the area represented by the intersection as the target area.
4. The method for generating a text special effect according to claim 2, wherein the storing the stroke progress information comprises:

adding the stroke progress information to the directed distance field map of the text to be stroked to obtain a directed distance field expansion map of the text to be stroked;

and storing the directed distance field expansion map of the text to be stroked.
5. The method for generating a text special effect according to claim 4, wherein the adding the stroke progress information to the directed distance field map of the text to be stroked to obtain the directed distance field expansion map of the text to be stroked comprises:

storing the distance field information of the text to be stroked, the contour line index information of the text to be stroked, and the stroke progress information each in a channel of the directed distance field expansion map of the text to be stroked.
6. The method for generating a text special effect according to any one of claims 1 to 5, wherein the drawing progress information comprises a target pixel point on the target area corresponding to each animation frame to be displayed; and the generating the stroking animation of the text to be stroked according to the drawing progress information of each animation frame to be displayed comprises:
drawing a stroke between a target pixel point corresponding to the currently played animation frame and a target pixel point corresponding to the previous animation frame to obtain a current stroke result;
obtaining a stroking result to be displayed based on the current stroking result and a stroking result displayed by a previous animation frame;
and displaying the stroking result to be displayed.
7. An apparatus for generating a text special effect, characterized by comprising:
an obtaining module configured to obtain animation information and a text to be stroked; the animation information comprises an animation duration and pre-stored stroke progress information; the stroke progress information is used for representing, for each pixel point on a target area, the stroke length from the pixel point to a stroke starting point and the ratio of that stroke length to the total length of the target area; the target area is used for representing a closed contour line of the text to be stroked with a preset pixel width;
a determining module configured to determine drawing progress information of each animation frame to be displayed according to the text to be stroked, the animation duration and the pre-stored stroke progress information;
and an animation module configured to generate the stroking animation of the text to be stroked according to the drawing progress information of each animation frame to be displayed.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of generating a text effect according to any one of claims 1-6.
9. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method for generating text effects of any of claims 1-6.
10. A computer program product, characterized in that it comprises computer instructions which, when run on an electronic device, cause the electronic device to carry out the method of generating a text effect according to any one of claims 1-6.
CN202110560049.2A 2021-05-21 2021-05-21 Method and device for generating text special effects, electronic equipment and storage medium Active CN113240779B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110560049.2A CN113240779B (en) 2021-05-21 2021-05-21 Method and device for generating text special effects, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113240779A true CN113240779A (en) 2021-08-10
CN113240779B CN113240779B (en) 2024-02-23


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101799922A (en) * 2009-02-09 2010-08-11 北京新岸线网络技术有限公司 Method and device for detecting strokes of characters, method and device for locating lines of characters, and method and device for judging repeat of subtitles
CN102184652A (en) * 2011-06-01 2011-09-14 张建强 Digitization method and software system capable of demonstrating word writing process
CN103413342A (en) * 2013-07-25 2013-11-27 南京师范大学 Image and character gradual-change method based on pixel points
CN103810739A (en) * 2014-02-20 2014-05-21 南京师范大学 Image character morphing animation generating method
CN106802800A (en) * 2016-12-30 2017-06-06 深圳芯智汇科技有限公司 The generation method and display device of graphical interfaces
CN107665186A (en) * 2017-09-29 2018-02-06 深圳市前海手绘科技文化有限公司 A kind of peculiar font generation method
CN107689071A (en) * 2016-08-04 2018-02-13 创盛视联数码科技(北京)有限公司 The method of word animation producing
CN108305310A (en) * 2017-11-27 2018-07-20 腾讯科技(深圳)有限公司 A kind of word cartoon implementing method, device, terminal and storage medium
CN108337547A (en) * 2017-11-27 2018-07-27 腾讯科技(深圳)有限公司 A kind of word cartoon implementing method, device, terminal and storage medium
CN108765520A (en) * 2018-05-18 2018-11-06 腾讯科技(深圳)有限公司 Rendering intent and device, storage medium, the electronic device of text message

Similar Documents

Publication Publication Date Title
CN108958615B (en) Display control method, terminal and computer readable storage medium
CN107817939B (en) Image processing method and mobile terminal
WO2019174628A1 (en) Photographing method and mobile terminal
CN109240577B (en) Screen capturing method and terminal
CN107943390B (en) Character copying method and mobile terminal
CN111223143B (en) Key point detection method and device and computer readable storage medium
CN109151367B (en) Video call method and terminal equipment
KR20220154763A (en) Image processing methods and electronic equipment
CN109448069B (en) Template generation method and mobile terminal
CN111026305A (en) Audio processing method and electronic equipment
CN111461985A (en) Picture processing method and electronic equipment
JP2023518548A (en) Detection result output method, electronic device and medium
CN110908517B (en) Image editing method, image editing device, electronic equipment and medium
CN110941378B (en) Video content display method and electronic equipment
CN110908750B (en) Screen capturing method and electronic equipment
CN109618055B (en) Position sharing method and mobile terminal
CN109639981B (en) Image shooting method and mobile terminal
CN111522613B (en) Screen capturing method and electronic equipment
CN112449098B (en) Shooting method, device, terminal and storage medium
CN110007821B (en) Operation method and terminal equipment
CN109547696B (en) Shooting method and terminal equipment
CN111447598A (en) Interaction method and display device
CN110781331A (en) Picture display method, electronic equipment and computer readable storage medium
CN107861667B (en) Method for arranging desktop application icons and mobile terminal
CN110780795A (en) Screen capturing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant