CN108765520B - Text information rendering method and device, storage medium and electronic device


Info

Publication number
CN108765520B
Authority
CN
China
Prior art keywords
map
pixel
texture
text information
target
Prior art date
Legal status
Active
Application number
CN201810482906.XA
Other languages
Chinese (zh)
Other versions
CN108765520A (en)
Inventor
傅强
宋立强
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201810482906.XA priority Critical patent/CN108765520B/en
Publication of CN108765520A publication Critical patent/CN108765520A/en
Application granted granted Critical
Publication of CN108765520B publication Critical patent/CN108765520B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses a text information rendering method and device, a storage medium and an electronic device. The method comprises the following steps: acquiring indication information, wherein the indication information is used for indicating that target text information with a stroke is to be rendered in a target image; obtaining a second map by sampling a first map in response to the indication information, wherein the first map is used for representing the stroke texture of the target text information and the second map is used for representing the stroked texture of the target text information; and rendering the stroked target text information in the target image by using the second map. The invention solves the technical problem in the related art that stroking characters consumes excessive terminal computing resources.

Description

Text information rendering method and device, storage medium and electronic device
Technical Field
The invention relates to the field of the internet, and in particular to a text information rendering method and device, a storage medium and an electronic device.
Background
A character stroke is a contour line drawn around characters, located just outside the edge of the character glyph. It visually highlights the characters and is often used to emphasize the information content carried by text in scenes such as games, movies, animation and live broadcast.
In the related art, text stroking can be realized through a built-in component of UGUI (the UI system developed for the Unity engine). Using this component, a copy of the original text is drawn once in each of four offset directions (upper left, upper right, lower left and lower right) relative to the original vertices, and the original text is finally superimposed on top, visually forming a stroke effect, as shown in fig. 1.
Looking at the published UGUI source code, the component is implemented as follows. The Shadow component draws, at a specified offset (moved by a distance x on the x axis and y on the y axis relative to the original vertices), a copy of the text in a specified color to simulate a shadow effect, such as the lighter "A" shown in fig. 2. The Outline component is implemented on top of the Shadow component and corresponds to Shadows in the four offset directions (x, y), (x, -y), (-x, y) and (-x, -y), i.e. the four lighter "A"s shown in fig. 2.
Implementing text stroking with the UGUI's built-in components mainly has the following disadvantages: 1) the stroke effect is discontinuous and uneven, and the stroke breaks in the up, down, left and right directions; 2) the number of vertices increases greatly: a character without a stroke has 6 vertices, while a stroked character under this scheme has 30 (i.e. the original character is rendered 5 times), so with many characters the CPU and GPU of a mobile device come under performance pressure; 3) only small stroke sizes are supported, and an oversized stroke causes obvious breaks and cannot form a visually coherent stroke effect.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide a text information rendering method and device, a storage medium and an electronic device, to at least solve the technical problem in the related art that stroking characters consumes excessive terminal computing resources.
According to an aspect of an embodiment of the present invention, there is provided a method for rendering text information, including: acquiring indication information, wherein the indication information is used for indicating that target text information with a stroke is to be rendered in a target image; obtaining a second map by sampling a first map in response to the indication information, wherein the first map is used for representing the stroke texture of the target text information and the second map is used for representing the stroked texture of the target text information; and rendering the stroked target text information in the target image by using the second map.
According to another aspect of the embodiments of the present invention, there is also provided a text information rendering apparatus, including: an acquisition unit, configured to acquire indication information, wherein the indication information is used for indicating that target text information with a stroke is to be rendered in a target image; a sampling unit, configured to obtain a second map by sampling a first map in response to the indication information, wherein the first map is used for representing the stroke texture of the target text information and the second map is used for representing the stroked texture of the target text information; and a rendering unit, configured to render the stroked target text information in the target image by using the second map.
According to another aspect of the embodiments of the present invention, there is also provided a storage medium including a stored program which, when executed, performs the above-described method.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the above method through the computer program.
In the embodiments of the invention, indication information is acquired, the indication information indicating that target text information with a stroke is to be rendered in a target image; a second map is obtained by sampling a first map, the first map representing the stroke texture of the target text information and the second map representing the stroked texture of the target text information; and the stroked target text information is rendered in the target image using the second map. In other words, in the embodiments of the present application the second map needs to be rendered only once, instead of five times as in the related art, and the number of character vertices is greatly reduced. This solves the technical problem that stroking characters in the related art consumes excessive terminal computing resources, thereby achieving the technical effect of reducing the computing resources consumed by stroking.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic illustration of an alternative effect of a stroke rendering of textual information;
FIG. 2 is a schematic illustration of an alternative effect of the stroke rendering of textual information;
fig. 3 is a schematic diagram of a hardware environment of a rendering method of text information according to an embodiment of the present invention;
FIG. 4 is a flow chart of an alternative method of rendering text information in accordance with an embodiment of the present invention;
FIG. 5 is a schematic view of an alternative game interface according to an embodiment of the present invention;
FIG. 6 is a schematic view of an alternative game interface according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating an alternative text pixel according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of an alternative textual texture map, in accordance with embodiments of the present invention;
FIG. 9 is a schematic diagram of an alternative textual texture map, in accordance with embodiments of the present invention;
FIG. 10 is a schematic diagram of an alternative textual texture map, in accordance with embodiments of the present invention;
FIG. 11 is a schematic diagram of an alternative textual texture map, in accordance with embodiments of the present invention;
FIG. 12 is a diagram illustrating an alternative text pixel according to an embodiment of the present invention;
FIG. 13 is a schematic diagram of an alternative textual texture map, in accordance with embodiments of the present invention;
FIG. 14 is a schematic diagram of an alternative textual texture map, in accordance with embodiments of the present invention;
FIG. 15 is a schematic diagram of an alternative text message according to an embodiment of the invention;
FIG. 16 is a schematic diagram of an alternative textual texture map, according to an embodiment of the present invention;
fig. 17 is a schematic diagram of an alternative text information rendering apparatus according to an embodiment of the present invention; and
fig. 18 is a block diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some of the terms appearing in the description of the embodiments of the present invention are explained as follows:
UGUI: a UI system added to the game engine Unity, which comprises a series of basic UI controls and defines common UI control specifications.
Fragment shader: an important programmable shader stage in the rendering pipeline, which performs texture sampling (one of the stages in the rendering process).
According to an aspect of an embodiment of the present invention, a method embodiment of a method for rendering text information is provided.
Alternatively, in the present embodiment, the above text information rendering method may be applied to a hardware environment formed by the server 301 and the terminal 303 as shown in fig. 3. As shown in fig. 3, the server 301 is connected to the terminal 303 through a network, and a database 305 may be provided on the server 301, or separately from the server 301, to provide data storage services for the server 301. The network includes, but is not limited to, a wide area network, a metropolitan area network or a local area network, and the terminal 303 is not limited to a PC, a mobile phone, a tablet computer, and the like.
The rendering method of the text information according to the embodiment of the present invention may be executed by the terminal 303. Fig. 4 is a flowchart of an alternative text information rendering method according to an embodiment of the present invention, and as shown in fig. 4, the method may include the following steps:
In step S402, the terminal acquires indication information, where the indication information is used to instruct rendering of target text information with a stroke in a target image.
The indication information can be triggered by the underlying configuration of the system or application during rendering, or by the user's configuration in the system or application. The target text information is the text to be drawn, including but not limited to characters and numbers of various languages.
In step S404, in response to the indication information, the terminal obtains a second map by sampling the first map, where the first map is used to represent the stroke texture of the target text information, and the second map is used to represent the stroked texture of the target text information, such as the stroke line around the "hua" character in fig. 1.
In step S406, the terminal renders the target text information with the stroke in the target image by using the second map.
The method for rendering text information according to the embodiment of the present invention may also be executed by the server 301; this differs from the above embodiment in that the execution subject of steps S402 to S406 changes from the terminal to the server. The method may also be executed jointly by the server 301 and the terminal 303, and the terminal 303 may execute it through a client installed on it.
Through steps S402 to S406, indication information is acquired, the indication information indicating that target text information with a stroke is to be rendered in a target image; a second map is obtained by sampling a first map, the first map representing the stroke texture of the target text information and the second map representing the stroked texture of the target text information; and the stroked target text information is rendered in the target image using the second map. In other words, in the embodiment of the present application the second map needs to be rendered only once, instead of five times as in the related art, and the number of character vertices is greatly reduced. This solves the technical problem that stroking characters in the related art consumes excessive terminal computing resources, thereby achieving the technical effect of reducing the computing resources consumed by stroking.
In this technical solution, the text component of an engine such as Unity is used, and the character texture is sampled multiple times in the texture sampling stage of a component such as the fragment shader to form the character stroke effect. The embodiments of the present application are further described below with reference to the steps shown in fig. 4, taking the application of this solution to a game scene as an example; for other scenes such as movies and live broadcast, the implementation is similar and is not repeated.
As shown in fig. 5, in games where many players fight freely on the same screen, such as RPG mobile games, first-person shooters (FPS) and third-person shooters (TPS), the number of players on the same screen is large. From a performance viewpoint, player models can be selectively omitted, but the characters above players' heads cannot be omitted, as they are the last identifiers of the other players on the same screen. In addition, some mobile games are set against a background of ancient Chinese mythology, and their overall art style leans toward the classical and heavy; text stands out against the widely used parchment patterns only if it is stroked, as shown in fig. 6. Games therefore have many application scenarios for character stroking. When the requirement exists, it can be configured in the game client (which may be installed on the terminal); if the configuration requires character stroking, the technical solution of the present application is triggered.
After configuration in the client, when character stroking is required, the configuration triggers the indication information. In the technical solution provided in step S402, the terminal receives the indication information triggered by the underlying logic of the client, and the indication information instructs the underlying logic to render the stroked target text information in the target image.
In the technical solution provided in step S404, in response to the indication information, the terminal obtains a second map by sampling the first map, where the first map is used to represent the stroke texture of the target text information, and the second map is used to represent the stroked texture of the target text information.
Optionally, before the second map is obtained by sampling the first map, the texture map may be created as follows: acquire the stroke texture of the target text information from the font file (font) of the target text information; create a first map that includes the stroke texture of the target text information and first position information (e.g. UV coordinates) used to represent the first region of the target image to which the stroke texture of the target text information is mapped.
In an embodiment of the present application, obtaining the second map by sampling the first map may include the following steps:
Step 1: sample the first map to obtain the texture pixel values of the pixel points in the first map.
Optionally, when the first map is sampled to obtain the texture pixel values of its pixel points, the texture pixel value of each pixel point in the first map can be obtained one by one. However, as shown in fig. 15, when the characters are dense (the text may be oriented in any direction, the characters may differ in size, and the characters may be interleaved), overlapping regions may exist between characters, and as shown in fig. 16, dirty pixel points, that is, pixel points not belonging to a character, may appear.
Optionally, filtering may be performed in the following manner: if a pixel point in the first map is a pixel point mapped in the first region, its texture pixel value is obtained by sampling the first map; if it is not a pixel point mapped in the first region, it is directly set to a fixed value, i.e. a second threshold, such as a white pixel value (e.g. 255).
Step 2: determine the texture pixel values of the pixel points in the third map according to the texture pixel values of the pixel points in the first map to obtain the second map, where the second map includes the pixel points used to represent the stroked texture of the target text information.
Optionally, the third map may be a copy of the first map; in other words, the first map may be sampled directly to obtain the second map. The third map may also be a blank map into which the sampling result of the first map is directly stored.
The above indication information is further used to indicate the target direction in which the target text information needs to be stroked and a first threshold (which may be understood as the range or width, in pixels, of the rendered stroke: for example, if only the one pixel point adjacent to the character texture is rendered, the first threshold is recorded as 1 pixel point (or as twice the radius r, i.e. the diameter, of a pixel point); if pixel points within two pixel points of the character texture are rendered, the first threshold is recorded as 2 pixel points). Determining the texture pixel value of a pixel point in the third map according to the texture pixel value of a pixel point in the first map may include the following steps:
Step 21: obtain a first parameter and a second parameter, where the first parameter is the sum of the texture pixel values of first pixel points in the first map, the first pixel points being located in the target direction of a second pixel point in the first map and at the first threshold distance from the second pixel point, and the second parameter is the number of pixel points at the first threshold distance from the second pixel point.
Optionally, in the above embodiment, the target direction may include a direction shown by an arrow in fig. 7, i.e. one of the four directions (left, right, up and down) of the pixel point numbered 44, and acquiring the first parameter may include:
1) when the target direction comprises one direction, determine the first pixel points that are located in the target direction of the second pixel point and at the first threshold distance from the second pixel point, and take the sum of the texture pixel values of all the first pixel points as the first parameter, where the first threshold is equivalent to the rendering range; if the rendering range is N pixel points, the first threshold may be understood as N × 2 × r, where r denotes the radius of a pixel point, such as the distance from the center of the pixel point numbered 44 in fig. 7 (i.e. the second pixel point) to any corner vertex of that pixel point.
For example, if the rendering range is 1 pixel point and the target direction is rightward, the first pixel points are those numbered 35, 45 and 55; if the target direction is upward, the first pixel points are those numbered 33, 34 and 35. If the rendering range is 2 pixel points and the target direction is downward, the first pixel points are those numbered 63, 64 and 65, and so on.
2) when the target direction comprises a plurality of directions, determine the first pixel points that are located in any one of the plurality of directions of the second pixel point and at the first threshold distance from the second pixel point, and take the sum of the texture pixel values of all the first pixel points as the first parameter.
For example, if the target direction includes four directions, i.e., up, down, left, and right, and the rendering range is 1 pixel, the pixels numbered 33, 34, 35, 45, 55, 54, 53, and 43 are the first pixels.
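As an illustrative aid (not part of the patent text itself), the gathering of the first parameter on the numbered pixel grid of fig. 7 can be sketched in C# as follows; the 10-column numbering of the grid and all names are assumptions made for this sketch:
public static class NeighbourSketch
{
    const int Cols = 10; // assumed width of the numbered grid in fig. 7

    // Sum the texture pixel values of the first pixel points: the three pixel
    // points lying 'range' pixels from the pixel 'center' in direction (dx, dy).
    public static (float sum, int count) FirstParameter(float[] texels, int center,
                                                        int dx, int dy, int range)
    {
        int row = center / Cols + dy * range;
        int col = center % Cols + dx * range;
        float sum = 0f; int count = 0;
        for (int offset = -1; offset <= 1; offset++)
        {
            // Spread perpendicular to the direction: e.g. rightward (dx = 1)
            // from pixel 44 with range 1 gives pixels 35, 45 and 55.
            int r = row + (dx != 0 ? offset : 0);
            int c = col + (dy != 0 ? offset : 0);
            int idx = r * Cols + c;
            if (r >= 0 && c >= 0 && c < Cols && idx < texels.Length) { sum += texels[idx]; count++; }
        }
        return (sum, count); // sum -> first parameter, count -> second parameter
    }
}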
Step 22: determine the texture pixel value of a third pixel point in the third map according to the target ratio and the texture pixel value of the second pixel point, where the target ratio is the ratio between the first parameter and the second parameter, and the position of the third pixel point in the third map is the same as the position of the second pixel point in the first map.
Optionally, determining the texture pixel value of the third pixel point in the third map according to the target ratio and the texture pixel value of the second pixel point comprises: obtaining a first product between the texture pixel value of the second pixel point and a first weight, and obtaining a second product between the target ratio and a second weight, where the first weight is determined according to the transparency of the second pixel point. Optionally, if the transparency is binary data represented by an n-bit binary number (e.g. n = 8), the first weight may be the normalized transparency value α/2^8, and since the sum of the first weight and the second weight is one, the second weight is (1 - α/2^8). The sum of the first product and the second product is taken as the texture pixel value of the third pixel point in the third map.
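Under the same reading, the weighted fusion of step 22 can be written out as a minimal C# sketch (all names are hypothetical; only the α/2^n weighting follows the embodiment above):
using UnityEngine;

public static class StrokeBlendSketch
{
    // 'origin' is the texture pixel value of the second pixel point, 'alpha' its
    // transparency stored as an n-bit integer, and 'targetRatio' is the first
    // parameter divided by the second parameter (the neighbourhood average).
    public static float BlendTexel(float origin, int alpha, int nBits, float targetRatio)
    {
        float firstWeight = alpha / Mathf.Pow(2f, nBits); // normalized transparency, e.g. α/2^8
        float secondWeight = 1f - firstWeight;            // the two weights sum to one
        return origin * firstWeight + targetRatio * secondWeight; // first product + second product
    }
}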
In the above embodiment, obtaining the second map by sampling the first map includes determining second location information (the second location information is used to represent a second region of the target text information in the second map where the stroke texture of the target text information is mapped) as follows: and determining a third parameter, a fourth parameter, a fifth parameter and a sixth parameter for representing the second area, wherein the third parameter is the sum of the maximum value of the first area in the first direction and the first threshold, the fourth parameter is the sum of the maximum value of the first area in the second direction and the first threshold, the fifth parameter is the difference between the minimum value of the first area in the first direction and the first threshold, the sixth parameter is the difference between the minimum value of the first area in the second direction and the first threshold, and the first direction and the second direction are two directions of the target image in a two-dimensional coordinate system.
In the technical solution provided in step S406, the terminal renders the target text information with the stroke in the target image by using the second map.
Optionally, the target image comprises a game frame animation, wherein rendering the target text information with a stroke in the target image using the second map may comprise: and rendering the target text information with the stroked edges in the game frame animation by using the second chartlet, wherein optionally, the stroked edges can be colored according to the color indicated by the indication information in the rendering process.
By adopting the technical solution of the present application, the following problems can be solved: in the related art, the original characters are shifted in specified directions and then superimposed, so the stroke effect is not continuous and uniform; the related art also requires multiple renderings and must process more vertex data, whereas in this method the number of vertices is the same as for unstroked characters, no extra vertices are added, and the hardware requirements on the terminal are reduced. In addition, the supported stroke size range is large (realized by adjusting the value of the first threshold), so both a small stroke effect for the normal application range and a thickened stroke effect for special application environments can be provided.
As an alternative example, the technical solution of the present application is further described below, taking its application to a game as an example.
Text is rendered in the game engine Unity; the basic steps to render one character in the Unity engine are as follows:
Step 1: generate 6 vertices according to the character information in the font file (font); the 6 vertices form 2 triangles, and the resulting map is recorded as the first map. As shown in fig. 8, 2 characters have 4 triangles and 12 vertices.
Step 2: generate mesh information according to the user's settings in the text component, including vertex coordinates, vertex colors, texture coordinates (used to represent the mapping to the first region), etc.
Step 3: process the vertex information in the geometry stage (Geometry Stage) of the rendering pipeline and transform the vertex coordinates into screen space.
Step 4: interpolate texture coordinates, vertex colors, etc. vertex by vertex in the rasterization stage (Rasterizer Stage), and then output the characters to be rendered pixel by pixel.
In the related art, the UGUI stroke effect is implemented as follows: the built-in stroke effect of UGUI is equivalent to repeating the drawing operation 4 extra times on top of drawing the original characters, so the number of vertices of stroked characters is 5 times that of unstroked characters. As shown in fig. 9, with the UGUI's built-in stroke component applied, 1 character has 10 triangles and 30 vertices.
Different from the technical scheme of the related art, the implementation of the stroke effect in the present application is as follows, taking the rendering of one stroked character as an example:
step 11, generating 6 vertexes according to the information of the characters in the font file (font), wherein the 6 vertexes form 2 triangles.
This step can be implemented in the Unity engine (the remaining types of game engines are similar). When implementing this function in the Unity engine, the operational steps that can be performed are: create an empty GameObject; add a Text component to the GameObject; and modify the font property and content of the Text component, whereupon the Unity engine generates the corresponding vertex information for the Text component, as in the sketch below.
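A minimal sketch of these operational steps, assuming a Canvas already exists in the scene (names such as CreateText are illustrative only):
using UnityEngine;
using UnityEngine.UI;

public static class TextSetupSketch
{
    // Create an empty GameObject, add a Text component, and set its font
    // property and content; Unity then generates the vertex information.
    public static Text CreateText(Transform canvasParent, Font font)
    {
        GameObject go = new GameObject("StrokedText"); // new empty GameObject
        go.transform.SetParent(canvasParent, false);   // UI elements live under a Canvas
        Text text = go.AddComponent<Text>();           // Unity generates 6 vertices per character
        text.font = font;                              // modify the font property
        text.text = "Text";                            // modify the content
        return text;
    }
}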
Step 12: generate mesh information, i.e. the first map, according to the user's settings in the text component, including vertex coordinates, vertex colors, texture coordinates, etc.
This step can be implemented in the Unity engine; the operation steps are similar to those in step 11, and the Unity engine generates the Text mesh information, i.e. the first map, at the same time as it generates the Text component vertex information.
Step 13: modify the mesh information according to the user's settings in the stroke component: record the UV region (i.e. the first region) of the original character, expand the UV region of the character according to the stroke size, and record the expanded region as the second region.
The UV coordinates are the texture coordinates and indicate the texture range corresponding to the vertices; this range is usually the minimum range containing the whole character. Enlarging the UV range therefore makes room for the stroke result without increasing the number of vertices and prevents the stroke from being clipped.
As shown in fig. 10, the left image shows the UV range when the Mesh is not modified, and the right image shows the UV range after the Mesh is modified according to the stroke size; it can be seen that the range in the right image is larger than that in the left image.
To compare the UV ranges before and after modification more intuitively, fig. 11 shows the original UV range in a red frame (the frame labeled "A") and the modified UV range in a yellow frame (the frame labeled "B").
The UV before modification and the UV after modification are stored in the tangent variable of the UIVertex structure, and these parameters are passed to the Shader as a data source.
These steps can be implemented in the Unity engine using the interface for modifying the mesh information of the Text component: after the character mesh is generated, Unity detects whether a component inheriting from BaseMeshEffect exists on the GameObject corresponding to the Text component, and if so, calls the ModifyMesh method overridden in that component.
The steps required in the scheme of step 13 are:
1) implement a component inheriting from BaseMeshEffect, with the class name TextModMesh, and add the component to the GameObject where the Text component is located;
2) modify the mesh in the ModifyMesh method overridden by TextModMesh. The specific modification method is: obtain the mesh vertex information through the Unity engine's vertex interface; every 3 adjacent vertices form 1 triangle; take the maximum values Xmax, Ymax and the minimum values Xmin, Ymin of the triangle in the x direction (also called the first direction) and the y direction (also called the second direction), which define the original UV region; read the stroke size OutlineSize attribute (also called the first threshold) set by the user on TextModMesh; obtain newXmax (also called the third parameter) and newYmax (also called the fourth parameter) by adding OutlineSize to Xmax and Ymax, and newXmin (also called the fifth parameter) and newYmin (also called the sixth parameter) by subtracting OutlineSize from Xmin and Ymin. The values obtained in this step form the new UV range with the stroked area added.
The calculation steps are as follows:
Xmax=Math.Max(Vertex1.x,Vertex2.x,Vertex3.x);
Ymax=Math.Max(Vertex1.y,Vertex2.y,Vertex3.y);
Xmin=Math.Min(Vertex1.x,Vertex2.x,Vertex3.x);
Ymin=Math.Min(Vertex1.y,Vertex2.y,Vertex3.y);
The function Math.Max returns the maximum of the parameters in the parentheses (), and Math.Min returns the minimum of the parameters in the parentheses (); Vertex1 to Vertex3 denote the three vertices of the triangle, Vertex1.x denotes the x coordinate of Vertex1, and the rest, such as Vertex2.x, have similar meanings.
(Xmin, Ymin) and (Xmax, Ymax) define the original UV region;
newXmax=Xmax+OutlineSize;
newYmax=Ymax+OutlineSize;
newXmin=Xmin–OutlineSize;
newYmin=Ymin–OutlineSize;
(newXmin, newYmin) and (newXmax, newYmax) define the UV region (i.e. the second region) with the stroked area added.
(Xmin, Ymin) and (Xmax, Ymax), (newXmin, newYmin) and (newXmax, newYmax) are passed to the Shader through the VertexHelper interface, as in the sketch below.
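A hedged sketch of step 13 as a BaseMeshEffect component follows. The exact vertex-access API varies across Unity versions; PopulateUIVertex/SetUIVertex on VertexHelper is one possibility, and taking the min/max over uv0 and packing the original region into the tangent are assumptions of this sketch rather than the only possible reading of the embodiment:
using UnityEngine;
using UnityEngine.UI;

public class TextModMesh : BaseMeshEffect
{
    public float OutlineSize = 2f; // the first threshold (stroke size)

    public override void ModifyMesh(VertexHelper vh)
    {
        UIVertex v = new UIVertex();
        for (int tri = 0; tri + 2 < vh.currentVertCount; tri += 3) // 3 adjacent vertices = 1 triangle
        {
            float xMax = float.MinValue, yMax = float.MinValue;
            float xMin = float.MaxValue, yMin = float.MaxValue;
            for (int i = tri; i < tri + 3; i++)
            {
                vh.PopulateUIVertex(ref v, i);
                xMax = Mathf.Max(xMax, v.uv0.x); yMax = Mathf.Max(yMax, v.uv0.y); // Xmax, Ymax
                xMin = Mathf.Min(xMin, v.uv0.x); yMin = Mathf.Min(yMin, v.uv0.y); // Xmin, Ymin
            }
            // Third to sixth parameters: the UV region expanded by OutlineSize.
            float newXMax = xMax + OutlineSize, newYMax = yMax + OutlineSize;
            float newXMin = xMin - OutlineSize, newYMin = yMin - OutlineSize;
            for (int i = tri; i < tri + 3; i++)
            {
                vh.PopulateUIVertex(ref v, i);
                v.tangent = new Vector4(xMin, yMin, xMax, yMax); // original UV region for the Shader
                v.uv1 = new Vector2(newXMin, newYMin);           // expanded corner (illustrative packing)
                vh.SetUIVertex(v, i);
            }
        }
    }
}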
Step 14: process the vertex information in the geometry stage (Geometry Stage) of the rendering pipeline and transform the vertex coordinates into screen space.
This step can be implemented by modifying the TextShader in the Unity engine: in the first rendering pass of the Shader, each vertex coordinate passed into the Shader is multiplied by the current model view projection matrix to obtain the screen space coordinates corresponding to the vertex.
The calculation formula is as follows: float4 vertexPos = mul(UNITY_MATRIX_MVP, input.vertex); where float4 is a data type, vertexPos is the screen space coordinate corresponding to the vertex, mul denotes matrix multiplication, UNITY_MATRIX_MVP is the current model view projection matrix, and input.vertex is the vertex coordinate.
Step 15: interpolate texture coordinates, vertex colors, etc. vertex by vertex in the rasterization stage (Rasterizer Stage). During the pixel-by-pixel output, sample the original texture color values of the current pixel in 8 directions (upper left, upper, upper right, left, right, lower left, lower, lower right), average them, and fuse the averaged color value with the texture color value of the current pixel to obtain the final output result, i.e. the second map.
In the pixel-by-pixel processing stage of the rasterization stage (Rasterizer Stage), as shown by the arrows in fig. 12, the original texture color values (also called texture pixel values) of each pixel are sampled and summed in the 8 directions upper left, upper, upper right, left, right, lower left, lower and lower right. The 8-direction sampling matrix is given below; the actual sampling coordinates are the values in the sampling matrix multiplied by the stroke size, and the unit of the stroke size is the pixel.
{-0.7, 0.7}   {0, 1}   {0.7, 0.7}
{-1, 0}                {1, 0}
{-0.7, -0.7}  {0, -1}  {0.7, -0.7}
The values obtained in the previous step are averaged to obtain an intermediate result for the color value of the current pixel, and this intermediate result is fused with the original texture color value sampled at the current pixel to obtain the final output result for the current pixel. The Alpha value (i.e. the α value) of the original texture is taken into account during fusion, because the original texture must cover the stroke; the fusion formula is given after the pass-2 code below, where ColorResult is the output result, ColorOrigin is the original texture color value, and the intermediate result obtained in the previous step enters the fusion.
This step can be implemented by modifying the TextShader in the Unity engine; in the 2nd rendering pass of the Shader, a sampling matrix can be defined as follows:
float2 sampleOffsets[8] =
{ {-0.7,-0.7}, {0,-1}, {0.7,-0.7}, {-1,0}, {1,0}, {-0.7,0.7}, {0,1}, {0.7,0.7} };
sampleOffsets[8] is the sampling matrix, and the data type of its elements is float2. The sampling matrix is sampled pixel by pixel; each pixel, including the current position, is sampled 9 times.
Sampling over the sampling matrix may be performed with one loop, and the sampled values are summed:
ColorResult += (tex2D(_MainTex, curPos) + _TextureSampleAdd) * _OutlineColor;
In the above formula, ColorResult is the summation result, tex2D is the method for sampling a texture map in the Unity engine, _MainTex is the texture map corresponding to the current font, curPos is the coordinate value obtained by looping over the sampling matrix sampleOffsets, _TextureSampleAdd is a default parameter used by the Unity engine when sampling, and _OutlineColor is the stroke color value set by the user and passed into the Shader.
After the sampling matrix has been sampled pixel by pixel, the summation result is averaged: ColorResult = ColorResult / 8. After this processing, the character stroke result is obtained.
Finally, the character stroke result is fused with the original texture; the original texture must cover the stroke result, and the following formula is adopted for fusion:
ColorResult.rgb = ColorOrigin.rgb * ColorOrigin.a + ColorResult.rgb * (1 - ColorOrigin.a);
Here, ColorResult is the stroke result, ColorOrigin is the original texture, rgb is the RGB value of the color, and a (i.e. α) is the Alpha channel value of the color. It should be noted that the red channel R value, the green channel G value and the blue channel B value can each be calculated according to the above formula.
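The two Shader passes above are shader code; as a hedged CPU-side illustration of the same per-pixel math in C#, the following sketch uses Texture2D.GetPixelBilinear in place of tex2D, omits _TextureSampleAdd, and treats the alpha handling of the fused result as an assumption:
using UnityEngine;

public static class StrokeSampleSketch
{
    static readonly Vector2[] sampleOffsets =
    {
        new Vector2(-0.7f, -0.7f), new Vector2(0f, -1f), new Vector2(0.7f, -0.7f),
        new Vector2(-1f, 0f),                            new Vector2(1f, 0f),
        new Vector2(-0.7f, 0.7f),  new Vector2(0f, 1f),  new Vector2(0.7f, 0.7f)
    };

    // 'uv' is the current pixel's texture coordinate; 'texelSize' converts the
    // offsets (scaled by the stroke size, in pixels) into UV units.
    public static Color StrokePixel(Texture2D fontTex, Vector2 uv, Vector2 texelSize,
                                    float outlineSize, Color outlineColor)
    {
        Color colorResult = Color.clear;
        foreach (Vector2 off in sampleOffsets) // sum the 8 directional samples
        {
            Vector2 curPos = uv + Vector2.Scale(off * outlineSize, texelSize);
            colorResult += fontTex.GetPixelBilinear(curPos.x, curPos.y) * outlineColor;
        }
        colorResult /= 8f; // average: the intermediate stroke result

        // Fusion: the original texture covers the stroke, weighted by its alpha.
        Color colorOrigin = fontTex.GetPixelBilinear(uv.x, uv.y);
        Color fused = colorOrigin * colorOrigin.a + colorResult * (1f - colorOrigin.a);
        fused.a = Mathf.Max(colorOrigin.a, colorResult.a); // assumption: keep the stronger alpha
        return fused;
    }
}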
In an embodiment of the present application, the output of this step may include: the intermediate result obtained by averaging the data of the 8 directions, an optional texture map of which is shown in fig. 13; and the intermediate result after fusion with the original texture color value, as shown in fig. 14.
Optionally, for dirty pixels at the edges in the first map, filtering may be performed as follows:
by sampling the technical scheme, the stroke effect of the characters is obtained, but the problem of dirty pixels may exist in actual operation, because the UGUI puts all the character maps together on one large map, the characters are closely arranged, and the arrangement is random, as shown in fig. 15, a temporary map (i.e., a first map) in a certain operation can include a large number of characters, the character layout is compact, and there is no absolute independent space between the characters.
In the above embodiment of the present application, the stroke area of a character is enlarged, which may cause the sampling to cross the border into adjacent characters; texture not belonging to the character's own region is then not excluded from the subsequent stroke calculation, so the problem of dirty pixels occurs, as shown in fig. 16.
Such dirty pixels are unpredictable and random, so when calculating the stroke result the dirty pixels need to be removed in the pixel-by-pixel processing stage. The removal method is as follows: while the enlarged UV range is passed in during UV expansion, the original UV range is also passed in; by judging whether the current pixel coordinate lies within the original UV range, the generation of dirty pixels can be masked, and pixels beyond the range are directly set to 0 and do not participate in the subsequent calculation.
Optionally, the IsInRect() function may be used to determine whether the current pixel is within the original UV range. The IsInRect() function uses the step method to calculate whether the current pixel is within the range, along the lines of float inside = step(xy, fPoint) * step(fPoint, zw), which determines whether the x and y coordinates of the current pixel (fPoint) lie within a certain range whose corners are xy and zw; step returns 1 if the condition holds and 0 otherwise. After this method is adopted to remove the dirty pixels, the final stroked character output result can be obtained; a dirty-pixel removal step is thus added.
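A C# re-expression of the reconstructed step()-based test (in a shader, step(a, b) returns 1 when b ≥ a; taking xy and zw as the min and max corners of the original UV range is an assumption of this sketch):
using UnityEngine;

public static class UvRangeSketch
{
    static float Step(float edge, float x) { return x >= edge ? 1f : 0f; } // shader-style step()

    // rectMin/rectMax correspond to the xy/zw corners of the original UV range.
    public static float IsInRect(Vector2 fPoint, Vector2 rectMin, Vector2 rectMax)
    {
        float inside = Step(rectMin.x, fPoint.x) * Step(rectMin.y, fPoint.y)
                     * Step(fPoint.x, rectMax.x) * Step(fPoint.y, rectMax.y);
        return inside; // 1 inside the range, 0 outside: out-of-range pixels are zeroed out
    }
}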
Optionally, in the pixel-by-pixel processing stage, redundant processing of transparent pixels may be reduced: for example, when the original texture value of a pixel is 0 and the values in the 4 diagonal directions (upper left, lower left, upper right, lower right) are also 0, it can be concluded that the pixel is far from any valid texture, and the number of samples can be reduced at this point.
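A hedged sketch of this early-out; 'sampleAlpha' is a hypothetical delegate standing in for the texture fetch, and only the diagonal probes follow the paragraph above:
using System;

public static class EarlyOutSketch
{
    // 'sampleAlpha(dx, dy)' returns the alpha of the texel at the given pixel
    // offset from the current pixel; 'r' is the probe distance in pixels.
    public static bool FarFromTexture(Func<int, int, float> sampleAlpha, int r)
    {
        // When the centre and the four diagonal probes (upper left, upper right,
        // lower left, lower right) are all 0, the remaining samples can be skipped.
        return sampleAlpha(0, 0) == 0f
            && sampleAlpha(-r, r) == 0f && sampleAlpha(r, r) == 0f
            && sampleAlpha(-r, -r) == 0f && sampleAlpha(r, -r) == 0f;
    }
}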
The technical solution of the present application solves the problem of the low efficiency of stroked font effects in games developed with the Unity engine, reduces the difference between the realized effect and the art design, optimizes the performance of the game while improving its art quality, and greatly improves the player's gaming experience.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
According to another aspect of the embodiment of the present invention, there is also provided a text information rendering apparatus for implementing the text information rendering method. Fig. 17 is a schematic diagram of an alternative text information rendering apparatus according to an embodiment of the present invention, and as shown in fig. 17, the apparatus may include:
an acquisition unit 1701, configured to acquire indication information used to indicate that target text information with a stroke is to be rendered in a target image;
a sampling unit 1703, configured to obtain a second map by sampling the first map in response to the indication information, where the first map is used to represent the stroke texture of the target text information, and the second map is used to represent the stroked texture of the target text information;
and a rendering unit 1705, configured to render the target text information with the stroke in the target image by using the second map.
It should be noted that the obtaining unit 1701 in this embodiment may be configured to execute step S402 in this embodiment, the sampling unit 1703 in this embodiment may be configured to execute step S404 in this embodiment, and the rendering unit 1705 in this embodiment may be configured to execute step S406 in this embodiment.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the above embodiments. It should be noted that the modules described above as a part of the apparatus may operate in a hardware environment as shown in fig. 3, and may be implemented by software or hardware.
Through the above modules, indication information is acquired, the indication information indicating that target text information with a stroke is to be rendered in a target image; a second map is obtained by sampling a first map, the first map representing the stroke texture of the target text information and the second map representing the stroked texture of the target text information; and the stroked target text information is rendered in the target image using the second map. In other words, in the embodiment of the present application the second map needs to be rendered only once, instead of five times as in the related art, and the number of character vertices is greatly reduced. This solves the technical problem that stroking characters in the related art consumes excessive terminal computing resources, thereby achieving the technical effect of reducing the computing resources consumed by stroking.
In this technical solution, the text component of an engine such as Unity is used, and the character texture is sampled multiple times in the texture sampling stage of a component such as the fragment shader to form the character stroke effect.
In the above embodiment, the sampling unit may include: the sampling module is used for sampling the first map to obtain a texture pixel value of a pixel point in the first map; and the determining module is used for determining the texture pixel value of the pixel point in the third map according to the texture pixel value of the pixel point in the first map to obtain a second map, wherein the second map comprises the pixel point of the stroked texture for representing the target text information.
Optionally, the indication information is further used for indicating a target direction in which the target text information needs to be stroked and a first threshold value used for representing a stroking range of the target text information, wherein the determining module may include: the obtaining submodule is used for obtaining a first parameter and a second parameter, wherein the first parameter is the sum of texture pixel values of first pixel points in a first map, the first pixel points are located in the target direction of second pixel points in the first map and are away from the second pixel points by a first threshold value, and the second parameter is the number of the pixel points away from the second pixel points by the first threshold value; and the determining submodule is used for determining the texture pixel value of a third pixel point in a third map according to the target ratio and the texture pixel value of the second pixel point, wherein the target ratio is the ratio between the first parameter and the second parameter, and the position of the third pixel point in the third map is the same as the position of the second pixel point in the first map.
The determination submodule described above may be further operable to: acquiring a first product between a texture pixel value of a second pixel point and a first weight, and acquiring a second product between a target ratio and a second weight, wherein the first weight is determined according to the transparency of the second pixel point, and the sum of the first weight and the second weight is one; and taking the sum of the first product and the second product as the texture pixel value of a third pixel point in the third map.
The above-described acquisition sub-module may also be configured to: under the condition that the target direction comprises one direction, determining first pixel points which are positioned in the target direction of the second pixel points and are away from the second pixel points by a first threshold value, and taking the sum of the texture pixel values of all the first pixel points as a first parameter; and under the condition that the target direction comprises a plurality of directions, determining a first pixel point which is positioned in any one of the plurality of directions of the second pixel point and is away from the second pixel point by a first threshold value, and taking the sum of the texture pixel values of all the first pixel points as a first parameter.
Optionally, the apparatus of the present application may further include a texture obtaining unit, configured to obtain a stroke texture of the target text information from a font file of the target text information before obtaining the second map by sampling the first map; the creating unit is used for creating a first map which comprises the stroke texture of the target text information and first position information, wherein the first position information is used for representing that the stroke texture of the target text information is mapped in a first area in the target image.
The sampling module described above may also be configured to: under the condition that the pixel points in the first map are the pixel points mapped in the first area, acquiring texture pixel values of the pixel points in the first map; and under the condition that the pixel points in the first map are not the pixel points mapped in the first region, taking the second threshold value as the texture pixel values of the pixel points in the first map.
When obtaining the second map by sampling the first map, the sampling unit determines the second position information in the following manner, where the second position information is used to represent the second area of the target image to which the stroke texture of the target text information in the second map is mapped: determining a third parameter, a fourth parameter, a fifth parameter and a sixth parameter for representing the second area, where the third parameter is the sum of the maximum value of the first area in the first direction and the first threshold, the fourth parameter is the sum of the maximum value of the first area in the second direction and the first threshold, the fifth parameter is the difference between the minimum value of the first area in the first direction and the first threshold, the sixth parameter is the difference between the minimum value of the first area in the second direction and the first threshold, and the first direction and the second direction are the two directions of the target image in a two-dimensional coordinate system.
Optionally, the target image comprises a game frame animation, wherein the rendering unit is further operable to: and rendering the target text information with the stroke in the game frame animation by using the second map.
By adopting the technical solution of the present application, the following problems can be solved: in the related art, the original characters are shifted in specified directions and then superimposed, so the stroke effect is not continuous and uniform; the related art also requires multiple renderings and must process more vertex data, whereas in this method the number of vertices is the same as for unstroked characters, no extra vertices are added, and the hardware requirements on the terminal are reduced. In addition, the supported stroke size range is large (realized by adjusting the value of the first threshold), so both a small stroke effect for the normal application range and a thickened stroke effect for special application environments can be provided.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the above embodiments. It should be noted that the modules described above as a part of the apparatus may be run in a hardware environment as shown in fig. 3, may be implemented by software, and may also be implemented by hardware, where the hardware environment includes a network environment.
According to another aspect of the embodiment of the present invention, there is also provided a server or a terminal for implementing the rendering method of text information.
Fig. 18 is a block diagram of a terminal according to an embodiment of the present invention, and as shown in fig. 18, the terminal may include: one or more processors 1801 (only one shown in fig. 18), a memory 1803, and a transmitting device 1805, as shown in fig. 18, the terminal may also include an input-output device 1807.
The memory 1803 may be configured to store software programs and modules, such as program instructions/modules corresponding to the text information rendering method and apparatus in the embodiment of the present invention, and the processor 1801 executes various functional applications and data processing by running the software programs and modules stored in the memory 1803, that is, implements the text information rendering method. The memory 1803 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some instances, the memory 1803 can further include memory located remotely from the processor 1801 and connectable to a terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 1805 is used to receive or send data via a network and may also be used for data transmission between the processor and the memory. Examples of the network include wired and wireless networks. In one example, the transmission device 1805 includes a network interface controller (NIC) that can be connected to a router and other network devices via a network cable so as to communicate with the internet or a local area network. In another example, the transmission device 1805 is a radio frequency (RF) module used to communicate with the internet wirelessly.
In particular, the memory 1803 is used to store an application program.
The processor 1801 may call the application stored in the memory 1803 through the transmission device 1805 to execute the following steps:
acquiring indication information, wherein the indication information is used for indicating that target text information with a stroke is rendered in a target image;
obtaining a second map by sampling the first map in response to the indication information, wherein the first map is used for representing the stroke texture of the target text information, and the second map is used for representing the stroked texture of the target text information;
and rendering the target text information with the stroke in the target image by using the second map.
The processor 1801 is further configured to perform the following steps:
acquiring a first parameter and a second parameter, wherein the first parameter is the sum of the texture pixel values of first pixel points in the first map, the first pixel points being located in the target direction of a second pixel point in the first map and separated from the second pixel point by a first threshold, and the second parameter is the number of pixel points separated from the second pixel point by the first threshold;
and determining the texture pixel value of a third pixel point in a third map according to a target ratio and the texture pixel value of the second pixel point, wherein the target ratio is the ratio between the first parameter and the second parameter, and the position of the third pixel point in the third map is the same as the position of the second pixel point in the first map. A sketch of these two steps is given below.
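A minimal Python sketch of these two steps follows, assuming a single-channel glyph texture whose texel values lie in [0, 1] and four axis-aligned target directions; the names glyph, threshold and stroke_pass are illustrative, not from the patent. The final combination shown here (taking the maximum of the two values) is only one simple illustrative choice; the weighted blend described in claim 2 is sketched separately after step S24 below.

    import numpy as np

    def stroke_pass(glyph: np.ndarray, threshold: int) -> np.ndarray:
        # glyph: the first map (stroke texture), one channel, values in [0, 1].
        h, w = glyph.shape
        out = np.zeros_like(glyph, dtype=np.float32)  # the third map
        # Target directions: the four axis-aligned ones, as an assumption.
        offsets = [(0, threshold), (0, -threshold), (threshold, 0), (-threshold, 0)]
        for y in range(h):
            for x in range(w):
                total, count = 0.0, 0
                for dy, dx in offsets:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += glyph[ny, nx]  # first parameter: sum of neighbour texels
                        count += 1              # second parameter: number of such neighbours
                ratio = total / count if count else 0.0  # target ratio
                # Combine the target ratio with the pixel's own texel value;
                # max() is one simple choice for this combination.
                out[y, x] = max(glyph[y, x], ratio)
        return out

Pixels just outside a glyph pick up a non-zero neighbour average and so form the stroke, while pixels well inside or well outside the glyph are unchanged.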
With the embodiment of the invention, indication information indicating that target text information with a stroke is to be rendered in a target image is acquired; a second map is obtained by sampling a first map, where the first map represents the stroke texture of the target text information and the second map represents the stroked texture of the target text information. In other words, in the embodiment of the present application the second map can be rendered in a single pass instead of the five passes of the related art, and the number of character vertices is greatly reduced. This solves the technical problem that stroking characters consumes considerable terminal computing resources in the related art, thereby achieving the technical effect of reducing the computing resources consumed by stroking.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
Those skilled in the art will understand that the structure shown in fig. 18 is only illustrative. The terminal may be a terminal device such as a smart phone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, or a mobile internet device (MID, PAD); fig. 18 does not limit the structure of the electronic device. For example, the terminal may include more or fewer components than shown in fig. 18 (e.g., a network interface or a display device), or have a configuration different from that shown in fig. 18.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware of the terminal device. The program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
The embodiment of the invention also provides a storage medium. Optionally, in this embodiment, the storage medium may store program code for executing the text information rendering method.
Optionally, in this embodiment, the storage medium may be located on at least one of a plurality of network devices in a network shown in the above embodiment.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
S12, acquiring indication information, wherein the indication information is used for indicating that target text information with a stroke is rendered in a target image;
S14, in response to the indication information, obtaining a second map by sampling the first map, wherein the first map is used for representing the stroke texture of the target text information, and the second map is used for representing the stroked texture of the target text information;
and S16, rendering the target text information with the stroke in the target image using the second map.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
S22, acquiring a first parameter and a second parameter, wherein the first parameter is the sum of the texture pixel values of first pixel points in the first map, the first pixel points being located in the target direction of a second pixel point in the first map and separated from the second pixel point by a first threshold, and the second parameter is the number of pixel points separated from the second pixel point by the first threshold;
and S24, determining the texture pixel value of a third pixel point in the third map according to a target ratio and the texture pixel value of the second pixel point, wherein the target ratio is the ratio between the first parameter and the second parameter, and the position of the third pixel point in the third map is the same as the position of the second pixel point in the first map. Step S24 is sketched below.
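For completeness, a minimal sketch of the weighted combination used in step S24 and spelled out in claim 2 below: the first weight is determined by the second pixel point's transparency, the two weights sum to one, and the result is the sum of the two products. The assumption here is that the transparency is the texel's alpha value in [0, 1]; the name blend_texel is illustrative:

    def blend_texel(center_value: float, center_alpha: float, target_ratio: float) -> float:
        # first weight, determined by the second pixel point's transparency
        first_weight = center_alpha
        # the two weights sum to one
        second_weight = 1.0 - first_weight
        # first product + second product = texel value of the third pixel point
        return first_weight * center_value + second_weight * target_ratio

With this choice, fully opaque glyph pixels keep their own texel value, while fully transparent pixels take the neighbour average, which is what produces the stroke around the glyph.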
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and various other media capable of storing program code.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (11)

1. A method for rendering text information, comprising:
acquiring indication information, wherein the indication information is used for indicating that target text information with a stroke is rendered in a target image, and the indication information is further used for indicating a target direction in which the target text information needs to be stroked and a first threshold used for representing the stroke range of the target text information;
responding to the indication information, and acquiring a texture pixel value of a pixel point in a first map by sampling the first map;
acquiring a first parameter and a second parameter, wherein the first parameter is the sum of the texture pixel values of first pixel points in the first map, the first pixel points being located in the target direction of a second pixel point in the first map and separated from the second pixel point by the first threshold, and the second parameter is the number of pixel points separated from the second pixel point by the first threshold;
determining a texture pixel value of a third pixel point in a third map according to a target ratio and the texture pixel value of the second pixel point to obtain a second map, wherein the first map is used for representing the stroke texture of the target text information, the second map is used for representing the stroked texture of the target text information, the second map comprises pixel points used for representing the stroked texture of the target text information, and the third map is a copy of the first map or a blank map;
the target ratio is a ratio between the first parameter and the second parameter, and the position of the third pixel point in the third map is the same as the position of the second pixel point in the first map;
rendering the target text information with a stroke in the target image using the second map.
2. The method of claim 1, wherein determining the texture pixel value of the third pixel point in the third map according to the target ratio and the texture pixel value of the second pixel point comprises:
acquiring a first product between a texture pixel value of the second pixel point and a first weight, and acquiring a second product between the target ratio and a second weight, wherein the first weight is determined according to the transparency of the second pixel point, and the sum of the first weight and the second weight is one;
and taking the sum of the first product and the second product as a texture pixel value of a third pixel point in the third map.
3. The method of claim 1, wherein obtaining the first parameter comprises:
determining the first pixel point which is located in the target direction of the second pixel point and is apart from the second pixel point by the first threshold value under the condition that the target direction comprises one direction, and taking the sum of texture pixel values of all the first pixel points as the first parameter;
and under the condition that the target direction comprises a plurality of directions, determining the first pixel point which is positioned in any one direction of the plurality of directions of the second pixel point and is away from the second pixel point by the first threshold value, and taking the sum of the texture pixel values of all the first pixel points as the first parameter.
4. The method of any of claims 1 to 3, wherein prior to obtaining the second map by sampling the first map, the method further comprises:
acquiring the stroke texture of the target text information from the font file of the target text information;
creating the first map comprising the stroke texture of the target text information and first position information, wherein the first position information is used for representing a first area in the target image to which the stroke texture of the target text information is mapped.
5. The method of claim 4, wherein obtaining texel values for pixels in the first map by sampling the first map comprises:
under the condition that the pixel points in the first map are the pixel points mapped in the first area, acquiring texture pixel values of the pixel points in the first map;
and under the condition that the pixel points in the first map are not the pixel points mapped in the first area, taking a second threshold value as the texture pixel values of the pixel points in the first map.
6. The method of claim 4, wherein obtaining the second map by sampling the first map comprises determining, in the following manner, second position information used for representing a second area in the target image to which the stroked texture of the target text information in the second map is mapped:
determining a third parameter, a fourth parameter, a fifth parameter and a sixth parameter representing the second area, wherein the third parameter is the sum of the maximum value of the first area in the first direction and the first threshold, the fourth parameter is the sum of the maximum value of the first area in the second direction and the first threshold, the fifth parameter is the difference between the minimum value of the first area in the first direction and the first threshold, and the sixth parameter is the difference between the minimum value of the first area in the second direction and the first threshold, the first direction and the second direction being two directions of the target image in a two-dimensional coordinate system.
7. The method of any one of claims 1-3, wherein the target image comprises a game frame animation, wherein rendering the target text information with a stroke in the target image using the second map comprises:
rendering the target text information with a stroke in a game frame animation using the second map.
8. An apparatus for rendering text information, comprising:
the device comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for acquiring indication information, and the indication information is used for indicating target text information with a stroke to be rendered in a target image;
a sampling unit, configured to obtain a second map by sampling a first map in response to the indication information, wherein the first map is used to represent the stroke texture of the target text information, and the second map is used to represent the stroked texture of the target text information;
a rendering unit, configured to render the target text information with a stroke in the target image using the second map;
the sampling unit includes: a sampling module, configured to sample the first map to obtain texture pixel values of pixel points in the first map; and a determining module, configured to determine texture pixel values of pixel points in a third map according to the texture pixel values of the pixel points in the first map to obtain the second map, wherein the second map includes pixel points used to represent the stroked texture of the target text information, and the third map is a copy of the first map or a blank map;
the indication information is further used for indicating a target direction in which the target text information needs to be stroked and a first threshold value used for representing a stroking range of the target text information, and the determining module comprises:
the obtaining sub-module is used for obtaining a first parameter and a second parameter, wherein the first parameter is the sum of texture pixel values of first pixel points in the first map, the first pixel points are located in the target direction of second pixel points in the first map and are separated from the second pixel points by the first threshold value, and the second parameter is the number of pixel points separated from the second pixel points by the first threshold value;
and the determining submodule is used for determining the texture pixel value of a third pixel point in the third map according to a target ratio and the texture pixel value of the second pixel point, wherein the target ratio is the ratio between the first parameter and the second parameter, and the position of the third pixel point in the third map is the same as the position of the second pixel point in the first map.
9. The apparatus of claim 8, wherein the determination sub-module is further configured to:
acquiring a first product between a texture pixel value of the second pixel point and a first weight, and acquiring a second product between the target ratio and a second weight, wherein the first weight is determined according to the transparency of the second pixel point, and the sum of the first weight and the second weight is one;
and taking the sum of the first product and the second product as a texture pixel value of a third pixel point in the third map.
10. A storage medium, characterized in that the storage medium comprises a stored program, wherein the program when executed performs the method of any of the preceding claims 1 to 7.
11. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the method of any of the preceding claims 1 to 7 by means of the computer program.
CN201810482906.XA 2018-05-18 2018-05-18 Text information rendering method and device, storage medium and electronic device Active CN108765520B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810482906.XA CN108765520B (en) 2018-05-18 2018-05-18 Text information rendering method and device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN108765520A CN108765520A (en) 2018-11-06
CN108765520B true CN108765520B (en) 2020-07-28

Family

ID=64008414

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810482906.XA Active CN108765520B (en) 2018-05-18 2018-05-18 Text information rendering method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN108765520B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109948581B (en) * 2019-03-28 2023-05-05 腾讯科技(深圳)有限公司 Image-text rendering method, device, equipment and readable storage medium
CN111105474B (en) * 2019-12-19 2023-09-29 广州酷狗计算机科技有限公司 Font drawing method, font drawing device, computer device and computer readable storage medium
CN111951367B (en) * 2020-08-04 2024-04-19 广州虎牙科技有限公司 Character rendering method, character processing method and device
CN112426711B (en) * 2020-10-23 2024-03-26 杭州电魂网络科技股份有限公司 Method, system, electronic device and storage medium for processing Bloom effect
CN112619138A (en) * 2021-01-06 2021-04-09 网易(杭州)网络有限公司 Method and device for displaying skill special effect in game
CN113240779B (en) * 2021-05-21 2024-02-23 北京达佳互联信息技术有限公司 Method and device for generating text special effects, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2013248248B2 (en) * 2013-10-25 2015-12-24 Canon Kabushiki Kaisha Text rendering method with improved clarity of corners

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102081731A (en) * 2009-11-26 2011-06-01 中国移动通信集团广东有限公司 Method and device for extracting text from image
CN102122502A (en) * 2011-03-15 2011-07-13 深圳芯邦科技股份有限公司 Method and related device for displaying three-dimensional (3D) font
CN105447010A (en) * 2014-08-12 2016-03-30 博雅网络游戏开发(深圳)有限公司 Text rendering method and system
CN105160646A (en) * 2015-10-21 2015-12-16 广州视睿电子科技有限公司 Character edging realization method and apparatus
CN106384373A (en) * 2016-08-31 2017-02-08 广州博冠信息科技有限公司 Character display method and device
CN107424137A (en) * 2017-08-01 2017-12-01 深信服科技股份有限公司 Text enhancement method and device, computer device, and readable storage medium
CN108176048A (en) * 2017-11-30 2018-06-19 腾讯科技(深圳)有限公司 Image processing method and apparatus, storage medium, and electronic device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Airborne character generation based on background fusion; Shen Mengjie et al.; Application of Electronic Technique; 2015-04-30; vol. 41, no. 4; pp. 25-28 *
Design and implementation of a web video subtitle extraction and recognition system; Diao Yuehua; China Masters' Theses Full-text Database (Information Science and Technology); 2015-09-15; no. 09; I138-1370 *

Also Published As

Publication number Publication date
CN108765520A (en) 2018-11-06

Similar Documents

Publication Publication Date Title
CN108765520B (en) Text information rendering method and device, storage medium and electronic device
CN112215934B (en) Game model rendering method and device, storage medium and electronic device
CN108564646B (en) Object rendering method and device, storage medium and electronic device
CN107358649B (en) Processing method and device of terrain file
CN111145326B (en) Processing method of three-dimensional virtual cloud model, storage medium, processor and electronic device
CN112070873B (en) Model rendering method and device
CN106447756B (en) Method and system for generating user-customized computer-generated animations
CN111080780A (en) Edge processing method and device of virtual character model
CN107657648B (en) Real-time efficient dyeing method and system in mobile game
CN111127576A (en) Game picture rendering method and device and electronic equipment
CN112274934A (en) Model rendering method, device, equipment and storage medium
US20230125255A1 (en) Image-based lighting effect processing method and apparatus, and device, and storage medium
CN110866967A (en) Water ripple rendering method, device, equipment and storage medium
CN112927365A (en) Method and device for rendering mountain in three-dimensional virtual scene of application program
CN111710020A (en) Animation rendering method and device and storage medium
CN113034658B (en) Method and device for generating model map
CN111260767B (en) Rendering method, rendering device, electronic device and readable storage medium in game
EP4231243A1 (en) Data storage management method, object rendering method, and device
CN111462343B (en) Data processing method and device, electronic equipment and storage medium
JP7301453B2 (en) IMAGE PROCESSING METHOD, IMAGE PROCESSING APPARATUS, COMPUTER PROGRAM, AND ELECTRONIC DEVICE
CN114742970A (en) Processing method of virtual three-dimensional model, nonvolatile storage medium and electronic device
CN114627225A (en) Method and device for rendering graphics and storage medium
CN113440845A (en) Rendering method and device of virtual model, storage medium and electronic device
CN115311395A (en) Three-dimensional scene rendering method, device and equipment
CN117689756A (en) Image processing method, device, nonvolatile storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant