CN117974872A - Rendering method and device of three-dimensional text, electronic equipment and readable storage medium - Google Patents

Rendering method and device of three-dimensional text, electronic equipment and readable storage medium

Info

Publication number
CN117974872A
Authority
CN
China
Prior art keywords
text
image
dimensional
text image
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311597516.4A
Other languages
Chinese (zh)
Inventor
李健蓬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202311597516.4A priority Critical patent/CN117974872A/en
Publication of CN117974872A publication Critical patent/CN117974872A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application discloses a method and apparatus for rendering stereoscopic text, an electronic device, and a computer-readable storage medium. The method comprises: acquiring a directed distance field image generated in advance for a two-dimensional text image to be rendered, where each pixel value in the directed distance field image represents the shortest vector from a text-graphic pixel in the two-dimensional text image to the edge of the text graphic in which that pixel is located; determining a normal vector of the text graphic in the two-dimensional text image according to the directed distance field image; and generating a stereoscopic text image corresponding to the two-dimensional text image according to the two-dimensional text image and the normal vector. The method achieves a high-quality rendering result with fewer pixels, reducing the rendering performance cost, lowering rendering overhead, and improving rendering efficiency.

Description

Rendering method and device of three-dimensional text, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of computers, and in particular, to a method and apparatus for rendering stereoscopic text, an electronic device, and a computer readable storage medium.
Background
Stereoscopic text is a text effect widely used in publicity and decoration; it can be applied to games and video packaging and is visually attractive and attention-grabbing. With the rapid development of 3D games, two-dimensional text can no longer meet the requirements of game development, and stereoscopic text often needs to be rendered in real time in games, so a high-quality and efficient stereoscopic text rendering method needs to be developed.
In an existing method for rendering stereoscopic text, a two-dimensional text image is taken as input, the input image is blurred, the blurred image is then sampled multiple times, and the normal vectors of the text graphic are determined from the sampling results, from which the stereoscopic text is rendered. In this approach, a high-resolution two-dimensional text image is usually required to ensure the rendered text shows no blur distortion, so the number of pixels involved in rendering is large.
It should be noted that the information disclosed in the background section above is only intended to enhance understanding of the background of the present disclosure, and may therefore include information that does not constitute prior art known to a person of ordinary skill in the art.
Disclosure of Invention
The application provides a rendering method and apparatus for stereoscopic text, an electronic device, and a computer-readable storage medium. By rendering from a pre-generated directed distance field image with a small number of pixels, a high-quality rendering result can be obtained with few pixels, reducing the rendering performance cost, lowering rendering overhead, and improving rendering efficiency.
In a first aspect, an embodiment of the present application provides a method for rendering stereo text, where the method includes:
Acquiring a directed distance field image generated in advance for a two-dimensional text image to be rendered, where each pixel value in the directed distance field image represents the shortest vector from a text-graphic pixel in the two-dimensional text image to the edge of the text graphic in which that pixel is located;
Determining a normal vector of the text graphic in the two-dimensional text image according to the directed distance field image; and
Generating a stereoscopic text image corresponding to the two-dimensional text image according to the two-dimensional text image and the normal vector.
In a second aspect, an embodiment of the present application provides a stereoscopic text rendering device, including: an acquisition unit, a determination unit and a generation unit;
The acquisition unit is configured to acquire a directed distance field image generated in advance for a two-dimensional text image to be rendered, where each pixel value in the directed distance field image represents the shortest vector from a text-graphic pixel in the two-dimensional text image to the edge of the text graphic in which that pixel is located;
The determining unit is configured to determine a normal vector of the text graphic in the two-dimensional text image according to the directed distance field image; and
The generating unit is configured to generate a stereoscopic text image corresponding to the two-dimensional text image according to the two-dimensional text image and the normal vector.
In a third aspect, an embodiment of the present application provides an electronic device, including:
A processor; and
A memory for storing a data processing program; after the electronic device is powered on, the processor executes the program to perform the method of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a data processing program for execution by a processor to perform a method as in the first aspect.
Compared with the prior art, in which a high-resolution two-dimensional text image is rendered directly, the stereoscopic text rendering method provided by the present application acquires, during rendering, a directed distance field image generated in advance for the two-dimensional text image to be rendered. Each pixel value in the directed distance field image represents the shortest vector from a text-graphic pixel in the two-dimensional text image to the edge of the text graphic in which that pixel is located, that is, the shortest directed distance from that pixel to the nearest text edge. Because interpolating such vectors does not introduce blur distortion, a directed distance field image with a correspondingly small number of pixels can be generated in advance from a low-resolution two-dimensional text image; the normal vectors of the text graphic are then determined from the directed distance field image, and the stereoscopic text image corresponding to the two-dimensional text image is generated from the two-dimensional text image and the normal vectors.
Therefore, rendering quality can be improved by exploiting the property that each pixel value in the directed distance field image represents the shortest vector from a text-graphic pixel to the nearest text edge. In addition, because the pre-generated directed distance field image contains few pixels, a high-quality rendering result is obtained with few pixels, reducing rendering overhead and improving rendering efficiency.
Drawings
To describe the technical solutions of the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram showing an example of a stereoscopic text with a concave effect according to an embodiment of the present application;
FIG. 2 is a schematic diagram showing an example of a stereoscopic text with a protruding effect according to an embodiment of the present application;
FIG. 3 is a diagram illustrating an example of rendering stereoscopic text in the related art, provided for comparison in an embodiment of the present application;
FIG. 4 is a flowchart illustrating an example of a method for rendering stereo text according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an example of a blurred image of a method for rendering stereo text according to an embodiment of the present application;
FIG. 6 is a diagram showing another example of rendering stereoscopic text in the related art according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an exemplary rendering system according to an embodiment of the present application;
FIG. 8 is a flowchart illustrating an example of a method for rendering stereo text according to an embodiment of the present application;
FIG. 9 is a schematic diagram of an example of a directed distance field provided by an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating an example of a normal effect of a stereo text according to an embodiment of the present application;
FIG. 11 is a schematic diagram showing an example of the effect of the smoothed directed distance field according to the embodiment of the present application;
FIG. 12 is a schematic structural diagram of a rendering device for stereoscopic text according to an embodiment of the present application;
FIG. 13 is a block diagram of an electronic device for rendering stereoscopic text according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth to provide a thorough understanding of the present application. However, the present application may be implemented in many other ways than those described herein, and a person skilled in the art can make similar generalizations without departing from its spirit or essential characteristics; the present application is therefore not limited to the specific embodiments disclosed below.
It should be noted that the terms "first," "second," "third," and the like in the claims, description, and drawings of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. Terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and their variants are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the embodiments of the present application, "at least one" means one or more and "a plurality" means two or more. "And/or" describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the associated objects. "Comprising A, B, and/or C" means comprising any one, any two, or all three of A, B, and C.
It should be understood that in the embodiments of the present application, "B corresponding to A" means that B is associated with A and that B can be determined from A. However, determining B from A does not mean that B is determined from A alone; B may also be determined from A and/or other information.
Before describing the rendering method of the stereo text provided by the application in detail, the related concepts related to the application are described first.
1. Rendering pipeline: the process that converts application data into a two-dimensional image on the display screen; it abstracts image processing into a well-defined sequence of stages.
2. Render target (RT): a term in computer graphics referring to a buffer in the rendering pipeline that stores image data to be read when needed. A render target may store the pixels shown on screen, as well as other kinds of data such as depth values or normals.
3. Directed distance field (Signed Distance Field, SDF): a data structure describing the surface of an object, used to represent distance information about the surface. Its principle is to replace the attribute value in the alpha channel of each pixel in the space with the shortest distance from that pixel to the object surface. A negative value represents the inside of the object, a positive value represents the outside, and a value of 0 lies exactly on the surface; the signed distance values in an object's directed distance field are typically normalized to the range [-1, 1]. Directed distance fields are widely used in rendering and graphics, for example to render complex surfaces such as text and patterns.
In this embodiment, the directed distance field of text is used. At text boundaries, the edges are discontinuous lines, so protruding or recessed parts may occur, and the signed distance values of these parts may be greater than or less than 0, which can make the displayed text uneven in size or illegible. Therefore, to make the directed distance field more accurate and obtain a better display result, the value range [-1, 1] of the signed distance is usually mapped to [0, 1], and the value 0.5 is taken as the text boundary, which better represents the shape and position of the boundary. Pixels whose signed distance value is less than 0.5 in the text's directed distance field are treated as part of the text, and pixels whose value is greater than 0.5 are treated as lying outside the text.
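As an illustrative sketch (not part of the patent disclosure), the following Python code builds a directed distance field for a tiny binary glyph mask by brute force, normalizes it to [-1, 1], and then applies the [0, 1] remapping described above, with 0.5 as the text boundary. The mask, function names, and normalization choice are assumptions for illustration only.

```python
import math

def signed_distance_field(mask):
    """Brute-force SDF for a binary glyph mask (1 = inside the text).

    Each pixel gets the distance to the nearest pixel of the opposite
    region: negative inside the glyph, positive outside, then
    normalized to [-1, 1] by the largest magnitude.
    """
    h, w = len(mask), len(mask[0])
    raw = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            inside = mask[y][x] == 1
            best = math.inf
            for yy in range(h):
                for xx in range(w):
                    if (mask[yy][xx] == 1) != inside:
                        best = min(best, math.hypot(xx - x, yy - y))
            raw[y][x] = -best if inside else best
    peak = max(abs(v) for row in raw for v in row) or 1.0
    return [[v / peak for v in row] for row in raw]

def remap_01(sdf):
    """Map signed distances from [-1, 1] to [0, 1]; 0.5 is the text boundary."""
    return [[(v + 1.0) / 2.0 for v in row] for row in sdf]

mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
field = remap_01(signed_distance_field(mask))
# Pixels with value < 0.5 belong to the text, > 0.5 lie outside it.
inside = sum(v < 0.5 for row in field for v in row)
```

In practice a production SDF generator would use a linear-time distance transform rather than this O(n^2) scan; the sketch only demonstrates the sign convention and the 0.5-boundary remapping.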
4. DDX/DDY: partial-derivative functions, generally used to obtain the difference of an attribute between adjacent pixels. DDX/DDY computes the per-pixel derivative in the x/y direction over a given surface by subtracting the values of adjacent pixels. For example, given pixels A and B, where B is to the right of (or below) A, DDX(A) (or DDY(A)) computes the difference between the values of A and B. In this embodiment, these partial-derivative functions are used to obtain the slope at each text pixel in the text image to be rendered.
5. SmoothStep: a smooth transition function with three parameters: a "left edge" a, a "right edge" b, and an input x. When x is less than a, SmoothStep returns 0; when x is greater than b, it returns 1; when x lies in [a, b], it transitions smoothly from 0 to 1.
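The patent does not specify the interpolating polynomial; the sketch below assumes the common GLSL/HLSL definition, a clamped cubic Hermite curve 3t^2 - 2t^3:

```python
def smoothstep(a, b, x):
    """Clamped cubic Hermite interpolation between edges a and b.

    Returns 0 for x < a, 1 for x > b, and 3t^2 - 2t^3 (with
    t = (x - a) / (b - a) clamped to [0, 1]) in between.
    """
    if a == b:
        return 0.0 if x < a else 1.0
    t = max(0.0, min(1.0, (x - a) / (b - a)))
    return t * t * (3.0 - 2.0 * t)
```

The cubic has zero slope at both edges, which is why it is preferred over linear interpolation for anti-aliased text edges: the transition into and out of the edge region is free of visible kinks.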
6. Font weight: in typeface design, font weight refers to the thickness of the strokes of a font.
The related art will be further described.
Stereoscopic text is a text effect widely used in publicity and decoration; it can be applied to games and video packaging and is visually attractive and attention-grabbing. In real life, for example, text engraved into or embossed on a stone tablet or plaque, or on billboards and signs, looks more appealing and more easily draws people's attention.
In a 3D game, rendering stereoscopic text in the virtual scene makes the scene more realistic and enhances the user's immersion. For example, as shown in FIG. 1, stereoscopic text with a concave (engraved) effect is added to a stone tablet in a game; or, as shown in FIG. 2, stereoscopic text with a convex (embossed) effect is added to a house sign, giving users a better visual experience and further enhancing the realism of the game.
To achieve the above effects, some games contain stereoscopic text that never changes during play, such as the text on a signpost directing the user to a destination or on the sign of a house. Corresponding stereoscopic models, bump maps, or normal maps can be prefabricated during game development, and the stereoscopic text is rendered from these prefabricated models while the user plays. This approach can construct stereoscopic text of any shape, and models can be built quickly for scripts with few characters, such as English letters and Arabic numerals. For Chinese, however, there are roughly 6,500 commonly used characters, so building stereoscopic models for all of them is time-consuming.
In addition, some text in a game changes in real time with game progress, and real-time rendering of such stereoscopic text cannot be achieved by prefabricating models. For example, a leaderboard with a stereoscopic model may display the ranking of users' game results; the text on the leaderboard is updated in real time according to those results and must be rendered onto the leaderboard immediately. To keep the game convincing, the user information on the leaderboard can be displayed as stereoscopic text. Since user names cannot be known at development time, rendering this text cannot rely on prefabricated stereoscopic models.
Similarly, in currently popular games that ship with a game editor, i.e., a user-generated content (UGC) editor, the user can create or modify stereoscopic text in the game using the built-in editor. Such games require a method that can render stereoscopic text in real time to meet users' needs.
In the related art, one way to render stereoscopic text in a game is to superimpose an additional copy of the same text image beneath the two-dimensional text image and offset it appropriately to simulate a shadow. FIG. 3 shows an example of the stereoscopic effect achieved by this superimposition method. Compared with FIGS. 1 and 2, this method produces a stereoscopic effect only to a limited extent: the result is flat, cannot represent engraved or embossed text, and the fake shadow created by the offset cannot respond to dynamic lighting in a game scene, so the sense of immersion is insufficient.
Another way to render stereoscopic text in a game is to render the two-dimensional text image in real time based on a render target, as shown in the flowchart of FIG. 4. The render target is a buffer that stores the two-dimensional text image data during rendering. To render stereoscopic text, a render target of a specified size is created as a canvas, the two-dimensional text image to be rendered is drawn into it, and blurring and sampling are then performed on the render target holding the text image, yielding the rendered stereoscopic text.
Rendering text via a render target allows stereoscopic text to be rendered in real time in the game and to respond to dynamic lighting changes. However, to achieve a good result, i.e., to ensure that the rendered text shows no jaggies and no blur when enlarged, this method requires a high-resolution two-dimensional text image as input. The large number of input pixels means many pixels participate in rendering, the amount of computation is large, and rendering efficiency is low.
In addition, a render target must be created during rendering, which increases the run-time memory required. Excessive memory use can leave the electronic device short of memory, slow rendering down or cause it to fail, raise the memory requirements on the user's device, and reduce user retention. Moreover, a render target has a fixed size: when it is created, its size, i.e., the resolution of the rendered stereoscopic text, must be specified. If the render target is small, the resolution of the rendered text is low, with obvious edge blurring and jaggies, and the rendering quality is poor.
Meanwhile, to obtain the bump information of the text image to be rendered, this method blurs the render target holding the text image. Blurring can be understood as replacing each pixel value in the image with the average of the surrounding pixel values, so that every pixel loses its visual focus and the whole text image becomes blurred. Any blur algorithm can be used. Taking Gaussian blur as an example: the pixel data of the entire text image is obtained; the image is traversed over its width and height; for each pixel, the surrounding pixel values are weighted according to a normal (Gaussian) distribution and averaged to produce a new value, which is then written back to the image. Because every pixel must be processed, blurring is time-consuming and cannot meet the rendering performance requirements.
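The Gaussian blur described above can be sketched as follows (an illustrative implementation, not the related art's actual code; the separable two-pass structure and border clamping are assumptions):

```python
import math

def gaussian_kernel(radius, sigma):
    """1D Gaussian weights, normalized to sum to 1."""
    ks = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(ks)
    return [k / s for k in ks]

def blur_1d(row, kernel):
    """Convolve one row (or column) with the kernel, clamping at the ends."""
    r = len(kernel) // 2
    n = len(row)
    return [
        sum(kernel[j + r] * row[min(max(i + j, 0), n - 1)] for j in range(-r, r + 1))
        for i in range(n)
    ]

def gaussian_blur(img, radius=1, sigma=1.0):
    """Separable Gaussian blur: a horizontal pass, then a vertical pass."""
    k = gaussian_kernel(radius, sigma)
    horiz = [blur_1d(row, k) for row in img]
    cols = [blur_1d([horiz[y][x] for y in range(len(img))], k) for x in range(len(img[0]))]
    return [[cols[x][y] for x in range(len(img[0]))] for y in range(len(img))]

# A single bright pixel spreads out; a constant image is unchanged.
spike = [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]]
blurred = gaussian_blur(spike, radius=1, sigma=1.0)
const = gaussian_blur([[0.5] * 3 for _ in range(3)], radius=1, sigma=1.0)
```

Even with the separable optimization, the cost is proportional to the number of pixels times the kernel width, which is exactly the performance limitation the passage describes.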
For example, FIG. 5 shows the text image after blurring: a blurred transition appears at the edges of the characters. After the blurred text image is obtained, it is shifted vertically and horizontally (or in more directions), and the slope of the bump information is computed by multiple sampling. For example, the image in FIG. 5 is shifted upward by one pixel to obtain a shifted text image. The difference between each pixel of the shifted images and the corresponding pixel of the unshifted image is then computed; that is, the slope information of the text image to be rendered is calculated by finite differences. The bump information of the text image is obtained from these slopes, and the stereoscopic text is rendered from the text image and the bump information.
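The shift-and-difference step can be sketched as follows (an illustrative finite-difference implementation with clamped borders, assumed for this sketch; the related art may shift in more directions):

```python
def shift(img, dx, dy):
    """Shift an image by (dx, dy) pixels, clamping at the borders."""
    h, w = len(img), len(img[0])
    return [
        [img[min(max(y - dy, 0), h - 1)][min(max(x - dx, 0), w - 1)] for x in range(w)]
        for y in range(h)
    ]

def slopes(img):
    """Finite-difference slopes: the image minus copies of itself shifted
    by one pixel in x and in y, approximating the gradient of the
    (blurred) text image used as bump information."""
    sx, sy = shift(img, 1, 0), shift(img, 0, 1)
    h, w = len(img), len(img[0])
    gx = [[img[y][x] - sx[y][x] for x in range(w)] for y in range(h)]
    gy = [[img[y][x] - sy[y][x] for x in range(w)] for y in range(h)]
    return gx, gy

# A ramp rising by 0.5 per pixel in x has slope 0.5 in x and 0 in y.
ramp = [[0.5 * x for x in range(4)] for _ in range(3)]
gx, gy = slopes(ramp)
```

Each shift requires resampling the whole image, so the cost of this multi-sample approach also scales with the pixel count, reinforcing the performance concern raised above.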
FIG. 6 shows the stereoscopic text rendered based on a render target. Compared with FIG. 3, the stereoscopic effect is better and can meet the requirements of dynamic lighting in games. However, to achieve sufficient sharpness, a high-resolution two-dimensional text image must be input, and the rendering performance of the blur-and-multi-sample approach is limited by the pixel count of the text image: the more pixels there are, the larger the data volume, the longer the blurring takes, and the greater the impact on rendering performance. In addition, obtaining the bump information requires sampling the blurred two-dimensional image multiple times, so the rendering process is complex and the rendering efficiency of stereoscopic text is low.
Based on the above problems, the embodiments of the present application provide a method, an apparatus, an electronic device, and a computer readable storage medium for rendering stereo text.
The stereoscopic text rendering method provided by the embodiments of the present application can be executed by an electronic device, which may be a terminal, a server, or similar equipment. The terminal may be a smartphone, tablet computer, notebook computer, or other terminal device. The server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, big data, and artificial intelligence platforms.
The stereoscopic text rendering method provided by the embodiments of the present application can be applied to games, advertisement production, video packaging, and other fields without limitation; the following description takes stereoscopic text rendering in games as an example.
In an alternative embodiment, when the method for rendering the stereoscopic text runs on the terminal device, the terminal device stores a game application program for rendering the text. The terminal device interacts with the user through a graphical user interface. The way in which the terminal device presents the graphical user interface to the user may include a variety of ways, for example, the graphical user interface may be rendered for display on a display screen of the terminal device, or presented by holographic projection.
In an alternative embodiment, when the stereoscopic text rendering method runs on a server, it can be implemented and executed based on a cloud gaming system. A cloud gaming system is a service based on cloud computing and comprises a server and client devices. The entity that runs the game application is separated from the entity that presents the game picture: the storage and execution of the stereoscopic text rendering method are completed on the server, while the game picture containing the stereoscopic text is presented on the client. The client mainly receives and sends game data and renders the game picture; it may be a display device with data transmission capability near the user side, such as a mobile terminal, television, computer, handheld computer, personal digital assistant, or head-mounted display device, while the device actually rendering the stereoscopic text is the cloud server. During play, the user operates the client to send instructions to the server; the server controls the game accordingly, encodes and compresses data such as game pictures, and returns them to the client over the network; the client decodes the data and outputs the game picture.
It should be noted that, in the embodiment of the present application, the execution body of the rendering method of the stereo text may be a terminal device or a server, where the terminal device may be a local terminal device or a client device in the foregoing cloud game. The embodiment of the application does not limit the type of the execution body.
By way of example, in connection with the above description, fig. 7 illustrates a rendering system 100 for implementing a rendering method of stereoscopic text according to an embodiment of the present application, where the rendering system 100 may include at least one terminal 110, at least one server 120, at least one database 130, and a network. The terminal 110 held by the user may be connected to different servers through a network. A terminal is any device having computing hardware capable of supporting software application tools corresponding to executing a game.
The terminal 110 includes a display screen and a processor; the display screen presents game pictures and receives the user's operations on them. A game picture may include part of a virtual game scene, the virtual world in which virtual objects act. The processor runs the game, generates the game picture, responds to operations, and controls the display of the game picture on the display screen. When the user operates on the game picture through the display screen, the game can, in response to the received operation instruction, control content local to the terminal or control content on the peer server 120.
In addition, when the system 100 includes a plurality of terminals, a plurality of servers, and a plurality of networks, different terminals may be connected to each other through different networks, through different servers. The network may be a wireless network or a wired network, such as a Wireless Local Area Network (WLAN), a Local Area Network (LAN), a cellular network, a 2G network, a 3G network, a 4G network, a 5G network, etc. In addition, the different terminals may be connected to other terminals or to a server or the like using their own bluetooth network or hotspot network. In addition, the system 100 may include multiple databases coupled to different servers and information related to the game may be continuously stored in the databases as different users play the multi-user game online.
It should be noted that the game system shown in fig. 7 is only an example. The system 100 described in the embodiment of the present application is intended to describe the technical solution of the embodiment more clearly and does not limit it. Those skilled in the art will appreciate that, as game systems evolve and new service scenarios emerge, the technical solution provided by the embodiment of the present application is equally applicable to similar technical problems.
The technical scheme of the application is described in detail through specific examples. It should be noted that the following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
Fig. 8 is a flowchart illustrating an example of rendering of stereoscopic text according to an embodiment of the present application. It should be noted that the steps shown may be performed in a different logical order than that shown in the method flow diagram. The method may include the following steps S210 to S230.
Step S210: acquiring a directional distance field image which is generated in advance for a two-dimensional text image to be rendered; the pixel values in the directed distance field image represent the shortest vector from the text graphic pixel point in the two-dimensional text image to the edge of the text graphic where the text graphic pixel point is located.
It can be appreciated that in currently popular games equipped with a UGC editor, a user can build a game scene with the UGC editor preset in the game, for example by adding or modifying stereoscopic text on a three-dimensional model in the game scene, so as to generate a game scene that matches the user's preferences and has a strong sense of realism.
Alternatively, some games contain stereoscopic text that changes in real time with the user's game progress. For example, a ranking list with a three-dimensional model may be set in a game to display the ranking of users' game achievements. The text information on the ranking list can be updated in real time according to the users' achievements, and the updated text needs to be rendered onto the ranking list in real time; to give the game a strong sense of realism, the user information on the ranking list can be displayed as stereoscopic text. Because the users' names cannot be determined during the game development stage, rendering of such stereoscopic text cannot be achieved by building corresponding stereoscopic models in advance.
Accordingly, the two-dimensional text image to be rendered may be a two-dimensional text image input by a user through a UGC editor, a pre-stored two-dimensional text image corresponding to stereoscopic text generated as required by the user's game progress, or a two-dimensional text image input by the user in other scenarios to which stereoscopic text rendering applies, such as advertisement production and video packaging. This embodiment is not limited in this respect.
It can be understood that the two-dimensional text image includes an original background image and a text graphic, and the text graphic pixel points are pixel points forming the text graphic.
In the embodiment of the present application, the font of the two-dimensional text image to be rendered may be a font selected by the player as required, and the font of the generated stereoscopic text image is consistent with that of the input two-dimensional text image; for example, if the user inputs a text image in the Song typeface, the rendered stereoscopic text image is also in the Song typeface.
It can be understood that the directed distance field of the two-dimensional text image to be rendered may be generated in advance. As described above, a pixel value in the directed distance field image represents the shortest vector from a text graphic pixel in the two-dimensional text image to the edge of the text graphic where that pixel is located; that is, the directed distance field image records, for each text graphic pixel, both the direction and the distance to the edge of its text graphic. In this process, field information composed of surfaces of different values is formed. When the stereoscopic text image generated using the directed distance field is stretched, the distance field information does not change, so the stereoscopic text image can maintain high definition. Fig. 9 shows an example of the directed distance field image corresponding to the two-dimensional text image of the character "word" according to an embodiment of the present application.
It will be appreciated that the pixel value in the directed distance field is stored in the Alpha channel of each text graphic pixel. In this embodiment, the value range of the shortest vector recorded in the directed distance field corresponding to the two-dimensional text image to be rendered may be [0,1], where a value of 0.5 represents the outline of the text graphic, a value less than 0.5 represents a pixel located inside the glyph, and a value greater than 0.5 represents a pixel located outside the glyph.
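As a minimal sketch of this value convention (the 0.5 threshold is from the description above; the function name is hypothetical):

```python
def classify_sdf_value(alpha: float) -> str:
    """Classify a normalized SDF sample in [0, 1] relative to the glyph edge:
    0.5 = outline, < 0.5 = inside the glyph, > 0.5 = outside the glyph."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("SDF values are assumed to be normalized to [0, 1]")
    if abs(alpha - 0.5) < 1e-6:
        return "outline"
    return "inside" if alpha < 0.5 else "outside"
```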
The pre-generated directed distance field may be stored in a directed distance field atlas generated offline during the game development stage, containing the directed distance field corresponding to each of a plurality of characters; the atlas can be understood as a map storing the directed distance fields of different characters. When the directed distance field image for each text image is generated in advance, it can be reconstructed by low-resolution sampling from one high-definition bitmap, so the storage space occupied by the directed distance field image is small. For example, a directed distance field image with a resolution of 64×64 sampled from a high-resolution two-dimensional text image can yield a good-quality rendering result, whereas achieving the same result in the related art may require a two-dimensional text image with a resolution of 512×512. Moreover, the directed distance field atlas can be used directly in a fixed pipeline, which allows this embodiment to meet the needs of low-end devices. When obtaining the directed distance field corresponding to the two-dimensional text image to be rendered, its coordinates in the directed distance field atlas can be obtained by sampling.
It can be understood that when rendering a large number of characters, multiple characters can share one directed distance field atlas and their directed distance field images can be acquired simultaneously, so that multiple draw calls are merged into a single one, improving text rendering performance.
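As an offline-stage sketch of how such a field could be produced (a brute-force NumPy approach under the normalization convention above, not the patent's actual tool chain; all names are illustrative):

```python
import numpy as np

def signed_distance_field(glyph: np.ndarray, spread: float = 4.0) -> np.ndarray:
    """glyph: 2-D boolean array, True = glyph pixel. Returns an SDF in [0, 1]
    with 0.5 on the outline, < 0.5 inside and > 0.5 outside the glyph."""
    inside = glyph.astype(bool)
    h, w = inside.shape
    # Boundary pixels: any pixel with at least one 4-neighbour of the other kind.
    pad = np.pad(inside, 1, mode="edge")
    boundary = ((pad[:-2, 1:-1] != inside) | (pad[2:, 1:-1] != inside) |
                (pad[1:-1, :-2] != inside) | (pad[1:-1, 2:] != inside))
    by, bx = np.nonzero(boundary)
    ys, xs = np.mgrid[0:h, 0:w]
    # Distance from every pixel to the nearest boundary pixel (brute force,
    # fine for small examples; real pipelines use faster distance transforms).
    dist = np.sqrt((ys[..., None] - by) ** 2 + (xs[..., None] - bx) ** 2).min(axis=-1)
    signed = np.where(inside, -dist, dist)  # negative inside, positive outside
    return np.clip(0.5 + signed / (2.0 * spread), 0.0, 1.0)
```

The `spread` parameter controls how many pixels the field covers before saturating; a low-resolution field built this way can then be bilinearly sampled at render time.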
In the embodiment of the application, after the directional distance field image corresponding to the text image to be rendered is obtained, various effect parameters required for generating the stereoscopic text image can be determined according to the directional distance field image. For example, step S220 is as follows.
Step S220: and determining the normal vector of the character graphics in the two-dimensional character image according to the directed distance field image.
It can be understood that the normal vector of the text graphic is the normal vector corresponding to each pixel point forming the text graphic in the two-dimensional text image to be rendered, as shown in fig. 10, which is an example of the normal vector effect diagram of the three-dimensional text provided in the embodiment. The normal vector can be used for representing the concave-convex effect of the stereoscopic text image corresponding to the two-dimensional text graph to be rendered, namely the effect of the stereoscopic text bulge or the stereoscopic text concavity. As shown in fig. 1, the words of "thistle-door tobacco tree" on the stone tablet have a concave effect, and the words of "guest stack" on the tablet shown in fig. 2 have a convex effect.
After obtaining the normal vector capable of characterizing the concave-convex effect, a stereoscopic text image can be rendered and generated by the following step S230.
Step S230: and generating a stereoscopic text image corresponding to the two-dimensional text image according to the two-dimensional text image and the normal vector.
That is, in this step, the normal vector is superimposed on the two-dimensional character image, so that a stereoscopic character image corresponding to the two-dimensional character image is rendered and generated. The generation effect can be referred to in fig. 1 and 2.
Compared with the related art, in which a high-resolution two-dimensional text image is used directly for rendering, the rendering method of stereoscopic text provided by the present application performs rendering by acquiring a directed distance field image generated in advance for the two-dimensional text image to be rendered. A pixel value in the directed distance field image represents the shortest vector from a text graphic pixel in the two-dimensional text image to the edge of the text graphic where that pixel is located, that is, the shortest directed distance from the pixel to the edge of its text graphic. Because interpolating these vectors does not introduce blur or distortion, a directed distance field image with a correspondingly lower pixel count can be generated in advance from a low-resolution two-dimensional text image; the normal vector of the text graphic in the two-dimensional text image is then determined from the directed distance field image, and the stereoscopic text image corresponding to the two-dimensional text image is generated from the two-dimensional text image and the normal vector.
Therefore, the rendering quality can be improved according to the characteristic that the pixel value in the directed distance field image represents the shortest vector from the text graphic pixel point to the edge of the text graphic where the text graphic pixel point is located in the two-dimensional text image. In addition, the directional distance field image with fewer pixels is generated in advance to be rendered, so that a high-quality rendering effect can be obtained with fewer pixels, the rendering cost is reduced, and the rendering efficiency is improved.
In addition, the method does not depend on operations such as render targets and multi-sampling. Even if a large number of characters are displayed at the same time, the directed distance field atlas is sampled only once and no multi-sampling is needed, which simplifies the rendering flow and improves rendering efficiency.
In the following, some specific examples of the above embodiments are described.
In an optional embodiment, before step S210 is executed, the rendering method of stereoscopic text provided by the present application further includes a step S240, which describes the timing at which the rendering process of the stereoscopic text is started.
In a specific embodiment, after receiving a rendering instruction for a two-dimensional text image, the electronic device may obtain, in response to the rendering instruction, a directional distance field image that is generated in advance for the two-dimensional text image to be rendered.
It can be understood that the rendering instruction may be a rendering instruction generated when a user triggers a rendering control provided with rendering logic of the stereoscopic text in the game, or may be a rendering instruction automatically generated when the stereoscopic text is required to be rendered according to the progress of the game in the game.
It can be appreciated that in this embodiment, after receiving a rendering instruction, the electronic device parses the rendering instruction, determines the information of the two-dimensional text image to be rendered carried in the instruction, samples the directed distance field image corresponding to that two-dimensional text image from the pre-generated directed distance field atlas according to this information, and executes the subsequent rendering process. That is, in this embodiment, the online rendering process starts rendering directly with the directed distance field image corresponding to the two-dimensional text image to be rendered. In the related art, by contrast, a new render target has to be created, the two-dimensional text image to be rendered is stored in that render target, and only then does the calculation of the normal vector start. Therefore, the rendering flow of stereoscopic text provided by this embodiment is simpler, real-time rendering of stereoscopic text is more efficient, and the user's game experience is better.
In an alternative embodiment, the stereoscopic text image generated by rendering may be text existing in the game scene alone, or may be a game function of rendering on a three-dimensional model, such as rendering a text image of "thistle-door tobacco tree" on a three-dimensional model stone tablet as shown in fig. 1, or rendering a text image of "guest stack" on a three-dimensional model tablet as shown in fig. 2.
In order to achieve the effect of rendering stereoscopic text on a three-dimensional model, the three-dimensional model to be rendered may be acquired before step S230, and the stereoscopic text image corresponding to the two-dimensional text image is then generated on the three-dimensional model by overlay rendering according to the two-dimensional text image and the normal vector. In a specific embodiment, the overlay rendering mode may be a patch overlay mode, a decal mode, or another overlay rendering mode; this embodiment is not particularly limited. A decal generally refers to a technique for adding textures or patterns to a game scene in a sticker-like manner, and is often used to achieve special effects such as bullet holes, blood stains, and graffiti. In the present embodiment, the stereoscopic text image with the normal vector may be superimposed and displayed on the three-dimensional model in a sticker-like manner.
In an alternative embodiment, the above-mentioned "determining the normal vector of the text graphic in the two-dimensional text image according to the directed distance field image" in step S220 may be implemented specifically by the following steps S221 to S223. It is to be understood that the execution order of the steps S221 and S222 is not particularly limited in this embodiment.
Step S221, determining concave-convex variation parameters corresponding to the text and graphics in the two-dimensional text image according to the directed distance field image.
Step S222, determining the tangent vector, the secondary tangent vector and the tangent included angle corresponding to the text graphic in the two-dimensional text image.
Step S223: determining the normal vector corresponding to the text graphic according to the concave-convex variation parameter, the tangent vector, the secondary tangent vector and the tangent included angle corresponding to the text graphic; the tangent included angle is used to determine the direction of the normal vector relative to the surface of the two-dimensional text image.
Next, the above steps S221 to S223 will be described in detail.
The above-mentioned concave-convex variation parameter in step S221 is used to represent a variation rate of concave-convex characteristics corresponding to each character pattern pixel point constituting a character pattern in a two-dimensional character image.
In some embodiments, the above-mentioned concave-convex variation parameter may be determined by the following steps A1 to A3.
A1, smoothing the directed distance field image according to the directed distance field image, a preset first pixel value, a preset second pixel value and a preset concave-convex degree parameter; the preset first pixel value and the preset second pixel value are any two pixel values in the directed distance field image.
It can be understood that smoothing the directed distance field image realizes a gradual transition between the pixel values of the text graphic pixels and reduces the distortion caused by steep numerical changes in the directed distance field, so that the smoothed directed distance field image is smoother. In an alternative embodiment, the directed distance field may be smoothed by the smooth transition function Smoothstep, making the stereoscopic text image appear more rounded and natural.
The preset first pixel value and preset second pixel value can be regarded as boundary values provided for the smoothing process; that is, interpolation of each pixel value of the directed distance field image is performed within the boundary value range, thereby achieving a smooth transition of the directed distance field image. It will be appreciated that the preset first pixel value and the preset second pixel value may be any two pixel values in the directed distance field image entered by the user prior to rendering.
The above-mentioned concave-convex degree parameter may be a parameter input by the user during the rendering process, or a concave-convex degree parameter preset in the game development stage. It is a dynamically adjustable parameter used to adjust the smoothing result, that is, to control the degree of concavity or convexity of the stereoscopic text image; by adjusting it, the stereoscopic text can be made more realistic and natural. The concave-convex degree parameter can be adjusted according to the specific application scenario to obtain the best rendering effect. In this embodiment, the concave-convex degree parameter takes a positive value.
The Smoothstep function is a function used to smooth the transition between two values. It depends on three parameters: a "left edge" a, a "right edge" b, and an input x. When smoothing the directed distance field, the input x of the Smoothstep function is the directed distance field image, and the "left edge" a and "right edge" b are the pixel values that determine the weight of the stereoscopic text image; in this embodiment, the "left edge" a is referred to as the start position of the text graphic, and the "right edge" b as the end position of the text graphic. The specific form of smoothing the directed distance field is shown in formula (1).
smoothed directed distance field = Smoothstep(start position, end position, directed distance field image) × concave-convex degree parameter (1)
The starting position in the formula (1) is the preset first pixel value, and the ending position is the preset second pixel value.
In this embodiment, the directed distance field image may be smoothed one or more times until the requirement is met. It should be noted that when smoothing is performed multiple times, the third parameter of the Smoothstep function is the result of the previous smoothing pass on the directed distance field.
It can be appreciated that the starting position and the ending position in the parameters of the Smoothstep functions can be adjusted according to a specific application scenario to obtain the best rendering effect. It should be noted that, the range of the directional distance values of the directional distance field is [0,1], and the range of the adjustment values of the start position and the end position is also [0,1].
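Formula (1) can be sketched in Python, assuming the standard Smoothstep polynomial 3t² − 2t³ and hypothetical names; note that passing a start position greater than the end position inverts the ramp, which corresponds to the concave direction discussed in the following paragraphs.

```python
def smoothstep(edge0: float, edge1: float, x: float) -> float:
    """Standard Smoothstep: clamp the normalized input, then apply 3t^2 - 2t^3."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def smooth_sdf(sdf_value: float, start: float, end: float, bumpiness: float) -> float:
    """Formula (1): Smoothstep(start, end, SDF) x concave-convex degree parameter."""
    return smoothstep(start, end, sdf_value) * bumpiness
```

For repeated smoothing, the output of one `smooth_sdf` pass would be fed back in as `sdf_value` for the next pass.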
In the embodiment of the application, the start position and the end position determine the concave-convex direction and the stroke weight of the stereoscopic text image, as well as the smoothness of the edge chamfer of the stereoscopic text. Chamfering the edges of text is a technique for improving its appearance: it refers to trimming the edges or corners of the glyph to make them smooth or beveled, so that the glyph looks more rounded and natural. The edge chamfering of text is typically accomplished using rounded rectangles or circles that smoothly transition the edges of the glyph, making it look more aesthetically pleasing.
The relative magnitude of the pixel value of the text graphic pixel at the start position and that at the end position determines the concave-convex direction of the stereoscopic text image: when the pixel value at the start position is larger than the pixel value at the end position, the stereoscopic text image is concave; when it is smaller, the stereoscopic text image is convex. For example, if the pixel value at the start position is 0.1 and the pixel value at the end position is 0.6, the stereoscopic text image is convex; if the pixel value at the start position is 0.6 and the pixel value at the end position is 0.1, the stereoscopic text image is concave.
The difference between the pixel value of the text graphic pixel at the start position and that at the end position also affects the stroke weight of the stereoscopic text. For example, with pixel values of 0.4 and 0.6 at the start and end positions respectively, the resulting stereoscopic text image is thicker than with pixel values of 0.2 and 0.8.
The difference between the pixel value of the text graphic pixel at the start position and that at the end position determines the smoothness of the edge chamfer: the larger the difference, the smaller the edge chamfer of the stereoscopic text image; the smaller the difference, the larger the edge chamfer.
As shown in fig. 11, fig. 11 (a) to 11 (d) show the resulting smoothed directional distance field image at different start and end positions, respectively. The smoothing parameters corresponding to the smoothed directional distance field images shown in fig. 11 (a) to 11 (d) may be parameters shown in table 1, respectively.
Table 1 smoothing parameter example in smoothing process
| Start position pixel value | End position pixel value | Concave-convex direction | Edge chamfer size | Corresponding image |
| --- | --- | --- | --- | --- |
| 0.2 | 0.6 | Convex (upward) | Large chamfer | Fig. 11 (a) |
| 0.1 | 0.8 | Convex (upward) | Small chamfer | Fig. 11 (b) |
| 0.6 | 0.2 | Concave (downward) | Large chamfer | Fig. 11 (c) |
| 0.8 | 0.1 | Concave (downward) | Small chamfer | Fig. 11 (d) |
It will be appreciated that the magnitude of the edge chamfer in table 1 is for the relative magnitude where the direction of the relief is the same.
In fig. 11, (a) and (b) are directed distance field images after the smoothing process, the pixel value of the text graphic pixel point at the start position is smaller than the pixel value of the text graphic pixel point at the end position. And the difference in pixel value between the start position and the end position corresponding to the image of fig. 11 (a) is smaller than the difference in pixel value between the start position and the end position corresponding to the image of fig. 11 (b). Therefore, the image shown in (a) in fig. 11 is a smoothed directed distance field image having "convex large chamfer", and the image shown in (b) in fig. 11 is a smoothed directed distance field image having "convex small chamfer".
Similarly, in fig. 11, (c) and (d) are directed distance field images after smoothing processing, where the pixel value of the text graphic pixel at the start position is larger than the pixel value of the text graphic pixel at the end position. And the difference in pixel value between the start position and the end position corresponding to the image of fig. 11 (c) is smaller than the difference in pixel value between the start position and the end position corresponding to the image of fig. 11 (d). Therefore, the image shown in (c) in fig. 11 is a smoothed directed distance field image having "concave large chamfer", and the image shown in (d) in fig. 11 is a smoothed directed distance field image having "concave small chamfer".
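The rules above can be condensed into a small illustrative helper (hypothetical names; the chamfer value is only a relative measure consistent with Table 1, not a quantity defined by the patent):

```python
def describe_smoothing(start: float, end: float) -> tuple:
    """Classify a (start, end) pixel-value pair per the Table 1 conventions:
    start < end => convex, start > end => concave; a larger |start - end|
    yields a smaller edge chamfer, so we report 1 - |start - end| as a
    relative chamfer size (larger value = larger chamfer)."""
    direction = "convex" if start < end else "concave"
    chamfer = 1.0 - abs(start - end)
    return direction, chamfer
```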
After the smoothed directional distance field image is obtained according to the above steps, the following step A2 may be performed.
Step A2: and determining a first concave-convex variation parameter of the character pattern in a first preset direction and a second concave-convex variation parameter of the character pattern in a second preset direction according to the smoothed directed distance field image.
Step A3: and determining the first concave-convex variation parameter and the second concave-convex variation parameter of the character pattern as concave-convex variation parameters of the character pattern.
The first preset direction and the second preset direction are perpendicular to each other. It will be appreciated that in this embodiment, the first preset direction may be an X-axis direction, or a horizontal direction, of the established world coordinate system, and the second preset direction may be a Y-axis direction, or a vertical direction, of the established world coordinate system.
The first concave-convex variation parameter may be understood as a parameter for determining a variation of concave-convex characteristics of each of the character-graphic pixels in the X-axis direction in the character-graphic to be rendered, and the second concave-convex variation parameter may be understood as a parameter for determining a variation of concave-convex characteristics of each of the character-graphic pixels in the Y-axis direction in the character-graphic to be rendered. The first concave-convex variation parameter and the second concave-convex variation parameter jointly form concave-convex variation parameters of the character and graphic pixel points.
In a specific embodiment, the first concave-convex variation parameter and the second concave-convex variation parameter can be calculated by the partial derivative functions DDX and DDY respectively. The calculation formula of the first concave-convex variation parameter is shown in formula (2), and that of the second concave-convex variation parameter is shown in formula (3).

X = DDX(concave-convex degree parameter × smoothed directed distance field) (2)

Y = DDY(concave-convex degree parameter × smoothed directed distance field) (3)

In this embodiment, X represents the first concave-convex variation parameter and Y represents the second concave-convex variation parameter; the concave-convex degree parameter can be used to adjust the smoothed directed distance field to achieve a better rendering effect. In some embodiments, the concave-convex degree parameter may be omitted.
As can be seen from the foregoing description, according to the formula (2), the difference between the pixel value corresponding to the text-graphics pixel in the smoothed directional distance field and the pixel value corresponding to the text-graphics pixel adjacent to the text-graphics pixel in the X-axis direction can be determined as the first concave-convex variation parameter of the text-graphics pixel in the first preset direction. According to the formula (3), a difference between a pixel value corresponding to a text-graphics pixel in the smoothed directional distance field and a pixel value corresponding to a text-graphics pixel adjacent to the text-graphics pixel in the Y-axis direction can be determined as a second concave-convex variation parameter of the text-graphics pixel in a second preset direction.
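On the GPU, DDX and DDY return the difference between a value and its neighbour within a pixel quad. A CPU analogue of formulas (2) and (3) using neighbour differences over a NumPy array (illustrative names, not the patent's shader implementation):

```python
import numpy as np

def ddx(field: np.ndarray) -> np.ndarray:
    """Horizontal neighbour difference; the last column is edge-padded
    so the output keeps the input shape, mimicking per-pixel DDX."""
    d = np.diff(field, axis=1)
    return np.pad(d, ((0, 0), (0, 1)), mode="edge")

def ddy(field: np.ndarray) -> np.ndarray:
    """Vertical neighbour difference, analogous to DDY."""
    d = np.diff(field, axis=0)
    return np.pad(d, ((0, 1), (0, 0)), mode="edge")

def bump_gradients(smoothed_sdf: np.ndarray, bumpiness: float):
    """Formulas (2)/(3): X = DDX(k * field), Y = DDY(k * field)."""
    scaled = bumpiness * smoothed_sdf
    return ddx(scaled), ddy(scaled)
```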
The above is a detailed description of determining the concave-convex variation parameters corresponding to the text and graphics in the two-dimensional text image according to the directed distance field image, and the following is a detailed description of the method for determining the secondary tangent vector.
In some embodiments, the step of determining the secondary tangent vector corresponding to the text graphic in the two-dimensional text image in step S222 may include the following steps B1 to B3.
Step B1: and determining texture coordinates of the text graphic pixel points in the two-dimensional texture map corresponding to the two-dimensional text image.
Step B2: and determining a first position change parameter of the texture coordinates of the text graphic pixel points in a first preset direction.
Step B3: and performing a cross product between the first position change parameter and the axis vector in the third preset direction to obtain a secondary tangent vector corresponding to the text graphic in the two-dimensional text image.
It will be appreciated that a two-dimensional texture map is a UV texture map, which is a collection of two-dimensional planar pixels of a two-dimensional text image. The horizontal direction is U, the vertical direction is V, and any pixel point on the two-dimensional text image can be positioned through a plane UV coordinate system. The specific position of the text graphic pixel point in the two-dimensional texture map can be known by determining the texture coordinate of the text graphic pixel point in the two-dimensional texture map corresponding to the two-dimensional text image, and then the subsequent step is executed based on the texture coordinate to determine the secondary tangent vector.
The first position change parameter is the position change condition of the text graphic pixel point in the X-axis direction when the stereoscopic text image is rendered. The position change condition can be calculated through a partial derivative function, namely, the first position change parameter is the difference value between the texture coordinates of the text graphic pixel point and the texture coordinates of the text graphic pixel point adjacent to the text graphic pixel point in the X-axis direction.
The first position change parameter may be a vector corresponding to the position of the text graphic pixel point after being changed in the X-axis direction. The third preset direction may be understood as the Z-axis direction of the world coordinate system set as described above, and the axis vector is the Z-axis vector.
The secondary tangent vectors corresponding to a text graphic are the set of secondary tangent vectors corresponding to the text graphic pixel points forming that graphic. The secondary tangent vector (Bitangent) may also be referred to as the binormal vector, and it is perpendicular to the normal vector.
After the first position change parameter is obtained, a secondary tangent vector corresponding to the character pattern in the two-dimensional character image can be determined according to the Z-axis vector and the first position change parameter. The specific calculation formula is shown as formula (4).
B = DDX(V_UV0) × V_001 (4)
Wherein B represents the secondary tangent vector, DDX(V_UV0) represents the first position change parameter, V_001 represents the Z-axis vector of length 1, and V_UV0 represents the vector (U, V, 0) formed from the texture coordinates of a text graphic pixel point.
It can be appreciated that in this embodiment, the texture coordinates (U, V, 0) are general terms, and may include texture coordinates of each text graphic pixel point corresponding to a text graphic in a two-dimensional text image to be rendered.
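Formula (4) can be illustrated with a small sketch. The helper names and the sample input are hypothetical, and the explicit difference vector stands in for the hardware DDX instruction a shader would use.

```python
def cross(a, b):
    """3-D cross product of two (x, y, z) tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

V_001 = (0.0, 0.0, 1.0)  # Z-axis vector of length 1

def bitangent(ddx_uv0):
    """Formula (4): B = DDX(V_UV0) x V_001, where ddx_uv0 is the first
    position change parameter of the (U, V, 0) texture-coordinate vector."""
    return cross(ddx_uv0, V_001)

# UV changing only along U between neighbouring pixels
b = bitangent((1.0, 0.0, 0.0))  # -> (0.0, -1.0, 0.0)
```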
The above description is directed to a method for determining a secondary tangent vector corresponding to a text pattern, and next, a method for determining a tangent vector corresponding to a text pattern in a two-dimensional text image is described in detail.
In step S222, the above-mentioned "determining the tangent vector corresponding to the text and graphics in the two-dimensional text image" may be implemented specifically by the following steps C1 to C2.
Step C1: and determining a second position change parameter of the texture coordinates of the text graphic pixel points in a second preset direction.
Step C2: and taking the cross product of the second position change parameter and the axis vector in the third preset direction to obtain a tangent vector corresponding to the text graphic in the two-dimensional text image.
The second preset direction and the third preset direction are consistent with the above description: the second preset direction may be the Y-axis direction in the world coordinate system, and the third preset direction is the Z-axis direction in the world coordinate system.
The second position change parameter is the difference between the texture coordinates of a text graphic pixel point and the texture coordinates of the text graphic pixel point adjacent to it in the Y-axis direction. The second position change parameter may be a vector corresponding to the changed position of the text graphic pixel point in the Y-axis direction.
The tangent vectors (Tangent) corresponding to a text graphic are the set of tangent vectors corresponding to the text graphic pixel points forming that graphic. The tangent vector is perpendicular to both the secondary tangent vector and the normal vector.
After the second position change parameter is obtained, a tangent vector corresponding to the text and graph in the two-dimensional text image can be determined according to the Z-axis vector and the second position change parameter. The specific calculation formula is shown as formula (5).
T = DDY(V_UV0) × V_001 (5)
Where T represents the tangent vector, DDY(V_UV0) represents the second position change parameter, V_001 represents the Z-axis vector of length 1, and V_UV0 represents the vector (U, V, 0) formed from the texture coordinates of a text graphic pixel point.
The above is an introduction to the method for determining the tangent vector corresponding to the text graphic, and the following is a detailed introduction to the method for determining the tangent included angle corresponding to the text graphic in the two-dimensional text image.
In an alternative embodiment, the determining the tangential included angle corresponding to the text graphic in the two-dimensional text image in step S222 includes: taking the dot product of the tangent vector corresponding to the text graphic pixel point and the first position change parameter to obtain the tangential included angle corresponding to the text graphic in the two-dimensional text image.
It can be understood that the tangential included angle is an included angle between a tangential vector and a vector corresponding to the position of the text graphic pixel point in the X-axis direction, and the numerical value of the included angle can be used for determining the forward and backward directions of the normal vector of the text graphic pixel point, i.e. determining whether the normal vector of the text graphic pixel point faces upwards or downwards relative to the surface of the two-dimensional text image.
In this embodiment, a specific calculation formula for the tangential included angle is shown in formula (6).
D = T · DDX(V_UV0) (6)
Wherein D represents the value of the tangential included angle, T represents the tangent vector, and DDX(V_UV0) is the first position change parameter. As shown in formula (6), the dot product of the tangent vector T and the first position change parameter is determined as the value of the tangential included angle.
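Formulas (5) and (6) can be sketched together. The tuples stand in for the shader DDY/DDX derivative values; names and inputs are illustrative, not from the patent.

```python
def cross(a, b):
    """3-D cross product of two (x, y, z) tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    """3-D dot product of two (x, y, z) tuples."""
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

V_001 = (0.0, 0.0, 1.0)  # Z-axis vector of length 1

def tangent(ddy_uv0):
    """Formula (5): T = DDY(V_UV0) x V_001."""
    return cross(ddy_uv0, V_001)

def tangent_angle(t, ddx_uv0):
    """Formula (6): D = T . DDX(V_UV0); the sign of D later selects
    the orientation of the normal vector."""
    return dot(t, ddx_uv0)

t = tangent((0.0, 1.0, 0.0))            # -> (1.0, 0.0, 0.0)
d = tangent_angle(t, (1.0, 0.0, 0.0))   # -> 1.0
```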
After the concave-convex variation parameters, the tangent vectors, the auxiliary tangent vectors and the tangent included angles corresponding to the character patterns are obtained based on the steps, the normal vectors corresponding to the character patterns in the two-dimensional character images can be determined through the following formula (7).
N = normalize(|D| × V_(0,0,1) − (X × T + Y × B) × sign(D)) (7)
Wherein N represents the normal vector corresponding to each pixel point, normalize is a normalization function, D represents the tangential included angle and |D| its absolute value, V_(0,0,1) represents the axis vector in the third preset direction, X represents the first concave-convex variation parameter corresponding to the text graphic pixel point, Y represents the second concave-convex variation parameter corresponding to the text graphic pixel point, T represents the tangent vector, B represents the secondary tangent vector, and sign(D) represents the sign of the tangential included angle: sign(D) = 1 when D > 0, and sign(D) = −1 when D < 0.
It is understood that the normalization function described above refers to a function in which the length of the normal vector is set to 1.
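Formula (7) assembles the quantities from formulas (2)–(6) into the per-pixel normal. A minimal sketch, assuming the inputs were computed as above; the sample T and B values are hypothetical.

```python
import math

def normal_vector(d, x, y, t, b):
    """Formula (7): N = normalize(|D| * V_(0,0,1) - (X*T + Y*B) * sign(D)).

    d: tangential included angle; x, y: concave-convex variation parameters;
    t: tangent vector; b: secondary tangent vector (as (x, y, z) tuples)."""
    sign_d = 1.0 if d > 0 else -1.0
    # |D| * V_(0,0,1) only contributes to the Z component
    n = (-(x * t[0] + y * b[0]) * sign_d,
         -(x * t[1] + y * b[1]) * sign_d,
         abs(d) - (x * t[2] + y * b[2]) * sign_d)
    length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    return (n[0] / length, n[1] / length, n[2] / length)  # normalize: length 1

t = (1.0, 0.0, 0.0)    # tangent vector
b = (0.0, -1.0, 0.0)   # secondary tangent vector
n_flat = normal_vector(1.0, 0.0, 0.0, t, b)  # flat region -> (0.0, 0.0, 1.0)
n_bump = normal_vector(1.0, 0.4, 0.2, t, b)  # tilted normal, still unit length
```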
In an alternative embodiment, the authenticity of the rendered stereoscopic text image under dynamic illumination can be further improved. For example, the effect of changing the brightness when the light received on the surface of the stereoscopic text image is blocked by surrounding objects. In this embodiment, to achieve the above illumination change effect, the following step S250 may be added in the rendering process.
Step S250: a shading parameter is determined that characterizes a degree of darkness of illumination received at a surface of the stereoscopic text image when the illumination is shaded by surrounding objects.
Accordingly, in the step S230, the "generating the stereoscopic text image corresponding to the two-dimensional text image according to the two-dimensional text image and the normal vector" may be performed by adding the masking parameter for rendering, that is, may be specifically performed by the following step S231.
Step S231: and generating a stereoscopic text image corresponding to the two-dimensional text image according to the two-dimensional text image, the normal vector and the shielding parameter.
It is understood that the shading parameter is a parameter for representing the brightness of illumination received by the surface of the stereo text image when the illumination is shaded by surrounding objects, so that the objects can look more real and natural. The occlusion parameter may be referred to as ambient light occlusion (Ambient Occlusion, AO), a computer graphics technique used to simulate the amount of ambient light received when an object is occluded by other objects.
In a specific embodiment, the above step S250 may be implemented by the following steps S251 and S252, or by the steps S251 and S253, where a shading parameter for characterizing a degree of darkness when illumination received by a surface of the stereoscopic text image is shaded by surrounding objects is determined.
Step S251: and determining concave-convex variation parameters corresponding to the character patterns in the two-dimensional character image according to the directed distance field image.
Step S252: and when the concave-convex variation parameter is larger than a preset threshold value, determining the product of the preset shielding intensity parameter and the concave-convex variation parameter as a shielding parameter.
Step S253: and when the concave-convex variation parameter is smaller than or equal to a preset threshold value, determining a default parameter for representing that illumination received by the surface of the stereoscopic text image is not blocked by surrounding objects as a blocking parameter.
For determining the concave-convex variation parameters corresponding to the text and graphics in the step S251, reference may be made to the foregoing description, and details are not repeated here.
After the concave-convex variation parameter is obtained, the value of the concave-convex variation parameter and the preset masking parameter threshold may be compared, and step S252 or step S253 may be performed according to the comparison result.
It can be appreciated that the preset masking intensity parameter is used to globally control the indirect illumination intensity received by the surface of the stereoscopic text image. The preset shielding intensity parameter can be an intensity parameter value which is input by a user in the rendering process and meets the rendering requirement.
In a specific embodiment, the above calculation formula of the masking parameter may refer to the following formula (8).
Shading parameter = (concave-convex variation parameter > preset threshold) ? (shielding intensity parameter × concave-convex variation parameter) : 1 (8)
The above formula (8) is a ternary operator, that is, it expresses the above step S252 and step S253. It tests whether the concave-convex variation parameter is greater than the preset threshold; if so, the product of the shielding intensity parameter and the concave-convex variation parameter is returned as the shading parameter; if not, 1 is returned as the shading parameter, where 1 is the default parameter indicating that the illumination received by the surface of the stereoscopic text image is not blocked by surrounding objects.
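Steps S252/S253 (formula (8)) reduce to a single conditional. A hedged sketch with illustrative parameter names and values:

```python
def shading_parameter(bump_variation, threshold, shielding_intensity):
    """Formula (8): when the concave-convex variation exceeds the preset
    threshold the surface point counts as occluded and the product of the
    shielding intensity and the variation is returned; otherwise the default
    value 1 (illumination not blocked by surrounding objects) is returned."""
    if bump_variation > threshold:
        return shielding_intensity * bump_variation
    return 1.0

occluded = shading_parameter(0.5, 0.2, 0.8)    # 0.8 * 0.5 = 0.4
unoccluded = shading_parameter(0.1, 0.2, 0.8)  # 1.0
```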
In an optional embodiment, in addition to rendering the stereoscopic text image with a concave-convex feel through the normal vector, and improving the light-and-dark variation effect when illumination received by the surface of the stereoscopic text image is blocked by surrounding objects through the shielding parameter, this embodiment may further set the transparency of the rendered stereoscopic text image so that it meets the needs of the user. To achieve this transparency effect, the following step S260 may be added in the rendering process.
Step S260: a transparency parameter is determined for characterizing the transparency of the stereoscopic text image.
Accordingly, in the step S230, the "generating the stereoscopic text image corresponding to the two-dimensional text image according to the two-dimensional text image and the normal vector" may increase the transparency parameter to perform rendering, that is, may be specifically implemented by the following step S232.
Step S232: and generating a stereoscopic text image corresponding to the two-dimensional text image according to the two-dimensional text image, the normal vector and the transparency parameter.
In a specific embodiment, the transparency parameter for determining the transparency of the stereoscopic text image may be an opacity mask clipping value, and the transparency parameter may be used to control when the stereoscopic text image is transparent and opaque, i.e. adjust the transparency of the stereoscopic text image.
It will be appreciated that the opacity mask clip value is a transparency threshold: when the gray value of a pixel in the stereoscopic text image is less than the threshold, the pixel is clipped and rendered transparent, and when the gray value is greater than or equal to the threshold, the pixel is rendered opaque. For example, in the rendering engine, the opacity mask clip value may be set to 0.333; a pixel whose gray value is less than 0.333 is transparent, and a pixel whose gray value is greater than or equal to 0.333 is opaque.
The transparency of the stereoscopic text image is determined through the opacity mask clip value. The setting method is simple, the stereoscopic text image can be rendered quickly and efficiently, and the transparency of the stereoscopic text image can be controlled more accurately, so that the object looks more real and natural.
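The opacity-mask clipping described above amounts to a per-pixel test. A sketch under the standard masked-rendering convention (pixels below the clip value are discarded); the 0.333 default mirrors the example in the text, and names are illustrative.

```python
def is_opaque(gray_value, clip_value=0.333):
    """Pixels whose gray value falls below the opacity mask clip value
    are clipped (rendered transparent); the rest are rendered opaque."""
    return gray_value >= clip_value

opaque_pixel = is_opaque(0.5)    # kept and rendered opaque
clipped_pixel = is_opaque(0.2)   # clipped, i.e. transparent
```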
This concludes the description of the method provided by this embodiment. The real-time stereoscopic text rendering method based on the directed distance field can render stereoscopic text at a lower rendering cost and is applicable to different types of stereoscopic text rendering requirements. In addition, rendering the stereoscopic text image in real time based on the directed distance field yields a good generation effect, and the edges of the stereoscopic text image show no obvious jagged aliasing. Moreover, the method performs ambient light occlusion and transparency calculation, provides a strong sense of immersion, and can meet the changing requirements of a dynamic illumination environment.
In addition, the method provided by the embodiment can carry out parameter adjustment on a plurality of effect parameters to achieve different display effects, for example, the font thickness, the font concave-convex degree, the ambient light shielding, the transparency and the like are adjusted in real time, and the universality is strong.
Corresponding to the method for rendering the stereoscopic text provided in the embodiment of the present application, the embodiment of the present application further provides a device 300 for rendering the stereoscopic text, as shown in fig. 12, where the device 300 includes: an acquisition unit 301, a determination unit 302, and a generation unit 303;
An acquisition unit 301, configured to acquire a directional distance field image that is generated in advance for a two-dimensional text image to be rendered; the pixel value in the directed distance field image represents the shortest vector from the text graphic pixel point in the two-dimensional text image to the edge of the text graphic where the text graphic pixel point is located;
A determining unit 302, configured to determine a normal vector of a text graphic in the two-dimensional text image according to the directed distance field image;
A generating unit 303, configured to generate a stereoscopic text image corresponding to the two-dimensional text image according to the two-dimensional text image and the normal vector.
Optionally, the determining unit 302 is further configured to determine a shading parameter for characterizing a brightness level of the illumination received by the surface of the stereo text image when the illumination is shaded by surrounding objects;
the generating unit 303 is specifically configured to generate a stereoscopic text image corresponding to the two-dimensional text image according to the two-dimensional text image, the normal vector and the masking parameter.
Optionally, the determining unit 302 is further configured to determine a transparency parameter for characterizing transparency of the stereo text image;
The generating unit 303 is specifically configured to generate a stereoscopic text image corresponding to the two-dimensional text image according to the two-dimensional text image, the normal vector, and the transparency parameter.
Optionally, the determining unit 302 is specifically configured to determine, according to the directed distance field image, a concave-convex variation parameter corresponding to the text graphic in the two-dimensional text image; determine a tangent vector, a secondary tangent vector and a tangential included angle corresponding to the text graphic in the two-dimensional text image; and determine the normal vector corresponding to the text graphic according to the concave-convex variation parameter, the tangent vector, the secondary tangent vector and the tangential included angle corresponding to the text graphic; the tangential included angle is used for determining the orientation of the normal vector relative to the surface of the two-dimensional text image.
Optionally, the determining unit 302 is specifically configured to perform smoothing processing on the directed distance field image according to the directed distance field image, the preset first pixel value and the preset second pixel value; the preset first pixel value and the preset second pixel value are any two pixel values in the directed distance field image; determining a first concave-convex variation parameter of the character pattern in a first preset direction and a second concave-convex variation parameter of the character pattern in a second preset direction according to the smoothed directed distance field image; the first preset direction and the second preset direction are mutually perpendicular; and determining the first concave-convex variation parameter and the second concave-convex variation parameter of the character pattern as concave-convex variation parameters of the character pattern.
Optionally, the determining unit 302 is specifically further configured to determine texture coordinates of the text graphic pixel points in a two-dimensional texture map corresponding to the two-dimensional text image to which they belong; determine a first position change parameter of the texture coordinates of the text graphic pixel points in a first preset direction; and take the cross product of the first position change parameter and the axis vector in the third preset direction to obtain the secondary tangent vector corresponding to the text graphic in the two-dimensional text image.
Optionally, the determining unit 302 is specifically further configured to determine a second position change parameter of the two-dimensional texture coordinates of the text graphic pixel points in a second preset direction; and take the cross product of the second position change parameter and the axis vector in the third preset direction to obtain the tangent vector corresponding to the text graphic in the two-dimensional text image.
Optionally, the determining unit 302 is specifically further configured to take the dot product of the tangent vector corresponding to the text graphic pixel point and the first position change parameter, so as to obtain the tangential included angle corresponding to the text graphic in the two-dimensional text image.
Optionally, the determining unit 302 is specifically further configured to determine a normal vector corresponding to the text and graphics according to the following formula:
N=normalize(|D|×V(0,0,1)-(X×T+Y×B)×sign(D))
Wherein N represents the normal vector corresponding to each pixel point, normalize is a normalization function, D represents the tangential included angle and |D| its absolute value, V_(0,0,1) represents the axis vector in the third preset direction, X represents the first concave-convex variation parameter corresponding to the text graphic pixel point, Y represents the second concave-convex variation parameter corresponding to the text graphic pixel point, T represents the tangent vector, B represents the secondary tangent vector, and sign(D) represents the sign of the tangential included angle: sign(D) = 1 when D > 0, and sign(D) = −1 when D < 0.
Optionally, the determining unit 302 is specifically further configured to determine, according to the directed distance field image, a concave-convex variation parameter corresponding to a text graphic in the two-dimensional text image; when the concave-convex variation parameter is larger than a preset shielding parameter threshold value, determining the product of the preset shielding intensity parameter and the concave-convex variation parameter as a shielding parameter; and when the concave-convex variation parameter is smaller than or equal to a preset shielding parameter threshold value, determining a default parameter for representing that illumination received by the surface of the stereoscopic text image is not shielded by surrounding objects as a shielding parameter.
Optionally, the stereo text rendering device 300 further includes a receiving unit 304;
a receiving unit 304, configured to receive a rendering instruction for the two-dimensional text image to be rendered;
the acquiring unit 301 is specifically configured to acquire, in response to a rendering instruction, a directional distance field image that is generated in advance for a two-dimensional text image to be rendered.
Optionally, the acquiring unit 301 is further configured to acquire a three-dimensional model to be rendered.
The generating unit 303 is specifically configured to superimpose and generate a stereoscopic text image corresponding to the two-dimensional text image on the three-dimensional model.
Corresponding to the method for rendering the stereoscopic text provided by the embodiment of the present application, the embodiment of the present application further provides an electronic device for rendering the stereoscopic text, as shown in fig. 13, where the electronic device includes: a processor 401; and a memory 402 for storing a program of the rendering method of stereoscopic text. After the device is powered on, the processor runs the program of the rendering method of stereoscopic text and performs the following steps:
Acquiring a directional distance field image which is generated in advance for a two-dimensional text image to be rendered; the pixel value in the directed distance field image represents the shortest vector from the text graphic pixel point in the two-dimensional text image to the edge of the text graphic where the text graphic pixel point is located;
Determining a normal vector of a character pattern in the two-dimensional character image according to the directed distance field image;
And generating a stereoscopic text image corresponding to the two-dimensional text image according to the two-dimensional text image and the normal vector.
Corresponding to the method for rendering the stereoscopic text provided by the embodiment of the application, the embodiment of the application also provides a computer readable storage medium storing a program of the method for rendering the stereoscopic text, the program being run by a processor and executing the following steps:
Acquiring a directional distance field image which is generated in advance for a two-dimensional text image to be rendered; the pixel value in the directed distance field image represents the shortest vector from the text graphic pixel point in the two-dimensional text image to the edge of the text graphic where the text graphic pixel point is located;
Determining a normal vector of a character pattern in the two-dimensional character image according to the directed distance field image;
And generating a stereoscopic text image corresponding to the two-dimensional text image according to the two-dimensional text image and the normal vector.
It should be noted that, for the detailed description of the apparatus, the electronic device, and the computer readable storage medium provided in the embodiments of the present application, reference may be made to the related description of the embodiment of the method for rendering the stereoscopic text provided in the embodiments of the present application, which is not repeated here.
While the application has been described in terms of preferred embodiments, it is not intended to be limiting, but rather, it will be apparent to those skilled in the art that various changes and modifications can be made herein without departing from the spirit and scope of the application as defined by the appended claims.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer readable media do not include transitory computer readable media (transmission media), such as modulated data signals and carrier waves.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

Claims (15)

1. A method for rendering stereoscopic text, the method comprising:
acquiring a directional distance field image which is generated in advance for a two-dimensional text image to be rendered; the pixel value in the directed distance field image represents the shortest vector from the text graphic pixel point in the two-dimensional text image to the edge of the text graphic where the text graphic pixel point is located;
determining a normal vector of a character graphic in the two-dimensional character image according to the directed distance field image;
and generating a stereoscopic text image corresponding to the two-dimensional text image according to the two-dimensional text image and the normal vector.
2. The method according to claim 1, wherein the method further comprises:
Determining a shading parameter for representing the brightness degree of illumination received by the surface of the stereoscopic text image when the illumination is shaded by surrounding objects;
the generating a stereoscopic text image corresponding to the two-dimensional text image according to the two-dimensional text image and the normal vector comprises the following steps:
and generating a stereoscopic text image corresponding to the two-dimensional text image according to the two-dimensional text image, the normal vector and the shielding parameter.
3. The method according to claim 1, wherein the method further comprises:
determining a transparency parameter for representing the transparency of the stereoscopic text image;
the generating a stereoscopic text image corresponding to the two-dimensional text image according to the two-dimensional text image and the normal vector comprises the following steps:
And generating a stereoscopic text image corresponding to the two-dimensional text image according to the two-dimensional text image, the normal vector and the transparency parameter.
4. The method of claim 1, wherein prior to the acquiring the directed distance field image pre-generated for the two-dimensional literal image to be rendered, the method further comprises:
Receiving a rendering instruction for the two-dimensional text image to be rendered;
The obtaining the directional distance field image pre-generated for the two-dimensional text image to be rendered comprises the following steps:
and responding to the rendering instruction, and acquiring a directional distance field image which is generated in advance for the two-dimensional text image to be rendered.
5. The method of claim 1, wherein determining a normal vector of a text graphic in the two-dimensional text image from the directed distance field image comprises:
determining concave-convex variation parameters corresponding to the text and graphics in the two-dimensional text image according to the directed distance field image;
determining tangential vectors, auxiliary tangential vectors and tangential included angles corresponding to the text graphics in the two-dimensional text image;
Determining a normal vector corresponding to the text graphic according to the concave-convex variation parameter, the tangent vector, the auxiliary tangent vector and the tangent included angle corresponding to the text graphic; and the tangent included angle is used for determining the orientation of the normal vector relative to the surface of the two-dimensional text image.
6. The method of claim 5, wherein determining, from the directed distance field image, a concave-convex variation parameter corresponding to a text graphic in the two-dimensional text image comprises:
Smoothing the directed distance field image according to the directed distance field image, a preset first pixel value and a preset second pixel value; the preset first pixel value and the preset second pixel value are any two pixel values in the directed distance field image;
determining a first concave-convex variation parameter of the character and graph in a first preset direction and a second concave-convex variation parameter of the character and graph in a second preset direction according to the smoothed directed distance field image; the first preset direction and the second preset direction are perpendicular to each other;
and determining the first concave-convex variation parameter and the second concave-convex variation parameter of the character pattern as concave-convex variation parameters of the character pattern.
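The claims do not include any code; purely as an illustration (not part of the patent text), the smoothing and concave-convex variation parameters of claim 6 could be sketched as below. The smoothstep thresholds `e0`/`e1` and the finite-difference sampling are assumptions chosen for the sketch, not values specified by the patent.

```python
def smoothstep(edge0, edge1, x):
    """Hermite smoothing between two preset pixel values (GLSL-style smoothstep)."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def bump_change_params(sdf, x, y, e0=0.3, e1=0.7):
    """Concave-convex (bump) variation parameters of the smoothed field along two
    perpendicular preset directions, approximated here by finite differences."""
    h = lambda i, j: smoothstep(e0, e1, sdf[j][i])
    dx = h(x + 1, y) - h(x, y)   # first concave-convex variation parameter
    dy = h(x, y + 1) - h(x, y)   # second concave-convex variation parameter
    return dx, dy
```

In a fragment shader the two differences would typically come from the hardware derivatives `dFdx`/`dFdy` rather than explicit neighbor lookups.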
7. The method of claim 6, wherein the determining the secondary tangent vector for the text graphic in the two-dimensional text image comprises:
Determining texture coordinates of the text graphic pixel points in a two-dimensional texture map corresponding to the two-dimensional text image;
determining a first position change parameter of texture coordinates of the text graphic pixel points in the first preset direction;
carrying out cross multiplication on the first position change parameter and the axis vector in the third preset direction to obtain the secondary tangent vector corresponding to the text graphic in the two-dimensional text image.
8. The method of claim 7, wherein determining the tangent vector corresponding to the text graphic in the two-dimensional text image comprises:
determining a second position change parameter of the texture coordinates of the text graphic pixel points in the second preset direction;
carrying out cross multiplication on the second position change parameter and the axis vector in the third preset direction to obtain the tangent vector corresponding to the text graphic in the two-dimensional text image.
9. The method of claim 8, wherein the determining the tangential included angle corresponding to the text graphic in the two-dimensional text image comprises:
and carrying out dot multiplication on the tangent vector corresponding to the text graphic pixel point and the first position change parameter to obtain a tangent included angle corresponding to the text graphic in the two-dimensional text image.
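Claims 7 through 9 build a tangent frame from per-pixel position changes. As a rough sketch outside the patent text (the function and parameter names `dPdx`, `dPdy`, `axis` are invented for illustration), the three cross/dot products could look like:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    """Dot product of two 3-vectors."""
    return sum(x * y for x, y in zip(a, b))

def tangent_frame(dPdx, dPdy, axis=(0.0, 0.0, 1.0)):
    """dPdx/dPdy stand in for the first and second position change parameters;
    axis is the axis vector in the third preset direction."""
    bitangent = cross(dPdx, axis)   # secondary tangent vector (claim 7)
    tangent = cross(dPdy, axis)     # tangent vector (claim 8)
    angle = dot(tangent, dPdx)      # tangent included angle (claim 9)
    return tangent, bitangent, angle
```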
10. The method according to claim 9, wherein determining the normal vector corresponding to the text graphic according to the concave-convex variation parameter, the tangent vector, the secondary tangent vector and the tangent included angle corresponding to the text graphic comprises:
determining the normal vector corresponding to the text graphic according to the following formula:
N=normalize(|D|×V(0,0,1)-(X×T+Y×B)×sign(D))
wherein N represents the normal vector corresponding to each text graphic pixel point, normalize is a normalization function, D represents the tangent included angle, |D| represents the absolute value of the tangent included angle, V(0,0,1) represents the axis vector in the third preset direction, X represents the first concave-convex variation parameter corresponding to the text graphic pixel point, Y represents the second concave-convex variation parameter corresponding to the text graphic pixel point, T represents the tangent vector, B represents the secondary tangent vector, and sign(D) represents the sign of the tangent included angle: sign(D) = 1 when D > 0, and sign(D) = -1 when D < 0.
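The formula of claim 10 is straightforward to transcribe. This sketch is not part of the patent; it merely evaluates the stated expression for arbitrary inputs:

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def sign(d):
    return 1.0 if d > 0 else (-1.0 if d < 0 else 0.0)

def normal_from_sdf(D, X, Y, T, B):
    """N = normalize(|D| * V(0,0,1) - (X*T + Y*B) * sign(D))"""
    V = (0.0, 0.0, 1.0)
    raw = tuple(abs(D) * V[i] - (X * T[i] + Y * B[i]) * sign(D) for i in range(3))
    return normalize(raw)
```

On a flat region (X = Y = 0) the result degenerates to the axis vector (0, 0, 1), i.e. a normal pointing straight out of the image plane, which matches the intent of the formula.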
11. The method of claim 2, wherein determining the masking parameter that characterizes the degree to which illumination received by the surface of the stereoscopic text image is darkened when masked by surrounding objects comprises:
determining concave-convex variation parameters corresponding to the text and graphics in the two-dimensional text image according to the directed distance field image;
when the concave-convex variation parameter is larger than a preset shielding parameter threshold value, determining the product of a preset shielding intensity parameter and the concave-convex variation parameter as the shielding parameter;
and when the concave-convex variation parameter is smaller than or equal to the preset shielding parameter threshold, determining a default parameter used for representing that illumination received by the surface of the stereoscopic text image is not shielded by surrounding objects as the shielding parameter.
12. The method according to any one of claims 1 to 11, further comprising:
Acquiring a three-dimensional model to be rendered;
the generating the stereoscopic text image corresponding to the two-dimensional text image comprises the following steps:
and superposing and generating a stereoscopic text image corresponding to the two-dimensional text image on the three-dimensional model.
13. A stereoscopic text rendering device, the device comprising: an acquisition unit, a determination unit and a generation unit;
The acquisition unit is used for acquiring a directional distance field image which is generated in advance for a two-dimensional text image to be rendered; the pixel value in the directed distance field image represents the shortest vector from the text graphic pixel point in the two-dimensional text image to the edge of the text graphic where the text graphic pixel point is located;
the determining unit is used for determining the normal vector of the character graph in the two-dimensional character image according to the directed distance field image;
And the generating unit is used for generating a stereoscopic text image corresponding to the two-dimensional text image according to the two-dimensional text image and the normal vector.
14. An electronic device, comprising:
A processor; and
a memory for storing a data processing program, wherein after the electronic device is powered on, the processor runs the program to perform the method of any one of claims 1 to 12.
15. A computer readable storage medium, characterized in that it stores a data processing program which, when run by a processor, performs the method according to any one of claims 1 to 12.
CN202311597516.4A 2023-11-27 2023-11-27 Rendering method and device of three-dimensional text, electronic equipment and readable storage medium Pending CN117974872A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311597516.4A CN117974872A (en) 2023-11-27 2023-11-27 Rendering method and device of three-dimensional text, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN117974872A true CN117974872A (en) 2024-05-03

Family

ID=90863535


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination