CN116051716A - Rendering method and device of model and electronic equipment - Google Patents


Info

Publication number
CN116051716A
CN116051716A (application CN202310035470.0A)
Authority
CN
China
Prior art keywords
vertex
parameter
information
model
target model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310035470.0A
Other languages
Chinese (zh)
Inventor
吴宛婷 (Wu Wanting)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority claimed from CN202310035470.0A
Publication of CN116051716A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/80 Shading
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2016 Rotation, translation, scaling

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The invention provides a model rendering method and apparatus, and an electronic device. The method includes: in model space, offsetting the vertex positions of a target model based on preset parameters to obtain first vertex offset positions, and transforming the first vertex offset positions into clip space to obtain second vertex offset positions; obtaining first normal information of the target model from the vertex color information of the target model, and transforming the first normal information into clip space to obtain second normal information, the first normal information being obtained by correcting the initial normal information of the target model; and determining an outline effect parameter based on the second normal information, and rendering the outline effect of the target model using the outline effect parameter and the second vertex offset positions. The method can flexibly adjust the outline thickness, correct problems such as outline breakage and incompleteness, and improve the visual quality of the model's outline.

Description

Rendering method and device of model and electronic equipment
Technical Field
The present invention relates to the field of model rendering technologies, and in particular, to a method and an apparatus for rendering a model, and an electronic device.
Background
Compared with a realistic-style character model, an anime-stylized (two-dimensional stylized) character model has sharply defined boundaries between light and dark, and has an outline. In a model rendering engine, the anime-stylized character model must be shaded by a specific shader (Shader) to achieve the desired stylized effect. When rendering an outline, an outline render pass is typically created in the shader, in which the model's outline is generated from the character model's normals. An outline generated this way changes thickness with the distance to the virtual camera, its thickness is difficult to control, and the outline sometimes even breaks or is incomplete, so the visual quality of the model's outline is poor.
Disclosure of Invention
Accordingly, the present invention is directed to a model rendering method and apparatus and an electronic device, so as to flexibly adjust the outline thickness, correct problems such as outline breakage and incompleteness, and improve the visual quality of the model's outline.
In a first aspect, an embodiment of the present invention provides a model rendering method, including: in model space, offsetting the vertex positions of a target model based on preset parameters to obtain first vertex offset positions, and transforming the first vertex offset positions into clip space to obtain second vertex offset positions; obtaining first normal information of the target model from the vertex color information of the target model, and transforming the first normal information into clip space to obtain second normal information, the first normal information being obtained by correcting the initial normal information of the target model; and determining an outline effect parameter based on the second normal information, and rendering the outline effect of the target model using the outline effect parameter and the second vertex offset positions.
In a second aspect, an embodiment of the present invention provides a model rendering apparatus, including: a vertex position processing module configured to offset the vertex positions of a target model in model space based on preset parameters to obtain first vertex offset positions, and transform the first vertex offset positions into clip space to obtain second vertex offset positions; a normal processing module configured to obtain first normal information of the target model from the vertex color information of the target model, and transform the first normal information into clip space to obtain second normal information, the first normal information being obtained by correcting the initial normal information of the target model; and a rendering module configured to determine an outline effect parameter based on the second normal information, and render the outline effect of the target model using the outline effect parameter and the second vertex offset positions.
In a third aspect, an embodiment of the present invention provides an electronic device including a processor and a memory, the memory storing machine-executable instructions executable by the processor, the processor executing the machine-executable instructions to implement the above model rendering method.
In a fourth aspect, an embodiment of the present invention provides a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the above model rendering method.
The embodiment of the invention has the following beneficial effects:
In the model rendering method and apparatus and the electronic device, in model space, the vertex positions of a target model are offset based on preset parameters to obtain first vertex offset positions, and the first vertex offset positions are transformed into clip space to obtain second vertex offset positions; first normal information of the target model is obtained from the vertex color information of the target model and transformed into clip space to obtain second normal information, the first normal information being obtained by correcting the initial normal information of the target model; and an outline effect parameter is determined based on the second normal information, and the outline effect of the target model is rendered using the outline effect parameter and the second vertex offset positions.
In this method, the initial normal information of the model is corrected, the corrected first normal information is stored in the model's vertex colors, the outline effect parameter is determined from the normal information in the vertex colors, and the model's outline effect is then rendered in combination with the vertex offset positions.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the invention, and a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of the realistic-style visual effect of a spherical model according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the anime-stylized visual effect of a spherical model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of outline breakage on a hard-surface model according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an incomplete outline on a sharp-featured model according to an embodiment of the present invention;
FIG. 5 is a flowchart of a model rendering method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a model without the outline effect rendered according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a model with the outline effect rendered according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a lighting result in anime-stylized rendering according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a model rendering apparatus according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The technical terms related to the present embodiment will be described first.
1、Shader
A Shader is a program that gives a model color and lighting effects in a game engine, and can even achieve special effects such as outlines and vertex animation.
2、Pass
A Pass is a component of a Shader; for example, one Pass may shade the model's colors while another Pass generates shadows. Multi-pass means that multiple Passes are used in one Shader. In stylized rendering, the outline effect of a character model may be achieved through multi-pass rendering.
3、Unity
Unity is a game engine.
4、Maya
Maya is a piece of three-dimensional modeling software.
5、Python
Python is a programming language; plug-ins for Maya can be written in Python.
6. Vertex color
A model is composed of vertices: vertices connect to form edges, edges connect to form faces, and N faces form the model. A vertex can store color information, i.e., the vertex color. The color information generally includes four values, which can be understood as four channels denoted by the letters R, G, B, and A.
7. Model space, world space, clipping space
Model space, world space, and clip space are denoted ObjectSpace, WorldSpace, and ClipSpace, respectively. A space is a frame in which position or vector information is expressed. For example, saying that object A is in front of object B expresses position in the sense of model space, while saying that object A is to the northeast of object B expresses position in the sense of world space, because "northeast" is a direction reference established in world space. Data is converted between different spaces by corresponding space transformation functions.
Clip space is also called homogeneous clip space, and the matrix used to transform data into clip space is called the clip matrix, also known as the projection matrix. The purpose of clip space is to make it convenient to clip rendering primitives: primitives entirely inside the view volume are retained, primitives entirely outside are culled, and primitives intersecting the boundary are clipped. Clip space is typically determined by the view frustum.
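The keep/cull/clip decision described above can be sketched numerically. The following Python sketch (not from the patent; all function names are illustrative) tests clip-space points against the view volume using the homogeneous inequality -w <= x, y, z <= w, assuming the OpenGL-style depth convention:

```python
def inside_frustum(p):
    """p is a clip-space point (x, y, z, w); OpenGL-style convention assumed."""
    x, y, z, w = p
    return all(-w <= c <= w for c in (x, y, z))

def classify_triangle(tri):
    """Keep / cull / clip decision for one primitive, as described in the text."""
    inside = [inside_frustum(p) for p in tri]
    if all(inside):
        return "keep"   # entirely inside the view volume: retained
    if not any(inside):
        return "cull"   # conservative per-vertex test: treated as entirely outside
    return "clip"       # straddles the boundary: must be clipped
```

This is why the homogeneous form is convenient: the test is a per-component comparison against w, with no division required.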
Compared with a realistic-style character model, an anime-stylized character model has sharply defined boundaries between light and dark, and has an outline. As an example, FIG. 1 shows the realistic-style visual effect of a spherical model, and FIG. 2 shows the anime-stylized visual effect of the same model. In a model rendering engine, the anime-stylized character model must be shaded by a specific shader (Shader) to achieve the desired stylized effect.
Specifically, multi-pass rendering is used in the Shader: an outline Pass is added, and the outline is generated by rendering the model once more. In the outline Pass, the character model's normal is obtained and scaled to obtain a value N, and N is added to the character model's vertex position in model space (ObjectSpace), forming the outline and giving the character model an outward outline.
The outward edge effect obtained in the above manner has the following disadvantages:
1. When the distance between the virtual camera and the character model changes, the outline scales with that distance: the farther the virtual camera is from the character model, the thicker the outline; the closer, the thinner. This results in an unattractive outline effect.
2. The developer cannot control or adjust the thickness of the outline, so its visual effect is monotonous.
3. The character model's normals affect how the model receives light and shadow, and the same normals are also used to generate the outline. For hard-surface models with many sharp turns, such as a cube, generating the outline from the model's normal information produces breakage, as shown in FIG. 3; for sharp-featured models, such as hair tips, the outline is incomplete, as indicated by the arrow in FIG. 4. This leads to poor visual quality of the outline. However, modifying the models' own normals would affect how these models receive light and shadow.
Based on the above, embodiments of the present invention provide a model rendering method and apparatus and an electronic device. The technique can be applied to various model rendering processes, in particular outline rendering and anime-stylized rendering of models.
During research, the inventor found that the outline effect has these defects because the outline is generated directly from the model normals: the outline is too strongly coupled to the model normals, yet the model normals also control the model's lighting and shadow and therefore cannot be modified to suit the outline's rendering requirements. To avoid these defects, the coupling between the model normals and the outline needs to be reduced, so that the outline retains some relation to the model normals while its visual effect remains controllable and adjustable, yielding an outline effect that meets actual requirements.
Referring to a flowchart of a rendering method of a model shown in fig. 5, the method includes the steps of:
Step S502: in model space, offset the vertex positions of the target model based on preset parameters to obtain first vertex offset positions, and transform the first vertex offset positions into clip space to obtain second vertex offset positions;
Model space can be understood as the space indicated by a three-dimensional coordinate system whose origin is at a designated position of the target model. The surface of the target model has a plurality of vertices, and the coordinates of each vertex in model space are its vertex position. The preset parameters may include the orientation and viewing direction of the virtual camera in the virtual scene, and may further include parameters set by a developer, for example an offset length or offset distance. In actual implementation, each vertex position of the target model is offset using the preset parameters, yielding a first vertex offset position corresponding to each vertex position.
Then, a transformation matrix from model space to clip space is obtained, and the first vertex offset positions are transformed into clip space based on this matrix to obtain the second vertex offset positions. The transformation matrix may specifically be a clip matrix or a projection matrix.
Step S504: obtain first normal information of the target model from the vertex color information of the target model, and transform the first normal information into clip space to obtain second normal information; the first normal information is obtained by correcting the initial normal information of the target model;
The initial normal information of the target model can be obtained from the target model's normal map, and is used to control the target model's lighting and shadow effect. Since the initial normal information may cause the outline to break or be incomplete when the outline is rendered, in this embodiment the initial normal information is corrected in advance, for example by filtering, smoothing, or manual correction by a developer, to obtain the first normal information. The first normal information, which may be stored in the vertex color information of the target model, is generally not used to control the lighting effect of the target model.
As noted above, the vertex color information generally includes four channels in total, so the first normal information may be saved into some of these channels. For example, the first normal information may be saved into one channel of the vertex color information, or decomposed into components along the X, Y, and Z directions and saved into the R, G, and B channels respectively. If the value range of the vertex color information is limited, the first normal information is first mapped into the value range allowed by the vertex color information and then stored.
In the above step, the first normal information obtained from the vertex color information of the target model is expressed in model space; therefore, it also needs to be transformed into clip space by the transformation matrix to obtain the second normal information.
Step S506: determine an outline effect parameter based on the second normal information, and render the outline effect of the target model using the outline effect parameter and the second vertex offset positions.
In actual implementation, the second normal information may be adjusted by a preset adjustment parameter to obtain the outline effect parameter. The adjustment parameter may include a width parameter, for example a global width parameter for adjusting all outlines of the target model as a whole, or a local width parameter for adjusting a local outline of the target model. The outline effect parameter acts on the second vertex offset positions to render the outline effect; for example, the outline effect parameter and the second vertex offset position may be added, multiplied, or combined by some other function to obtain the outline effect.
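As a hedged illustration of step S506, the following Python sketch applies a width parameter to a clip-space vertex along its clip-space normal. The patent allows addition, multiplication, or other functions; the additive form and all names here are assumptions:

```python
def apply_outline(position_cs, normal_cs, width_param):
    """Illustrative sketch (assumed additive form): push a clip-space vertex
    outward along the x/y of its clip-space normal, scaled by an
    artist-controlled width parameter. Only x and y are offset so the stroke
    widens on screen without changing the depth value z or the w component."""
    x, y, z, w = position_cs
    nx, ny = normal_cs[0], normal_cs[1]
    return (x + nx * width_param, y + ny * width_param, z, w)
```

A per-vertex local width (e.g. read from the vertex color A channel, as the Maya workflow later suggests) could be passed in as width_param to vary the stroke over the model.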
In the related art, the model normal N is first obtained, the value of N is added to the vertex position in model space to obtain a position os1, and os1 is then transformed from model space to clip space to finally render the outline. The value of the normal N therefore undergoes the model-space-to-clip-space transformation, which is why the outline thickness is affected by the distance of the virtual camera.
In this embodiment, the normal information of the target model is transformed into clip space directly, which solves the problem of the virtual camera distance affecting the outline thickness. The principle is as follows: the offset vertex position is transformed into clip space to obtain the second vertex offset position, the normal information is separately transformed into clip space to obtain the second normal information, and the outline effect is then rendered based on the second vertex offset position and the second normal information. In this way, the value of the normal information does not ride along with the vertex position through the model-space-to-clip-space change but is transformed into clip space on its own, which avoids the problem of the virtual camera distance affecting the outline thickness.
According to the above model rendering method, in model space the vertex positions of the target model are offset based on preset parameters to obtain first vertex offset positions, and the first vertex offset positions are transformed into clip space to obtain second vertex offset positions; first normal information of the target model is obtained from the vertex color information of the target model and transformed into clip space to obtain second normal information, the first normal information being obtained by correcting the initial normal information of the target model; and an outline effect parameter is determined based on the second normal information, and the outline effect of the target model is rendered using the outline effect parameter and the second vertex offset positions.
In this method, the initial normal information of the model is corrected, the corrected first normal information is stored in the model's vertex colors, the outline effect parameter is determined from the normal information in the vertex colors, and the model's outline effect is then rendered in combination with the vertex offset positions.
The following embodiments describe how the vertex positions of the target model are offset.
In model space, a viewing direction parameter and an offset adjustment parameter are obtained, and the vertex positions of the target model are offset based on the viewing direction parameter and the offset adjustment parameter to obtain the first vertex offset positions. The viewing direction parameter may be determined from the orientation of the virtual camera in the virtual scene. The offset adjustment parameter, also called _OutlineOffset, can be received from an artist through a window control; that is, the artist can adjust the offset adjustment parameter while making the model, thereby adjusting the visual effect of the outline. The vertex position of the target model may specifically be the coordinate of a vertex of the target model in model space, and this coordinate is combined with the viewing direction parameter and the offset adjustment parameter to compute the first vertex offset position.
In a specific implementation, the viewing direction parameter and the offset adjustment parameter are multiplied to obtain a product, and the product is added to the vertex position of the target model to obtain the first vertex offset position. The offset can be expressed as follows:
positionOSOffset = positionOS + ObjViewDir * _OutlineOffset
where positionOSOffset represents the first vertex offset position; positionOS represents the vertex position of the target model in model space; ObjViewDir represents the viewing direction parameter; and _OutlineOffset represents the offset adjustment parameter.
Then, positionOSOffset is transformed from model space to clip space by the function TransformObjectToHClip, becoming the vertex position positionCS in clip space; the specific formula is as follows:
float4 positionCS=TransformObjectToHClip(positionOSOffset);
where float4 is a four-component vector type; positionCS is the second vertex offset position in clip space; and positionOSOffset is the first vertex offset position in model space.
In this way, the vertex position is offset in depth, which solves the problem that the outline thickness cannot be controlled and the visual effect is monotonous, enriching the visual expression of the outline.
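The two shader lines above can be restated in plain Python so the arithmetic is checkable. The matrix helper below is an illustrative stand-in for Unity's TransformObjectToHClip (which applies the model-view-projection matrix), not its actual implementation:

```python
def offset_vertex_os(position_os, obj_view_dir, outline_offset):
    """positionOSOffset = positionOS + ObjViewDir * _OutlineOffset, per component."""
    return tuple(p + d * outline_offset for p, d in zip(position_os, obj_view_dir))

def to_clip_space(mvp, position_os):
    """Multiply a row-major 4x4 matrix by (x, y, z, 1); stand-in for
    TransformObjectToHClip, which performs this with the object-to-clip matrix."""
    x, y, z = position_os
    v = (x, y, z, 1.0)
    return tuple(sum(mvp[r][c] * v[c] for c in range(4)) for r in range(4))

# Identity matrix used here only to make the stand-in testable.
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
```

With a real projection matrix in place of the identity, the two calls together reproduce the positionOSOffset-then-positionCS sequence of the embodiment.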
As can be seen from the foregoing embodiments, the first normal information of the target model is stored in advance in the vertex color information of the target model. Therefore, before the model is rendered, the initial normal information of the target model must first be processed to obtain the first normal information.
In one approach, the initial normal information of the target model is obtained; the initial normal information is smoothed to obtain smoothed initial normal information; the first normal information of the target model is determined based on the smoothed initial normal information; and the first normal information is stored into designated channels of the vertex color information of the target model. The initial normal information of the target model may be obtained from the target model's normal map, which is typically used to control the model's lighting effect.
If the outline effect were generated from the initial normal information, the outline would break or be incomplete when the target model is a hard-surface object with many turns or a sharp-featured model. To avoid this, in this embodiment the initial normal information is smoothed; the smoothing can be implemented by a smoothing algorithm, or the initial normal information can be modified manually, to obtain the final first normal information. The first normal information is then stored into the vertex color information of the target model; specifically, it can be decomposed into normal components along the X, Y, and Z directions, which are then saved into the R, G, and B channels of the vertex color information respectively.
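The patent does not fix a particular smoothing algorithm. One common choice, shown here purely as an assumption, is to replace the normals of vertices that share a position with their normalized average; the hard-edge normal splits that break the outline on a cube disappear under this averaging:

```python
import math
from collections import defaultdict

def smooth_normals(positions, normals):
    """Assumed smoothing pass: vertices at the same position receive the
    normalized average of their normals, removing hard-edge splits that
    cause broken outlines on hard-surface models."""
    groups = defaultdict(list)
    for i, pos in enumerate(positions):
        groups[pos].append(i)          # group vertex indices by shared position
    out = list(normals)
    for idxs in groups.values():
        sx = sum(normals[i][0] for i in idxs)
        sy = sum(normals[i][1] for i in idxs)
        sz = sum(normals[i][2] for i in idxs)
        length = math.sqrt(sx * sx + sy * sy + sz * sz) or 1.0
        avg = (sx / length, sy / length, sz / length)
        for i in idxs:
            out[i] = avg               # every duplicate gets the same smooth normal
    return out
```

Such a pass would typically run offline (e.g. in a Maya Python plug-in, as the embodiment below describes) before the result is baked into vertex colors.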
Since the first normal information has been smoothed and modified, it can no longer accurately control the lighting effect of the target model. In this embodiment, the first normal information is therefore used only to generate the outline, while the initial normal information is used to control the lighting effect of the target model.
In actual implementation, the initial normal information may be modified in the Maya software, and the modified first normal information is then stored into the RGB channels of the vertex color information. This first normal information is subsequently used in this embodiment to compute the outline. Because the first normal information is modified normal information, using it for lighting would affect how the character receives light. To obtain both a correct lighting effect and a good outline, two sets of normals are needed: the first normal information used to generate the outline resides in the vertex colors, and the initial normal information used to control lighting resides in the normal map. The model thus still receives light correctly in the Unity software while also obtaining a good outline effect.
In Maya, two tools are first built: a Maya shader and a Python plug-in that stores normals into vertex colors. The Maya shader uses two passes: the first pass computes the lighting and the second pass computes the outline. The computation of the two passes is fully consistent with the Unity shader, so the stylized-rendering character can be previewed directly in Maya.
Because the vertex color information of the model is modified in real time in Maya, changes to the outline effect are visible immediately, without exporting the model to Unity after each modification. For example, modifying the vertex color A channel controls the outline thickness, and modifying the first normal information of the target model corrects the outline shape; the effect of both can be seen in real time in Maya.
In a specific implementation, the first normal information requires further processing before it is saved into the vertex color information: the smoothed initial normal information is mapped into a preset first numerical range to obtain a mapping result, where the first numerical range matches the numerical range of the vertex color information; the mapping result is then coordinate-converted according to a preset coordinate-axis mapping relation to obtain the first normal information of the target model.
Typically, the numerical range of the model's normal information is larger than the numerical range of the vertex color information; for example, the normal information lies in [-1,1] while the vertex color lies in [0,1], so the initial normal information needs to be mapped from [-1,1] to [0,1] to obtain the mapping result in the first numerical range. In addition, this embodiment involves both Maya and Unity, and because the coordinate axes of the two software packages differ, the mapping result must be coordinate-converted based on the coordinate-axis mapping relation between them to obtain the final first normal information, which is then saved into the vertex color information.
Specifically, in Maya, a vertex color plug-in written in Python may be used to smooth the initial normal information, or to modify it manually, to obtain the first normal information; the resulting first normal information is then stored in the RGB channels of the model's vertex color, solving the problem of broken outlines.
The principle of the Python vertex color plug-in is as follows: the normal value Nos0 of the initial normal information of each vertex of the target model is obtained through the Python command cmds.polyNormalPerVertex, and Nos0 is then mapped from the range [-1,1] to [0,1] to obtain Nos1. The normal value itself lies in [-1,1]; it is mapped to [0,1] because the range of model vertex colors is [0,1]. The mapping may be implemented by the following equation:
Nos1=(Nos0.rgb+1)*0.5
where Nos0.rgb represents the normal value of the initial normal information, and Nos1 represents the normal value of the first normal information.
Because the coordinate axes of Maya and Unity differ, with the X axis reversed, the R-channel value of Nos1 is multiplied by -1 to obtain Nos2, which resolves the axis difference between the two software packages. The coordinate conversion may be achieved by the following equation:
Nos2.rgb=float3(-Nos1.r,Nos1.gb)
where Nos2.rgb represents the normal value of the first normal information after coordinate conversion; Nos1.r represents the R-channel value of the first normal information before coordinate conversion, i.e., the R-channel value of the mapping result; Nos1.gb represents the GB-channel values of the mapping result; and float3 represents a basic data type.
Nos2 is then stored in the RGB channels of the vertex color information through the Python command cmds.polyColorPerVertex. The target model thus carries two sets of normals: the model's own normal information, i.e., the initial normal information, which controls the lighting of the target model; and the new model normal Nos2 stored in the vertex color information, i.e., the first normal information, which shapes the outline. The new model normals stored in the vertex color solve the problem of broken or poorly performing outlines.
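As a sketch of the plug-in's core math (the function name here is illustrative, not from the patent, and the Maya API calls are replaced by plain tuples so the snippet runs standalone), the remap and the X-axis flip described above can be written as:

```python
def encode_normal_to_vertex_color(normal):
    """Map a smoothed model-space normal from [-1, 1] to the vertex-color
    range [0, 1] (Nos0 -> Nos1), then negate the R channel for the
    Maya/Unity X-axis difference (Nos1 -> Nos2), per the equations above."""
    nx, ny, nz = normal
    # Nos1 = (Nos0.rgb + 1) * 0.5
    r, g, b = (nx + 1) * 0.5, (ny + 1) * 0.5, (nz + 1) * 0.5
    # Nos2.rgb = float3(-Nos1.r, Nos1.gb)
    return (-r, g, b)
```

In the actual plug-in, the input would come from cmds.polyNormalPerVertex and the result would be written back with a vertex-color command rather than returned.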
During rendering, after the first normal information is read from the vertex color information, it must be mapped into a preset second numerical range to obtain the first normal information in the second numerical range, where the second numerical range matches the numerical range of the initial normal information. As noted above, the second numerical range is generally larger than the first numerical range corresponding to the vertex color information; here the second numerical range is [-1,1].
Specifically, in the second pass of the shader, the RGB channels of the vertex color information, VertexColor.rgb, are read, and the first normal information stored there is remapped from [0,1] back to the range [-1,1]. This may be implemented by the following algorithm:
ModelNormal=VertexColor.rgb*2-1;
where ModelNormal represents the first normal information in the second numerical range, and VertexColor.rgb represents the first normal information in the first numerical range.
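The decode step mirrors the encode: a minimal sketch in plain Python (illustrative naming, following the ModelNormal formula above):

```python
def decode_vertex_color_to_normal(rgb):
    """Remap a vertex-color value from [0, 1] back to the normal range
    [-1, 1]: ModelNormal = VertexColor.rgb * 2 - 1."""
    return tuple(c * 2 - 1 for c in rgb)
```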
After the first normal information is obtained from the vertex color information and mapped, the model normal ModelNormal, which is originally in model space (ObjectSpace), may be converted into clip space (ClipSpace) using the function TransformObjectToHClipDir; after the space conversion, the clip-space normal value normalCS is obtained. Computing the outline in clip space solves the problem that the outline grows thicker the farther the camera is from the character and thinner the closer it is.
In a specific implementation, the second normal information is adjusted based on a preset outline width parameter to obtain the outline effect parameter. The second normal information is the first normal information after conversion into clip space. The outline width parameter may be preset and manually adjusted by the worker and is used to control the width of the outline; where the outline varies, a local width parameter may be modified to control the width of a local portion of the outline. The outline effect parameter may be computed by multiplying, adding, or applying some other function to the outline width parameter and the second normal information.
In an actual implementation, preset overall and local width adjustment parameters are acquired, where the local width adjustment parameter is stored in a first channel of the vertex color information, the first normal information is stored in a second channel of the vertex color information, and the second channel differs from the first channel; the product of the overall width adjustment parameter, the local width adjustment parameter and the second normal information is then determined as the outline effect parameter. The overall width adjustment parameter may be one value, or a small number of values, controlling the overall width of the outline, and the worker may adjust it to control the outline width as a whole. The local width adjustment parameter may be set per vertex and stored in the vertex color information, for example in the A channel (the first channel), while the first normal information occupies the RGB channels (the second channel); the values of the two channels do not affect each other.
As an example, mapping the first normal information to the clipping space is achieved by the following equation:
float3 normalCS=TransformObjectToHClipDir(ModelNormal);
where float3 represents a basic numerical type; normalCS is the first normal information in clip space; TransformObjectToHClipDir is a conversion function that transforms a direction from model space into clip space; and ModelNormal is the first normal information in model space.
The outward edge effect parameter is obtained by the following formula:
float3 extendDis=normalCS*_OutlineWidth*VertexColor.a;
where float3 represents a basic numerical type; extendDis represents the outline effect parameter; normalCS represents the first normal information in clip space; _OutlineWidth represents the overall width adjustment parameter; and VertexColor.a represents the local width adjustment parameter, i.e., the value of the A channel of the vertex color information.
In the above manner, the worker-adjustable parameter _OutlineWidth is multiplied by the A channel of the vertex color and by normalCS to obtain the value extendDis, so an outline with controllable thickness is obtained from _OutlineWidth and the A channel of the vertex color.
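A minimal sketch of this step, with plain Python standing in for the shader code and illustrative names (normal_cs is assumed to have already been produced by the clip-space transform):

```python
def outline_extend_dis(normal_cs, outline_width, vertex_color_a):
    """extendDis = normalCS * _OutlineWidth * VertexColor.a, applied
    per component of the clip-space normal."""
    return tuple(n * outline_width * vertex_color_a for n in normal_cs)
```

Setting vertex_color_a to 0 for a vertex suppresses the outline there entirely, which is how the per-vertex A channel gives local control.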
From the foregoing, the local width adjustment parameters may be modified by a worker. Specifically, in response to a parameter drawing operation acting on a target model, setting a channel value of a first channel based on preset drawing control parameters and an operation path of the parameter drawing operation; the channel value of the first channel is determined as a local width adjustment parameter. The drawing control parameters may include parameters such as a size and a drawing strength of the drawing brush, and a worker controls the drawing brush to move on the target model, and local width adjustment parameters in vertices on the moving path are updated according to the drawing control parameters, so that correction of the local width adjustment parameters is achieved.
In one example, an operation panel may be provided through the plug-in, in which the A channel (also called the Alpha channel) of the vertex color of the target model may be modified. In the panel, the worker may directly modify the aforementioned overall width adjustment parameter, or may set a brush width and brush strength and then move the brush over the target model to modify the local width adjustment parameters along the brush path. The local width adjustment parameter of an individual vertex may also be modified in the panel.
After the outline effect parameter is obtained through the foregoing embodiments, and considering that terminal devices have display screens of different sizes, the outline effect parameter needs to be adjusted according to the aspect ratio of the display screen so that the outline is not stretched when displayed on the terminal device.
Specifically, an outline stretching parameter is determined based on screen parameters of a preset display device, and the outline effect parameter is adjusted through the outline stretching parameter to obtain the adjusted outline effect parameter. The screen parameters may be obtained from the shader and may include the screen width and screen height; the ratio of screen width to screen height, or of screen height to screen width, is taken as the outline stretching parameter.
In one example, the adjustment of the stroking effect parameter is achieved by the following equation:
ScaleX=ScreenWidth/ScreenHeight
extendDis.x=extendDis.x1/ScaleX
where ScaleX represents the outline stretching parameter; extendDis.x1 represents the outline effect parameter; and extendDis.x represents the adjusted outline effect parameter.
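The aspect-ratio correction divides only the X component. A sketch in plain Python (illustrative names; the .x1 notation from the formula is taken as the pre-adjustment X component):

```python
def adjust_for_aspect(extend_dis_xy, screen_width, screen_height):
    """Divide the X component of the outline offset by the screen aspect
    ratio ScaleX = screen_width / screen_height, leaving Y untouched."""
    scale_x = screen_width / screen_height
    x, y = extend_dis_xy
    return (x / scale_x, y)
```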
The outline effect parameter is then added to the second vertex offset position to obtain the outline effect of the target model. Specifically, since the vertex offset position includes components along the X, Y and Z directions, the value of each channel of the outline effect parameter may be added to the corresponding component of the vertex offset position, or only some of the channels may be added to their corresponding components. For example, the outline effect of the target model is obtained by the following expression:
positionCS.xy1=positionCS.xy+extendDis.xy
where positionCS.xy represents the components of the vertex offset position in the X and Y directions; extendDis.xy represents the components of the outline effect parameter in the X and Y directions; and positionCS.xy1 represents the rendered outline effect.
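The final offset is a plain component-wise addition. A sketch in Python (illustrative names; positionCS and extendDis are assumed to already be in clip space):

```python
def apply_outline(position_cs_xy, extend_dis_xy):
    """positionCS.xy1 = positionCS.xy + extendDis.xy: offset the clip-space
    vertex position by the outline effect parameter."""
    px, py = position_cs_xy
    ex, ey = extend_dis_xy
    return (px + ex, py + ey)
```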
As an example, referring to fig. 6 and 7, fig. 6 is a model in which no outward effect is rendered, and fig. 7 is a model in which outward effect is rendered, the outward effect rendered by the present embodiment is relatively complete, continuous, and good in visual effect.
Besides rendering the outline effect, two-dimensional stylized rendering also requires rendering the light-and-shadow effect of the model. Specifically: the parallel light direction in the virtual scene and the initial normal information of the target model are obtained; a first intermediate parameter is determined based on the parallel light direction and the initial normal information; a secondary stylized illumination result is determined based on the first intermediate parameter and a preset shadow map; and the bright-part color map and the dark-part color map of the target model are interpolated based on the illumination result to obtain the secondary stylized light-and-shadow effect of the target model.
The parallel light direction may be the sunlight direction; a dot product of the parallel light direction and the initial normal information gives the first intermediate parameter, which may also be called Lambert: Lambert=dot(N,L), where N represents the initial normal information, L represents the parallel light direction, and dot represents the dot-product function.
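A sketch of this Lambert term in plain Python (illustrative names; both vectors are assumed to be normalized):

```python
def lambert(normal, light_dir):
    """Lambert = dot(N, L): dot product of the surface normal N and the
    parallel light direction L."""
    return sum(n * l for n, l in zip(normal, light_dir))
```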
When determining the illumination result, one specific approach is as follows: the first intermediate parameter is mapped into a preset third numerical range to obtain a second intermediate parameter; the second intermediate parameter is adjusted based on a preset size control parameter and a feathering control parameter to obtain a third intermediate parameter, where the size control parameter controls the size proportion of the dark part in the illumination effect and the feathering control parameter controls the feathering degree of the dark part in the illumination effect; and the secondary stylized illumination result is determined based on the third intermediate parameter and a preset shadow map.
It will be appreciated that the value range of the first intermediate parameter is typically [-1,1] and the aforementioned third numerical range is typically [0,1], in which case the mapping may be performed by the equation HalfLambert=Lambert*0.5+0.5, where HalfLambert is the second intermediate parameter and Lambert is the first intermediate parameter.
Further, HalfLambert is modified using the SmoothStep function to obtain the secondary stylized illumination result Shadow. The formula is: Shadow=SmoothStep(controllable dark-part size, controllable dark-part feathering value, HalfLambert), where the controllable dark-part size is the size control parameter and the controllable dark-part feathering value is the feathering control parameter. Multiplying the SmoothStep result with the shadow map gives the model more shadow detail. The shadow map may be a black-and-white grayscale map, where black represents the backlit surface and white represents the lit surface, and it may be drawn by the worker. FIG. 8 illustrates an example of the illumination result in two-dimensional stylized rendering, with the location indicated by the arrow being the shadow effect produced by the illumination result.
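A sketch of this shading step in plain Python (illustrative names; smoothstep follows the common HLSL definition, and the argument ordering follows the patent's formula):

```python
def smoothstep(edge0, edge1, x):
    """Standard smoothstep: clamp to [0, 1], then apply 3t^2 - 2t^3."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def stylized_shadow(lambert_value, dark_size, dark_feather, shadow_map_value):
    """HalfLambert = Lambert * 0.5 + 0.5, shaped by SmoothStep and then
    multiplied with the hand-drawn shadow map value for extra detail."""
    half_lambert = lambert_value * 0.5 + 0.5
    return smoothstep(dark_size, dark_feather, half_lambert) * shadow_map_value
```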
The rendering method of the model enriches the visual effect of the outline and solves the problem that multi-pass rendering of the outline is too monotonous; it solves the problem that the outline becomes thicker as the virtual camera moves away from the model and thinner as it approaches; thickness variation can be added to the outline through parameter adjustment, enriching its visual expressiveness; and it solves the problem that on hard-surface objects or sharp models with many turns the outline breaks and performs poorly.
Corresponding to the above method embodiment, referring to a schematic structural diagram of a rendering device of a model shown in fig. 9, the device includes:
the vertex position processing module 90 is configured to perform offset processing on a vertex position of the target model based on a preset parameter in the model space to obtain a first vertex offset position, and convert the first vertex offset position into a clipping space to obtain a second vertex offset position in the clipping space;
the normal processing module 92 is configured to obtain first normal information of the target model from vertex color information of the target model, and convert the first normal information into a clipping space to obtain second normal information; the first normal information is obtained by correcting initial normal information of the target model;
the rendering module 94 is configured to determine a stroked edge effect parameter based on the second normal information, and render the stroked edge effect of the target model by the stroked edge effect parameter and the second vertex offset position.
The rendering device of the model performs offset processing on the vertex position of the target model based on preset parameters in a model space to obtain a first vertex offset position, and converts the first vertex offset position into a clipping space to obtain a second vertex offset position in the clipping space; acquiring first normal line information of the target model from vertex color information of the target model, and converting the first normal line information into a clipping space to obtain second normal line information; the first normal information is obtained by correcting initial normal information of the target model; and determining a outward-drawing effect parameter based on the second normal information, and rendering to obtain the outward-drawing effect of the target model through the outward-drawing effect parameter and the second vertex offset position.
In the device, the initial normal information of the model is corrected, the corrected first normal information is stored in the vertex color of the model, the outline effect parameter is determined from the normal information in the vertex color, and the outline effect of the model is then rendered in combination with the vertex offset position.
The vertex position processing module is further configured to: acquiring a viewing angle direction parameter and an offset adjustment parameter in a model space; and carrying out offset processing on the vertex position of the target model based on the viewing angle direction parameter and the offset adjustment parameter to obtain a first vertex offset position.
The vertex position processing module is further configured to: multiplying the viewing angle direction parameter and the offset adjustment parameter to obtain a multiplication result; and adding the multiplication result to the vertex position of the target model to obtain a first vertex offset position.
The device further comprises: the normal line storage module is used for acquiring initial normal line information of the target model; smoothing the initial normal information to obtain smoothed initial normal information; and determining first normal line information of the target model based on the smoothed initial normal line information, and storing the first normal line information into a designated channel of vertex color information of the target model.
The normal line preservation module is further used for: mapping the smoothed initial normal information into a preset first numerical range to obtain a mapping result; wherein the first numerical range is matched with the numerical range of the vertex color information; and carrying out coordinate conversion on the mapping result according to a preset coordinate axis mapping relation to obtain first normal information of the target model.
The device further comprises: the mapping module is used for mapping the first normal information into a preset second numerical range to obtain the first normal information in the second numerical range; wherein the second range of values matches the range of values of the initial normal information.
The rendering module is further configured to: and adjusting the second normal information based on a preset outward-drawing edge width parameter to obtain an outward-drawing edge effect parameter.
The rendering module is further configured to: acquiring preset overall width adjustment parameters and local width adjustment parameters; wherein the local width adjustment parameter is stored in a first channel in the vertex color information; the second channel in the vertex color information stores the first normal information; the second channel is different from the first channel; and determining the product of the whole width adjustment parameter, the local width adjustment parameter and the second normal line information as the outward-drawing effect parameter.
The rendering module is further configured to: setting a channel value of a first channel based on preset drawing control parameters and an operation path of the parameter drawing operation in response to a parameter drawing operation acting on a target model; the channel value of the first channel is determined as a local width adjustment parameter.
The device further comprises: the stretching adjustment module is used for determining the drawing edge stretching parameters based on screen parameters of preset display equipment; and adjusting the outward-drawing effect parameters through the outward-drawing stretching parameters to obtain the adjusted outward-drawing effect parameters.
The rendering module is further configured to: and adding the outward-drawing effect parameter and the second vertex offset position to obtain the outward-drawing effect of the target model.
The device further comprises a light ray rendering module for: acquiring parallel light directions in a virtual scene and initial normal information of a target model; determining a first intermediate parameter based on the parallel light direction and the initial normal information; determining a secondary stylized illumination result based on the first intermediate parameter and a preset shadow map; and carrying out interpolation processing on the bright part color mapping and the dark part color mapping of the target model based on the illumination result to obtain the secondary stylized light and shadow effect of the target model.
The light ray rendering module is further used for: mapping the first intermediate parameter to a preset third numerical range to obtain a second intermediate parameter; adjusting the second intermediate parameter based on the preset size control parameter and the feathering control parameter to obtain a third intermediate parameter; wherein the size control parameter is used for: controlling the size proportion of the dark part in the illumination effect; the feathering control parameter is used for: controlling the feathering degree of the dark part in the illumination effect; and determining a secondary stylized illumination result based on the third intermediate parameter and a preset shadow map.
The present embodiment also provides an electronic device including a processor and a memory, the memory storing machine executable instructions executable by the processor, the processor executing the machine executable instructions to implement the rendering method of the model. The electronic device may be a server or a terminal device.
Referring to fig. 10, the electronic device includes a processor 100 and a memory 101, the memory 101 storing machine executable instructions that can be executed by the processor 100, the processor 100 executing the machine executable instructions to implement the above-described model rendering method.
Further, the electronic device shown in fig. 10 further includes a bus 102 and a communication interface 103, and the processor 100, the communication interface 103, and the memory 101 are connected through the bus 102.
The memory 101 may include high-speed random access memory (RAM) and may further include non-volatile memory, such as at least one magnetic disk memory. The communication connection between the system network element and at least one other network element is implemented via at least one communication interface 103 (which may be wired or wireless), and may use the Internet, a wide area network, a local area network, a metropolitan area network, etc. Bus 102 may be an ISA bus, a PCI bus, an EISA bus, or the like; buses may be classified as address buses, data buses, control buses, etc. For ease of illustration, only one bi-directional arrow is shown in FIG. 10, but this does not mean there is only one bus or one type of bus.
The processor 100 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 100 or by instructions in the form of software. The processor 100 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; but also digital signal processors (Digital Signal Processor, DSP for short), application specific integrated circuits (Application Specific Integrated Circuit, ASIC for short), field-programmable gate arrays (Field-Programmable Gate Array, FPGA for short) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components. The disclosed methods, steps, and logic blocks in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly in the execution of a hardware decoding processor, or in the execution of a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in the memory 101, and the processor 100 reads the information in the memory 101 and, in combination with its hardware, performs the steps of the method of the previous embodiment.
The processor in the electronic device may implement the following operations in the rendering method of the model by executing machine executable instructions:
performing offset processing on the vertex position of the target model based on preset parameters in a model space to obtain a first vertex offset position, and converting the first vertex offset position into a clipping space to obtain a second vertex offset position in the clipping space; acquiring first normal line information of the target model from vertex color information of the target model, and converting the first normal line information into a clipping space to obtain second normal line information; the first normal information is obtained by correcting initial normal information of the target model; and determining a outward-drawing effect parameter based on the second normal information, and rendering to obtain the outward-drawing effect of the target model through the outward-drawing effect parameter and the second vertex offset position.
Acquiring a viewing angle direction parameter and an offset adjustment parameter in a model space; and carrying out offset processing on the vertex position of the target model based on the viewing angle direction parameter and the offset adjustment parameter to obtain a first vertex offset position.
Multiplying the viewing angle direction parameter and the offset adjustment parameter to obtain a multiplication result; and adding the multiplication result to the vertex position of the target model to obtain a first vertex offset position.
Acquiring initial normal information of a target model; smoothing the initial normal information to obtain smoothed initial normal information; and determining first normal line information of the target model based on the smoothed initial normal line information, and storing the first normal line information into a designated channel of vertex color information of the target model.
Mapping the smoothed initial normal information into a preset first numerical range to obtain a mapping result; wherein the first numerical range is matched with the numerical range of the vertex color information; and carrying out coordinate conversion on the mapping result according to a preset coordinate axis mapping relation to obtain first normal information of the target model.
Mapping the first normal information into a preset second numerical range to obtain the first normal information in the second numerical range; wherein the second range of values matches the range of values of the initial normal information.
And adjusting the second normal information based on a preset outward-drawing edge width parameter to obtain an outward-drawing edge effect parameter.
Acquiring preset overall width adjustment parameters and local width adjustment parameters; wherein the local width adjustment parameter is stored in a first channel in the vertex color information; the second channel in the vertex color information stores the first normal information; the second channel is different from the first channel; and determining the product of the whole width adjustment parameter, the local width adjustment parameter and the second normal line information as the outward-drawing effect parameter.
Setting a channel value of a first channel based on preset drawing control parameters and an operation path of the parameter drawing operation in response to a parameter drawing operation acting on a target model; the channel value of the first channel is determined as a local width adjustment parameter.
Determining a drawing edge stretching parameter based on screen parameters of preset display equipment; and adjusting the outward-drawing effect parameters through the outward-drawing stretching parameters to obtain the adjusted outward-drawing effect parameters.
And adding the stroked edge effect parameter to the second vertex offset position to obtain the stroked edge effect of the target model.
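Putting the last few steps together: in clip space the offset is the width product times the normal's xy components, corrected by the screen aspect ratio (a common choice for the "stretching parameter", assumed here) so the stroke is equally thick horizontally and vertically, and then added to the clip-space vertex position. Names are illustrative:

```python
def outlined_clip_position(clip_pos, clip_normal_xy, global_width,
                           local_width, screen_w, screen_h):
    """Push an (already view-offset) clip-space vertex outward.

    offset = globalWidth * localWidth * normal.xy, with the x component
    scaled by height/width so a round silhouette gets a uniform stroke
    on a non-square screen; the offset is added to the clip position.
    """
    aspect = screen_h / screen_w  # assumed stroked edge stretching parameter
    ox = clip_normal_xy[0] * global_width * local_width * aspect
    oy = clip_normal_xy[1] * global_width * local_width
    x, y, z, w = clip_pos
    return (x + ox, y + oy, z, w)
```

Rendering this displaced shell with front-face culling behind the original model is the usual way such a stroked edge is drawn, though the patent text itself only specifies the addition.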
Acquiring the parallel light direction in a virtual scene and the initial normal information of the target model; determining a first intermediate parameter based on the parallel light direction and the initial normal information; determining a two-dimensional stylized illumination result based on the first intermediate parameter and a preset shadow map; and performing interpolation processing on the bright-part color map and the dark-part color map of the target model based on the illumination result to obtain the two-dimensional stylized light and shadow effect of the target model.
Mapping the first intermediate parameter into a preset third numerical range to obtain a second intermediate parameter; adjusting the second intermediate parameter based on a preset size control parameter and a preset feathering control parameter to obtain a third intermediate result; wherein the size control parameter is used to control the size proportion of the dark part in the illumination effect, and the feathering control parameter is used to control the feathering degree of the dark part in the illumination effect; and determining the two-dimensional stylized illumination result based on the third intermediate result and the preset shadow map.
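A minimal sketch of this shading chain, with assumptions stated plainly: the first intermediate parameter is taken as dot(L, N), the third-range mapping as the half-Lambert remap 0.5 * x + 0.5, and a smoothstep stands in for the size/feathering adjustment; the patent does not fix these exact formulas, and all parameter names are illustrative:

```python
def stylized_lighting(light_dir, normal, shadow, shadow_size, feather,
                      dark_color, bright_color):
    """Two-dimensional (anime) stylized shading sketch.

    dot(L, N) -> remap to [0, 1] -> feathered threshold around the
    size-controlled bright/dark boundary -> multiply by the shadow-map
    sample -> lerp between the dark-part and bright-part colors.
    """
    ndotl = sum(l * n for l, n in zip(light_dir, normal))
    half_lambert = 0.5 * ndotl + 0.5          # second intermediate parameter

    # shadow_size shifts the boundary; feather widens/softens it.
    edge0, edge1 = shadow_size - feather, shadow_size + feather
    t = min(max((half_lambert - edge0) / (edge1 - edge0 + 1e-6), 0.0), 1.0)
    t = t * t * (3.0 - 2.0 * t)               # smoothstep feathering

    lit = t * shadow                          # combine with the shadow map
    return tuple(d + (b - d) * lit for d, b in zip(dark_color, bright_color))
```

A surface facing the light with an unshadowed shadow-map sample resolves fully to the bright-part color; as ndotl falls below the size-controlled boundary, the result feathers toward the dark-part color.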
In the method, the initial normal information of the model is corrected, the corrected first normal information is stored in the vertex color of the model, the stroked edge effect parameter is determined from the normal information in the vertex color, and the stroked edge effect of the model is then obtained by rendering in combination with the vertex offset position.
The present embodiment also provides a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the above method for rendering a model.
The machine-executable instructions stored on the machine-readable storage medium may implement the following operations in the method for rendering the model:
Performing offset processing on the vertex position of the target model based on preset parameters in a model space to obtain a first vertex offset position, and converting the first vertex offset position into a clipping space to obtain a second vertex offset position in the clipping space; acquiring first normal information of the target model from vertex color information of the target model, and converting the first normal information into the clipping space to obtain second normal information; wherein the first normal information is obtained by correcting initial normal information of the target model; and determining a stroked edge effect parameter based on the second normal information, and rendering, through the stroked edge effect parameter and the second vertex offset position, to obtain the stroked edge effect of the target model.
Acquiring a viewing angle direction parameter and an offset adjustment parameter in a model space; and carrying out offset processing on the vertex position of the target model based on the viewing angle direction parameter and the offset adjustment parameter to obtain a first vertex offset position.
Multiplying the viewing angle direction parameter by the offset adjustment parameter to obtain a multiplication result; and adding the multiplication result to the vertex position of the target model to obtain the first vertex offset position.
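This first offset is a single multiply-add in model space, commonly used to nudge the vertex toward the camera so the stroke shell does not clip into the body. A sketch with illustrative names:

```python
def offset_vertex(vertex_pos, view_dir, offset_adjust):
    """First vertex offset position (model space).

    Multiplies the viewing-angle direction parameter by the offset
    adjustment parameter and adds the product to the vertex position.
    """
    return tuple(v + d * offset_adjust
                 for v, d in zip(vertex_pos, view_dir))
```

The result is then transformed into clip space to become the second vertex offset position used by the stroked edge rendering.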
Acquiring initial normal information of a target model; smoothing the initial normal information to obtain smoothed initial normal information; and determining first normal information of the target model based on the smoothed initial normal information, and storing the first normal information into a designated channel of the vertex color information of the target model.
Mapping the smoothed initial normal information into a preset first numerical range to obtain a mapping result; wherein the first numerical range matches the numerical range of the vertex color information; and performing coordinate conversion on the mapping result according to a preset coordinate-axis mapping relation to obtain the first normal information of the target model.
Mapping the first normal information into a preset second numerical range to obtain the first normal information in the second numerical range; wherein the second range of values matches the range of values of the initial normal information.
And adjusting the second normal information based on a preset stroked edge width parameter to obtain a stroked edge effect parameter.
Acquiring a preset overall width adjustment parameter and a preset local width adjustment parameter; wherein the local width adjustment parameter is stored in a first channel of the vertex color information, the first normal information is stored in a second channel of the vertex color information, and the second channel is different from the first channel; and determining the product of the overall width adjustment parameter, the local width adjustment parameter and the second normal information as the stroked edge effect parameter.
In response to a parameter drawing operation acting on the target model, setting a channel value of the first channel based on a preset drawing control parameter and the operation path of the parameter drawing operation; and determining the channel value of the first channel as the local width adjustment parameter.
Determining a stroked edge stretching parameter based on a screen parameter of a preset display device; and adjusting the stroked edge effect parameter through the stroked edge stretching parameter to obtain an adjusted stroked edge effect parameter.
And adding the stroked edge effect parameter to the second vertex offset position to obtain the stroked edge effect of the target model.
Acquiring the parallel light direction in a virtual scene and the initial normal information of the target model; determining a first intermediate parameter based on the parallel light direction and the initial normal information; determining a two-dimensional stylized illumination result based on the first intermediate parameter and a preset shadow map; and performing interpolation processing on the bright-part color map and the dark-part color map of the target model based on the illumination result to obtain the two-dimensional stylized light and shadow effect of the target model.
Mapping the first intermediate parameter into a preset third numerical range to obtain a second intermediate parameter; adjusting the second intermediate parameter based on a preset size control parameter and a preset feathering control parameter to obtain a third intermediate result; wherein the size control parameter is used to control the size proportion of the dark part in the illumination effect, and the feathering control parameter is used to control the feathering degree of the dark part in the illumination effect; and determining the two-dimensional stylized illumination result based on the third intermediate result and the preset shadow map.
In the method, the initial normal information of the model is corrected, the corrected first normal information is stored in the vertex color of the model, the stroked edge effect parameter is determined from the normal information in the vertex color, and the stroked edge effect of the model is then obtained by rendering in combination with the vertex offset position.
The computer program product of the method, apparatus and electronic device for rendering a model provided in the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the method described in the foregoing method embodiments, and for specific implementation reference may be made to those method embodiments, which are not repeated here.
It will be clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the system and apparatus described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified and limited, the terms "mounted," "connected," and "coupled" are to be construed broadly; for example, a connection may be fixed, detachable, or integral; may be mechanical or electrical; and may be direct, indirect through an intermediate medium, or internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific circumstances.
The functions, if implemented in the form of software functional units and sold or used as an independent product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
In the description of the present invention, it should be noted that the orientations or positional relationships indicated by the terms "center," "upper," "lower," "left," "right," "vertical," "horizontal," "inner," "outer," and the like are based on the orientations or positional relationships shown in the drawings, and are used merely for convenience and simplicity of description; they do not indicate or imply that the devices or elements referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and should not be construed as indicating or implying relative importance.
Finally, it should be noted that the above embodiments are only specific implementations of the present invention, intended to illustrate rather than limit the technical solution of the present invention, and the scope of protection of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that anyone familiar with the technical field may, within the technical scope disclosed by the present invention, modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions of some technical features; such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all fall within the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be defined by the scope of the claims.

Claims (16)

1. A method of rendering a model, the method comprising:
performing offset processing on the vertex position of the target model based on preset parameters in a model space to obtain a first vertex offset position, and converting the first vertex offset position into a clipping space to obtain a second vertex offset position in the clipping space;
acquiring first normal information of the target model from vertex color information of the target model, and converting the first normal information into the clipping space to obtain second normal information; wherein the first normal information is obtained by correcting initial normal information of the target model;
and determining a stroked edge effect parameter based on the second normal information, and rendering, through the stroked edge effect parameter and the second vertex offset position, to obtain the stroked edge effect of the target model.
2. The method according to claim 1, wherein the step of performing an offset process on the vertex positions of the target model based on the preset parameters in the model space to obtain the first vertex offset position includes:
acquiring a viewing angle direction parameter and an offset adjustment parameter in a model space;
and performing offset processing on the vertex position of the target model based on the viewing angle direction parameter and the offset adjustment parameter to obtain the first vertex offset position.
3. The method according to claim 2, wherein the step of performing an offset process on the vertex positions of the object model based on the viewing angle direction parameter and the offset adjustment parameter to obtain the first vertex offset position includes:
multiplying the viewing angle direction parameter by the offset adjustment parameter to obtain a multiplication result;
and adding the multiplication result to the vertex position of the target model to obtain the first vertex offset position.
4. The method of claim 1, wherein prior to the step of obtaining the first normal information of the target model from the vertex color information of the target model, the method further comprises:
acquiring initial normal information of the target model;
smoothing the initial normal information to obtain smoothed initial normal information;
and determining first normal information of the target model based on the smoothed initial normal information, and storing the first normal information into a designated channel of vertex color information of the target model.
5. The method of claim 4, wherein determining the first normal information of the object model based on the smoothed initial normal information comprises:
mapping the smoothed initial normal information into a preset first numerical range to obtain a mapping result; wherein the first numerical range is matched with the numerical range of the vertex color information;
and carrying out coordinate conversion on the mapping result according to a preset coordinate axis mapping relation to obtain first normal information of the target model.
6. The method of claim 1, wherein prior to the step of converting the first normal information into the clipping space to obtain second normal information, the method further comprises:
mapping the first normal information into a preset second numerical range to obtain the first normal information in the second numerical range; wherein the second range of values matches the range of values of the initial normal information.
7. The method of claim 1, wherein the step of determining a stroked edge effect parameter based on the second normal information comprises:
and adjusting the second normal information based on a preset stroked edge width parameter to obtain a stroked edge effect parameter.
8. The method of claim 7, wherein the step of adjusting the second normal information based on a predetermined stroked edge width parameter to obtain a stroked edge effect parameter comprises:
acquiring preset overall width adjustment parameters and local width adjustment parameters; wherein the local width adjustment parameter is stored in a first channel in the vertex color information; the second channel in the vertex color information stores the first normal information; the second channel is different from the first channel;
and determining the product of the overall width adjustment parameter, the local width adjustment parameter and the second normal information as the stroked edge effect parameter.
9. The method of claim 8, wherein the step of obtaining a predetermined local width adjustment parameter comprises:
in response to a parameter drawing operation acting on the target model, setting a channel value of the first channel based on a preset drawing control parameter and an operation path of the parameter drawing operation;
and determining the channel value of the first channel as the local width adjustment parameter.
10. The method of claim 1, wherein prior to the step of rendering the stroked edge effect of the target model through the stroked edge effect parameter and the second vertex offset position, the method further comprises:
determining a stroked edge stretching parameter based on a screen parameter of a preset display device;
and adjusting the stroked edge effect parameter through the stroked edge stretching parameter to obtain an adjusted stroked edge effect parameter.
11. The method of claim 1, wherein the step of rendering the stroked edge effect of the target model from the stroked edge effect parameter and the second vertex offset position comprises:
and adding the stroked edge effect parameter to the second vertex offset position to obtain the stroked edge effect of the target model.
12. The method according to claim 1, wherein the method further comprises:
acquiring a parallel light direction in a virtual scene and initial normal information of the target model;
determining a first intermediate parameter based on the parallel light direction and the initial normal information;
determining a two-dimensional stylized illumination result based on the first intermediate parameter and a preset shadow map;
and performing interpolation processing on the bright-part color map and the dark-part color map of the target model based on the illumination result to obtain the two-dimensional stylized light and shadow effect of the target model.
13. The method of claim 12, wherein determining a two-dimensional stylized illumination result based on the first intermediate parameter and a preset shadow map comprises:
mapping the first intermediate parameter to a preset third numerical range to obtain a second intermediate parameter;
adjusting the second intermediate parameter based on a preset size control parameter and a preset feathering control parameter to obtain a third intermediate result; wherein the size control parameter is used to control the size proportion of the dark part in the illumination effect, and the feathering control parameter is used to control the feathering degree of the dark part in the illumination effect;
and determining the two-dimensional stylized illumination result based on the third intermediate result and the preset shadow map.
14. A rendering device of a model, the device comprising:
the vertex position processing module is used for performing offset processing on the vertex position of the target model based on preset parameters in a model space to obtain a first vertex offset position, and converting the first vertex offset position into a clipping space to obtain a second vertex offset position in the clipping space;
the normal processing module is used for acquiring first normal information of the target model from vertex color information of the target model, and converting the first normal information into the clipping space to obtain second normal information; the first normal information is obtained by correcting initial normal information of the target model;
and the rendering module is used for determining a stroked edge effect parameter based on the second normal information, and rendering, through the stroked edge effect parameter and the second vertex offset position, to obtain the stroked edge effect of the target model.
15. An electronic device comprising a processor and a memory, the memory storing machine executable instructions executable by the processor, the processor executing the machine executable instructions to implement the method of rendering a model of any one of claims 1-13.
16. A machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement a method of rendering a model according to any one of claims 1-13.
CN202310035470.0A 2023-01-10 2023-01-10 Rendering method and device of model and electronic equipment Pending CN116051716A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310035470.0A CN116051716A (en) 2023-01-10 2023-01-10 Rendering method and device of model and electronic equipment


Publications (1)

Publication Number Publication Date
CN116051716A true CN116051716A (en) 2023-05-02

Family

ID=86116112

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310035470.0A Pending CN116051716A (en) 2023-01-10 2023-01-10 Rendering method and device of model and electronic equipment

Country Status (1)

Country Link
CN (1) CN116051716A (en)

Similar Documents

Publication Publication Date Title
CN109427088B (en) Rendering method for simulating illumination and terminal
US9916676B2 (en) 3D model rendering method and apparatus and terminal device
US7583264B2 (en) Apparatus and program for image generation
CN112316420B (en) Model rendering method, device, equipment and storage medium
US7714866B2 (en) Rendering a simulated vector marker stroke
CN109448137B (en) Interaction method, interaction device, electronic equipment and storage medium
CN107657648B (en) Real-time efficient dyeing method and system in mobile game
CN110033507B (en) Method, device and equipment for drawing internal trace of model map and readable storage medium
US7064753B2 (en) Image generating method, storage medium, image generating apparatus, data signal and program
CN113822981B (en) Image rendering method and device, electronic equipment and storage medium
US7944443B1 (en) Sliding patch deformer
US9378579B1 (en) Creation of cloth surfaces over subdivision meshes from curves
US7180523B1 (en) Trimming surfaces
US20020118212A1 (en) Shading polygons from a three-dimensional model
CN104933746B (en) A kind of method and device that dynamic shadow is set for plane picture
CN116051716A (en) Rendering method and device of model and electronic equipment
JP2001101441A (en) Method and device for rendering, game system and computer readable recording medium storing program for rendering three-dimensional model
CN115797539A (en) Shadow effect rendering method and device and electronic equipment
KR101098830B1 (en) Surface texture mapping apparatus and its method
CN117745915B (en) Model rendering method, device, equipment and storage medium
WO2022120800A1 (en) Graphics processing method and apparatus, and device and medium
CN113763527B (en) Hair highlight rendering method, device, equipment and storage medium
US20230410406A1 (en) Computer-readable non-transitory storage medium having image processing program stored therein, image processing apparatus, image processing system, and image processing method
CN118334232A (en) Method and device for realizing highlight effect and electronic equipment
KR102449987B1 (en) Method for automatically setup joints to create facial animation of 3d face model and computer program thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination