CN117058301B - Knitted fabric real-time rendering method based on deferred shading

Knitted fabric real-time rendering method based on deferred shading

Info

Publication number: CN117058301B (grant of application CN202310785438.4A; application publication CN117058301A)
Authority: CN (China)
Original language: Chinese (zh)
Legal status: Active (granted)
Prior art keywords: rendering, knitted fabric, depth, frame buffer
Inventors: 李东盛, 梁金星, 孙平范, 胡新荣, 李立军, 韩开放, 王朋磊, 张江龙, 彭佳佳, 熊超, 彭涛, 王兆静, 徐驰
Assignees: Wuhan Textile University; Ningbo Cixing Co Ltd
Application filed by Wuhan Textile University and Ningbo Cixing Co Ltd on 2023-06-29
Application published as CN117058301A on 2023-11-14
Patent granted as CN117058301B on 2024-03-19

Classifications

    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/20 Geometric effects: perspective computation
    • G06T15/205 Image-based rendering
    • G06T15/04 Texture mapping
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T9/40 Tree coding, e.g. quadtree, octree
    • G06T2207/30244 Camera pose

Abstract

The invention provides a knitted fabric real-time rendering method based on deferred shading, which comprises the following steps: establishing a spatial acceleration tree and performing visibility culling with view-frustum culling; pre-rendering the fabric model into a G-buffer frame buffer; creating a depth frame buffer and rendering a depth cube map; creating an ambient occlusion frame buffer and rendering the SSAO texture; and computing illumination and shadow and applying the ambient occlusion factor, thereby realizing deferred-shading-based real-time rendering of the knitted fabric and obtaining a high-quality, realistic rendering result. The invention overcomes the efficiency limits and lack of realism of the traditional forward rendering approach, so the rendering result is more realistic and detailed, and the structure and material of the knitted fabric are better simulated.

Description

Knitted fabric real-time rendering method based on deferred shading
Technical Field
The invention belongs to the technical field of computer graphics, and particularly relates to a knitted fabric real-time rendering method based on deferred shading.
Background
With the rapid development of computer technology, computer-aided design (CAD) is widely used in the clothing field. CAD technology helps designers track garment design changes during production, reduces steps such as sample-making, shortens the production flow, and lets consumers obtain product information in real time. The continuing development of computer graphics has moved the presentation of clothing design from two-dimensional drawings to three-dimensional display, and accurate three-dimensional rendering gives users a far more realistic experience.
Knitting forms fabric with the loop as its basic unit; basic knitting actions such as looping, tucking, and loop transfer build up complex stitch patterns. The loop therefore serves as the basic unit of the three-dimensional mesh of a knitted garment and determines the overall visual fidelity of its three-dimensional simulated rendering.
In conventional rendering methods, the rendering pipeline typically includes several stages: geometry processing, illumination computation, and pixel rendering. However, for a complex material such as knitted fabric, formed from mutually interlocking loops, forward rendering often cannot handle the occlusion relationships effectively, so a great deal of computation time is wasted on illumination calculation, and variations in light-scattering intensity cannot be simulated. How to render knitted fabric in three dimensions accurately and in real time has therefore become an important research topic in clothing CAD and computer graphics.
Disclosure of Invention
The invention aims to solve the problems noted in the background art and provides a knitted fabric real-time rendering method based on deferred shading. The real-time knitted garment rendering method is developed with OpenGL; loop type-value points are managed with a spatial acceleration structure and view-frustum culling to improve real-time rendering efficiency. The model is pre-rendered with deferred shading, the Blinn-Phong illumination model is applied, and knitted fabric shadows and ambient occlusion are added to further improve the realism of the garment fabric.
The specific implementation of the invention comprises the following steps:
Step 1, build a spatial acceleration tree and perform visibility culling with view-frustum culling, removing invisible parts of the knitted fabric and reducing the rendering workload;
Step 2, pre-render the fabric model into a G-buffer frame buffer for subsequent illumination, shadow calculation, and realistic rendering;
Step 3, create a depth frame buffer and render a depth cube map, obtaining depth values that provide the basis for shadow calculation in the subsequent deferred shading;
Step 4, create an ambient occlusion frame buffer and render the SSAO texture, providing ambient occlusion information for the subsequent rendering process;
Step 5, compute illumination and shadow, apply the ambient occlusion factor, and blend with the knitted fabric's position, normal, and color attribute textures, realizing deferred-shading-based real-time rendering of the knitted fabric and obtaining a high-quality, realistic result.
Further, in step 1, the spatial acceleration tree is built and visibility culling is performed with view-frustum culling, as follows:
First, the spatial acceleration tree is created: the knitted fabric model in the scene is divided into a series of sub-regions that form the tree's data structure. Such structures organize and manage the spatial positions of the models effectively. Taking the octree as an example, the algorithm is: (1) set a recursion termination condition, such as a threshold on the number of objects per node; (2) set the size of the first (root) cube; (3) store knitted loop type-value points in the leaf nodes; (4) when a new point is added to the octree, recursively traverse the tree to find the appropriate leaf node to store it; if the number of points stored in a node exceeds the preset threshold, subdivide the node into eight child nodes and redistribute its points to the corresponding children; (5) recurse until no leaf node stores more points than the threshold.
Next, the view frustum is determined: from the camera's position and view direction, the frustum, i.e., the visible region of the camera's field of view, is determined in each frame. The perspective projection matrix of the frustum is computed as in formula (1), the standard perspective projection matrix:

M_projection = | 1/(Aspect·tan(FOV/2))  0             0                        0                      |
               | 0                      1/tan(FOV/2)  0                        0                      |
               | 0                      0              -(Far+Near)/(Far-Near)  -2·Far·Near/(Far-Near) |
               | 0                      0              -1                       0                      |    (1)

where FOV is the view angle of the frustum, Aspect is its aspect ratio, and Far and Near are the distances from the frustum's far and near clipping planes to the camera. Once these parameters are determined, the perspective projection matrix is constructed.
Then frustum culling is performed: intersection tests between the frustum and the spatial acceleration tree determine which sub-regions lie completely outside the frustum, i.e., the invisible parts. These sub-regions can be culled entirely, since the knitted fabric model inside them is not visible. First the world coordinate P_world must be converted to the projection coordinate P_projection, computed as in formula (2):

P_projection = M_projection · M_view · P_world    (2)

After the coordinate conversion, the six planes of the frustum are each defined by a normal vector; the distance from each sub-region vertex to each plane is computed with a dot product, and comparing these distances determines whether the sub-region lies inside the frustum or intersects a plane. If the culling condition is still not met when the loop over planes finishes, the object is inside or intersects the frustum and must be rendered.
Finally, fine-grained culling: for sub-regions that intersect the frustum, each model value point is tested for intersection against the frustum and the bounding box to determine which part of the sub-region lies outside the frustum, and the invisible part is culled.
Through the above steps, the spatial acceleration tree is built and visibility culling is performed with view-frustum culling, so invisible parts of the knitted fabric are removed and the rendering workload is reduced, achieving more efficient real-time rendering. This helps improve the rendering performance and interactivity of the knitted fabric scene.
Further, in step 2, after the visible knitted fabric model data is sent to the vertex buffer object, the fabric model is pre-rendered to the G-buffer frame buffer, and the specific method is as follows:
first, a G-buffer frame buffer is created for storing pre-rendering data of the fabric model. G-buffer is a multi-channel buffer area for storing various geometric and illumination attributes;
the G-buffer frame buffer is then bound as the render target so that subsequent rendering operations can write data into the frame buffer.
Next, a G-buffer frame buffer is configured, which contains a plurality of color buffers and a single depth rendering buffer object.
Finally, using the multiple render targets (MRT) technique in the fragment shader, the position, normal, and color attributes of the model are computed and stored into the corresponding channels of the G-buffer.
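A minimal OpenGL setup sketch for this G-buffer follows, assuming a current GL context and framebuffer dimensions width and height; the RGBA16F format for the two high-precision attachments and all variable names are assumptions consistent with the embodiment, not lifted from the patent.

```cpp
GLuint gBuffer = 0, gPosition = 0, gNormal = 0, gAlbedoSpec = 0, rboDepth = 0;
glGenFramebuffers(1, &gBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, gBuffer);

auto addColorAttachment = [&](GLuint& tex, GLint internalFmt, GLenum type, int slot) {
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, internalFmt, width, height, 0, GL_RGBA, type, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + slot,
                           GL_TEXTURE_2D, tex, 0);
};
addColorAttachment(gPosition,   GL_RGBA16F, GL_FLOAT,         0); // position + depth
addColorAttachment(gNormal,     GL_RGBA16F, GL_FLOAT,         1); // normal
addColorAttachment(gAlbedoSpec, GL_RGBA,    GL_UNSIGNED_BYTE, 2); // color + specular

// Declare the three color attachments as simultaneous render targets (MRT).
const GLenum drawBufs[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1,
                             GL_COLOR_ATTACHMENT2 };
glDrawBuffers(3, drawBufs);

// Single depth renderbuffer object, as described above.
glGenRenderbuffers(1, &rboDepth);
glBindRenderbuffer(GL_RENDERBUFFER, rboDepth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, rboDepth);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    ; // handle incomplete framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```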
Through the above steps, the visible knitted fabric model data is pre-rendered into the G-buffer frame buffer for use by the subsequent illumination, shadow calculation, and realistic rendering steps. This ensures that the detail and texture of the knitted fabric are accurately preserved and presented in real-time rendering.
Further, in step 3, a depth frame buffer is created, and a depth cube map is rendered, specifically by the following method:
first, a frame buffer object is created for storing depth information. This frame buffer will be used to render the depth cube map.
Next, a depth texture attachment is created for the depth frame buffer, which attachment will store depth information. Depth texture attachments may provide more flexibility in that depth information may be read directly in a subsequent rendering step.
The depth frame buffer is then bound, and the rendering state is configured to ensure proper depth value computation and shadow casting.
Finally, for each light source, a depth cube map of the scene is rendered.
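A minimal sketch of this depth frame buffer follows, assuming a current GL context; the resolution and the per-face attachment strategy (attaching one cube face at a time rather than using layered rendering) are illustrative choices, not specified by the patent.

```cpp
const GLsizei SHADOW_RES = 1024;
GLuint depthFBO = 0, depthCubemap = 0;
glGenFramebuffers(1, &depthFBO);
glGenTextures(1, &depthCubemap);
glBindTexture(GL_TEXTURE_CUBE_MAP, depthCubemap);
for (GLuint face = 0; face < 6; ++face)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_DEPTH_COMPONENT,
                 SHADOW_RES, SHADOW_RES, 0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

glBindFramebuffer(GL_FRAMEBUFFER, depthFBO);
glDrawBuffer(GL_NONE);  // depth-only pass: no color attachment
glReadBuffer(GL_NONE);
// Per light and per face: attach one cube face, clear, render scene depth.
for (GLuint face = 0; face < 6; ++face) {
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, depthCubemap, 0);
    glClear(GL_DEPTH_BUFFER_BIT);
    // renderSceneDepth(face);  // hypothetical scene-draw call
}
```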
Through the above steps, creating a depth frame buffer and rendering a depth cube map provide the basis for shadow calculation in the subsequent deferred shading. The depth cube map stores depth information for each region of the scene, used for shadow casting during real-time rendering, which enhances the realism and depth of the knitted fabric.
Further, in step 4, an ambient occlusion frame buffer is created and the SSAO texture is rendered, as follows:
First, a frame buffer object is created for storing ambient occlusion information. This frame buffer will be used to render the SSAO texture.
Next, a texture attachment is created for the ambient occlusion frame buffer to store the SSAO texture. This texture will store the ambient occlusion factor of every pixel in the scene.
Then the ambient occlusion frame buffer is bound, and the rendering state is configured to ensure correct depth-value calculation and rendering results.
Finally, the SSAO texture is rendered, computing an ambient occlusion factor for each pixel. The algorithm is: (1) generate a set of random sampling vectors around the pixel; (2) transform the sampling vectors from world space to screen space; (3) compare each screen-space sample's depth with the depth stored in the G-buffer; if the G-buffer depth is greater than the hemisphere sample's depth, the sample counts toward the final occlusion factor; (4) average the contributions with the sampling weights and normalize to obtain the final occlusion factor; (5) store the factor in the corresponding pixel of the SSAO texture.
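The following condensed GLSL fragment shader, embedded as a C++ string, sketches this occlusion loop. It is a sketch under assumptions: positions and normals are taken to be stored in view space in the G-buffer, the usual random-rotation noise texture is omitted for brevity, and all uniform and variable names are illustrative.

```cpp
const char* kSsaoFragmentSrc = R"glsl(
#version 330 core
out float Occlusion;
in  vec2 uv;
uniform sampler2D gPosition;   // view-space position from the G-buffer
uniform sampler2D gNormal;     // view-space normal from the G-buffer
uniform vec3  samples[64];     // hemisphere sampling kernel
uniform mat4  projection;
const float radius = 0.5;
const float bias   = 0.025;

void main() {
    vec3 fragPos = texture(gPosition, uv).xyz;
    vec3 n       = normalize(texture(gNormal, uv).xyz);
    float occluded = 0.0;
    for (int i = 0; i < 64; ++i) {
        // (1) sample around the point, flipped into the normal's hemisphere
        vec3 dir = (dot(samples[i], n) < 0.0) ? -samples[i] : samples[i];
        vec3 samplePos = fragPos + radius * dir;
        // (2) project the sample into screen space
        vec4 clip = projection * vec4(samplePos, 1.0);
        vec2 suv  = clip.xy / clip.w * 0.5 + 0.5;
        // (3) compare against the depth stored in the G-buffer
        float sceneZ = texture(gPosition, suv).z;
        if (sceneZ >= samplePos.z + bias) occluded += 1.0;
    }
    // (4) average and normalize to the final occlusion factor
    Occlusion = 1.0 - occluded / 64.0;   // (5) written to the SSAO texture
}
)glsl";
```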
Through the above steps, the ambient occlusion frame buffer is created and the SSAO texture is rendered, providing ambient occlusion information for the subsequent rendering process. The SSAO texture stores an ambient occlusion factor per pixel, used during the deferred shading stage to simulate the occlusion of ambient light and thereby enhance the realism and lighting of the knitted fabric.
Further, in step 5, illumination and shadow are computed and the ambient occlusion factor is applied, realizing deferred-shading-based real-time rendering of the knitted fabric and obtaining a high-quality, realistic result, as follows:
First, the necessary information, such as color, normal, and depth, is extracted from the data rendered into the G-buffer frame buffer in step 2. This information is used for the subsequent illumination and shadow calculations.
Next, using the extracted normal and position information, the illumination value of each pixel is computed according to the adopted illumination model. This involves an illumination calculation per light source that accounts for its position, intensity, and color. Taking the Blinn-Phong illumination model as an example, the calculation principle is shown in formula (3):
L = L_a + L_d + L_s = k_a·I_a + k_d·(I/r²)·max(0, n·l) + k_s·(I/r²)·max(0, n·h)^p    (3)

where L is the illumination value, composed of the three components L_a (ambient), L_d (diffuse), and L_s (specular); k_a, k_d, and k_s are the coefficients of the corresponding components; I_a is the ambient light energy; (I/r²) expresses that light intensity attenuates with the inverse square of the distance; n is the normal vector; l is the light-source vector; max(0, n·l) is the energy received at the current point; max(0, n·h)^p gives the specular intensity, with p the specular attenuation exponent; and h is the half-way vector, the unit vector halfway between the light-source vector l and the viewing direction v.
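As a worked example, the following glm-based C++ function transcribes formula (3) directly; n, l, and v are assumed normalized, I is the light intensity/color and r the distance to the light, and the function name is illustrative.

```cpp
#include <algorithm>
#include <cmath>
#include <glm/glm.hpp>

glm::vec3 blinnPhong(const glm::vec3& ka, const glm::vec3& kd, const glm::vec3& ks,
                     const glm::vec3& Ia, const glm::vec3& I, float r,
                     const glm::vec3& n, const glm::vec3& l, const glm::vec3& v,
                     float p) {
    const glm::vec3 h     = glm::normalize(l + v);  // half-way vector
    const glm::vec3 atten = I / (r * r);            // inverse-square falloff
    const glm::vec3 La = ka * Ia;                                      // ambient
    const glm::vec3 Ld = kd * atten * std::max(0.0f, glm::dot(n, l));  // diffuse
    const glm::vec3 Ls = ks * atten *
                         std::pow(std::max(0.0f, glm::dot(n, h)), p);  // specular
    return La + Ld + Ls;  // L = La + Ld + Ls
}
```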
Then the shadow value of each pixel is computed from the illumination result and the depth values of step 3, taking into account the light-source position and the occlusion of the knitted fabric to simulate the blocking of light.
Next, the SSAO texture (ambient occlusion factor) rendered in step 4 is multiplied with the ambient component of the illumination model to simulate the occlusion of ambient light, further enhancing the realism and lighting of the knitted fabric.
Finally, the computed illumination, shadow, and ambient-occlusion results are blended with the knitted fabric's position, normal, and color attribute textures to generate the final rendered image.
Addressing the current limitations of knitted fabric in three-dimensional simulated rendering, the invention provides a new real-time knitted fabric rendering method that achieves high rendering speed through optimization algorithms and deferred shading. The computation of illumination, shadow, and ambient occlusion during rendering gives the knitted fabric a more realistic lighting effect and finer detail. The invention overcomes the efficiency limits and lack of realism of the traditional forward rendering approach, so the rendering result is more realistic and detailed, and the structure and material of the knitted fabric are better simulated.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
Fig. 2 is a schematic diagram of view-frustum culling according to an embodiment of the present invention.
FIG. 3 is a diagram of three texture attachments stored in a G-buffer frame buffer according to an embodiment of the present invention, wherein (a) is a position and depth texture attachment diagram, (b) is a normal vector texture attachment diagram, and (c) is a color texture attachment diagram.
FIG. 4 is a schematic diagram of a depth cube map according to an embodiment of the present invention.
FIG. 5 is a schematic illustration of SSAO textures according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of a high-quality, realistic rendering result based on deferred shading according to an embodiment of the present invention.
Detailed Description
The technical scheme of the invention can be implemented by a person skilled in the art using computer software technology.
With reference to Fig. 1, the embodiment of the invention provides a knitted fabric real-time rendering method based on deferred shading, which specifically comprises the following steps:
Step 1, build a spatial acceleration tree and perform visibility culling with view-frustum culling, removing invisible parts of the knitted fabric and reducing the rendering workload;
Step 2, pre-render the fabric model into a G-buffer frame buffer for subsequent illumination, shadow calculation, and realistic rendering;
Step 3, create a depth frame buffer and render a depth cube map, obtaining depth values that provide the basis for shadow calculation in the subsequent deferred shading;
Step 4, create an ambient occlusion frame buffer and render the SSAO texture, providing ambient occlusion information for the subsequent rendering process;
Step 5, compute illumination and shadow, apply the ambient occlusion factor, and blend with the knitted fabric's position, normal, and color attribute textures, realizing deferred-shading-based real-time rendering of the knitted fabric and obtaining a high-quality, realistic result.
The processing of each step is described in detail below through an embodiment, in which the method of the invention was tested on a model file of a knitted fabric.
In step 1, the embodiment builds the spatial acceleration tree from the spatial coordinates of the model value points in the knitted fabric model file, and performs visibility culling with view-frustum culling, as follows:
First, the spatial acceleration tree is created: the knitted fabric model in the scene is divided into a series of sub-regions that form the tree's data structure. Such structures organize and manage the spatial positions of the models effectively. Taking the octree as an example, the algorithm is: (1) set a recursion termination condition, such as a threshold on the number of objects per node; (2) set the size of the first (root) cube; (3) store knitted loop type-value points in the leaf nodes; (4) when a new point is added to the octree, recursively traverse the tree to find the appropriate leaf node to store it; if the number of points stored in a node exceeds the preset threshold, subdivide the node into eight child nodes and redistribute its points to the corresponding children; (5) recurse until no leaf node stores more points than the threshold. In the embodiment, the node threshold of the octree is set to 3000.
Next, the view frustum is determined: from the camera's position and view direction, the frustum, i.e., the visible region of the camera's field of view, is determined in each frame; its perspective projection matrix is computed as in formula (1) above, where FOV is the view angle of the frustum, Aspect is its aspect ratio, and Far and Near are the distances from the frustum's far and near clipping planes to the camera. In the embodiment, FOV is 45°, Aspect is 16/9, and Far and Near are 100 and 0.01, respectively.
Then frustum culling is performed: intersection tests between the frustum and the spatial acceleration tree determine which sub-regions lie completely outside the frustum, i.e., the invisible parts. These sub-regions can be culled entirely, since the knitted fabric model inside them is not visible. First the world coordinates must be converted to projection coordinates, as in formula (2) above.
After the coordinate conversion, the six planes of the frustum are each defined by a normal vector; the distance from each sub-region vertex to each plane is computed with a dot product, and comparing these distances determines whether the sub-region lies inside the frustum or intersects a plane. If the culling condition is still not met when the loop finishes, the object is inside or intersects the frustum and must be rendered.
Finally, fine-grained culling: for sub-regions that intersect the frustum, each model value point is tested for intersection against the frustum and the bounding box to determine which part of the sub-region lies outside the frustum, and the invisible part is culled.
Through the above steps, the spatial acceleration tree is built and visibility culling is performed with view-frustum culling, so invisible parts of the knitted fabric are removed, the rendering workload is reduced, and more efficient real-time rendering is achieved. This helps improve the rendering performance and interactivity of the knitted fabric scene. A schematic of the frustum culling in the embodiment is shown in Fig. 2.
In step 2, after the visible knitted fabric model data is sent to the vertex buffer object, the fabric model is pre-rendered into the G-buffer frame buffer, as follows:
first, a G-buffer frame buffer is created for storing pre-rendering data of the fabric model. G-buffer is a multi-channel buffer for storing various geometric and illumination attributes.
The G-buffer frame buffer is then bound as the render target so that subsequent rendering operations can write data into the frame buffer.
Next, a G-buffer frame buffer is configured, which contains a plurality of color buffers and a single depth rendering buffer object. In an embodiment, three color buffers are enabled, including two high precision textures and a default texture.
Finally, using the multiple render targets (MRT) technique in the fragment shader, the position, normal, and color attributes of the model are computed and stored into the corresponding channels of the G-buffer. In the embodiment, the position and depth data are stored in one high-precision texture, the normals in a separate high-precision texture, and the color and specular-intensity data in a default texture.
Through the above steps, the visible knitted fabric model data is pre-rendered into the G-buffer frame buffer for use by the subsequent illumination, shadow calculation, and realistic rendering steps. This ensures that the detail and texture of the knitted fabric are accurately preserved and presented in real-time rendering. The three texture attachments stored in the G-buffer frame buffer in the embodiment (position and depth, normal vector, color) are shown in Fig. 3.
In step 3, a depth frame buffer is created, and the method for rendering the depth cube map is as follows:
first, a frame buffer object is created for storing depth information. This frame buffer will be used to render the depth cube map.
Next, a depth texture attachment is created for the depth frame buffer, which attachment will store depth information. Depth texture attachments may provide more flexibility in that depth information may be read directly in a subsequent rendering step. In an embodiment, a cube map is created as a depth attachment to the frame buffer for subsequent point source shadow mapping.
The depth frame buffer is then bound, and the rendering state is configured to ensure proper depth value computation and shadow casting.
Finally, for each light source, a depth cube map of the scene is rendered. In the embodiment the algorithm is: (1) for the six faces of the cube, render the scene from the light-source position to generate depth information; (2) render with the projection matrix and the corresponding per-face view matrix to capture the part of the scene visible from the light source; (3) store the depth information of each face into the depth texture attachment.
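A sketch of the per-face projection and view matrices assumed for step (2) follows: a 90-degree FOV with square aspect covers each cube face exactly, and the up-vectors follow the usual cube-map face conventions. lightPos, nearPlane, and farPlane are illustrative variables.

```cpp
#include <array>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 shadowProj = glm::perspective(glm::radians(90.0f), 1.0f, nearPlane, farPlane);
std::array<glm::mat4, 6> shadowViews = {
    glm::lookAt(lightPos, lightPos + glm::vec3( 1, 0, 0), glm::vec3(0, -1,  0)), // +X
    glm::lookAt(lightPos, lightPos + glm::vec3(-1, 0, 0), glm::vec3(0, -1,  0)), // -X
    glm::lookAt(lightPos, lightPos + glm::vec3( 0, 1, 0), glm::vec3(0,  0,  1)), // +Y
    glm::lookAt(lightPos, lightPos + glm::vec3( 0,-1, 0), glm::vec3(0,  0, -1)), // -Y
    glm::lookAt(lightPos, lightPos + glm::vec3( 0, 0, 1), glm::vec3(0, -1,  0)), // +Z
    glm::lookAt(lightPos, lightPos + glm::vec3( 0, 0,-1), glm::vec3(0, -1,  0)), // -Z
};
// Each face is rendered with shadowProj * shadowViews[i], and the resulting
// depth is stored in the corresponding cube-map face.
```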
Through the above steps, creating the depth frame buffer and rendering the depth cube map provide the basis for shadow calculation in the subsequent deferred shading. The depth cube map stores depth information for each region of the scene, used for shadow casting during real-time rendering, which enhances the realism and depth of the knitted fabric. A schematic of the depth cube map in the embodiment is shown in Fig. 4.
In step 4, an ambient occlusion frame buffer is created and the SSAO texture is rendered, as follows:
First, a frame buffer object is created for storing ambient occlusion information. This frame buffer will be used to render the SSAO texture.
Next, a texture attachment is created for the ambient occlusion frame buffer to store the SSAO texture. This texture will store the ambient occlusion factor of every pixel in the scene.
Then the ambient occlusion frame buffer is bound, and the rendering state is configured to ensure correct depth-value calculation and rendering results.
Finally, the SSAO texture is rendered, computing an ambient occlusion factor for each pixel. The algorithm is: (1) generate a set of random sampling vectors around the pixel; (2) transform the sampling vectors from world space to screen space; (3) compare each screen-space sample's depth with the depth stored in the G-buffer; if the G-buffer depth is greater than the hemisphere sample's depth, the sample counts toward the final occlusion factor; (4) average the contributions with the sampling weights and normalize to obtain the final occlusion factor; (5) store the factor in the corresponding pixel of the SSAO texture. In the embodiment, a sampling kernel of 64 samples is used to compute the occlusion factor.
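A sketch of generating the 64-sample kernel follows, using common SSAO practice: samples lie in the tangent-space +z hemisphere and are scaled toward the origin so nearby occluders contribute more. The hemisphere distribution and the quadratic scaling are conventional choices, not specified by the patent, and the function name is illustrative.

```cpp
#include <random>
#include <vector>
#include <glm/glm.hpp>

std::vector<glm::vec3> makeSsaoKernel(int kernelSize = 64) {
    std::mt19937 rng{std::random_device{}()};
    std::uniform_real_distribution<float> u01(0.0f, 1.0f);
    std::vector<glm::vec3> kernel;
    kernel.reserve(kernelSize);
    for (int i = 0; i < kernelSize; ++i) {
        // Random direction in the +z hemisphere, random length in [0, 1].
        glm::vec3 s(u01(rng) * 2.0f - 1.0f, u01(rng) * 2.0f - 1.0f, u01(rng));
        s = glm::normalize(s) * u01(rng);
        // Weight samples toward the shaded point (quadratic falloff).
        float scale = static_cast<float>(i) / kernelSize;
        s *= 0.1f + 0.9f * scale * scale;
        kernel.push_back(s);
    }
    return kernel;
}
```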
Through the above steps, the ambient occlusion frame buffer is created and the SSAO texture is rendered, providing ambient occlusion information for the subsequent rendering process. The SSAO texture stores an ambient occlusion factor per pixel, used during the deferred shading stage to simulate the occlusion of ambient light and thereby enhance the realism and lighting of the knitted fabric. A schematic of the SSAO texture in the embodiment is shown in Fig. 5.
In step 5, illumination and shadow are computed and the ambient occlusion factor is applied, realizing deferred-shading-based real-time rendering of the knitted fabric and obtaining a high-quality, realistic result, as follows:
First, the required information is extracted from the data rendered into the G-buffer frame buffer in step 2; in the embodiment, the three texture attachments of position and depth, normal vector, and color are extracted. This information is used for the subsequent illumination and shadow calculations.
Next, using the extracted normal and position information, the illumination value of each pixel is computed according to the adopted illumination model. This involves an illumination calculation per light source that accounts for its position, intensity, and color. In the embodiment, the Blinn-Phong illumination model is used; the calculation principle is shown in formula (3):
L = L_a + L_d + L_s = k_a·I_a + k_d·(I/r²)·max(0, n·l) + k_s·(I/r²)·max(0, n·h)^p    (3)

with the symbols as defined for formula (3) above: L is the illumination value, composed of the ambient, diffuse, and specular components L_a, L_d, and L_s; k_a, k_d, and k_s are the corresponding coefficients; I_a is the ambient light energy; (I/r²) is the inverse-square attenuation of light intensity; n the normal vector; l the light-source vector; max(0, n·l) the energy received at the current point; max(0, n·h)^p the specular intensity with attenuation exponent p; and h the half-way vector between l and the viewing direction v.
Then the shadow value of each pixel is computed from the illumination result and the depth values of step 3, taking into account the light-source position and the occlusion of the knitted fabric to simulate the blocking of light.
Next, the SSAO texture (ambient occlusion factor) rendered in step 4 is multiplied with the ambient component of the illumination model to simulate the occlusion of ambient light, further enhancing the realism and lighting of the knitted fabric.
Finally, the computed illumination, shadow, and ambient-occlusion results are blended with the knitted fabric's position, normal, and color attribute textures to generate the final rendered image. A schematic of the high-quality, realistic deferred-shading rendering result in the embodiment is shown in Fig. 6.
The specific embodiments described herein are offered by way of example only to illustrate the spirit of the invention. Those skilled in the art may make various modifications or additions to the described embodiments or substitutions thereof without departing from the spirit of the invention or exceeding the scope of the invention as defined in the accompanying claims.

Claims (7)

1. A knitted fabric real-time rendering method based on deferred shading, characterized by comprising the following steps:
Step 1, build a spatial acceleration tree and perform visibility culling with view-frustum culling, removing invisible parts of the knitted fabric and reducing the rendering workload;
step 1 is implemented as follows:
first, the spatial acceleration tree is created: the knitted fabric model in the scene is divided into a series of sub-regions that form the tree's data structure;
next, the view frustum is determined: from the camera's position and view direction, the frustum, i.e., the visible region of the camera's field of view, is determined in each frame; the perspective projection matrix of the frustum is computed as in formula (1):

M_projection = | 1/(Aspect·tan(FOV/2))  0             0                        0                      |
               | 0                      1/tan(FOV/2)  0                        0                      |
               | 0                      0              -(Far+Near)/(Far-Near)  -2·Far·Near/(Far-Near) |
               | 0                      0              -1                       0                      |    (1);

where FOV is the view angle of the frustum, Aspect is its aspect ratio, and Far and Near are the distances from the frustum's far and near clipping planes to the camera; after these parameters are determined, the perspective projection matrix is constructed;
then frustum culling is performed: intersection tests between the frustum and the spatial acceleration tree determine the sub-regions lying completely outside the frustum, i.e., the invisible parts, and these sub-regions are culled entirely; first the world coordinate P_world must be converted to the projection coordinate P_projection, computed as in formula (2):

P_projection = M_projection · M_view · P_world    (2);

after the coordinate conversion, the six planes of the frustum are each defined by a normal vector; the distance from each sub-region vertex to each plane is computed with a dot product, and comparing these distances determines whether the sub-region lies inside the frustum or intersects a plane; if the culling condition is still not met when the loop finishes, the object is inside or intersects the frustum and must be rendered;
finally, fine-grained culling: for sub-regions intersecting the frustum, each model value point is tested for intersection against the frustum and the bounding box to determine the part of the sub-region lying outside the frustum, and the invisible part is culled;
Step 2, pre-render the fabric model into a G-buffer frame buffer for subsequent illumination, shadow calculation, and realistic rendering;
Step 3, create a depth frame buffer and render a depth cube map, obtaining depth values that provide the basis for shadow calculation in the subsequent deferred shading;
Step 4, create an ambient occlusion frame buffer and render the SSAO texture, providing ambient occlusion information for the subsequent rendering process;
Step 5, compute illumination and shadow, apply the ambient occlusion factor, and blend with the knitted fabric's position, normal, and color attribute textures, realizing deferred-shading-based real-time rendering of the knitted fabric and obtaining a high-quality, realistic result.
2. The deferred-shading-based knitted fabric real-time rendering method according to claim 1, characterized in that: the octree data structure is built with the following algorithm: (1) set a recursion termination condition comprising a threshold on the number of objects per node; (2) set the size of the first (root) cube; (3) store the knitted fabric loop type-value points in the leaf nodes; (4) when a new point is added to the octree, recursively traverse the whole tree to find the appropriate leaf node to store it; if the number of points stored in a node exceeds the preset threshold, subdivide the node into eight child nodes and redistribute its points to the corresponding children; (5) recurse until no leaf node stores more data points than the preset threshold.
3. The deferred-shading-based knitted fabric real-time rendering method according to claim 1, characterized in that step 2 is implemented as follows:
first, a G-buffer frame buffer is created for storing the pre-rendered data of the knitted fabric model; the G-buffer is a multi-channel buffer storing various geometric and illumination attributes;
then, binding the G-buffer frame buffer as a rendering target so that the subsequent rendering operation writes data into the frame buffer;
next, configuring a G-buffer frame buffer, which includes a plurality of color buffers and a single depth rendering buffer object;
finally, using the multiple render targets (MRT) technique in the fragment shader, the position, normal, and color attributes of the knitted fabric model are computed and stored into the corresponding channels of the G-buffer.
4. The deferred-shading-based knitted fabric real-time rendering method according to claim 1, characterized in that step 3 is implemented as follows:
first, a frame buffer object is created for storing depth information, and this frame buffer will be used for rendering the depth cube map;
next, a depth texture attachment is created for the depth frame buffer, which attachment will store depth information;
then binding depth frame buffer, configuring rendering state to ensure correct depth value calculation and shadow casting;
finally, for each light source, a depth cube map of the scene is rendered.
5. The deferred-shading-based knitted fabric real-time rendering method according to claim 1, characterized in that step 4 is implemented as follows:
first, a frame buffer object is created for storing ambient occlusion information; this frame buffer is used to render the SSAO texture;
next, a texture attachment is created for the ambient occlusion frame buffer to store the SSAO texture, which stores the ambient occlusion factor of every pixel in the scene;
then the ambient occlusion frame buffer is bound, and the rendering state is configured to ensure correct depth-value calculation and rendering results;
finally, the SSAO texture is rendered and an ambient occlusion factor is computed for each pixel, with the following algorithm: (1) generate a set of random sampling vectors around the pixel; (2) transform the sampling vectors from world space to screen space; (3) compare each screen-space sample's depth with the depth stored in the G-buffer; if the G-buffer depth is greater than the hemisphere sample's depth, the sample counts toward the final occlusion factor; (4) average the contributions with the sampling weights and normalize to obtain the final occlusion factor; (5) store the factor in the corresponding pixel of the SSAO texture.
6. The deferred-shading-based knitted fabric real-time rendering method according to claim 1, characterized in that step 5 is implemented as follows:
first, the required information, including color, normal, and depth, is extracted from the data rendered into the G-buffer frame buffer in step 2; this information is used for the subsequent illumination and shadow calculations;
next, using the extracted normal and position information, the illumination value of each pixel is computed according to the adopted illumination model, here the Blinn-Phong illumination model, whose calculation principle is shown in formula (3):

L = L_a + L_d + L_s = k_a·I_a + k_d·(I/r²)·max(0, n·l) + k_s·(I/r²)·max(0, n·h)^p    (3);

where L is the illumination value, composed of the ambient L_a, diffuse L_d, and specular L_s components; k_a, k_d, and k_s are the coefficients of the corresponding components; I_a is the ambient light energy; (I/r²) expresses that light intensity attenuates with the inverse square of the distance; n is the normal vector; l is the light-source vector; max(0, n·l) is the energy received at the current point; max(0, n·h)^p gives the specular intensity, with p the specular attenuation exponent; and h is the half-way vector, the unit vector halfway between the light-source vector l and the viewing direction v;
then the shadow value of each pixel is computed from the illumination result and the depth values of step 3;
next, the SSAO texture obtained by rendering in step 4 is multiplied with the ambient component of the illumination model to simulate the occlusion of ambient light;
finally, the computed illumination, shadow, and ambient-occlusion results are blended with the knitted fabric's position, normal, and color attribute textures to generate the final rendered image.
7. The deferred-shading-based knitted fabric real-time rendering method according to claim 1, characterized in that: FOV is 45°, Aspect is 16/9, and Far and Near are 100 and 0.01, respectively.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant