WO2023098344A1 - Graphics processing method and apparatus, computer device, and storage medium - Google Patents

Graphics processing method and apparatus, computer device, and storage medium

Info

Publication number
WO2023098344A1
Authority
WO
WIPO (PCT)
Prior art keywords
map
sampling layer
processing
information
target
Prior art date
Application number
PCT/CN2022/127456
Other languages
English (en)
French (fr)
Inventor
宋田骥
刘欢
陈烨
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2023098344A1

Links

Images

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 — 3D [Three Dimensional] image rendering
    • G06T 15/10 — Geometric effects
    • G06T 15/20 — Perspective computation
    • G06T 15/205 — Image-based rendering
    • G06T 15/04 — Texture mapping
    • G06T 15/50 — Lighting effects

Definitions

  • the present disclosure relates to the technical field of computer graphics, and in particular, to a graphics processing method, device, computer equipment, and storage medium.
  • 3D model rendering is widely used in movies, games, and other scenes.
  • The styles of 3D models in a virtual scene are varied and are not limited to the realistic style; painting styles such as watercolor, charcoal, and cartoon are also used.
  • Accordingly, people have begun to use non-photorealistic techniques such as watercolor to render 3D models.
  • However, the prior art lacks vividness and a convincing watercolor-painting effect when performing watercolor rendering on a three-dimensional model.
  • Embodiments of the present disclosure at least provide a graphics processing method, device, computer equipment, and storage medium.
  • an embodiment of the present disclosure provides a graphics processing method, including:
  • the texture map includes a color map corresponding to the target 3D model, a normal map reflecting the normal direction of each point in the target 3D model, and a brush map reflecting brush-drawing characteristics;
  • rendering processing is performed on the target three-dimensional model to obtain a rendered target three-dimensional model.
  • the method further includes:
  • contour stroke processing on the first sampling layer to obtain a second sampling layer, including:
  • contour stroke processing is performed on the first sampling layer after brush trace processing to obtain the second sampling layer.
  • the brush drawing trace information includes at least one of brush texture intensity information, brush horizontal distortion information, brush vertical distortion information, and highlight range information.
  • the texture map further includes a noise map reflecting brush noise
  • the method further includes:
  • contour stroke processing on the first sampling layer to obtain a second sampling layer, including:
  • the texture map further includes a color scale map reflecting the color change characteristics of the light-receiving area in the target three-dimensional model
  • the method further includes:
  • the color scale information includes the color information corresponding to each color scale, the proportion of each color scale in the light-receiving area of the target three-dimensional model, and the color fusion information of adjacent color scales;
  • contour stroke processing on the first sampling layer to obtain a second sampling layer, including:
  • contour stroke processing is performed on the first sampling layer after the color scale processing to obtain the second sampling layer.
  • the texture map further includes a channel map reflecting an area to be rendered in the target 3D model
  • the step of performing color scale processing on the first sampling layer based on the color scale information in the color scale map to obtain the first sampling layer after the color scale processing includes:
  • contour stroke processing on the first sampling layer to obtain a second sampling layer, including:
  • contour stroke processing is performed on the local sampling layer after the color scale processing to obtain the second sampling layer.
  • the color information corresponding to each color scale is determined through the following steps:
  • the (N+1)-th order color information is obtained based on the N-th order color information in the color scale map and the third sampling layer, where N is a positive integer greater than or equal to 1; the first-order color information is determined according to the information in the third sampling layer.
  • the first sampling process is performed on the normal map, the color map and the brush map after the lighting processing to obtain a first sampling layer, including:
  • the sub-sampling layers corresponding to the respective patches are integrated to obtain the first sampling layer.
  • an embodiment of the present disclosure further provides a graphics processing device, including:
  • An acquisition module configured to acquire a texture map for rendering the target 3D model;
  • the texture map includes a color map corresponding to the target 3D model, a normal map reflecting the normal direction of each point in the target 3D model, and a brush map reflecting brush-drawing characteristics;
  • the first processing module is configured to perform light processing on the normal map based on the preset light direction information, to obtain the light processed normal map;
  • the second processing module is configured to perform first sampling processing on the normal map after the lighting processing, the color map and the brush map to obtain a first sampling layer;
  • a third processing module configured to respond to the input contour line information of the target three-dimensional model, and based on the contour line information, perform contour stroke processing on the first sampling layer to obtain a second sampling layer;
  • the fourth processing module is configured to perform rendering processing on the target 3D model based on the second sampling layer to obtain a rendered target 3D model.
  • an embodiment of the present disclosure further provides a computer device, including a processor, a memory, and a bus; the memory stores machine-readable instructions executable by the processor; when the computer device is running, the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the steps in the above first aspect, or in any possible implementation of the first aspect, are executed.
  • the embodiments of the present disclosure further provide a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps in the above first aspect, or in any possible implementation of the first aspect, are executed.
  • an embodiment of the present disclosure further provides a computer program, which executes the steps in the above first aspect or any possible implementation manner of the first aspect when the computer program is run by a processor.
  • the embodiments of the present disclosure further provide a computer program product; the computer program product includes a computer program, and when the computer program is run by a processor, the steps in the above first aspect, or in any possible implementation of the first aspect, are executed.
  • In the embodiments of the present disclosure, the light-processed normal map, the color map, and the brush map reflecting brush-drawing characteristics are subjected to the first sampling processing for rendering the target three-dimensional model;
  • the first sampling layer thus obtained gives the target 3D model brush texture and a three-dimensional effect; contour stroke processing is then performed on the first sampling layer based on the contour line information, and the resulting second sampling layer further highlights the brush-drawing effect of the target 3D model, making the rendered target 3D model more vivid and closer to a watercolor drawing.
  • FIG. 1 shows a flowchart of a graphics processing method provided by an embodiment of the present disclosure
  • Fig. 2 shows a schematic diagram of the effect of a brush map provided by an embodiment of the present disclosure
  • FIG. 3 shows a schematic diagram of the effect of a noise map provided by an embodiment of the present disclosure
  • Fig. 4 shows a schematic diagram of the effect of a color scale map provided by an embodiment of the present disclosure
  • Fig. 5 shows a schematic diagram of the effect of a rendered target three-dimensional model provided by an embodiment of the present disclosure
  • FIG. 6 shows a schematic diagram of a graphics processing device provided by an embodiment of the present disclosure
  • Fig. 7 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
  • the process of 3D model rendering mainly includes: first, creating a 3D model; then using a shader to make textures with certain material effects; then drawing the textures onto the 3D model for texture-effect simulation; and finally coloring the simulated 3D model and displaying it in the 3D scene, so that the 3D model displayed in the 3D scene has a corresponding style, such as realistic, cartoon, or hand-painted.
  • the watercolor rendering of some 3D models lacks vividness and watercolor painting effect.
  • the present disclosure provides a graphics processing method, by performing the first sampling process on the light-processed normal map, color map and brush map reflecting the brush drawing characteristics for rendering the target 3D model , the obtained first sampling layer can make the target 3D model realize the brush texture and three-dimensional effect; then the contour line information used for contour stroke of the target 3D model is processed on the first sampling layer to obtain The second sampling layer can highlight the brush drawing effect of the target 3D model more, making the rendering effect of the target 3D model more vivid and closer to the watercolor drawing effect.
  • FIG. 1 is a flowchart of a graphics processing method provided by an embodiment of the present disclosure, the method includes S101-S105, wherein:
  • S101 Obtain a texture map for rendering the target 3D model; the texture map includes a color map corresponding to the target 3D model, a normal map reflecting the normal direction of each point in the target 3D model, and A brush texture that reflects the characteristics of brush painting.
  • the target three-dimensional model may be a three-dimensional model to be rendered corresponding to a virtual object in the target game scene.
  • the virtual object may be any three-dimensional virtual object, such as a virtual character object, a virtual object object, and the like.
  • the target 3D model can be drawn by animation rendering and production software, such as 3D Studio Max (referred to as 3DS Max or 3D Max) or 3D model production software such as Maya.
  • the target 3D model can be unwrapped to obtain a two-dimensional image in the UV coordinate system (U representing the horizontal coordinate axis and V the vertical coordinate axis).
  • Each UV coordinate value in the obtained two-dimensional image may correspond to a point on the surface of the target three-dimensional model.
  • Rendering information for rendering the target 3D model may be stored in the texture map.
  • Texture maps can be obtained by drawing software.
  • the color map may contain color information of the virtual object itself corresponding to the target 3D model.
  • the color information contained in the color map corresponds to the UV coordinate value, and the color map may contain the color information under each UV coordinate value in the two-dimensional image obtained after the target three-dimensional model is unfolded.
  • the color map can be drawn by, for example, drawing software Photoshop or other drawing software.
  • the normal map can contain the normal direction of each point in the target 3D model, and the normal direction can be encoded in the red-green-blue (RGB) color channels.
  • the normal direction of each point contained in the normal map can reflect the bump effect of the target 3D model.
  • Normal maps can be created by drawing software such as ZBrush or Maya.
  • the brush map can contain brush-drawing characteristic information such as the material of the brush, the drawing shape, the drawing coherence, and the filling degree of the paint.
  • the drawn color map, normal map, and brush map can be added to the operation interface of the game engine, and the game engine can process the information contained in them to obtain the rendering information for rendering the target 3D model.
  • S102 Based on preset lighting direction information, perform lighting processing on the normal map to obtain a normal map after lighting processing.
  • the preset lighting direction may be parallel natural light irradiated from the preset direction.
  • When the dot product of the normal direction of a point in the normal map and the preset lighting direction is greater than 0 and less than 1, the point is illuminated; when the dot product is equal to 1, the point directly faces the light source and is the brightest; when the dot product is greater than -1 and less than 0, the point is backlit; when the dot product is equal to -1, the point directly faces away from the light source and is the darkest.
  • the normal map after lighting processing can contain not only the normal direction of each point, but also the lighting information received by each point.
  • each point on the target 3D model can show a visual effect of being lighted or backlit.
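  • The dot-product lighting rule above can be sketched as follows. This is an illustrative stand-in for the patent's lighting processing, assuming unit-length direction vectors; the function names are hypothetical.

```python
import math

def normalize(v):
    """Return the unit-length version of a 3D vector."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def light_intensity(normal, light_dir):
    """Dot product of a surface normal and the light direction.

    Returns a value in [-1, 1]: 1 means the point directly faces the
    light source (brightest), values in (0, 1) mean the point is
    illuminated, values in (-1, 0) mean it is backlit, and -1 means it
    faces directly away from the light source (darkest).
    """
    n = normalize(normal)
    l = normalize(light_dir)
    return sum(a * b for a, b in zip(n, l))

# A point whose normal points straight at the light is brightest:
print(light_intensity((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # 1.0
# A normal at 90 degrees to the light sits on the lit/unlit boundary:
print(round(light_intensity((1.0, 0.0, 0.0), (0.0, 0.0, 1.0)), 6))  # 0.0
```

A full implementation would evaluate this per texel of the normal map and store the result alongside the normal direction, which is what the "normal map after lighting processing" carries.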
  • S103 Perform a first sampling process on the light-processed normal map, the color map, and the brush map to obtain a first sampling layer.
  • the first sampling process on the light-processed normal map, the color map, and the brush map may fuse the normal direction and lighting information of each point contained in the light-processed normal map, the color information of each point contained in the color map, and the brush-drawing characteristic information contained in the brush map.
  • the obtained first sampling layer may include the first rendering information after the above information fusion.
  • the rendered target 3D model can present a color rendering effect drawn by a brush in a watercolor style.
  • the surface of the target 3D model is composed of multiple patches (meshes).
  • for each patch, the light-processed normal map, the color map, and the brush map can be fused to obtain a sub-sampling layer corresponding to that patch;
  • the sub-sampling layers corresponding to the patches are then integrated to obtain the first sampling layer.
  • the sub-color map corresponding to each patch contains the color information of the corresponding position in the virtual object corresponding to the patch.
  • the sub-sampling layer corresponding to each patch may include the first rendering information obtained by first-sampling and fusing the light-processed normal map, the color map, and the brush map of that patch. Then, according to the position of each patch in the target three-dimensional model, the sub-sampling layers corresponding to the patches are integrated to obtain the first sampling layer.
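  • The per-texel fusion and per-patch integration described above can be sketched as follows. The blend formula (half-Lambert shading scaled by a brush weight) and the dict-keyed-by-UV representation are hypothetical stand-ins; the patent does not specify the fusion arithmetic.

```python
def fuse_texel(base_rgb, light, brush_weight):
    """Fuse one texel's color-map RGB, lighting term, and brush-map weight.

    `light` is the dot-product intensity in [-1, 1]; `brush_weight` in
    [0, 1] scales how strongly the brush texture shows through.  The
    exact blend is an illustrative assumption.
    """
    shade = 0.5 * light + 0.5  # remap dot product [-1, 1] -> [0, 1]
    return tuple(c * shade * brush_weight for c in base_rgb)

def integrate_patches(patches):
    """Merge per-patch sub-sampling layers (dicts keyed by UV coordinate)
    into a single first sampling layer."""
    layer = {}
    for patch in patches:
        layer.update(patch)
    return layer

# Two patches, each a sub-sampling layer keyed by UV coordinate:
patch_a = {(0.1, 0.1): fuse_texel((1.0, 0.5, 0.0), 1.0, 1.0)}   # fully lit
patch_b = {(0.9, 0.9): fuse_texel((1.0, 0.5, 0.0), -1.0, 1.0)}  # fully dark
first_layer = integrate_patches([patch_a, patch_b])
```

In a real engine the "layer" would be a render target and the fusion a shader, but the data flow — sample each map, fuse per texel, integrate per patch — is the same.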
  • S104 In response to the input contour line information of the target three-dimensional model, based on the contour line information, perform contour stroke processing on the first sampling layer to obtain a second sampling layer.
  • the contour information may include the outer contour and inner contour of the target 3D model.
  • the outer contour line may be a boundary line between the target 3D model and the background, or between the target 3D model and other 3D models.
  • the inner contour line may be a boundary line between different parts in the target three-dimensional model, for example, the boundary line between the upper garment and the lower garment of the three-dimensional character model.
  • the obtained second sampling layer may include second rendering information after fusion of the first rendering information and the contour line information in the first sampling layer.
  • the target three-dimensional model can be rendered with a clear outline in a watercolor style.
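  • One common way to locate the outer contour for stroke processing is to flag points whose normal is nearly perpendicular to the view direction. This is a standard silhouette test offered as an illustrative sketch, not the patent's stated method; the threshold value is an assumption.

```python
def on_silhouette(normal, view_dir, threshold=0.2):
    """A surface point is treated as part of the outer contour when its
    normal is nearly perpendicular to the view direction, i.e. the
    absolute dot product falls below a small threshold (illustrative)."""
    dot = sum(a * b for a, b in zip(normal, view_dir))
    return abs(dot) < threshold

# A normal at right angles to the camera lies on the silhouette:
print(on_silhouette((1.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # True
# A normal facing the camera does not:
print(on_silhouette((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # False
```

Inner contour lines (boundaries between parts of the model) would instead come from the input contour line information, e.g. painted into a mask.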
  • S105 Based on the second sampling layer, perform rendering processing on the target three-dimensional model to obtain a rendered target three-dimensional model.
  • the second rendering information contained in the second sampling layer can be used for direct rendering, and the obtained target 3D model incorporates the colors corresponding to each point of the target 3D model stored in the color map Information, the normal direction of each point stored in the normal map, the preset lighting direction information, and the brush drawing characteristic information stored in the brush map, so that the rendered target 3D model can present a watercolor-style rendering effect.
  • After the first sampling layer is obtained, and before outline stroke processing is performed on it, brush trace processing may be performed on the first sampling layer in response to input brush-drawing trace information, based on that trace information, to obtain the first sampling layer after brush trace processing.
  • Contour stroke processing is then performed on the first sampling layer after brush trace processing to obtain the second sampling layer.
  • the brush-drawing trace information may be input on the operation interface of the game engine.
  • the brush-drawing trace information can adjust the direction of the brush traces, the texture effect of the brush traces, the highlight effect, and so on, so that the brush-drawing effect of the rendered target 3D model is more realistic. Therefore, in yet another manner, the brush-drawing trace information may include at least one of brush texture intensity information, brush horizontal distortion information, brush vertical distortion information, and highlight range information.
  • the texture strength of the brush marks in the target three-dimensional model may be adjusted based on the brush texture strength information.
  • the brush trace processing is performed on the first sampling layer by using the brush texture intensity information, and the brush texture intensity information is integrated into the obtained first sampling layer after the brush trace processing.
  • the texture of the brush traces in the target 3D model is stronger, which means that the target 3D model presents a watercolor painting effect with stronger brush traces .
  • the horizontal distortion direction and/or the vertical distortion direction of the brush marks in the target three-dimensional model may also be adjusted based on the brush horizontal distortion information and/or the brush vertical distortion information.
  • Use the horizontal distortion information of the brush and/or the vertical distortion information of the brush to perform brush trace processing on the first sampling layer, and the obtained first sampling layer after the brush trace processing incorporates the horizontal distortion information of the brush and/or Or the brush distorts the information vertically.
  • the distortion direction of the brush traces in the target 3D model becomes more obvious; that is, the target 3D model presents a watercolor drawing effect with a more obvious brush trace direction.
  • the highlight range of the brush marks in the target 3D model may also be adjusted based on the highlight range information.
  • brush trace processing is performed on the first sampling layer based on the highlight range information, and the highlight range information is integrated into the first sampling layer obtained after the brush trace processing.
  • the highlight range of the brush traces in the target 3D model becomes more obvious; that is, the target 3D model presents a watercolor drawing effect with a more prominent brush-trace highlight range.
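  • The horizontal/vertical distortion of brush traces can be sketched as a UV-space offset. The sine-based offset below is an illustrative choice (the patent does not give a formula); `h_amount` and `v_amount` play the role of the brush horizontal and vertical distortion information.

```python
import math

def distort_uv(u, v, h_amount, v_amount):
    """Offset a texel's UV coordinate to mimic horizontal/vertical brush
    distortion.  Each axis is perturbed by a sine of the other axis so
    traces wobble rather than shift uniformly; coordinates wrap to [0, 1)."""
    du = h_amount * math.sin(2.0 * math.pi * v)
    dv = v_amount * math.sin(2.0 * math.pi * u)
    return ((u + du) % 1.0, (v + dv) % 1.0)

# With zero distortion amounts the coordinate is unchanged:
print(distort_uv(0.25, 0.5, 0.0, 0.0))  # (0.25, 0.5)
```

Sampling the brush map at the distorted coordinate instead of the original one is what makes the traces appear hand-wavered.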
  • the rendering position and rendering times of the brush can also be randomly processed, so that the rendering position and rendering times are more random.
  • the texture map may further include a noise map reflecting brush noise.
  • the first sampling layer may be subjected to brush noise processing based on the brush noise information in the noise map to obtain the first sampling layer after brush noise processing.
  • contour stroke processing is performed on the first sampling layer after brush noise processing to obtain a second sampling layer.
  • the noise map may include brush noise information corresponding to the rendering position, the number of renderings, and other information.
  • a schematic diagram of the effect of a noise map may include randomly distributed noise textures and noise positions.
  • the rendering position and rendering times of the brush on the target 3D model can be randomly distributed, thereby enhancing the authenticity and vividness of the watercolor painting effect.
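  • Randomizing stroke position and repeat count from a noise value can be sketched as follows. The jitter scale and repeat mapping are illustrative assumptions; `noise` stands in for a per-position lookup into the noise map.

```python
def jitter_strokes(positions, noise, max_repeats=3):
    """Use a per-position noise value in [0, 1) to perturb each brush
    stroke's position and pick a random-looking repeat count.

    A noise value of 0.5 leaves the position unchanged; values above or
    below shift it slightly and vary how many times the stroke is drawn.
    """
    out = []
    for (x, y), n in zip(positions, noise):
        offset = (n - 0.5) * 0.1            # small positional jitter
        repeats = 1 + int(n * max_repeats)  # 1 .. 1+max_repeats strokes
        out.append(((x + offset, y + offset), repeats))
    return out

print(jitter_strokes([(0.0, 0.0)], [0.5]))  # [((0.0, 0.0), 2)]
```

Because the noise map is authored (Fig. 3) rather than generated at runtime, the "randomness" is repeatable frame to frame, which keeps the watercolor look stable.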
  • the process of using the contour line information to perform contour stroke processing on the first sampling layer after brush noise processing can refer to step S104 and will not be repeated here.
  • the texture map further includes a color scale map reflecting the color change characteristics of the light-receiving area in the target three-dimensional model.
  • the first sampling layer can be subjected to color scale processing based on the color scale information in the color scale map to obtain the first sampling layer after color scale processing;
  • the color scale information includes the color information corresponding to each color scale, the proportion of each color scale in the light-receiving area of the target 3D model, and the color fusion information of adjacent color scales.
  • contour stroke processing is performed on the first sampling layer after the color scale processing to obtain a second sampling layer.
  • the color scale map may be a map, made by drawing software, that reflects the color change characteristics of the light-receiving area in the target three-dimensional model. A color scale map can include multiple color scales, arranged in the order of color shade change.
  • the schematic diagram of the effect of a color scale map shown in FIG. 4 may include 4 color scales, arranged from left to right in order from dark to light; the proportion of each color scale in the whole color scale map can differ.
  • the color values at the critical positions of two adjacent color levels may be a color fusion value obtained by fusing the color values of the two color levels.
  • the color scale information in the color scale map is used to perform color scale processing on the first sampling layer, and the color scale information in the color scale map is integrated into the obtained first sampling layer after the color scale processing.
  • the color of the light-receiving area on the target 3D model can show a gradual effect, thereby avoiding abrupt color jumps from the shadow area to the light-receiving area and increasing the stereoscopic effect and authenticity of the target 3D model.
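  • Color scale processing can be sketched as quantizing a lighting intensity into discrete levels, with a short cross-fade at level boundaries so adjacent scales fuse rather than jump. The list-of-(bound, rgb) representation and the blend width are illustrative assumptions, not the patent's exact procedure.

```python
def apply_color_scale(intensity, scale):
    """Map a lighting intensity in [0, 1] onto discrete color levels.

    `scale` is a list of (upper_bound, rgb) pairs sorted by bound, dark
    levels first.  Within a small band below each boundary the two
    neighbouring level colors are linearly fused to avoid a hard jump.
    """
    blend = 0.05  # width of the fusion band (illustrative)
    for i, (bound, rgb) in enumerate(scale):
        if intensity <= bound:
            if i + 1 < len(scale) and bound - intensity < blend:
                t = (bound - intensity) / blend  # 1 at band edge, 0 at boundary
                nxt = scale[i + 1][1]
                return tuple(a * t + b * (1 - t) for a, b in zip(rgb, nxt))
            return rgb
    return scale[-1][1]

# Two levels: dark below 0.5, light above; 0.48 falls in the fusion band.
levels = [(0.5, (0.0, 0.0, 0.0)), (1.0, (1.0, 1.0, 1.0))]
print(apply_color_scale(0.2, levels))  # (0.0, 0.0, 0.0)
print(apply_color_scale(0.8, levels))  # (1.0, 1.0, 1.0)
```

The per-level proportions mentioned above correspond to the spacing of the upper bounds; unequal spacing gives unequal level widths.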
  • Color scale processing may be applied only to a target region of the target 3D model; for example, the legs of the avatar can be color-scale processed. Therefore, a channel map reflecting the area to be rendered in the target 3D model can be obtained, and the channel map stores area information indicating the area of the target 3D model to be color-scale rendered.
  • the local sampling layer corresponding to the area to be rendered in the color scale in the first sampling layer may be determined based on the channel map. Then, based on the color scale information in the color scale map, the local sampling layer is subjected to color scale processing to obtain a local sample layer after the color scale processing. Then, in response to the input contour line information of the target three-dimensional model, based on the contour line information, contour stroke processing is performed on the local sampling layer after the color scale processing to obtain a second sampling layer.
  • the local sampling layer may include area information to be rendered in color scale and first rendering information corresponding to the area indicated by the area information.
  • the color scale rendering can be performed on the area to be color scale rendering, so that the area to be color scale rendering presents a color gradient effect, increasing the stereoscopic visual effect and authenticity of the target 3D model.
  • the process of using the contour line information to perform contour stroke processing on the local sampling layer after color scale processing can refer to step S104 and will not be repeated here.
  • Because the color scale map is used to perform color scale processing on the light-receiving area during rendering of the target 3D model, the color information in the color scale map is related to the preset lighting direction of the target 3D model, the normal directions stored in the normal map, and the brush map.
  • the color information corresponding to each color scale can be obtained through the following steps: first, the second sampling process is performed on the light-processed normal map and the brush map to obtain a third sampling layer; then, based on the N-th order color information in the color scale map and the third sampling layer, the (N+1)-th order color information is obtained, where N is a positive integer greater than or equal to 1; the first-order color information is determined according to the information in the third sampling layer.
  • the third sampling layer may contain third rendering information obtained by fusing the lighting information and normal directions included in the light-processed normal map with the brush-drawing characteristic information included in the brush map.
  • the color scale map contains multiple color scales, wherein the first level color information is determined according to the information in the third sampling layer. After the first-level color information is determined, the second-level color information may be determined based on the first-level color information and information in the third sampling layer. After the second-level color information is determined, the third-level color information may be determined based on the second-level color information and information in the third sampling layer. And so on, until the color information of each color level in the color level map is determined.
  • Alternatively, the light-processed normal map, the brush map, the noise map, and the brush texture intensity information in the brush-drawing trace information may be subjected to a second sampling process to obtain the third sampling layer. The R channel and G channel values of the third sampling layer are then used as UV coordinates into the color scale map to obtain the color information of each color scale.
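  • Using two channel values as UV coordinates into a color scale map can be sketched with a nearest-texel lookup on a small 2D grid. The grid representation and clamping are illustrative stand-ins for GPU texture sampling.

```python
def sample_color_scale(scale_map, r, g):
    """Use the R and G channel values of a third-sampling-layer texel
    (each in [0, 1]) as UV coordinates into a color scale map, here a
    row-major 2D grid of RGB triples.  Indices are clamped so r = g = 1.0
    still lands on the last texel."""
    h = len(scale_map)
    w = len(scale_map[0])
    u = min(int(r * w), w - 1)
    v = min(int(g * h), h - 1)
    return scale_map[v][u]

# A 2x2 color scale map:
grid = [[(0, 0, 0), (1, 0, 0)],
        [(0, 1, 0), (1, 1, 0)]]
print(sample_color_scale(grid, 0.0, 0.0))  # (0, 0, 0)
print(sample_color_scale(grid, 0.9, 0.9))  # (1, 1, 0)
```

In effect, the R channel selects the color level and the G channel selects the variation within it (or vice versa), so one texture lookup yields the per-level color.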
  • the acquired texture map may be processed by an algorithm for watercolor rendering to obtain rendering information for rendering the target three-dimensional model.
  • the algorithms used here mainly include light shading algorithm and contour algorithm.
  • the normal map of the target 3D model can be obtained, and then the normal map can be light-processed based on the preset light direction by using the lighting shading algorithm, and the light-processed normal map can be obtained.
  • the color map is combined with the light shading algorithm, and the color map, the preset light direction, and the normal map are fused to obtain the first layer.
  • the target 3D model is rendered using the first layer.
  • The first-level color information in the color scale map can be obtained; then, from the first-level color information, the rendering information contained in the second layer, and the brush horizontal and vertical distortion information included in the brush-drawing trace information, the second-level color information in the color scale map can be obtained; and so on, until the color information of each color level in the color scale map is obtained.
  • The light shading algorithm can also combine the highlight range information included in the brush-drawing trace information with the area information under the R channel in the channel map: the preset lighting direction, the normal map, the highlight range information, and the R-channel area information are fused to obtain the third layer.
  • color scale map and the color map can also be combined in the light shading algorithm, and the color scale map and the color map can be fused to obtain the fourth layer.
  • the third layer and the fourth layer are fused by using the light coloring algorithm to obtain the fifth layer.
  • In the rendered result, the peach model can be outlined by contour lines, and the model enclosed by the contour lines is a watercolor model rendered from the fused information of the color map, normal map, brush map, noise map, and color scale map.
  • The leaves have a first color and the fruits have a second color, and the first color and the second color can be different.
  • the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
  • The embodiment of the present disclosure also provides a graphics processing device corresponding to the graphics processing method. Since the problem-solving principle of the device in the embodiment of the disclosure is similar to that of the above graphics processing method, the implementation of the device can refer to the implementation of the method, and repeated descriptions are omitted.
  • FIG. 6 it is a schematic structural diagram of a graphics processing device provided by an embodiment of the present disclosure.
  • the device includes: an acquisition module 601, a first processing module 602, a second processing module 603, a third processing module 604, and a fourth processing module 605; wherein,
  • An acquisition module 601 configured to acquire a texture map for rendering the target 3D model;
  • the texture map includes a color map corresponding to the target 3D model, a normal map reflecting the normal direction of each point in the target 3D model, and a brush map reflecting brush drawing characteristics;
  • the first processing module 602 is configured to perform lighting processing on the normal map based on preset lighting direction information, to obtain a normal map after lighting processing;
  • the second processing module 603 is configured to perform a first sampling process on the normal map, the color map, and the brush map after the lighting processing, to obtain a first sampling layer;
  • the third processing module 604 is configured to, in response to the input contour line information of the target three-dimensional model, perform contour stroke processing on the first sampling layer based on the contour line information, to obtain a second sampling layer;
  • the fourth processing module 605 is configured to perform rendering processing on the target 3D model based on the second sampling layer to obtain a rendered target 3D model.
  • the device further includes:
  • the fifth processing module is configured to, in response to input brush drawing trace information and based on the brush drawing trace information, perform brush trace processing on the first sampling layer to obtain the first sampling layer after the brush trace processing;
  • the third processing module 604 is specifically configured to, in response to the input contour line information of the target 3D model, perform contour stroke processing on the first sampling layer after the brush trace processing based on the contour line information, Obtain the second sampling layer.
  • the brush drawing trace information includes at least one of brush texture intensity information, brush horizontal distortion information, brush vertical distortion information, and highlight range information.
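The brush-trace parameters above can be pictured as simple image operations. A minimal numpy sketch follows; treating the distortion as a UV offset (`np.roll` standing in for a real texture lookup) and the texture intensity as a blend factor are both assumptions, since the source does not specify the operators:

```python
import numpy as np

def apply_brush_distortion(layer, u_distort, v_distort):
    """Shift each pixel's sampling position by the brush's horizontal
    (u) and vertical (v) distortion amounts, in whole pixels."""
    return np.roll(np.roll(layer, u_distort, axis=1), v_distort, axis=0)

def apply_texture_strength(layer, brush_texture, strength):
    """Blend the brush texture into the layer; strength in [0, 1]
    controls how pronounced the brush traces look."""
    return layer * (1.0 - strength + strength * brush_texture)

layer = np.ones((4, 4))                               # flat base layer
brush = np.tile(np.array([1.0, 0.5, 1.0, 0.5]), (4, 1))  # stripe texture
out = apply_texture_strength(apply_brush_distortion(layer, 1, 0), brush, 0.5)
```

Highlight-range handling would follow the same pattern, masking the blend to the lit region.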
  • the texture map further includes a noise map reflecting brush noise
  • the device also includes:
  • a sixth processing module configured to perform brush noise processing on the first sampling layer based on the brush noise information in the noise map, to obtain the first sampling layer after brush noise processing;
  • the third processing module 604 is specifically configured to, in response to the input contour line information of the target 3D model, perform contour stroke processing on the first sampling layer after brush noise processing based on the contour line information , to obtain the second sampling layer.
  • the texture map further includes a color scale map reflecting the color change characteristics of the light-receiving area in the target three-dimensional model
  • the device also includes:
  • a seventh processing module configured to perform color scale processing on the first sampling layer based on the color scale information in the color scale map, to obtain the first sampling layer after the color scale processing; the color scale information includes color information corresponding to each color level, proportion information of each color level in the light-receiving area of the target 3D model, and color fusion information of adjacent color levels among the color levels;
  • the third processing module 604 is specifically configured to, in response to the input contour line information of the target 3D model and based on the contour line information, perform contour stroke processing on the first sampling layer after the color scale processing, to obtain the second sampling layer.
  • the texture map further includes a channel map reflecting an area to be rendered in the target 3D model
  • the seventh processing module is specifically configured to: determine, based on the channel map, a local sampling layer corresponding to the area to be rendered in the first sampling layer; and perform color scale processing on the local sampling layer based on the color scale information in the color scale map, to obtain the local sampling layer after the color scale processing;
  • the third processing module 604 is specifically configured to, in response to the input contour line information of the target 3D model and based on the contour line information, perform contour stroke processing on the local sampling layer after the color scale processing, to obtain the second sampling layer.
  • the color information corresponding to each color level is determined through the following steps:
  • performing second sampling processing on the lighting-processed normal map and the brush map to obtain a third sampling layer;
  • obtaining the (N+1)-th order color information based on the N-th order color information in the color scale map and the third sampling layer; where N is a positive integer greater than or equal to 1, and the first-order color information is determined according to the information in the third sampling layer.
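The color-level recurrence (level N combined with the third sampling layer gives level N+1, with level 1 taken from the third sampling layer) can be sketched as follows. The attenuation-and-modulation rule and the `darken` factor are assumptions, since the source does not give the exact combination formula:

```python
import numpy as np

def next_color_level(level_n, third_layer, darken=0.8):
    """Derive the (N+1)-th color level from the N-th level and the
    third sampling layer; each successive level is assumed to be the
    previous one attenuated and modulated by the third layer."""
    return level_n * darken * third_layer

third_layer = np.array([1.0, 0.9, 0.8])  # toy per-channel sample values
levels = [third_layer.copy()]            # level 1 comes from the third layer
for _ in range(3):                       # derive levels 2..4 of the ramp
    levels.append(next_color_level(levels[-1], third_layer))
```

Each entry of `levels` would then be one band of the color scale map, darkest bands coming from later iterations.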
  • the second processing module 603 is configured to fuse the color map of each patch of the target 3D model with the lighting-processed normal map and the brush map respectively, to obtain sub-sampling layers respectively corresponding to the respective patches;
  • the sub-sampling layers corresponding to the respective patches are integrated to obtain the first sampling layer.
  • referring to FIG. 7, which is a schematic structural diagram of a computer device 700 provided by an embodiment of the present disclosure, including a processor 701, a memory 702, and a bus 703.
  • the memory 702 is used to store execution instructions, and includes an internal memory 7021 and an external memory 7022; the internal memory 7021 is used to temporarily store operation data in the processor 701 and data exchanged with the external memory 7022, such as a hard disk.
  • the processor 701 exchanges data with the external memory 7022 through the memory 7021.
  • when the computer device 700 runs, the processor 701 communicates with the memory 702 through the bus 703, so that the processor 701 executes the following instructions:
  • acquiring a texture map for rendering the target 3D model; the texture map includes a color map corresponding to the target 3D model, a normal map reflecting the normal direction of each point in the target 3D model, and a brush map reflecting brush drawing characteristics;
  • performing lighting processing on the normal map based on preset lighting direction information, to obtain a lighting-processed normal map;
  • performing first sampling processing on the lighting-processed normal map, the color map, and the brush map, to obtain a first sampling layer;
  • in response to input contour line information of the target 3D model, performing contour stroke processing on the first sampling layer based on the contour line information, to obtain a second sampling layer;
  • performing rendering processing on the target 3D model based on the second sampling layer, to obtain a rendered target 3D model.
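Taken together, the instructions above form a small pipeline. A minimal numpy sketch follows, assuming Lambert-style dot-product lighting, multiplicative fusion for the first sampling layer, and a boolean mask for the contour lines (all of these operators are assumptions; the source does not fix them):

```python
import numpy as np

def lambert_term(normals, light_dir):
    """Per-pixel dot product between unit normals (H, W, 3) and a unit
    light direction; 1 means facing the light, -1 means facing away."""
    return np.clip(normals @ light_dir, -1.0, 1.0)

def first_sampling(color_map, lit_normals, brush_map):
    """Fuse the lighting-processed normal map, the color map, and the
    brush map into a first sampling layer (multiplicative fusion)."""
    light = np.clip(lit_normals, 0.0, 1.0)[..., None]   # shade factor
    return color_map * light * brush_map[..., None]

def contour_stroke(layer, outline_mask, stroke_color=0.0):
    """Draw the input contour-line mask on top of the first sampling
    layer to obtain the second sampling layer."""
    out = layer.copy()
    out[outline_mask] = stroke_color
    return out

# Toy 4x4 example
h = w = 4
normals = np.zeros((h, w, 3)); normals[..., 2] = 1.0  # all facing +Z
light_dir = np.array([0.0, 0.0, 1.0])                 # light along +Z
color_map = np.full((h, w, 3), 0.8)                   # base color
brush_map = np.full((h, w), 0.9)                      # brush intensity
outline = np.zeros((h, w), dtype=bool); outline[0, :] = True

layer1 = first_sampling(color_map, lambert_term(normals, light_dir), brush_map)
layer2 = contour_stroke(layer1, outline)              # second sampling layer
```

`layer2` plays the role of the second sampling layer that the device would hand to the final rendering pass.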
  • Embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is run by a processor, the steps of the graphics processing method described in the above-mentioned method embodiments are executed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • Embodiments of the present disclosure further provide a computer program, which executes the steps of the graphics processing method described in the foregoing method embodiments when the computer program is run by a processor.
  • the embodiments of the present disclosure also provide a computer program product; the computer program product carries program code, and the instructions included in the program code can be used to execute the steps of the graphics processing method described in the above method embodiments; for details, reference can be made to the above method embodiments, which will not be repeated here.
  • the above-mentioned computer program product may be specifically implemented by means of hardware, software or a combination thereof.
  • in an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK) and the like.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • if the functions are realized in the form of software functional units and sold or used as independent products, they can be stored in a non-volatile computer-readable storage medium executable by a processor.
  • based on such an understanding, the technical solutions of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.


Abstract

The present disclosure provides a graphics processing method and apparatus, a computer device, and a storage medium. The method includes: acquiring a texture map for rendering a target three-dimensional model, the texture map including a color map, a normal map reflecting normal directions, and a brush map reflecting brush drawing characteristics; performing lighting processing on the normal map based on preset lighting direction information to obtain a lighting-processed normal map; performing first sampling processing on the lighting-processed normal map, the color map, and the brush map to obtain a first sampling layer; in response to input contour line information of the target three-dimensional model, performing contour stroke processing on the first sampling layer based on the contour line information to obtain a second sampling layer; and rendering the target three-dimensional model based on the second sampling layer to obtain a rendered target three-dimensional model. The target three-dimensional model rendered according to the embodiments of the present disclosure is more vivid and closer to a watercolor drawing effect.

Description

一种图形处理方法、装置、计算机设备及存储介质
相关申请交叉引用
本申请要求于2021年12月05日提交中国专利局、申请号为202111471456.2、发明名称为“一种图形处理方法、装置、计算机设备及存储介质”的中国专利申请的优先权,其全部内容通过引用并入本文。
技术领域
本公开涉及计算机图形技术领域,具体而言,涉及一种图形处理方法、装置、计算机设备及存储介质。
背景技术
随着计算机图形技术的发展,三维模型渲染被广泛应用于电影、游戏等场景中。对于不同的动画或者游戏类型,虚拟场景中三维模型的风格也多种多样,而不仅仅是写实风格。特别是,受到水彩画、碳笔画、卡通画等绘画方式的影响,人们开始尝试利用水彩等非真实感图形对三维模型进行渲染。但是在卡通风格渲染的三维场景中,现有技术对三维模型进行水彩渲染时缺少生动性和水彩绘制效果。
发明内容
本公开实施例至少提供一种图形处理方法、装置、计算机设备及存储介质。
第一方面,本公开实施例提供了一种图形处理方法,包括:
获取用于对目标三维模型进行渲染的纹理贴图;所述纹理贴图包括所述目标三维模型对应的颜色贴图、反映所述目标三维模型中每个点的法线方向的法线贴图、以及反映笔刷绘制特点的笔刷贴图;
基于预设光照方向信息,对所述法线贴图进行光照处理,得到光照处理后的法线贴图;
将所述光照处理后的法线贴图、所述颜色贴图以及所述笔刷贴图进行第一采样处理,得到第一采样图层;
响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述第一采样图层进行轮廓描边处理,得到第二采样图层;
基于所述第二采样图层,对所述目标三维模型进行渲染处理,得到渲染后的目标三维模型。
一种可选的实施方式中,在得到所述第一采样图层之后,对所述第一采样图层进行轮廓描边处理之前,所述方法还包括:
响应于输入的笔刷绘制痕迹信息,基于所述笔刷绘制痕迹信息,对所述第一采样图层进行笔刷痕迹处理,得到笔刷痕迹处理后的第一采样图层;
所述响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述第一采样图层进行轮廓描边处理,得到第二采样图层,包括:
响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述笔刷痕迹处理后的第一采样图层进行轮廓描边处理,得到所述第二采样图层。
一种可选的实施方式中,所述笔刷绘制痕迹信息包括笔刷质感强度信息、笔刷横向扭曲信息、笔刷纵向扭曲信息、高光范围信息中的至少一种。
一种可选的实施方式中,所述纹理贴图还包括反映笔刷噪波的噪波贴图;
在得到所述第一采样图层之后,对所述第一采样图层进行轮廓描边处理之前,所述方法还包括:
基于所述噪波贴图中的笔刷噪波信息,对所述第一采样图层进行笔刷噪波处理,得到笔刷噪波处理后的第一采样图层;
所述响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述第一采样图层进行轮廓描边处理,得到第二采样图层,包括:
响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述笔刷噪波处理后的第一采样图层进行轮廓描边处理,得到所述第二采样图层。
一种可选的实施方式中,所述纹理贴图中还包括反映所述目标三维模型中受光区域的颜色变化特点的色阶贴图;
在得到所述第一采样图层之后,对所述第一采样图层进行轮廓描边处理之前,所述方法还包括:
基于所述色阶贴图中的色阶信息,对所述第一采样图层进行色阶处理,得到色阶处理后的第一采样图层;所述色阶信息中包括各个色阶分别对应的颜色信息、所述各个色阶在所述目标三维模型中受光区域的占比信息和所述各个色阶中相邻色阶的颜色融合信息;
所述响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述第一采样图层进行轮廓描边处理,得到第二采样图层,包括:
响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述色阶处理后的第一采样图层进行轮廓描边处理,得到所述第二采样图层。
一种可选的实施方式中,所述纹理贴图还包括反映所述目标三维模型中待进行色阶渲染的区域的通道贴图;
所述基于所述色阶贴图中的色阶信息,对所述第一采样图层进行色阶处理,得到色阶处理后的第一采样图层,包括:
基于所述通道贴图,确定所述第一采样图层中与所述待进行色阶渲染的区域对应的局部采样图层;
基于所述色阶贴图中的色阶信息,对所述局部采样图层进行色阶处理,得到色阶处理后的局部采样图层;
所述响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述第一采样图层进行轮廓描边处理,得到第二采样图层,包括:
响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述色阶处理后的局部采样图层进行轮廓描边处理,得到所述第二采样图层。
一种可选的实施方式中,所述各个色阶分别对应的颜色信息是通过以下步骤确定的:
将所述光照处理后的法线贴图和所述笔刷贴图进行第二采样处理,得到第三采样图层;
基于所述色阶贴图中的第N阶颜色信息,以及所述第三采样图层,得到第N+1阶颜色信息;其中,N为大于或等于1的正整数;第1阶颜色信息是根据所述第三采样图层中的信息确定的。
一种可选的实施方式中,所述将所述光照处理后的法线贴图、所述颜色贴图以及所述笔刷贴图进行第一采样处理,得到第一采样图层,包括:
将所述目标三维模型的各个面片的所述颜色贴图分别与所述光照处理后的法线贴图及所述笔刷贴图进行融合处理,得到所述各个面片分别对应的子采样图层;
对所述各个面片分别对应的所述子采样图层进行整合,得到所述第一采样图层。
第二方面,本公开实施例还提供一种图形处理装置,包括:
获取模块,用于获取用于对目标三维模型进行渲染的纹理贴图;所述纹理贴图包括所述目标三维模型对应的颜色贴图、反映所述目标三维模型中每个点的法线方向的法线贴图、以及反映笔刷绘制特点的笔刷贴图;
第一处理模块,用于基于预设光照方向信息,对所述法线贴图进行光照处理,得到光照处理后的法线贴图;
第二处理模块,用于将所述光照处理后的法线贴图、所述颜色贴图以及所述笔刷贴图进行第一采样处理,得到第一采样图层;
第三处理模块,用于响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述第一采样图层进行轮廓描边处理,得到第二采样图层;
第四处理模块,用于基于所述第二采样图层,对所述目标三维模型进行渲染处理,得到渲染后的目标三维模型。
第三方面,本公开实施例还提供一种计算机设备,包括:处理器、存储器和总线,所述存储器存储有所述处理器可执行的机器可读指令,当所述计算机设备运行时,所述处理器与所述存储器之间通过总线通信,所述机器可读指令被所述处理器执行时执行上述第一方面,或第一方面中任一种可能的实施方式中的步骤。
第四方面,本公开实施例还提供一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机程序,所述计算机程序被处理器运行时执行上述第一方面,或第一方面中任一种可能的实施方式中的步骤。
第五方面,本公开实施例还提供一种计算机程序,所述计算机程序被处理器运行时执行上述第一方面,或第一方面中任一种可能的实施方式中的步骤。
第六方面,本公开实施例还提供一种计算机程序产品,所述计算机程序产品包括计算机程序,所述计算机程序被处理器运行时执行上述第一方面,或第一方面中任一种可能的实施方式中的步骤。
本公开实施例提供的图形处理方法,将用于对目标三维模型进行渲染的光照处理后的法线贴图、颜色贴图和反映笔刷绘制特点的笔刷贴图进行第一采样处理;然后采用轮廓线信息对第一采样图层进行轮廓描边处理;通过这样的渲染方法,可以使得目标三维模型实现笔刷质感和立体感,且得到的第二采样图层可以更加突出目标三维模型的笔刷绘制效果,使得渲染得到的目标三维模型更加生动、更加接近水彩的绘制效果。
为使本公开的上述目的、特征和优点能更明显易懂,下文特举较佳实施例,并配合所附附图,作详细说明如下。
附图说明
为了更清楚地说明本公开实施例的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,此处的附图被并入说明书中并构成本说明书中的一部分,这些附图示出了符合本公开的实施例,并与说明书一起用于说明本公开的技术方案。应当理解,以下附图仅示出了本公开的某些实施例,因此不应被看作是对范围的限定,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他相关的附图。
图1示出了本公开实施例所提供的一种图形处理方法的流程图;
图2示出了本公开实施例所提供的一种笔刷贴图的效果示意图;
图3示出了本公开实施例所提供的一种噪波贴图的效果示意图;
图4示出了本公开实施例所提供的一种色阶贴图的效果示意图;
图5示出了本公开实施例所提供的一种渲染后的目标三维模型的效果示意图;
图6示出了本公开实施例所提供的一种图形处理装置的示意图;
图7示出了本公开实施例所提供的一种计算机设备的示意图。
具体实施方式
为使本公开实施例的目的、技术方案和优点更加清楚,下面将结合本公开实施例中附图,对本公开实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本公开一部分实施例,而不是全部的实施例。通常在此处附图中描述和示出的本公开实施例的组件可以以各种不同的配置来布置和设计。因此,以下对在附图中提供的本公开的实施例的详细描述并非旨在限制要求保护的本公开的范围,而是仅仅表示本公开的选定实施例。基于本公开的实施例,本领域技术人员在没有做出创造性劳动的前提下所获得的所有其他实施例,都属于本公开保护的范围。
三维模型渲染的过程主要包括:首先,创建三维模型,然后利用着色器来制作具有一定材质效果的贴图,然后将贴图绘制在三维模型中进行质感效果模拟,最终对完成效果模拟的三维模型进行渲染着色,并显示于三维场景,由此使得三维场景中显示的三维模型具有相应的风格,例如写实、卡通、手绘等。而在卡通风格渲染的三维场景中,对一些三维模型进行水彩渲染时缺少生动性和水彩绘制效果。
基于上述研究,本公开提供了一种图形处理方法,通过将用于对目标三维模型进行渲染的光照处理后的法线贴图、颜色贴图和反映笔刷绘制特点的笔刷贴图进行第一采样处理,得到的第一采样图层可以使得目标三维模型实现笔刷质感和立体感;然后将用于对目标三维模型进行轮廓描边的轮廓线信息对第一采样图层进行轮廓描边处理,得到的第二采样图层可以更加突出目标三维模型的笔刷绘制效果,使得目标三维模型的渲染效果更加生动、更加接近水彩的绘制效果。
针对以上方案所存在的缺陷以及所提出的解决方案,均是发明人在经过实践并仔细研究后得出的结果,因此,上述问题的发现过程以及下文中本公开针对上述问题所提出的解决方案,都应该是发明人在本公开过程中对本公开做出的贡献。
应注意到:相似的标号和字母在下面的附图中表示类似项,因此,一旦某一项在一个附图中被定义,则在随后的附图中不需要对其进行进一步定义和解释。
为便于对本实施例进行理解,首先对本公开实施例所公开的一种图形处理方法进行详细介绍,下面以执行主体为服务器为例对本公开实施例提供的图形处理方法加以说明。
参见图1所示,为本公开实施例提供的图形处理方法的流程图,所述方法包括S101~S105,其中:
S101:获取用于对目标三维模型进行渲染的纹理贴图;所述纹理贴图包括所述目标三维模型对应的颜色贴图、反映所述目标三维模型中每个点的法线方向的法线贴图、以及反映笔刷绘制特点的笔刷贴图。
在本公开实施例中,目标三维模型可以为目标游戏场景下的虚拟对象所对应的、待渲染的三维模型。虚拟对象可以是任意的三维虚拟对象,例如虚拟人物对象、虚拟物体对象等。目标三维模型可以是利用动画渲染和制作软件绘制得到的,例如3D Studio Max(简称3DS Max或3D Max)或者Maya等三维模型制作软件。
目标三维模型制作完成之后,可以将制作好的目标三维模型展开,得到UV坐标系(U可以表示该UV坐标下的横向坐标轴,V可以表示该UV坐标下的纵向坐标轴)下的二维图像。得到的二维图像中的每个UV坐标值可以对应到该目标三维模型表面上的每个点。
纹理贴图中可以存储有用于对目标三维模型进行渲染的渲染信息。纹理贴图可以是通过绘图软件制作得到的。
具体地,颜色贴图(Color Map)可以包含目标三维模型对应的虚拟对象本身的颜色信息。具体地,颜色贴图中包含的颜色信息与UV坐标值是相对应的,颜色贴图中可以包含目标三维模型展开后得到的二维图像中每个UV坐标值下的颜色信息。颜色贴图可以通过例如绘图软件Photoshop或者其他绘图软件绘制得到。
法线贴图(Normal Map)中可以包含目标三维模型中每个点的法线方向,通过红绿蓝(Red Green Blue,RGB)颜色通道可以标记法线方向。通过将法线贴图应用到目标三维模型的表面,法线贴图中包含的每个点的法线方向可以体现目标三维模型的凹凸效果。法线贴图可以通过例如Zbrush或者May等绘图软件制作得到。
笔刷贴图(Brush Map)中可以包含笔刷的材质、绘制形状、绘制连贯度以及颜料的填充度等笔刷绘制特点信息。如图2所示的一种笔刷贴图的效果示意图中,可以直观地看到笔刷贴图中包含的笔刷的绘制形状。
这里,可以将绘制好的颜色贴图、法线贴图和笔刷贴图添加到游戏引擎的操作界面,游戏引擎可以根据颜色贴图、法线贴图和笔刷贴图中包含的信息进行处理,得到用于对目标三维模型进行渲染的渲染信息。
下面将详细介绍对上述多种纹理贴图进行处理,得到用于对目标三维模型进行渲染的渲染信息的步骤。
S102:基于预设光照方向信息,对所述法线贴图进行光照处理,得到光照处理后的法线贴图。
在本公开实施例中,预设光照方向可以是从预设方向照射的、平行的自然光。这里,当法线贴图中某个点的法线方向与预设光照方向的点乘为大于0且小于1的时候,该点是受光的;当法线贴图中某个点的法线方向与预设光照方向的点乘为1的时候,该点是 正对光源的,该点是最亮的;当法线贴图中某个点的法线方向与预设光照方向的点乘为大于-1且小于0的时候,该点是被光的;当法线贴图中某个点的法线方向与预设光照方向的点乘为-1的时候,该点是正背对光源的,该点是最暗的。光照处理后的法线贴图中不仅可以包含每个点的法线方向,还可以包含每个点受到的光照信息。利用光照处理后的法线贴图对目标三维模型进行渲染后,可以使得目标三维模型上的每个点呈现出受光或背光的视觉效果。
S103:将所述光照处理后的法线贴图、所述颜色贴图以及所述笔刷贴图进行第一采样处理,得到第一采样图层。
这里,将光照处理后的法线贴图、颜色贴图以及笔刷贴图进行第一采样处理的过程可以为:将光照处理后的法线贴图中包含的每个点的法线方向和光照信息、颜色贴图中包含的每个点的颜色信息以及笔刷贴图中包含的笔刷绘制特点信息进行融合。得到的第一采样图层中可包含有上述信息融合之后的第一渲染信息。利用第一采样图层对目标三维模型渲染后,可以使得渲染后的目标三维模型能够呈现出水彩风格中由笔刷绘制的颜色渲染效果。
目标三维模型的表面有多个面片构成。在一种实施方式中,对光照处理后的法线贴图、颜色贴图以及笔刷贴图进行第一采样处理的时候,可以将目标三维模型的各个面片的颜色贴图分别与光照处理后的法线贴图及笔刷贴图进行融合处理,得到各个面片分别对应的子采样图层。然后,对各个面片分别对应的子采样图层进行整合,得到第一采样图层。
其中,每个面片对应的子颜色贴图中包含该面片对应虚拟对象中相应位置的颜色信息。每个面片分别对应的子采样图层中可以包含该面片的光照处理后的法线贴图、颜色贴图以及笔刷贴图进行第一采样处理融合后的第一渲染信息。然后按照各个面片分别在目标三维模型中的位置,对各个面片对应的子采样图层进行整合,得到第一采样图层。
S104:响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述第一采样图层进行轮廓描边处理,得到第二采样图层。
这里可以响应于在游戏引擎的操作界面上输入的轮廓线信息。轮廓线信息可以包括目标三维模型的外轮廓线与内轮廓线。其中外轮廓线可以是目标三维模型与背景、或目标三维模型与其他三维模型之间的分界线。内轮廓线可以是位于目标三维模型中的、不同部位之间的分界线,例如人物三维模型的上衣与下衣之间的分界线等。
通过对第一采样图层进行轮廓描边处理,得到的第二采样图层中可以包含第一采样图层中的第一渲染信息和轮廓线信息融合之后的第二渲染信息。利用第二采样图层对目标三维模型进行渲染后,可以使得目标三维模型呈现出水彩风格中轮廓分明的渲染效果。
S105:基于所述第二采样图层,对所述目标三维模型进行渲染处理,得到渲染后的目标三维模型。
在对目标三维模型进行渲染的时候,可以利用第二采样图层中包含的第二渲染信息直接进行渲染,得到的目标三维模型中融合了颜色贴图中存储的目标三维模型的各个点对应的颜色信息、法线贴图中存储的各个点的法线方向、预设光照方向信息以及笔刷贴图中存储的笔刷绘制特点信息,从而使得渲染后的目标三维模型可以呈现出水彩风格的渲染效果。
为了使得目标三维模型的水彩风格渲染效果更加生动,在一种实施方式中,在得到第一采样图层之后,对第一采样图层进行轮廓描边处理之前,可以响应于输入的笔刷绘制痕迹信息,基于笔刷绘制痕迹信息,对第一采样图层进行笔刷痕迹处理,得到笔刷痕迹处理后的第一采样图层。然后响应于输入的目标三维模型的轮廓线信息,基于轮廓线信息,对笔刷痕迹处理后的第一采样图层进行轮廓描边处理,得到第二采样图层。
这里可以响应于在游戏引擎的操作界面上输入的笔刷绘制痕迹信息。笔刷绘制痕迹信息可以增加笔刷的痕迹方向、笔刷痕迹的质感效果、高光效果等,因此可以使得渲染后的目标三维模型的笔刷绘制效果更加真实。因此,在又一方式中,笔刷绘制痕迹信息可以包括笔刷质感强度信息、笔刷横向扭曲信息、笔刷纵向扭曲信息、高光范围信息中的至少一种。
在具体实施中,可以基于笔刷质感强度信息,调节目标三维模型中笔刷痕迹的质感强度。利用笔刷质感强度信息对第一采样图层进行笔刷痕迹处理,得到的笔刷痕迹处理后的第一采样图层中融合了笔刷质感强度信息。在利用笔刷痕迹处理后的第一采样图层对目标三维模型进行渲染后,目标三维模型中笔刷痕迹的质感更强,也就是使得目标三维模型呈现出笔刷痕迹更强的水彩绘制效果。
在具体实施中,也可以基于笔刷横向扭曲信息和/或笔刷纵向扭曲信息,调节目标三维模型中笔刷痕迹的横向扭曲方向和/或纵向扭曲方向。利用笔刷横向扭曲信息和/或笔刷纵向扭曲信息,对第一采样图层进行笔刷痕迹处理,得到的笔刷痕迹处理后的第一采样图层中融合了笔刷横向扭曲信息和/或笔刷纵向扭曲信息。在利用笔刷痕迹处理后的第一采样图层对目标三维模型进行渲染后,目标三维模型中笔刷痕迹的扭曲方向更加明显,也就是使得目标三维模型呈现出笔刷痕迹方向更加明显的水彩绘制效果。
在具体实施中,也可以基于高光范围信息,调节目标三维模型中笔刷痕迹的高光范围。利用高光范围信息,对第一采样图层进行笔刷痕迹处理,得到的笔刷痕迹处理后的第一采样图层中融合了高光范围。在利用笔刷痕迹处理后的第一采样图层对目标三维模型进行渲染后,目标三维模型中笔刷痕迹的高光范围更加明显,也就是使得目标三维模型呈现出笔刷痕迹高光范围更加突出的水彩绘制效果。
在具体实施过程中,也可以利用上述笔刷绘制痕迹信息中的多种对第一采样图层进行笔刷痕迹处理,从而更能增加目标三维模型的水彩绘制效果,增强水彩绘制效果的真实性和生动性。
为了增强目标三维模型的水彩绘制效果的生动性,还可以对笔刷的渲染位置和渲染次数等进行随机处理,使得渲染位置和渲染次数等更加有随机性。示例性地,在一种实施方式中,纹理贴图中还可以包括反映笔刷噪波的噪波贴图。在得到第一采样图层之后,且在对第一采样图层进行轮廓描边处理之前,可以基于噪波贴图中的笔刷噪波信息,对第一采样图层进行笔刷噪波处理,得到笔刷噪波处理后的第一采样图层。然后响应于输入的目标三维模型的轮廓线信息,基于轮廓线信息,对笔刷噪波处理后的第一采样图层进行轮廓描边处理,得到第二采样图层。
这里,噪波贴图中可以包括渲染位置和渲染次数等信息对应的噪波的笔刷噪波信息。如图3所示的一种噪波贴图的效果示意图中,可以包括随机分布的噪波纹理和噪波位置。
利用噪波贴图中的笔刷噪波信息,对第一采样图层进行笔刷噪波处理后,目标三维模型上的笔刷的渲染位置和渲染次数等信息可以随机分布,从而增强水彩绘制效果的真实性和生动性。利用轮廓线信息对笔刷噪波处理后的第一采样图层进行轮廓描边处理,得到第二采样图层的过程可以参照S105的步骤,这里不再赘述。
为了增强目标三维模型的水彩绘制效果的立体感,在一种实施方式中,纹理贴图中还包括反映目标三维模型中受光区域的颜色变化特点的色阶贴图。在得到第一采样图层之后,并且对第一采样图层进行轮廓描边处理之前,可以基于色阶贴图中的色阶信息,对第一采样图层进行色阶处理,得到色阶处理后的第一采样图层;色阶信息中包括各个色阶分别对应的颜色信息、各个色阶在目标三维模型中受光区域的占比信息和各个色阶中相邻色阶的颜色融合信息。然后,响应于输入的目标三维模型的轮廓线信息,基于轮廓线信息,对色阶处理后的第一采样图层进行轮廓描边处理,得到第二采样图层。
这里,色阶贴图可以是通过绘图软件制作的反映目标三维模型中受光区域的颜色变化特点的贴图。色阶贴图中可以包含多个色阶。色阶贴图中的各个色阶可以按照颜色深浅变化的顺序进行排列。如图4所示的一种色阶贴图的效果示意图,可以包含4个色阶,且4个色阶按照由深到浅的顺序从左到右排列。并且每个色阶在整个色阶贴图中的占比信息可以是不同的。相邻的两个色阶在临界位置的颜色值可以是将这两个色阶的颜色值进行融合之后的颜色融合值。
利用色阶贴图中的色阶信息,对第一采样图层进行色阶处理,得到的色阶处理后的第一采样图层中融合了色阶贴图中的色阶信息。在利用色阶处理后的第一采样图层对目标三维模型进行渲染后,目标三维模型上受光区域的颜色可以呈现出渐变的效果,从而可以避免从阴影区域到受光区域之间颜色发生跳变的情况,从而增加目标三维模型的立体视觉效果和真实性。利用轮廓线信息对色阶处理后的的第一采样图层进行色阶处理,得到第二采样图层的过程可以参照S105的步骤,这里不再赘述。
考虑到在一些情况下可以只对目标三维模型中的目标区域进行色阶处理,例如对虚拟人物的腿部进行色阶处理。因此,这里可以获取可以反映目标三维模型中待进行色阶渲染的区域的通道贴图,通过通道贴图中存储的待进行色阶渲染的区域信息,对待进行色阶渲染的区域信息所指示的区域进行色阶渲染。
在具体实施方式中,可以基于通道贴图,确定第一采样图层中与待进行色阶渲染的区域对应的局部采样图层。然后,基于所述色阶贴图中的色阶信息,对所述局部采样图层进行色阶处理,得到色阶处理后的局部采样图层。然后响应于输入的目标三维模型的轮廓线信息,基于轮廓线信息,对色阶处理后的局部采样图层进行轮廓描边处理,得到第二采样图层。
这里,局部采样图层中可以包括待进行色阶渲染的区域信息以及区域信息所指示的区域对应的第一渲染信息。
通过使用通道贴图,可以对待进行色阶渲染的区域进行色阶渲染,从而使得待进行色阶渲染的区域呈现出颜色渐变的效果,增加目标三维模型的立体视觉效果和真实性。利用轮廓线信息对色阶处理后的的局部采样图层进行轮廓描边处理,得到第二采样图层的过程可以参照S105的步骤,这里不再赘述。
在具体实施中,在对目标三维模型进行渲染的过程中所使用的色阶贴图是对受光区域进行色阶处理的,因此色阶贴图中的颜色信息与目标三维模型的预设光照方向、法线贴图中存储的法线方向和笔刷贴图有关。因此,在一种实施方式中,各个色阶分别对应的颜色信息可以是通过以下步骤得到的:首先,将光照处理后的法线贴图和笔刷贴图进行第二采样处理,得到第三采样图层;然后,基于色阶贴图中的第N阶颜色信息,以及第三采样图层,得到第N+1阶颜色信息;其中,N为大于或等于1的正整数;第1阶颜色信息是根据所述第三采样图层中的信息确定的。
这里,第三采样图层中可以包含由光照处理后的法线贴图中包含的光照信息与法线方向的融合信息、以及笔刷贴图中包含的笔刷绘制特点信息进行融合之后的第三渲染信息。
色阶贴图中包含多个色阶,其中,第1阶颜色信息是根据所述第三采样图层中的信息确定的。在确定出第1阶颜色信息之后,可以基于第1阶颜色信息和第三采样图层中的信息,确定第2阶颜色信息。在确定出第2阶颜色信息之后,可以基于第2阶颜色信息和第三采样图层中的信息,确定第3阶颜色信息。以此类推,直至确定出色阶贴图中各个色阶的颜色信息。
在一种实施中,还可以将光照处理后的法线贴图、笔刷贴图、噪波贴图、笔刷绘制痕迹信息中的笔刷质感强度信息进行第二采样处理,得到第三采样图层。然后将第三采样图层对应的R通道和G通道作为色阶贴图的UV坐标值,得到色阶贴图中每个色阶的颜色信息。
在本公开实施例中,可以通过用于进行水彩渲染的算法对获取的纹理贴图进行处理,得到用于对目标三维模型进行渲染的渲染信息。这里使用的算法主要包括光照着色算法和轮廓线算法。
在具体实施中,首先可以获取目标三维模型的法线贴图,然后利用光照着色算法,基于预设光照方向,对法线贴图进行光照处理,得到光照处理后的法线贴图。
然后,在光照着色算法中结合颜色贴图,对颜色贴图、预设光照方向、法线贴图进行融合处理,得到第一图层。利用第一图层对目标三维模型进行渲染。
然后,在光照着色算法中结合笔刷贴图、噪波贴图、笔刷绘制痕迹信息中包括的笔刷质感强度信息,对第一图层、笔刷贴图、噪波贴图、笔刷绘制痕迹信息中包括的笔刷质感强度信息进行融合处理,得到第二图层。然后利用第二图层中包含的R通道和G通道下的信息值作为UV坐标值得到色阶贴图。
其中,根据第二图层中包含的渲染信息可以得到色阶贴图中的第1阶颜色信息,然后利用第1阶颜色信息、第二图层中包含的渲染信息以及笔刷绘制痕迹信息中包括的笔刷横向扭曲信息和笔刷纵向扭曲信息,得到色阶贴图中的第2阶颜色信息,以此类推,可以得到色阶贴图中各个色阶的颜色信息。
接下来,在光照着色算法中结合笔刷绘制痕迹信息中包括的高光范围信息、通道贴图中R通道下的区域信息,对预设光照方向、法线贴图、笔刷绘制痕迹信息中包括的高光范围信息、通道贴图中R通道下的区域信息进行融合处理,得到第三图层。
此外,光照着色算法中还可以结合色阶贴图和颜色贴图,对色阶贴图和颜色贴图进行融合,得到第四图层。
接下来,利用光照着色算法对第三图层和第四图层进行融合处理,得到第五图层。
最后,获取目标三维模型对应的轮廓线,利用轮廓线算法对第五图层进行轮廓描边处理,得到第六图层,然后利用第六图层对目标三维模型进行渲染,得到渲染后的目标三维模型,如图5所示的一种目标三维模型的效果示意图中的桃子模型,该桃子模型可以是由轮廓线,以及轮廓线所围成的模型中利用颜色贴图、法线贴图、笔刷贴图、噪波贴图、色阶贴图等融合之后的信息所渲染得到的水彩模型,其中叶子具有第一颜色,果实具有第二颜色,第一颜色与第二颜色可以不同。
本领域技术人员可以理解,在具体实施方式的上述方法中,各步骤的撰写顺序并不意味着严格的执行顺序而对实施过程构成任何限定,各步骤的具体执行顺序应当以其功能和可能的内在逻辑确定。
基于同一发明构思,本公开实施例中还提供了与图形处理方法对应的图形处理装置,由于本公开实施例中的装置解决问题的原理与本公开实施例上述图形处理方法相似,因此装置的实施可以参见方法的实施,重复之处不再赘述。
参照图6所示,为本公开实施例提供的一种图形处理装置的架构示意图,所述装置包括:获取模块601、第一处理模块602、第二处理模块603、第三处理模块604、第四处理模块605;其中,
获取模块601,用于获取用于对目标三维模型进行渲染的纹理贴图;所述纹理贴图包括所述目标三维模型对应的颜色贴图、反映所述目标三维模型中每个点的法线方向的法线贴图、以及反映笔刷绘制特点的笔刷贴图;
第一处理模块602,用于基于预设光照方向信息,对所述法线贴图进行光照处理,得到光照处理后的法线贴图;
第二处理模块603,用于将所述光照处理后的法线贴图、所述颜色贴图以及所述笔刷贴图进行第一采样处理,得到第一采样图层;
第三处理模块604,用于响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述第一采样图层进行轮廓描边处理,得到第二采样图层;
第四处理模块605,用于基于所述第二采样图层,对所述目标三维模型进行渲染处理,得到渲染后的目标三维模型。
一种可选的实施方式中,所述装置还包括:
第五处理模块,用于响应于输入的笔刷绘制痕迹信息,基于所述笔刷绘制痕迹信息,对所述第一采样图层进行笔刷痕迹处理,得到笔刷痕迹处理后的第一采样图层;
第三处理模块604,具体用于响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述笔刷痕迹处理后的第一采样图层进行轮廓描边处理,得到所述第二采样图层。
一种可选的实施方式中,所述笔刷绘制痕迹信息包括笔刷质感强度信息、笔刷横向扭曲信息、笔刷纵向扭曲信息、高光范围信息中的至少一种。
一种可选的实施方式中,所述纹理贴图还包括反映笔刷噪波的噪波贴图;
所述装置还包括:
第六处理模块,用于基于所述噪波贴图中的笔刷噪波信息,对所述第一采样图层进行笔刷噪波处理,得到笔刷噪波处理后的第一采样图层;
第三处理模块604,具体用于响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述笔刷噪波处理后的第一采样图层进行轮廓描边处理,得到所述第二采样图层。
一种可选的实施方式中,所述纹理贴图中还包括反映所述目标三维模型中受光区域的颜色变化特点的色阶贴图;
所述装置还包括:
第七处理模块,用于基于所述色阶贴图中的色阶信息,对所述第一采样图层进行色阶处理,得到色阶处理后的第一采样图层;所述色阶信息中包括各个色阶分别对应的颜色信息、所述各个色阶在所述目标三维模型中受光区域的占比信息和所述各个色阶中相邻色阶的颜色融合信息;
第三处理模块604,具体用于响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述色阶处理后的第一采样图层进行轮廓描边处理,得到所述第二采样图层。
一种可选的实施方式中,所述纹理贴图还包括反映所述目标三维模型中待进行色阶渲染的区域的通道贴图;
第七处理模块,具体用于基于所述通道贴图,确定所述第一采样图层中与所述待进行色阶渲染的区域对应的局部采样图层;
基于所述色阶贴图中的色阶信息,对所述局部采样图层进行色阶处理,得到色阶处理后的局部采样图层;
第三处理模块604,具体用于响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述色阶处理后的局部采样图层进行轮廓描边处理,得到所述第二采样图层。
一种可选的实施方式中,所述各个色阶分别对应的颜色信息是通过以下步骤确定的:
将所述光照处理后的法线贴图和所述笔刷贴图进行第二采样处理,得到第三采样图层;
基于所述色阶贴图中的第N阶颜色信息,以及所述第三采样图层,得到第N+1阶颜色信息;其中,N为大于或等于1的正整数;第1阶颜色信息是根据所述第三采样图层中的信息确定的。
一种可选的实施方式中,第二处理模块603,用于将所述目标三维模型的各个面片的所述颜色贴图分别与所述光照处理后的法线贴图及所述笔刷贴图进行融合处理,得到所述各个面片分别对应的子采样图层;
对所述各个面片分别对应的所述子采样图层进行整合,得到所述第一采样图层。
关于装置中的各模块的处理流程、以及各模块之间的交互流程的描述可以参照上述方法实施例中的相关说明,这里不再详述。
基于同一技术构思,本公开实施例还提供了一种计算机设备。参照图7所示,为本公开实施例提供的计算机设备700的结构示意图,包括处理器701、存储器702和总线703。其中,存储器702用于存储执行指令,包括内存7021和外部存储器7022;这里的内存7021也称内存储器,用于暂时存放处理器701中的运算数据,以及与硬盘等外部存储器7022交换的数据,处理器701通过内存7021与外部存储器7022进行数据交换,当 计算机设备700运行时,处理器701与存储器702之间通过总线703通信,使得处理器701执行以下指令:
获取用于对目标三维模型进行渲染的纹理贴图;所述纹理贴图包括所述目标三维模型对应的颜色贴图、反映所述目标三维模型中每个点的法线方向的法线贴图、以及反映笔刷绘制特点的笔刷贴图;
基于预设光照方向信息,对所述法线贴图进行光照处理,得到光照处理后的法线贴图;
将所述光照处理后的法线贴图、所述颜色贴图以及所述笔刷贴图进行第一采样处理,得到第一采样图层;
响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述第一采样图层进行轮廓描边处理,得到第二采样图层;
基于所述第二采样图层,对所述目标三维模型进行渲染处理,得到渲染后的目标三维模型。
本公开实施例还提供一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机程序,所述计算机程序被处理器运行时执行上述方法实施例中所述的图形处理方法的步骤。其中,所述存储介质可以是易失性或非易失的计算机可读取存储介质。
本公开实施例还提供一种计算机程序,所述计算机程序被处理器运行时执行上述方法实施例中所述的图形处理方法的步骤。
本公开实施例还提供一种计算机程序产品,所述计算机产品承载有程序代码,所述程序代码包括的指令可用于执行上述方法实施例中所述的图形处理方法的步骤,具体可参见上述方法实施例,在此不再赘述。
其中,上述计算机程序产品可以具体通过硬件、软件或其结合的方式实现。在一个可选实施例中,所述计算机程序产品具体体现为计算机存储介质,在另一个可选实施例中,计算机程序产品具体体现为软件产品,例如软件开发包(Software Development Kit,SDK)等等。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统和装置的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。在本公开所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,又例如,多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些通信接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本公开各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个处理器可执行的非易失的计算机可读取存储介质中。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
最后应说明的是:以上所述实施例,仅为本公开的具体实施方式,用以说明本公开的技术方案,而非对其限制,本公开的保护范围并不局限于此,尽管参照前述实施例对本公开进行了详细的说明,本领域的普通技术人员应当理解:任何熟悉本技术领域的技术人员在本公开揭露的技术范围内,其依然可以对前述实施例所记载的技术方案进行修改或可轻易想到变化,或者对其中部分技术特征进行等同替换;而这些修改、变化或者替换,并不使相应技术方案的本质脱离本公开实施例技术方案的精神和范围,都应涵盖在本公开的保护范围之内。因此,本公开的保护范围应以所述权利要求的保护范围为准。

Claims (13)

  1. 一种图形处理方法,包括:
    获取用于对目标三维模型进行渲染的纹理贴图;所述纹理贴图包括所述目标三维模型对应的颜色贴图、反映所述目标三维模型中每个点的法线方向的法线贴图、以及反映笔刷绘制特点的笔刷贴图;
    基于预设光照方向信息,对所述法线贴图进行光照处理,得到光照处理后的法线贴图;
    将所述光照处理后的法线贴图、所述颜色贴图以及所述笔刷贴图进行第一采样处理,得到第一采样图层;
    响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述第一采样图层进行轮廓描边处理,得到第二采样图层;
    基于所述第二采样图层,对所述目标三维模型进行渲染处理,得到渲染后的目标三维模型。
  2. 根据权利要求1所述的方法,其中,在得到所述第一采样图层之后,对所述第一采样图层进行轮廓描边处理之前,所述方法还包括:
    响应于输入的笔刷绘制痕迹信息,基于所述笔刷绘制痕迹信息,对所述第一采样图层进行笔刷痕迹处理,得到笔刷痕迹处理后的第一采样图层;
    所述响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述第一采样图层进行轮廓描边处理,得到第二采样图层,包括:
    响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述笔刷痕迹处理后的第一采样图层进行轮廓描边处理,得到所述第二采样图层。
  3. 根据权利要求2所述的方法,其中,所述笔刷绘制痕迹信息包括笔刷质感强度信息、笔刷横向扭曲信息、笔刷纵向扭曲信息、高光范围信息中的至少一种。
  4. 根据权利要求1所述的方法,其中,所述纹理贴图还包括反映笔刷噪波的噪波贴图;
    在得到所述第一采样图层之后,对所述第一采样图层进行轮廓描边处理之前,所述方法还包括:
    基于所述噪波贴图中的笔刷噪波信息,对所述第一采样图层进行笔刷噪波处理,得到笔刷噪波处理后的第一采样图层;
    所述响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述第一采样图层进行轮廓描边处理,得到第二采样图层,包括:
    响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述笔刷噪波处理后的第一采样图层进行轮廓描边处理,得到所述第二采样图层。
  5. 根据权利要求1所述的方法,其中,所述纹理贴图中还包括反映所述目标三维模型中受光区域的颜色变化特点的色阶贴图;
    在得到所述第一采样图层之后,对所述第一采样图层进行轮廓描边处理之前,所述方法还包括:
    基于所述色阶贴图中的色阶信息,对所述第一采样图层进行色阶处理,得到色阶处理后的第一采样图层;所述色阶信息中包括各个色阶分别对应的颜色信息、所述各个色 阶在所述目标三维模型中受光区域的占比信息和所述各个色阶中相邻色阶的颜色融合信息;
    所述响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述第一采样图层进行轮廓描边处理,得到第二采样图层,包括:
    响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述色阶处理后的第一采样图层进行轮廓描边处理,得到所述第二采样图层。
  6. 根据权利要求5所述的方法,其中,所述纹理贴图还包括反映所述目标三维模型中待进行色阶渲染的区域的通道贴图;
    所述基于所述色阶贴图中的色阶信息,对所述第一采样图层进行色阶处理,得到色阶处理后的第一采样图层,包括:
    基于所述通道贴图,确定所述第一采样图层中与所述待进行色阶渲染的区域对应的局部采样图层;
    基于所述色阶贴图中的色阶信息,对所述局部采样图层进行色阶处理,得到色阶处理后的局部采样图层;
    所述响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述第一采样图层进行轮廓描边处理,得到第二采样图层,包括:
    响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓线信息,对所述色阶处理后的局部采样图层进行轮廓描边处理,得到所述第二采样图层。
  7. 根据权利要求5或6所述的方法,其中,所述各个色阶分别对应的颜色信息是通过以下步骤确定的:
    将所述光照处理后的法线贴图和所述笔刷贴图进行第二采样处理,得到第三采样图层;
    基于所述色阶贴图中的第N阶颜色信息,以及所述第三采样图层,得到第N+1阶颜色信息;其中,N为大于或等于1的正整数;第1阶颜色信息是根据所述第三采样图层中的信息确定的。
  8. 根据权利要求1至7中任一项所述的方法,其中,所述将所述光照处理后的法线贴图、所述颜色贴图以及所述笔刷贴图进行第一采样处理,得到第一采样图层,包括:
    将所述目标三维模型的各个面片的所述颜色贴图分别与所述光照处理后的法线贴图及所述笔刷贴图进行融合处理,得到所述各个面片分别对应的子采样图层;
    对所述各个面片分别对应的所述子采样图层进行整合,得到所述第一采样图层。
  9. 一种图形处理装置,包括:
    获取模块,用于获取用于对目标三维模型进行渲染的纹理贴图;所述纹理贴图包括所述目标三维模型对应的颜色贴图、反映所述目标三维模型中每个点的法线方向的法线贴图、以及反映笔刷绘制特点的笔刷贴图;
    第一处理模块,用于基于预设光照方向信息,对所述法线贴图进行光照处理,得到光照处理后的法线贴图;
    第二处理模块,用于将所述光照处理后的法线贴图、所述颜色贴图以及所述笔刷贴图进行第一采样处理,得到第一采样图层;
    第三处理模块,用于响应于输入的所述目标三维模型的轮廓线信息,基于所述轮廓 线信息,对所述第一采样图层进行轮廓描边处理,得到第二采样图层;
    第四处理模块,用于基于所述第二采样图层,对所述目标三维模型进行渲染处理,得到渲染后的目标三维模型。
  10. 一种计算机设备,包括:处理器、存储器和总线,所述存储器存储有所述处理器可执行的机器可读指令,当所述计算机设备运行时,所述处理器与所述存储器之间通过总线通信,所述机器可读指令被所述处理器执行时执行如权利要求1至8中任一项所述的图形处理方法的步骤。
  11. 一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机程序,所述计算机程序被处理器运行时执行如权利要求1至8中任一项所述的图形处理方法的步骤。
  12. 一种计算机程序,所述计算机程序被处理器运行时执行如权利要求1至8中任一项所述的图形处理方法的步骤。
  13. 一种计算机程序产品,所述计算机程序产品包括计算机程序,所述计算机程序被处理器运行时执行如权利要求1至8中任一项所述的图形处理方法的步骤。
PCT/CN2022/127456 2021-12-05 2022-10-25 一种图形处理方法、装置、计算机设备及存储介质 WO2023098344A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111471456.2A CN114119847B (zh) 2021-12-05 2021-12-05 一种图形处理方法、装置、计算机设备及存储介质
CN202111471456.2 2021-12-05

Publications (1)

Publication Number Publication Date
WO2023098344A1 true WO2023098344A1 (zh) 2023-06-08

Family

ID=80366484

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/127456 WO2023098344A1 (zh) 2021-12-05 2022-10-25 一种图形处理方法、装置、计算机设备及存储介质

Country Status (2)

Country Link
CN (1) CN114119847B (zh)
WO (1) WO2023098344A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114119847B (zh) * 2021-12-05 2023-11-07 北京字跳网络技术有限公司 一种图形处理方法、装置、计算机设备及存储介质
CN114596400B (zh) * 2022-05-09 2022-08-02 山东捷瑞数字科技股份有限公司 一种基于三维引擎批量生成法线贴图的方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104966312A (zh) * 2014-06-10 2015-10-07 腾讯科技(深圳)有限公司 一种3d模型的渲染方法、装置及终端设备
CN111127596A (zh) * 2019-11-29 2020-05-08 长安大学 一种基于增量Voronoi序列的分层油画笔刷绘制方法
CN112051959A (zh) * 2020-09-02 2020-12-08 北京字节跳动网络技术有限公司 一种图像绘制过程的生成方法、装置、设备及存储介质
CN112070854A (zh) * 2020-09-02 2020-12-11 北京字节跳动网络技术有限公司 一种图像生成方法、装置、设备及存储介质
CN114119847A (zh) * 2021-12-05 2022-03-01 北京字跳网络技术有限公司 一种图形处理方法、装置、计算机设备及存储介质

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685869B (zh) * 2018-12-25 2023-04-07 网易(杭州)网络有限公司 虚拟模型渲染方法与装置、存储介质、电子设备
CN109993822B (zh) * 2019-04-10 2023-02-21 创新先进技术有限公司 一种水墨风格渲染方法和装置
CN111402381B (zh) * 2020-03-17 2023-11-21 网易(杭州)网络有限公司 模型渲染方法与装置、可读存储介质
CN112116692B (zh) * 2020-08-28 2024-05-10 北京完美赤金科技有限公司 模型渲染方法、装置、设备
CN112967363A (zh) * 2021-02-24 2021-06-15 北京盛世顺景文化传媒有限公司 一种8k三维水墨动画制作方法
CN113064540B (zh) * 2021-03-23 2022-11-01 网易(杭州)网络有限公司 基于游戏的绘制方法、绘制装置、电子设备及存储介质
CN113012185B (zh) * 2021-03-26 2023-08-29 影石创新科技股份有限公司 图像处理方法、装置、计算机设备和存储介质
CN113240783B (zh) * 2021-05-27 2023-06-27 网易(杭州)网络有限公司 风格化渲染方法和装置、可读存储介质、电子设备

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104966312A (zh) * 2014-06-10 2015-10-07 腾讯科技(深圳)有限公司 一种3d模型的渲染方法、装置及终端设备
CN111127596A (zh) * 2019-11-29 2020-05-08 长安大学 一种基于增量Voronoi序列的分层油画笔刷绘制方法
CN112051959A (zh) * 2020-09-02 2020-12-08 北京字节跳动网络技术有限公司 一种图像绘制过程的生成方法、装置、设备及存储介质
CN112070854A (zh) * 2020-09-02 2020-12-11 北京字节跳动网络技术有限公司 一种图像生成方法、装置、设备及存储介质
CN114119847A (zh) * 2021-12-05 2022-03-01 北京字跳网络技术有限公司 一种图形处理方法、装置、计算机设备及存储介质

Also Published As

Publication number Publication date
CN114119847B (zh) 2023-11-07
CN114119847A (zh) 2022-03-01

Similar Documents

Publication Publication Date Title
WO2023098344A1 (zh) 一种图形处理方法、装置、计算机设备及存储介质
JP7386153B2 (ja) 照明をシミュレートするレンダリング方法及び端末
CN112215934B (zh) 游戏模型的渲染方法、装置、存储介质及电子装置
Bauszat et al. Guided image filtering for interactive high‐quality global illumination
US20200151938A1 (en) Generating stylized-stroke images from source images utilizing style-transfer-neural networks with non-photorealistic-rendering
McGuire et al. Weighted blended order-independent transparency
CN104392479B (zh) 一种利用灯光索引号对像素进行光照着色的方法
WO2023098358A1 (zh) 一种模型渲染方法、装置、计算机设备及存储介质
CN109035381B (zh) 基于ue4平台的卡通画头发渲染方法、存储介质
CN109844819A (zh) 用于动态遮挡处置的系统和方法
CN109741438B (zh) 三维人脸建模方法、装置、设备及介质
CN109087369A (zh) 虚拟对象显示方法、装置、电子装置及存储介质
CN113658316B (zh) 三维模型的渲染方法和装置、存储介质及计算机设备
CN109712226A (zh) 虚拟现实的透明模型渲染方法及装置
McGuire et al. Phenomenological transparency
US20220245889A1 (en) Systems and methods of texture super sampling for low-rate shading
KR20060108271A (ko) 디지털 패션 디자인용 실사 기반 가상 드레이핑 시뮬레이션방법
Lopez-Moreno et al. Non-photorealistic, depth-based image editing
Eicke et al. Stable dynamic webshadows in the X3DOM framework
CN116485981A (zh) 三维模型贴图制作方法、装置、设备及存储介质
CN115063330A (zh) 头发渲染方法、装置、电子设备及存储介质
CN113936080A (zh) 虚拟模型的渲染方法和装置、存储介质及电子设备
Ogaki et al. Production ray tracing of feature lines
CN113838155A (zh) 材质贴图的生成方法、装置和电子设备
US11087523B2 (en) Production ray tracing of feature lines

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22900149

Country of ref document: EP

Kind code of ref document: A1