CN113240783B - Stylized rendering method and device, readable storage medium and electronic equipment - Google Patents


Info

Publication number
CN113240783B
CN113240783B (application CN202110586491.2A)
Authority
CN
China
Prior art keywords
image
rendered
rendering
model
normal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110586491.2A
Other languages
Chinese (zh)
Other versions
CN113240783A (en)
Inventor
马克思米兰·罗兹勒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202110586491.2A priority Critical patent/CN113240783B/en
Publication of CN113240783A publication Critical patent/CN113240783A/en
Application granted granted Critical
Publication of CN113240783B publication Critical patent/CN113240783B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/02: Non-photorealistic rendering
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure relates to a stylized rendering method and device, a readable storage medium, and an electronic device, in the technical field of image rendering. The method includes the following steps: acquiring a rendered image corresponding to a model to be rendered, processing the rendered image according to a preset ratio to obtain a first rendered image, and applying an image filter to the first rendered image to obtain a second rendered image; establishing a normal channel corresponding to the model to be rendered, and obtaining a normal image corresponding to the model to be rendered through the normal channel; generating, based on the normal image, a curvature image corresponding to the normal image and the contour of the normal image, and obtaining a stylized rendering result of the model to be rendered from the second rendered image, the curvature image, and the contour of the normal image. The method and device improve the efficiency of stylizing the model to be rendered.

Description

Stylized rendering method and device, readable storage medium and electronic equipment
Technical Field
Embodiments of the invention relate to the technical field of image rendering, and in particular to a stylized rendering method, a stylized rendering device, a readable storage medium, and an electronic device.
Background
Visual art styles in real-time-rendered games can be divided into realistic and stylized: realistic games render according to physically correct illumination and materials, whereas stylized games focus more on a hand-painted feel.
In the prior art, stylized visual effects have mainly relied on simple shading techniques, including: unlit shading, which uses no pre-rendered or real-time illumination and relies entirely on the artist to define assets and styles by painting in shadows and details; toon shading, which relies on real-time illumination and creates a more recognizable look through simplified, flattened lighting feedback; and hatching, which modifies shadows and shading gradients to imitate a hand-drawn pencil sketch.
Although existing stylized art assets and shading techniques can achieve stylization, on the one hand stylized art assets ignore consistency with real-time illumination, so game characters fail to blend into game scenes; on the other hand, unlit shading only suits games with fixed camera angles, toon shading discards texture and surface detail in shading, and hatching is rarely used in games. Furthermore, toon shading and hatching typically operate on a single model, so any modification to illumination or shading takes effect only within that model's boundaries; and because the result of toon shading looks computer-generated, watercolor and oil-painting styles cannot be achieved with it.
Accordingly, there is a need to provide a new stylized rendering method.
It should be noted that the information in the Background section above is provided only to enhance understanding of the background of the invention, and may therefore include information that does not constitute prior art already known to a person of ordinary skill in the art.
Disclosure of Invention
An object of the present disclosure is to provide a stylized rendering method, a stylized rendering device, a readable storage medium, and an electronic device, thereby overcoming, at least to some extent, the problems caused by the limitations and drawbacks of the related art, namely that stylized effects can only be realized inside a model and that the resulting stylization appears unnatural.
According to one aspect of the present disclosure, there is provided a stylized rendering method including:
acquiring a rendered image corresponding to a model to be rendered, processing the rendered image according to a preset ratio to obtain a first rendered image, and applying an image filter to the first rendered image to obtain a second rendered image;
establishing a normal channel corresponding to the model to be rendered, and obtaining a normal image corresponding to the model to be rendered through the normal channel;
generating, based on the normal image, a curvature image corresponding to the normal image and the contour of the normal image, and obtaining a stylized rendering result of the model to be rendered from the second rendered image, the curvature image, and the contour of the normal image.
In an exemplary embodiment of the present disclosure, the stylized rendering method further includes:
obtaining a physically based rendering (PBR) shading result of the model to be rendered, wherein the PBR shading result comprises one or more of: an albedo (reflectivity) map, a metalness map, a roughness map, and a normal map of the model to be rendered;
blending the PBR shading result with an unlit diffuse shading result according to a preset shading-amount parameter, so as to reduce the amount of shading applied to the model to be rendered, wherein the unlit diffuse shading result is the color parameter corresponding to the albedo map of the model to be rendered.
In an exemplary embodiment of the present disclosure, blending the PBR shading result with the unlit diffuse shading result according to the preset shading-amount parameter includes:
blending the PBR shading result with the color parameters included in the albedo map of the model to be rendered, according to the preset shading-amount parameter.
In an exemplary embodiment of the present disclosure, acquiring a rendered image corresponding to a model to be rendered and processing the rendered image according to a preset ratio to obtain a first rendered image includes:
rendering the model to be rendered to obtain a rendered image corresponding to the model to be rendered;
and scaling the resolution of the rendered image according to the preset ratio to obtain the first rendered image of the model to be rendered.
In an exemplary embodiment of the present disclosure, the normal image corresponding to the model to be rendered is a camera-space normal image or a world-space normal image;
when the normal image corresponding to the model to be rendered is a camera-space normal image, obtaining the normal image through the normal channel includes:
converting the coordinates of the model to be rendered from object space to clip space through the normal channel, converting the normals of the model to be rendered from model space to world space, and converting the normals from world space to camera space, thereby obtaining the normal image corresponding to the model to be rendered.
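The patent contains no code, but the chain of space conversions described above (normals from model space to world space, then to camera space) can be sketched in Python with NumPy as follows. The function name and the matrices are illustrative stand-ins for an engine's own transforms; normals, being direction vectors, are transformed with the inverse-transpose of each matrix's upper-left 3x3 block so that non-uniform scaling does not skew them.

```python
import numpy as np

def normal_to_camera_space(normal_model, model_matrix, view_matrix):
    # Extract the upper-left 3x3 of each 4x4 transform and take its
    # inverse-transpose: the correct way to move direction normals
    # between spaces even under non-uniform scaling.
    n = np.asarray(normal_model, dtype=float)
    m3 = np.linalg.inv(np.asarray(model_matrix, dtype=float)[:3, :3]).T
    v3 = np.linalg.inv(np.asarray(view_matrix, dtype=float)[:3, :3]).T
    n_world = m3 @ n       # model space -> world space
    n_cam = v3 @ n_world   # world space -> camera space
    return n_cam / np.linalg.norm(n_cam)  # re-normalize the normal
```

With identity transforms a normal passes through unchanged, and after a scaling transform the result is re-normalized to unit length.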
In an exemplary embodiment of the present disclosure, generating a curvature image corresponding to the normal image and the contour of the normal image based on the normal image includes:
adjusting the curvature of the normal image through a curvature filter to obtain the curvature image corresponding to the normal image, and performing edge detection on the normal image with a Sobel operator to generate the contour of the normal image.
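The Sobel edge detection mentioned above can be sketched as follows for a single-channel image (in practice the operator would run on the normal image's channels). The threshold value and function name are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def sobel_edges(gray, threshold=0.5):
    # 3x3 Sobel kernels for horizontal and vertical gradients.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    g = np.asarray(gray, dtype=float)
    h, w = g.shape
    padded = np.pad(g, 1, mode="edge")  # replicate borders
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            win = padded[y:y + 3, x:x + 3]
            gx[y, x] = (win * kx).sum()
            gy[y, x] = (win * ky).sum()
    # Threshold the gradient magnitude to get a binary contour mask.
    return (np.hypot(gx, gy) > threshold).astype(float)
```

A vertical step edge produces a contour line along the step and nothing in the flat regions.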
In an exemplary embodiment of the present disclosure, obtaining a stylized rendering result of the model to be rendered from the second rendered image, the curvature image, and the contour of the normal image includes:
overlaying the curvature image onto the second rendered image to obtain a third rendered image;
and overlaying the contour of the normal image onto the third rendered image to obtain the stylized rendering result of the model to be rendered.
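The two overlay steps above can be sketched as image compositing. The patent does not specify the blend modes, so the choices below are assumptions: the curvature image is applied as a multiply overlay, and contour pixels are replaced by a fixed outline color.

```python
import numpy as np

def composite_stylized(second_render, curvature, outline, outline_color=0.0):
    # Step 1 (assumed multiply blend): overlay the curvature image onto
    # the second rendered image to get the third rendered image.
    third = np.asarray(second_render, dtype=float) * np.asarray(curvature, dtype=float)
    # Step 2: draw the contour on top; where the outline mask is 1,
    # the pixel is replaced by outline_color.
    mask = np.asarray(outline, dtype=float)
    return third * (1.0 - mask) + outline_color * mask
```

Under these assumed blend modes, an outline pixel becomes the outline color and every other pixel is the product of the two input images.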
According to one aspect of the present disclosure, there is provided a stylized rendering device including:
a second-rendered-image acquisition module, configured to acquire a rendered image corresponding to a model to be rendered, process the rendered image according to a preset ratio to obtain a first rendered image, and apply an image filter to the first rendered image to obtain a second rendered image;
a normal-image acquisition module, configured to establish a normal channel corresponding to the model to be rendered and obtain a normal image corresponding to the model to be rendered through the normal channel;
and a stylized-rendering-result acquisition module, configured to generate, based on the normal image, a curvature image corresponding to the normal image and the contour of the normal image, and to obtain a stylized rendering result of the model to be rendered from the second rendered image, the curvature image, and the contour of the normal image.
According to one aspect of the present disclosure, there is provided a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the stylized rendering method of any of the above-described example embodiments.
According to one aspect of the present disclosure, there is provided an electronic device including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the stylized rendering method of any of the example embodiments described above via execution of the executable instructions.
According to the stylized rendering method provided by embodiments of the invention, on the one hand, the image filter is applied to the entire rendered image corresponding to the model to be rendered, so the effect is not confined by the model's boundaries; this solves the prior-art problem that stylized effects can only be realized inside a model, and improves the efficiency of stylizing the model. On the other hand, the normal channel is used to generate a normal image corresponding to the rendered image, and a curvature image and the contour of the normal image are generated from it; these reconstruct the details of the rendered image, add a hand-drawn feel, and make the stylized effect more natural.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and, together with the description, serve to explain the principles of the invention. The drawings in the following description are only some embodiments of the invention; a person of ordinary skill in the art can derive other drawings from them without inventive effort.
Fig. 1 schematically shows a flow chart of a stylized rendering method according to an exemplary embodiment of the invention.
Fig. 2 schematically illustrates the working principle of a Kuwahara filter according to an exemplary embodiment of the present invention.
FIG. 3 schematically illustrates a method flow diagram for stylized rendering according to an example embodiment of the invention.
Fig. 4 schematically shows an effect of reducing the coloring amount according to an exemplary embodiment of the present invention.
Fig. 5 schematically shows a flow chart of a method of obtaining a first rendered image according to an exemplary embodiment of the invention.
Fig. 6 schematically shows an effect obtained after adding a Kuwahara filter according to an exemplary embodiment of the present invention.
Fig. 7 schematically illustrates an effect diagram of a second rendered image obtained by applying a block blur filter and a Kuwahara filter to a first rendered image according to an exemplary embodiment of the present invention.
Fig. 8 schematically illustrates an effect diagram of a second rendered image obtained by applying a noise filter and a Kuwahara filter to a first rendered image according to an exemplary embodiment of the present invention.
Fig. 9 schematically shows an effect diagram of generating a normal image of a camera space of a model to be rendered according to an exemplary embodiment of the invention.
Fig. 10 schematically shows an effect diagram of generating a contour of a normal image and a curvature image corresponding to the normal image according to an exemplary embodiment of the present invention.
Fig. 11 schematically shows a flow chart of a method of deriving a stylized rendering result of a model to be rendered according to an example embodiment of the invention.
Fig. 12 schematically shows an effect diagram of a stylized rendering result of a model to be rendered according to an exemplary embodiment of the present invention.
Fig. 13 schematically shows a flow chart of a stylized rendering method according to an example embodiment of the invention.
Fig. 14 schematically shows a block diagram of a stylized rendering device according to an example embodiment of the invention.
Fig. 15 schematically shows an electronic device for implementing the stylized rendering method described above according to an example embodiment of the invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known aspects have not been shown or described in detail to avoid obscuring aspects of the invention.
Furthermore, the drawings are merely schematic illustrations of the present invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
Much current research is devoted to achieving physically based rendering in real time, but stylized visual effects still rely primarily on shading techniques and stylized art assets.
For stylized art assets, the model is represented with exaggerated characteristics and proportions and simplified surface structure and materials. When creating such assets, an artist must balance preserving the asset's main characteristics against reducing noise and clutter. The resulting assets are usually rendered without any illumination, or with a relatively flat lighting feedback created by a toon-rendering technique. Highlighting the important characteristics of a stylized asset and keeping the generated assets uniform takes extra production time; and because stylized art assets ignore consistency with real-time illumination, the rendering result lacks cohesion.
Shading techniques include unlit shading, toon shading, and hatching. Unlit shading uses no pre-rendered or real-time illumination: assets and their styles are defined entirely by the shadows and details the artist adds, with shadows and highlights painted under fixed lighting conditions; it is mainly applied in games with fixed camera angles, such as side-scrolling or isometric games. Toon shading relies on real-time illumination and creates a more easily recognizable look by simplifying and flattening the lighting feedback; rim lights and contour lines can be added, and the shading model draws its inspiration mainly from animation, cartoons, and comics. Hatching aims to modify shadows and shading gradients so that they exhibit the effect of a hand-drawn pencil sketch.
However, with unlit shading it is difficult to keep the effect consistent, because every detail is hand-made by the artist. Toon shading, while capable of different styles, reduces surface detail in texture and shading, replacing smooth brightness gradients with one or more hard cuts between light and shadow. In addition, hatching is rarely used in games.
Based on one or more of the problems above, this exemplary embodiment provides a stylized rendering method, which may be executed on a server, a server cluster, a cloud server, or the like; of course, those skilled in the art may also run the method of the invention on other platforms as needed, which this exemplary embodiment does not specifically limit. Referring to fig. 1, the stylized rendering method may include the following steps:
S110, acquiring a rendered image corresponding to a model to be rendered, processing the rendered image according to a preset ratio to obtain a first rendered image, and applying an image filter to the first rendered image to obtain a second rendered image;
S120, establishing a normal channel corresponding to the model to be rendered, and obtaining a normal image corresponding to the model to be rendered through the normal channel;
S130, generating, based on the normal image, a curvature image corresponding to the normal image and the contour of the normal image, and obtaining a stylized rendering result of the model to be rendered from the second rendered image, the curvature image, and the contour of the normal image.
In the stylized rendering method above, on the one hand, the image filter is applied to the entire rendered image corresponding to the model to be rendered, so the effect is not confined by the model's boundaries; this solves the prior-art problem that stylized effects can only be realized inside a model, and improves the efficiency of stylizing the model. On the other hand, a normal image corresponding to the rendered image is generated through the normal channel, a curvature image and the contour of the normal image are generated from it, the details of the rendered image are reconstructed through the curvature image and the contour, the hand-drawn feel of the rendered image is increased, and the stylized effect becomes more natural.
Hereinafter, each step involved in the stylized rendering method of the exemplary embodiment of the present disclosure is explained and explained in detail.
First, the application scenario and object of the exemplary embodiments of the present disclosure are explained. Specifically, the present disclosure can be used to achieve a hand-drawn effect that is not strictly confined to the inside of model boundaries, to preserve the main features and details of the rendered model, and to reduce performance cost during rendering.
Next, step S110 to step S130 will be explained and described in detail.
In step S110, a rendered image corresponding to a model to be rendered is acquired, the rendered image is processed according to a preset ratio to obtain a first rendered image, and an image filter is applied to the first rendered image to obtain a second rendered image.
Here, the rendered image is the image obtained after the model to be rendered is rendered to the screen. Processing the rendered image may mean reducing its resolution; the preset ratio may be 1/2 or 1/4 of the original resolution of the rendered image, which this exemplary embodiment does not specifically limit. The filter applied to the first rendered image may be a Kuwahara filter or another filter; the image filter is likewise not specifically limited here. A typical image filter works by taking all the pixels of the rendered image and, for each pixel, blurring it by comparing it with its neighbors and computing an average. Filters can be used to modify a rendered image toward a certain style and are a common tool in photo and video editing and in real-time rendering applications.
The Kuwahara filter works as follows. First, the pixel neighborhood around each pixel of the rendered image is divided into K sub-regions, where K is a positive integer; the number of pixels in each sub-region can be adjusted with a radius parameter, whose value a person skilled in the art can choose as needed. Second, the mean and variance of each of the K sub-regions are computed, the sub-region with the smallest variance is selected, and its mean is taken as the final color value of the pixel. Referring to fig. 2, the neighborhood is divided into four sub-regions A, B, C, and D that meet at the center: the mean and variance of the four sub-regions are computed, and the mean of the sub-region with the smallest variance becomes the final color of the pixel. Applying a Kuwahara filter filters out noise and fine detail in the rendered image while preserving its original shapes and hard edges.
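The patent describes the filter in prose only; a minimal Python/NumPy sketch of the four-sub-region variant from fig. 2 might look as follows (function name, default radius, and the quadrant layout are illustrative assumptions).

```python
import numpy as np

def kuwahara(gray, radius=2):
    """Kuwahara filter for a single-channel image: each output pixel is
    the mean of whichever of the four overlapping (radius+1)-square
    sub-regions around it has the smallest variance."""
    g = np.asarray(gray, dtype=float)
    h, w = g.shape
    r = radius
    padded = np.pad(g, r, mode="edge")  # replicate borders
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            cy, cx = y + r, x + r  # pixel position in the padded image
            # Four sub-regions meeting at the center pixel:
            # top-left, top-right, bottom-left, bottom-right.
            regions = [
                padded[cy - r:cy + 1, cx - r:cx + 1],
                padded[cy - r:cy + 1, cx:cx + r + 1],
                padded[cy:cy + r + 1, cx - r:cx + 1],
                padded[cy:cy + r + 1, cx:cx + r + 1],
            ]
            # Mean of the sub-region with the smallest variance.
            best = min(regions, key=lambda reg: reg.var())
            out[y, x] = best.mean()
    return out
```

On a hard step edge the filter picks the sub-region lying entirely on one side (zero variance), so the edge survives exactly, which is the edge-preserving behavior described above.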
In this exemplary embodiment, referring to fig. 3, in order to highlight the characteristics of hand-drawn cartoon or comic characters in a game scene, the stylized rendering method may further include steps S310 and S320:
S310, obtaining a physically based rendering (PBR) shading result of the model to be rendered, wherein the PBR shading result comprises one or more of: an albedo map, a metalness map, a roughness map, and a normal map of the model to be rendered;
S320, blending the PBR shading result with an unlit diffuse shading result according to a preset shading-amount parameter, so as to reduce the amount of shading applied to the model to be rendered, wherein the unlit diffuse shading result is the color parameter corresponding to the albedo map of the model to be rendered.
Steps S310 and S320 are explained below. A distinctive feature of hand-drawn cartoons and animation is that objects such as characters appear to carry less detail than the scene around them. To achieve this on each rendered model, the amount of shading applied to it can be reduced. First, the PBR (Physically Based Rendering) shading result of the model to be rendered is computed; this result may include one or more of the albedo (reflectivity), metalness, roughness, and normal maps. Second, the PBR shading result and the unlit diffuse result of the model are blended according to a preset shading-amount parameter, reducing the shading and making the model look flatter; the shading-amount parameter may be a percentage or a weight value, and is not specifically limited in this embodiment. The effect of reducing the shading amount is shown in fig. 4.
Here, the PBR shading result of the model to be rendered may include the albedo map, metalness map, roughness map, and normal map, or only an albedo map and a roughness map; this exemplary embodiment does not specifically limit it. PBR is a rendering technique used in film and real-time rendering to simulate the most common material types in a single, physically plausible shading model; a modern game engine with properly configured PBR settings can render near-photorealistic results. The albedo map captures the texture and color of the model; the metalness map defines whether the model is metallic or dielectric (non-metallic); the roughness map defines how rough or smooth the surface reflection is; and the normal map adds surface bumps and detail. When the preset shading-amount parameter is a percentage, it may be 10% or 50%, which this exemplary embodiment does not specifically limit. The unlit diffuse shading result may be the color values of the model contained in its albedo map. Further, blending the PBR shading result with the unlit diffuse shading result according to the preset shading-amount parameter includes:
blending the PBR shading result with the color parameters corresponding to the albedo map of the model to be rendered, according to the preset shading-amount parameter.
Specifically, the PBR shading result of the model to be rendered is blended with the color parameters corresponding to its albedo map according to the preset shading-amount parameter, so that the model shows no additional shading or lighting from the environment. When the model needs shadow features, shadows can be added after the shading amount has been reduced.
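The blend described above is most naturally read as a linear interpolation between the lit PBR color and the unlit albedo color; the patent does not spell out the formula, so the sketch below is an assumption (the parameter name `shading_amount` mirrors the "shading-amount parameter" in the text).

```python
import numpy as np

def reduce_shading(pbr_color, albedo_color, shading_amount=0.5):
    """Blend the PBR-lit color with the unlit albedo color per pixel.
    shading_amount is assumed to lie in [0, 1]: 1 keeps full PBR
    lighting, 0 shows the pure (unlit) albedo."""
    pbr = np.asarray(pbr_color, dtype=float)
    albedo = np.asarray(albedo_color, dtype=float)
    return shading_amount * pbr + (1.0 - shading_amount) * albedo
```

At 50% shading the result is the midpoint of the two colors; at 100% the PBR result is unchanged, matching the intuition that lowering the parameter flattens the model's look.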
In this exemplary embodiment, referring to fig. 5, acquiring a rendered image corresponding to the model to be rendered and processing it according to a preset ratio to obtain a first rendered image may include steps S510 and S520:
S510, rendering the model to be rendered to obtain a rendered image corresponding to the model to be rendered;
S520, scaling the resolution of the rendered image according to the preset ratio to obtain the first rendered image of the model to be rendered.
Steps S510 and S520 are explained below. First, the model to be rendered is rendered to obtain its rendered image. Second, to reduce the cost of the image processing performed after adding an image filter in real-time rendering, the resolution of the rendered image can be reduced by the preset ratio to obtain the first rendered image; specifically, it can be reduced to 1/2 or 1/4 of the original resolution, which this embodiment does not specifically limit. Third, a Kuwahara filter is applied to the first rendered image: it smooths the image while retaining its main features, and with a larger kernel radius the image takes on a watercolor or oil-painting look. The effect of the second rendered image obtained by applying the filter to the first rendered image is shown in fig. 6.
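The resolution reduction of step S520 can be sketched with simple strided sampling; a real engine would use its own resampler, so nearest-neighbour decimation here is only an illustrative stand-in.

```python
import numpy as np

def downscale(image, ratio=2):
    """Reduce an image to 1/ratio of its original resolution along each
    axis by nearest-neighbour decimation (keep every ratio-th pixel)."""
    return np.asarray(image)[::ratio, ::ratio]
```

For example, a 1/2 ratio halves both dimensions, quartering the number of pixels the subsequent filters must process.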
Further, before the Kuwahara filter is added, other filters may be applied to adjust the first rendered image. The other filters may be a box (block) blur filter, a Gaussian blur filter, or a noise filter, which is not specifically limited in this example embodiment.
Specifically, when the other filter is a box (block) blur filter, the color values of the pixels covered by the image kernel are first sampled, and the final color value of the pixel is determined using a fixed per-pixel kernel weight. Here, the image kernel is a small pixel matrix used to realize picture effects such as blurring, sharpening, contour extraction, or embossing; the image kernel may be 3×3 or 4×4, and the kernel weight may correspondingly be 1/9 or 1/16, neither of which is specifically limited in this exemplary embodiment. When the image kernel is 3×3 and the kernel weight is 1/9, the 9 pixels covered by the kernel are sampled and each contributes 1/9 of its own color value to the final pixel color; that is, the color values of the 9 pixels covered by the kernel are added, and the result is divided by 9 to obtain the color of the blurred pixel. Applying a box blur filter before the Kuwahara filter is applied to the first rendered image yields smoother, more blurred shapes; the resulting effect of the second rendered image is shown with reference to fig. 7.
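The 3×3 box blur with 1/9 weights can be sketched for a single pixel as follows; this is an illustrative grayscale sketch (edge pixels are clamped, one of several common border policies, and an assumption here).

```python
def box_blur(img, y, x):
    """3x3 box blur of one pixel: each of the 9 kernel pixels contributes
    1/9 of its own value; i.e. sum the 9 values and divide by 9."""
    h, w = len(img), len(img[0])
    total = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            total += img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
    return total / 9.0
```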
When the other filter is a Gaussian blur filter, the principle is the same as that of the box blur filter, except that the pixel colors are attenuated with a Gaussian falloff: the pixel at the center of the image kernel carries a larger weight than the pixels at the kernel's edges and corners. The image kernel may be 3×3 or 4×4, which is not specifically limited in this embodiment. The color values of the pixels covered by the image kernel are added, where the weight of each pixel's color value follows a Gaussian distribution.
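A Gaussian kernel with the weight distribution just described can be built as follows; the `sigma` default is an illustrative assumption.

```python
import math

def gaussian_kernel(size=3, sigma=1.0):
    """Build a normalized size x size Gaussian kernel: the center weight
    is largest, weights fall off toward the edges and corners, and all
    weights sum to 1 so the blur preserves overall brightness."""
    c = size // 2
    k = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
          for x in range(size)] for y in range(size)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]
```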
When the other filter is a noise filter, a dynamic noise filter can be realized as follows: take the cosine of the dot product of the pixel's UV coordinates with a constant vector, multiply the result by a pseudo-random constant term, and return the fractional part of the product. Because the pixel coordinates can be offset over time, the image formed by adding this noise also changes over time, realizing a dynamic noise filter. Applying the noise filter before the Kuwahara filter is applied to the first rendered image produces a dynamic, unstable effect; the resulting effect of the second rendered image is shown with reference to fig. 8.
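The shader-style pseudo-random pattern above can be sketched as follows. The numeric constants are typical illustrative values from common shader noise idioms, not taken from the patent.

```python
import math

def pseudo_random(u, v, t=0.0):
    """Fractional part of cos(dot(uv, k)) * large constant, as described:
    deterministic per-pixel noise in [0, 1). Offsetting the coordinates
    by a time value t makes the noise pattern change each frame."""
    d = (u + t) * 12.9898 + (v + t) * 78.233  # dot(uv, constant vector)
    return math.cos(d) * 43758.5453 % 1.0     # fractional part in [0, 1)
```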
In step S120, a normal channel corresponding to the model to be rendered is established, and a normal image corresponding to the model to be rendered is obtained through the normal channel.
In the present exemplary embodiment, after the Kuwahara filter is applied to the rendered image, a rendered image with a stylized effect is obtained, but the original details of the rendered image are lost during stylization, especially when the Kuwahara filter is applied to a reduced-resolution rendered image or a larger kernel radius is used. To reconstruct these details, the model to be rendered, or a subset of it, can be rendered again through an additional normal channel to generate an image of the surface normals of the model to be rendered. Here, a normal is a vector describing the surface and curvature of the model to be rendered. In real-time rendering, normal vectors are required for illumination and shading calculations on the model to be rendered. A normal vector is stored on each vertex of the three-dimensional model: the vertex defines a position, and the normal vector defines the direction of the surface at that position. The normal image corresponding to the model to be rendered may be a camera-space normal image or a world-space normal image, which is not particularly limited in this example embodiment; a normal, like any vector, may be represented in different spaces. While vectors in real-time rendering are typically represented in world space, for image effects it is more appropriate to convert normals to camera space so that the vector directions match the orientation of the camera.
When the normal image corresponding to the model to be rendered is a camera space normal image, obtaining the normal image corresponding to the model to be rendered through the normal channel, including:
the coordinates of the model to be rendered are converted from an object space to a clipping space through the normal line channel, the normal line of the model to be rendered is converted from a model space to a world space, and the normal line of the model to be rendered is converted from the world space to a camera space, so that a normal line image corresponding to the model to be rendered is obtained.
Specifically, first, to obtain the camera-space normal image, all model shaders must output in the coordinate system required by the separate channel; in this example embodiment, the scriptable render pipeline functionality of Unity (game engine) may be used to render the separate normal channel. Secondly, the normal channel converts the coordinates of the model to be rendered from object space into clip space using the TransformObjectToHClip() function included in HLSL (High Level Shader Language) syntax. Thirdly, the normals of the model to be rendered are converted from model space to world space through the TransformObjectToWorldNormal() function. Finally, the normals of the model to be rendered are converted from world space to camera space through the TransformWorldToViewDir() function, thereby obtaining the normal image corresponding to the model to be rendered. The generated camera-space normal image is shown with reference to fig. 9.
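Each of the HLSL helpers above is, at its core, a matrix transform into the next space. The sketch below illustrates the idea with plain 3×3 rotation matrices in Python; the function names are stand-ins mirroring Unity's helpers, not their actual implementations (the real clip-space transform additionally involves a 4×4 projection matrix and homogeneous coordinates).

```python
def mat_vec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def transform_object_to_world_normal(normal, world_rot):
    """Model space -> world space (sketch of TransformObjectToWorldNormal;
    assumes a pure rotation, so no inverse-transpose handling is shown)."""
    return mat_vec(world_rot, normal)

def transform_world_to_view_dir(normal, view_rot):
    """World space -> camera (view) space (sketch of TransformWorldToViewDir),
    so the normal's direction matches the camera's orientation."""
    return mat_vec(view_rot, normal)
```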
In step S130, a curvature image corresponding to the normal image and a contour of the normal image are generated based on the normal image, and a stylized rendering result of the model to be rendered is obtained according to the second rendering image, the curvature image and the contour of the normal image.
When the coloring amount of the model to be rendered is low, the coloring of the model can be reinforced through curvature. A curvature image corresponding to the normal image can therefore be obtained by applying a curvature filter to the normal image, extracting the highlights and shadows of the model to be rendered; the curvature filter provides parameters to adjust the minimum and maximum brightness of the curvature result, and a sampling radius that determines how much detail is retained. To reconstruct and emphasize the details of the rendered image lost while applying the Kuwahara filter, and to let the contour of the rendered image no longer match the smoothed colors perfectly, which increases the hand-painted feel of the rendered image and gives it a better artistic effect, the contour, or outline, of the rendered image can be generated from the normal image by an edge-tracing filter. The edge-tracing filter provides adjustable intensity and radius parameters, where the radius parameter controls the thickness of the outline.
In the present exemplary embodiment, generating a curvature image corresponding to the normal image and a contour of the normal image based on the normal image includes:
and adjusting the curvature of the normal image through a curvature filter to obtain a curvature image corresponding to the normal image, and performing edge tracing on the normal image through a Sobel operator to generate the outline of the normal image.
Specifically, the curvature represents the degree of change of the surface at a given point and is calculated from normal vectors. Therefore, a normal-vector image is first acquired; the curvature is calculated by constructing two horizontal and two vertical difference vectors between the normal vector of a pixel in the normal image and the normal vectors of its adjacent pixels (up, down, left, right), and the calculated curvature is adjusted by the parameters of the curvature filter. The outline of the normal image may be generated by the Sobel operator, mainly by calculating horizontal and vertical derivatives from neighboring pixels in the positive and negative directions; that is, the image is outlined according to the relative change between pixel color values, which in this example embodiment may be the relative change of the image's grayscale values. The image boundary can be determined through the image kernel: the approximate grayscale partial derivatives of the image kernel in the horizontal and vertical directions are calculated through the horizontal and vertical convolution factors, and the final color values of the pixels covered by the image kernel are obtained from the horizontal and vertical approximations. Based on the normal image shown in fig. 9, the generated outline of the normal image and the curvature image corresponding to the normal image are shown with reference to fig. 10.
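The Sobel edge detection just described can be sketched for a single grayscale pixel as follows; this is an illustrative sketch (clamped borders are an assumption), using the standard Sobel convolution factors.

```python
# Standard Sobel convolution factors for the horizontal (x) and
# vertical (y) grayscale partial derivatives.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img, y, x):
    """Approximate the horizontal and vertical derivatives at (y, x)
    with the Sobel factors and combine them into an edge strength;
    large values mark boundaries where pixel values change sharply."""
    h, w = len(img), len(img[0])
    gx = gy = 0.0
    for j in range(3):
        for i in range(3):
            p = img[min(max(y + j - 1, 0), h - 1)][min(max(x + i - 1, 0), w - 1)]
            gx += SOBEL_X[j][i] * p
            gy += SOBEL_Y[j][i] * p
    return (gx * gx + gy * gy) ** 0.5
```

Run over a camera-space normal image, this responds strongly wherever the surface orientation changes abruptly, which is exactly where the outline should be drawn.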
After obtaining the curvature image corresponding to the normal image and the contour of the normal image, referring to fig. 11, obtaining the stylized rendering result of the model to be rendered according to the second rendering image, the curvature image corresponding to the normal image and the contour of the normal image may include step S1110 and step S1120:
Step S1110, superimposing the curvature image onto the second rendered image to obtain a third rendered image;
Step S1120, superimposing the outline of the normal image onto the third rendered image to obtain the stylized rendering result of the model to be rendered.
Hereinafter, step S1110 and step S1120 will be explained. Specifically, the curvature image corresponding to the normal image is superimposed onto the second rendered image obtained after the Kuwahara filter is added, yielding a third rendered image. The outline of the normal image is then superimposed onto the third rendered image, which adds a dark edge to the contours of the third rendered image while reducing the influence on its bright areas, that is, avoiding any influence on the white parts of the third rendered image. Superimposing the outline of the normal image onto the third rendered image may consist of multiplying the pixel values of the outline of the normal image by the pixel values of the third rendered image; the result is the pixel values corresponding to the stylized rendering result of the model to be rendered.
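The multiply-based superimposition described above can be sketched as follows; this illustrative sketch assumes grayscale pixel values in [0, 1], where the outline image is white (1.0) except at dark edge pixels.

```python
def multiply_blend(base, outline):
    """Superimpose the outline onto the image by multiplying pixel values.
    Dark outline pixels (near 0.0) darken the base image's contours;
    white outline pixels (1.0) leave the base's bright areas unchanged."""
    return [[b * o for b, o in zip(brow, orow)]
            for brow, orow in zip(base, outline)]
```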
The stylized rendering method provided by the exemplary embodiments of the present disclosure has at least the following advantages: in one aspect, the stylization efficiency of the model to be rendered is improved by processing the rendered image of the model to be rendered; in another aspect, reducing the resolution of the rendered image before the image filter is added lowers the cost of image processing after the filter is added in real-time rendering; in still another aspect, superimposing the outline effect of the normal image onto the third rendered image reduces the influence on the bright areas of the third rendered image and enhances the detail of the model to be rendered. An effect diagram obtained by the stylized rendering method is shown with reference to fig. 12.
The stylized rendering method of the example embodiments of the present disclosure is further explained and illustrated below in conjunction with fig. 13. Fig. 13 is a flow chart of a stylized rendering method, where the stylized rendering method may include:
S1310, obtaining a physically-based rendering coloring result of the model to be rendered, and removing the coloring of the model to be rendered according to the physically-based rendering coloring result to obtain a target rendering model;
S1320, obtaining a rendered image of the target rendering model, and reducing the resolution of the rendered image based on a preset proportion to obtain a first rendered image;
S1330, adding an image filter to the first rendered image to obtain a second rendered image;
S1340, establishing a normal channel corresponding to the target rendering model, and generating a normal image of the target rendering model through the normal channel;
S1350, adding a curvature filter and a Sobel filter to the normal image to obtain a curvature image corresponding to the normal image and the outline of the normal image;
S1360, superimposing the second rendered image and the curvature image corresponding to the normal image to obtain a third rendered image;
S1370, superimposing the outline of the normal image onto the third rendered image to obtain the stylized rendering result of the model to be rendered.
The exemplary embodiments of the present disclosure also provide a stylized rendering apparatus, as shown with reference to fig. 14, may include: a second rendered image acquisition module 1410, a normal image acquisition module 1420, and a stylized rendering result acquisition module 1430. Wherein:
the second rendered image obtaining module 1410 is configured to obtain a rendered image corresponding to the model to be rendered, process the rendered image based on a preset ratio to obtain a first rendered image, and add an image filter to the first rendered image to obtain a second rendered image;
The normal image acquisition module 1420 is configured to establish a normal channel corresponding to the model to be rendered, and obtain a normal image corresponding to the model to be rendered through the normal channel;
and a stylized rendering result obtaining module 1430, configured to generate a curvature image corresponding to the normal image and a contour of the normal image based on the normal image, and obtain a stylized rendering result of the model to be rendered according to the second rendering image, the curvature image and the contour of the normal image.
The specific details of each module in the stylized rendering device are described in detail in the corresponding stylized rendering method, so that the details are not repeated here.
In an exemplary embodiment of the present disclosure, the stylized rendering method further includes:
acquiring a default physical-based rendering coloring result of the model to be rendered; wherein the physical-based rendering shading result comprises one or more of: reflectivity mapping, metalness mapping, roughness mapping and normal mapping of the model to be rendered;
mixing the default physical-based rendering coloring result with the diffuse reflection coloring result without illumination based on a preset coloring amount parameter so as to reduce the coloring amount of the model to be rendered; and the illumination-free diffuse reflection coloring result is a color parameter corresponding to the reflectivity map of the model to be rendered.
In an exemplary embodiment of the present disclosure, the mixing the default physical-based rendering shading result with the diffuse reflection shading result without illumination based on a preset shading amount parameter includes:
and mixing the default physical-based rendering coloring result with color parameters included in the reflectivity map of the model to be rendered based on a preset coloring amount parameter.
In an exemplary embodiment of the present disclosure, obtaining a rendered image corresponding to a model to be rendered, processing the rendered image based on a preset ratio to obtain a first rendered image, including:
rendering the model to be rendered to obtain a rendering image corresponding to the model to be rendered;
and processing the resolution of the rendered image based on a preset proportion to obtain a first rendered image of the model to be rendered.
In an exemplary embodiment of the present disclosure, the normal image corresponding to the model to be rendered is a camera space normal image or a world space normal image;
when the normal image corresponding to the model to be rendered is a camera space normal image, obtaining the normal image corresponding to the model to be rendered through the normal channel, including:
The coordinates of the model to be rendered are converted from an object space to a clipping space through the normal line channel, the normal line of the model to be rendered is converted from a model space to a world space, and the normal line of the model to be rendered is converted from the world space to a camera space, so that a normal line image corresponding to the model to be rendered is obtained.
In one exemplary embodiment of the present disclosure, generating a curvature image corresponding to the normal image and a contour of the normal image based on the normal image includes:
and adjusting the curvature of the normal image through a curvature filter to obtain a curvature image corresponding to the normal image, and carrying out edge tracing on the normal image through a Sobel operator to generate the outline of the normal image.
In an exemplary embodiment of the present disclosure, obtaining a stylized rendering result of the model to be rendered according to contours of the second rendered image, the curvature image, and the normal image includes:
the curvature image is overlapped to the second rendering image, and a third rendering image is obtained;
and overlapping the outline of the normal image into the third rendering image to obtain a stylized rendering result of the model to be rendered.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the invention. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Furthermore, although the steps of the methods of the present invention are depicted in the accompanying drawings in a particular order, this is not required to either imply that the steps must be performed in that particular order, or that all of the illustrated steps be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
In an exemplary embodiment of the present invention, an electronic device capable of implementing the above method is also provided.
Those skilled in the art will appreciate that the various aspects of the invention may be implemented as a system, method, or program product. Accordingly, aspects of the invention may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system."
An electronic device 1500 according to such an embodiment of the invention is described below with reference to fig. 15. The electronic device 1500 shown in fig. 15 is merely an example, and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 15, the electronic device 1500 is embodied in the form of a general purpose computing device. The components of electronic device 1500 may include, but are not limited to: the at least one processing unit 1510, the at least one storage unit 1520, a bus 1530 connecting the different system components (including the storage unit 1520 and the processing unit 1510), and a display unit 1540.
Wherein the storage unit stores program code that is executable by the processing unit 1510 such that the processing unit 1510 performs steps according to various exemplary embodiments of the present invention described in the above section of the "exemplary method" of the present specification. For example, the processing unit 1510 may perform step S110 as shown in fig. 1: acquiring a rendering image corresponding to a model to be rendered, processing the rendering image based on a preset proportion to obtain a first rendering image, and adding an image filter to the first rendering image to obtain a second rendering image; s120: establishing a normal channel corresponding to the model to be rendered, and obtaining a normal image corresponding to the model to be rendered through the normal channel; s130: generating a curvature image corresponding to the normal image and the outline of the normal image based on the normal image, and obtaining a stylized rendering result of the model to be rendered according to the second rendering image, the curvature image and the outline of the normal image.
The storage unit 1520 may include readable media in the form of volatile memory units such as Random Access Memory (RAM) 15201 and/or cache memory 15202, and may further include Read Only Memory (ROM) 15203.
The storage unit 1520 may also include a program/utility 15204 having a set (at least one) of program modules 15205, such program modules 15205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 1530 may be a bus representing one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 1500 may also communicate with one or more external devices 1600 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 1500, and/or any device (e.g., router, modem, etc.) that enables the electronic device 1500 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 1550. Also, the electronic device 1500 may also communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, for example, the Internet, through a network adapter 1560. As shown, the network adapter 1560 communicates with other modules of the electronic device 1500 over the bus 1530. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 1500, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present invention may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, and includes several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present invention.
In an exemplary embodiment of the present invention, a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification is also provided. In some possible embodiments, the various aspects of the invention may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the invention as described in the "exemplary methods" section of this specification, when said program product is run on the terminal device.
A program product for implementing the above-described method according to an embodiment of the present invention may employ a portable compact disc read-only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, Random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
Furthermore, the above-described drawings are only schematic illustrations of processes included in the method according to the exemplary embodiment of the present invention, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims (9)

1. A stylized rendering method, comprising:
acquiring a rendering image corresponding to a model to be rendered, processing the rendering image based on a preset proportion to obtain a first rendering image, adding an image filter to the first rendering image, and performing blur processing on the first rendering image to obtain a second rendering image;
Establishing a normal channel corresponding to the model to be rendered, and obtaining a normal image corresponding to the model to be rendered through the normal channel;
generating a curvature image corresponding to the normal image and a contour of the normal image based on the normal image, and obtaining a stylized rendering result of the model to be rendered according to the second rendering image, the curvature image and the contour of the normal image; and
obtaining a physical-based rendering coloring result of the model to be rendered; wherein the physical-based rendering shading result comprises one or more of: reflectivity mapping, metalness mapping, roughness mapping and normal mapping of the model to be rendered; mixing the physical-based rendering coloring result with the diffuse reflection coloring result without illumination based on a preset coloring amount parameter so as to reduce the coloring amount of the model to be rendered; and the illumination-free diffuse reflection coloring result is a color parameter corresponding to the reflectivity map of the model to be rendered.
2. The stylized rendering method of claim 1, wherein the blending the physical-based rendering coloring result with the diffuse reflection coloring result without illumination based on a preset coloring amount parameter comprises:
And mixing the rendering coloring result based on physics with the color parameters corresponding to the reflectivity map of the model to be rendered based on the preset coloring amount parameters.
3. The stylized rendering method of claim 1, wherein obtaining a rendered image corresponding to a model to be rendered, processing the rendered image based on a preset ratio to obtain a first rendered image, comprises:
rendering the model to be rendered to obtain a rendering image corresponding to the model to be rendered;
and processing the resolution of the rendered image based on a preset proportion to obtain a first rendered image of the model to be rendered.
4. The stylized rendering method of claim 3, wherein the normal image corresponding to the model to be rendered is a camera-space normal image or a world-space normal image; and
when the normal image corresponding to the model to be rendered is a camera-space normal image, obtaining the normal image corresponding to the model to be rendered through the normal channel comprises:
transforming the coordinates of the model to be rendered from object space to clip space through the normal channel, transforming the normals of the model to be rendered from model space to world space, and transforming the normals of the model to be rendered from world space to camera space, so as to obtain the normal image corresponding to the model to be rendered.
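Claim 4 names only the source and target spaces of each transform. A sketch of the standard matrix algebra, assuming column vectors and 4x4 model/view/projection matrices (all names illustrative): positions use the full model-view-projection matrix, while normals use the inverse-transpose of the upper-left 3x3 so that non-uniform scaling does not skew them:

```python
import numpy as np

def vertex_to_clip_space(position_os: np.ndarray, mvp: np.ndarray) -> np.ndarray:
    """Object-space position -> clip space via the 4x4 MVP matrix."""
    return mvp @ np.append(position_os, 1.0)

def normal_to_camera_space(normal_ms: np.ndarray, model: np.ndarray,
                           view: np.ndarray) -> np.ndarray:
    """Model-space normal -> world space -> camera space.

    Normals transform with the inverse-transpose of each matrix's
    upper-left 3x3 block, then are renormalized.
    """
    world = np.linalg.inv(model[:3, :3]).T @ normal_ms  # model -> world
    cam = np.linalg.inv(view[:3, :3]).T @ world         # world -> camera
    return cam / np.linalg.norm(cam)
```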
5. The stylized rendering method of claim 4, wherein generating the curvature image corresponding to the normal image and the contour of the normal image based on the normal image comprises:
adjusting the curvature of the normal image through a curvature filter to obtain the curvature image corresponding to the normal image, and outlining the normal image through a Sobel operator to generate the contour of the normal image.
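The curvature filter itself is not reproduced here, but the Sobel step of claim 5 can be sketched directly: two 3x3 kernels estimate the horizontal and vertical gradients, and thresholding the gradient magnitude yields the contour mask. A self-contained NumPy version (the naive convolution and the threshold value are illustrative assumptions):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive 'same'-size 2-D convolution with zero padding."""
    kh, kw = kernel.shape
    pad = np.pad(img, ((kh // 2,), (kw // 2,)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return out

def sobel_contour(gray: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binary contour mask from the Sobel gradient magnitude."""
    gx = convolve2d(gray, SOBEL_X)
    gy = convolve2d(gray, SOBEL_Y)
    magnitude = np.hypot(gx, gy)
    return (magnitude > threshold).astype(float)
```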
6. The stylized rendering method of claim 5, wherein obtaining the stylized rendering result of the model to be rendered according to the second rendered image, the curvature image and the contour of the normal image comprises:
superimposing the curvature image onto the second rendered image to obtain a third rendered image; and
superimposing the contour of the normal image onto the third rendered image to obtain the stylized rendering result of the model to be rendered.
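Claim 6 fixes the order of the two overlays but not their blend modes. One plausible reading, sketched here as an assumption, adds the curvature image as a signed highlight/shadow term and then multiplies edge pixels toward black with the contour mask:

```python
import numpy as np

def composite_stylized(second_rendered: np.ndarray, curvature: np.ndarray,
                       contour: np.ndarray,
                       curvature_strength: float = 0.3) -> np.ndarray:
    """Overlay curvature onto the blurred render, then darken edges.

    Blend modes are assumptions: curvature shifts brightness around a
    0.5 midpoint; the contour mask multiplies edge pixels toward black.
    """
    third = np.clip(
        second_rendered + curvature_strength * (curvature[..., None] - 0.5),
        0.0, 1.0)                              # third rendered image
    return third * (1.0 - contour[..., None])  # stylized result
```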
7. A stylized rendering device, comprising:
a second rendered image acquisition module, configured to acquire a rendered image corresponding to a model to be rendered, process the rendered image based on a preset ratio to obtain a first rendered image, add an image filter to the first rendered image, and blur the first rendered image to obtain a second rendered image;
a normal image acquisition module, configured to establish a normal channel corresponding to the model to be rendered and obtain a normal image corresponding to the model to be rendered through the normal channel;
a stylized rendering result obtaining module, configured to generate a curvature image corresponding to the normal image and a contour of the normal image based on the normal image, and obtain a stylized rendering result of the model to be rendered according to the second rendered image, the curvature image and the contour of the normal image; and
a shading amount processing module, configured to acquire a physically-based rendering shading result of the model to be rendered, wherein the physically-based rendering shading result comprises one or more of: an albedo map, a metallic map, a roughness map and a normal map of the model to be rendered, and blend the physically-based rendering shading result with an unlit diffuse shading result based on a preset shading-amount parameter so as to reduce the amount of shading applied to the model to be rendered, wherein the unlit diffuse shading result is the color value corresponding to the albedo map of the model to be rendered.
8. A readable storage medium having stored thereon a computer program which, when executed by a processor, implements the stylized rendering method of any one of claims 1-6.
9. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the stylized rendering method of any one of claims 1-6 via execution of the executable instructions.
CN202110586491.2A 2021-05-27 2021-05-27 Stylized rendering method and device, readable storage medium and electronic equipment Active CN113240783B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110586491.2A CN113240783B (en) 2021-05-27 2021-05-27 Stylized rendering method and device, readable storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113240783A CN113240783A (en) 2021-08-10
CN113240783B true CN113240783B (en) 2023-06-27

Family

ID=77139248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110586491.2A Active CN113240783B (en) 2021-05-27 2021-05-27 Stylized rendering method and device, readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113240783B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658316B (en) * 2021-10-18 2022-03-08 北京市商汤科技开发有限公司 Rendering method and device of three-dimensional model, storage medium and computer equipment
CN114119835B (en) * 2021-12-03 2022-11-08 北京冰封互娱科技有限公司 Hard surface model processing method and device and electronic equipment
CN114119847B (en) * 2021-12-05 2023-11-07 北京字跳网络技术有限公司 Graphic processing method, device, computer equipment and storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
CN1750046A (en) * 2005-10-20 2006-03-22 浙江大学 Three-dimensional ink and wash effect rendering method based on graphic processor
US7098925B1 (en) * 2000-03-10 2006-08-29 Intel Corporation Shading of images using texture
JP2007272273A (en) * 2006-03-30 2007-10-18 Namco Bandai Games Inc Image generation system, program, and information storage medium

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US5966134A (en) * 1996-06-28 1999-10-12 Softimage Simulating cel animation and shading
JP3352982B2 (en) * 1999-09-14 2002-12-03 株式会社スクウェア Rendering method and device, game device, and computer-readable recording medium for storing program for rendering three-dimensional model
AU2009212881B2 (en) * 2009-08-31 2012-06-14 Canon Kabushiki Kaisha Efficient radial gradient fills
CN104966312B (en) * 2014-06-10 2017-07-21 腾讯科技(深圳)有限公司 A kind of rendering intent, device and the terminal device of 3D models
CN109685869B (en) * 2018-12-25 2023-04-07 网易(杭州)网络有限公司 Virtual model rendering method and device, storage medium and electronic equipment

Non-Patent Citations (3)

Title
Non-Photorealistic Techniques with Focus and Context Volume Rendering; Peter Benilov; School of Computer Science and Statistics, Trinity College Dublin; 1-60 *
Real-Time NPR System with Multiple Styles; D. Prykhodko, Leonid Gaiazov; Computer Science; 1-9 *
Cartoon-style rendering of two-dimensional color images; Zhou Chong; China Masters' Theses Full-text Database, Information Science and Technology; I138-220 *


Similar Documents

Publication Publication Date Title
CN113240783B (en) Stylized rendering method and device, readable storage medium and electronic equipment
Elber Interactive line art rendering of freeform surfaces
US9684997B2 (en) Efficient rendering of volumetric elements
CN109448137B (en) Interaction method, interaction device, electronic equipment and storage medium
Cignoni et al. A simple normal enhancement technique for interactive non-photorealistic renderings
CN111583379B (en) Virtual model rendering method and device, storage medium and electronic equipment
US7777745B2 (en) Edge effect
US7995060B2 (en) Multiple artistic look rendering methods and apparatus
CN108805971B (en) Ambient light shielding method
CN111420404A (en) Method and device for rendering objects in game, electronic equipment and storage medium
CN112734896B (en) Environment shielding rendering method and device, storage medium and electronic equipment
CN115100337A (en) Whole body portrait video relighting method and device based on convolutional neural network
CN113888398B (en) Hair rendering method and device and electronic equipment
US11804008B2 (en) Systems and methods of texture super sampling for low-rate shading
Haller et al. A loose and sketchy approach in a mediated reality environment
CN111739074A (en) Scene multipoint light source rendering method and device
KR100454070B1 (en) Method for Real-time Toon Rendering with Shadow using computer
CN113936080A (en) Rendering method and device of virtual model, storage medium and electronic equipment
CN114288671A (en) Method, device and equipment for making map and computer readable medium
CN115311395A (en) Three-dimensional scene rendering method, device and equipment
Yuan et al. GPU-based rendering and animation for Chinese painting cartoon
Curtis et al. Real-time non-photorealistic animation for immersive storytelling in “Age of Sail”
CN115035231A (en) Shadow baking method, shadow baking device, electronic apparatus, and storage medium
Garcia et al. Coherent Mark‐based Stylization of 3D Scenes at the Compositing Stage
US7880743B2 (en) Systems and methods for elliptical filtering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant