CN113240783A - Stylized rendering method and device, readable storage medium and electronic equipment - Google Patents


Publication number: CN113240783A
Authority: CN (China)
Prior art keywords: image, rendered, rendering, model, normal
Prior art date
Legal status
Granted
Application number: CN202110586491.2A
Other languages: Chinese (zh)
Other versions: CN113240783B (en)
Inventor
马克思米兰·罗兹勒
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority: CN202110586491.2A
Publication of CN113240783A
Application granted; publication of CN113240783B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 — 3D [Three-Dimensional] image rendering
    • G06T15/02 — Non-photorealistic rendering
    • G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 — Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 — Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editors
    • Y02D10/00 — Energy-efficient computing, e.g. low-power processors, power management or thermal management

Abstract

The disclosure relates to a stylized rendering method and apparatus, a readable storage medium, and an electronic device in the technical field of image rendering. The method comprises: obtaining a rendered image corresponding to a model to be rendered, processing the rendered image based on a preset ratio to obtain a first rendered image, and adding an image filter to the first rendered image to obtain a second rendered image; establishing a normal channel corresponding to the model to be rendered, and obtaining a normal image corresponding to the model through the normal channel; and generating, from the normal image, a curvature image and the contour of the normal image, then obtaining a stylized rendering result of the model to be rendered from the second rendered image, the curvature image, and the contour of the normal image. The present disclosure improves the efficiency of stylizing the model to be rendered.

Description

Stylized rendering method and device, readable storage medium and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of image rendering, in particular to a stylized rendering method, a stylized rendering device, a readable storage medium and electronic equipment.
Background
The visual art styles of real-time-rendered games can be divided into photorealistic and stylized: a photorealistic game renders with physically correct lighting and materials, whereas a stylized game focuses more on a hand-drawn feel.
In the prior art, stylized visual effects still depend mainly on simple shading techniques and stylized art assets. The shading techniques specifically include: unlit shading, which uses no pre-rendered or real-time lighting and relies entirely on artists to add the shadows and details that define the assets and their style; toon (cartoon) shading, which relies on real-time lighting to create a more legible effect through simplified, flattened lighting feedback; and hatched shading, which modifies shadows and shading gradients to imitate hand-drawn pencil-sketch strokes.
Although stylization can be achieved with existing stylized art assets and shading techniques, on the one hand the stylized art assets ignore the consistency of real-time lighting, so game characters do not blend with game scenes; on the other hand, unlit shading is only suitable for games with fixed camera angles, toon shading loses texture and surface detail, and hatched shading is rarely used in games. Furthermore, toon shading and hatched shading typically operate on a single model, so any modification to lighting or shading takes effect only within the boundaries of that model, and the results of toon shading look computer-generated: watercolor and oil-painting styles cannot be achieved with these techniques.
Therefore, it is desirable to provide a new stylized rendering method.
It is to be noted that the information disclosed in the above Background section is provided only to enhance understanding of the background of the present invention, and may therefore include information that does not constitute prior art already known to a person of ordinary skill in the art.
Disclosure of Invention
An object of the present disclosure is to provide a stylized rendering method, a stylized rendering apparatus, a readable storage medium, and an electronic device that overcome, at least to some extent, the problems caused by the limitations and disadvantages of the related art, namely that the stylized effect is confined to the interior of a model and that the stylized effect looks unnatural.
According to an aspect of the present disclosure, there is provided a stylized rendering method, including:
the method comprises the steps of obtaining a rendering image corresponding to a model to be rendered, processing the rendering image based on a preset proportion to obtain a first rendering image, and adding an image filter to the first rendering image to obtain a second rendering image;
establishing a normal channel corresponding to the model to be rendered, and obtaining a normal image corresponding to the model to be rendered through the normal channel;
generating a curvature image corresponding to the normal image and the contour of the normal image based on the normal image, and obtaining a stylized rendering result of the model to be rendered according to the second rendering image, the curvature image and the contour of the normal image.
In an exemplary embodiment of the present disclosure, the stylized rendering method further includes:
obtaining a rendering and coloring result of the model to be rendered based on physics; wherein the physics-based rendering shading results include one or more of: a reflectivity map, a metallization map, a roughness map and a normal map of the model to be rendered;
mixing the rendering and coloring result based on physics with a non-illumination diffuse reflection coloring result based on a preset coloring amount parameter, so as to reduce the coloring amount of the model to be rendered; wherein the non-illumination diffuse reflection coloring result is a color parameter corresponding to the reflectivity map of the model to be rendered.
In an exemplary embodiment of the present disclosure, the mixing the rendering coloring result based on physics and the non-illumination diffuse reflection coloring result based on a preset coloring amount parameter includes:
and mixing the rendering and coloring result based on physics with the color parameter included in the reflectivity map of the model to be rendered based on a preset coloring amount parameter.
In an exemplary embodiment of the present disclosure, obtaining a rendering image corresponding to a model to be rendered, and processing the rendering image based on a preset ratio to obtain a first rendering image includes:
rendering the model to be rendered to obtain a rendered image corresponding to the model to be rendered;
and processing the resolution of the rendered image based on a preset proportion to obtain a first rendered image of the model to be rendered.
In an exemplary embodiment of the present disclosure, the normal image corresponding to the model to be rendered is a camera space normal image or a world space normal image;
when the normal image corresponding to the model to be rendered is a camera space normal image, obtaining the normal image corresponding to the model to be rendered through the normal channel, including:
converting the coordinates of the model to be rendered from object space to clip space, converting the normals of the model to be rendered from model space to world space, and converting the normals of the model to be rendered from world space to camera space through the normal channel, so as to obtain the normal image corresponding to the model to be rendered.
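The normal-transform chain above can be sketched as follows. This is an illustrative Python/NumPy sketch, not the patent's implementation; the function and parameter names (`transform_normal`, `model_to_world`, `world_to_camera`) are our own. It uses the standard rule that normals transform by the inverse transpose of the upper-left 3×3 part of each matrix, which for a pure rotation equals the matrix itself.

```python
import numpy as np

def transform_normal(normal, model_to_world, world_to_camera):
    """Carry a model-space normal to camera (view) space.

    Normals transform by the inverse transpose of the upper-left 3x3
    block of each transform; for pure rotations this equals the block.
    """
    n = np.asarray(normal, dtype=np.float64)
    for m in (model_to_world, world_to_camera):
        rot3 = np.asarray(m, dtype=np.float64)[:3, :3]
        n = np.linalg.inv(rot3).T @ n  # inverse-transpose rule for normals
    return n / np.linalg.norm(n)       # renormalize after the transforms
```

Writing each camera-space component of such normals into a render target yields the camera-space normal image used in the following steps.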
In an exemplary embodiment of the present disclosure, generating a curvature image corresponding to the normal image and an outline of the normal image based on the normal image includes:
and adjusting the curvature of the normal image through a curvature filter to obtain a curvature image corresponding to the normal image, and performing edge detection on the normal image with a Sobel operator to generate the contour of the normal image.
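The Sobel step can be illustrated with the following single-channel sketch. This assumes a grayscale input (the patent applies the operator to the normal image, which in practice would be handled per channel or via a derived scalar); the implementation and names are ours, not the patent's.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T  # vertical-gradient kernel

def sobel_contour(gray: np.ndarray) -> np.ndarray:
    """Per-pixel gradient magnitude via 3x3 Sobel kernels."""
    h, w = gray.shape
    pad = np.pad(gray.astype(np.float64), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            window = pad[dy:dy + h, dx:dx + w]
            gx += SOBEL_X[dy, dx] * window
            gy += SOBEL_Y[dy, dx] * window
    return np.hypot(gx, gy)  # edge strength; threshold it to get the contour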
In an exemplary embodiment of the present disclosure, obtaining a stylized rendering result of the model to be rendered according to the second rendering image, the curvature image, and the contour of the normal image includes:
superposing the curvature image to the second rendering image to obtain a third rendering image;
and superposing the contour of the normal image to the third rendering image to obtain a stylized rendering result of the model to be rendered.
According to an aspect of the present disclosure, there is provided a stylized rendering apparatus including:
the second rendering image acquisition module is used for acquiring a rendering image corresponding to the model to be rendered, processing the rendering image based on a preset proportion to obtain a first rendering image, and adding an image filter to the first rendering image to obtain a second rendering image;
the normal image acquisition module is used for establishing a normal channel corresponding to the model to be rendered and obtaining a normal image corresponding to the model to be rendered through the normal channel;
and the stylized rendering result acquisition module is used for generating a curvature image corresponding to the normal image and the contour of the normal image based on the normal image, and obtaining the stylized rendering result of the model to be rendered according to the second rendering image, the curvature image and the contour of the normal image.
According to an aspect of the present disclosure, there is provided a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the stylized rendering method of any of the example embodiments described above.
According to an aspect of the present disclosure, there is provided an electronic device including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the stylized rendering method of any of the example embodiments described above via execution of the executable instructions.
According to the stylized rendering method provided by the embodiments of the invention, on the one hand, the image filter is applied to the entire rendered image corresponding to the model to be rendered, so the method is not constrained by the boundary of the model; this solves the prior-art problem that the stylized effect is confined to the interior of the model, and improves the efficiency of stylizing the model to be rendered. On the other hand, a normal image corresponding to the rendered image is generated through the normal channel, and a curvature image and the contour of the normal image are generated from that normal image, so that the details of the rendered image are reconstructed, the hand-drawn feel of the rendered image is enhanced, and the stylized effect becomes more natural.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 schematically illustrates a flow chart of a stylized rendering method according to an exemplary embodiment of the present invention.
Fig. 2 schematically shows a schematic diagram of the operating principle of a Kuwahara filter according to an exemplary embodiment of the present invention.
FIG. 3 schematically illustrates a flow diagram of a method of stylized rendering, according to an example embodiment of the present invention.
Fig. 4 schematically shows an effect of reducing a coloring amount according to an exemplary embodiment of the present invention.
FIG. 5 schematically illustrates a flow chart of a method of obtaining a first rendered image according to an exemplary embodiment of the invention.
Fig. 6 schematically shows a schematic representation of the effect obtained after addition of a Kuwahara filter according to an exemplary embodiment of the invention.
Fig. 7 is a schematic diagram illustrating the effect of a second rendered image obtained by applying a box blur filter and a Kuwahara filter to the first rendered image according to an exemplary embodiment of the present invention.
Fig. 8 is a schematic diagram illustrating an effect of a second rendered image obtained by applying a noise filter and a Kuwahara filter to a first rendered image according to an exemplary embodiment of the present invention.
Fig. 9 schematically shows an effect diagram for generating a normal image of a camera space of a model to be rendered according to an exemplary embodiment of the present invention.
Fig. 10 schematically illustrates an effect diagram of generating an outline of a normal image and a curvature image corresponding to the normal image according to an exemplary embodiment of the present invention.
FIG. 11 schematically illustrates a flowchart of a method of obtaining stylized rendering results for a model to be rendered, according to an exemplary embodiment of the present invention.
Fig. 12 is a schematic diagram illustrating an effect of a model to be rendered through stylized rendering according to an exemplary embodiment of the present invention.
Fig. 13 schematically illustrates a flowchart of a stylized rendering method according to an exemplary embodiment of the present invention.
Fig. 14 schematically illustrates a block diagram of a stylized rendering apparatus according to an exemplary embodiment of the present invention.
Fig. 15 schematically illustrates an electronic device for implementing the above-described stylized rendering method according to an exemplary embodiment of the present invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the invention.
Furthermore, the drawings are merely schematic illustrations of the invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Much research has been devoted to implementing physically based rendering in real-time rendering, but stylized visual effects still depend mainly on shading techniques and stylized art assets.
For stylized art assets, the features and proportions of models are exaggerated and the structure of their surfaces and materials is simplified during creation. When producing stylized art assets, artists must balance preserving the assets' main features against reducing noise and clutter. The resulting assets are generally rendered without any lighting, or a toon-rendering technique is used to create relatively flat lighting feedback. Highlighting an asset's important features while keeping the generated assets uniform takes extra production time, and because stylized art assets ignore the consistency of real-time lighting, the rendering results lack coherence.
Shading techniques include unlit shading, toon shading, and hatched shading. Unlit shading uses no pre-rendered or real-time lighting; assets and their style are defined entirely by shadows and details painted by artists under a fixed lighting condition, so unlit shading is mainly applied in games with fixed camera angles, such as side-scrolling or isometric games. Toon shading relies on real-time lighting and creates a more legible effect by simplifying and flattening lighting feedback; rim light and contour lines can be added to the result, and the shading model draws its inspiration mainly from animation, comics, and cartoons. Hatched shading aims to modify shadows and shading gradients so that they imitate a hand-drawn pencil sketch.
However, with unlit shading it is difficult to maintain consistent results, since every detail is hand-crafted by artists. Toon shading, while enabling different styles, reduces surface detail in texture and shading: the transition from light to shadow is not a smooth gradient in brightness but one or more hard cuts between lit and shadowed regions. Hatched shading, in turn, is rarely used in games.
In view of one or more of the above problems, the present example embodiment first provides a stylized rendering method, which may be executed on a server, a server cluster, a cloud server, or the like; of course, those skilled in the art may also operate the method of the present invention on other platforms as needed, and this example embodiment is not limited to this specifically. Referring to fig. 1, the stylized rendering method may include the steps of:
s110, obtaining a rendering image corresponding to a model to be rendered, processing the rendering image based on a preset proportion to obtain a first rendering image, and adding an image filter to the first rendering image to obtain a second rendering image;
s120, establishing a normal channel corresponding to the model to be rendered, and obtaining a normal image corresponding to the model to be rendered through the normal channel;
and S130, generating a curvature image corresponding to the normal image and the contour of the normal image based on the normal image, and obtaining a stylized rendering result of the model to be rendered according to the second rendering image, the curvature image and the contour of the normal image.
According to the stylized rendering method, on the one hand, the image filter is applied to the entire rendered image corresponding to the model to be rendered, so the method is not constrained by the boundary of the model; this solves the prior-art problem that the stylized effect is confined to the interior of the model, and improves the efficiency of stylizing the model to be rendered. On the other hand, a normal image corresponding to the rendered image is generated through the normal channel, and a curvature image and the contour of the normal image are generated from that normal image; the curvature image and the contour reconstruct the details of the rendered image, enhance its hand-drawn feel, and make the stylized effect more natural.
Hereinafter, each step involved in the stylized rendering method of the exemplary embodiment of the present disclosure is explained and explained in detail.
First, the application scenario and purpose of the exemplary embodiments of the present disclosure are explained. Specifically, the present disclosure can be used to achieve a hand-drawn effect that is not strictly confined within model boundaries, to retain the main features and details of the rendered model, and to reduce performance consumption during rendering.
Next, steps S110 to S130 will be explained and explained in detail.
In step S110, a rendering image corresponding to the model to be rendered is obtained, the rendering image is processed based on a preset ratio to obtain a first rendering image, and an image filter is added to the first rendering image to obtain a second rendering image.
The rendered image is the image obtained after the model to be rendered is rendered to the screen. Processing the rendered image may mean reducing its resolution; the preset ratio may be 1/2 or 1/4 of the rendered image's original resolution, which is not specifically limited in this example embodiment. The filter added to the first rendered image may be a Kuwahara filter or another filter; the image filter is likewise not specifically limited in this example embodiment. A typical image filter works by taking all pixels of the rendered image and, for each pixel, comparing it with its neighbors and computing an average so as to blur it; image filters can therefore be used to restyle a rendered image, and are common tools in photo and video editing and in real-time rendering applications.
The Kuwahara filter works as follows. First, the neighborhood of each pixel of the rendered image is divided into K sub-regions, where K is a positive integer; the number of pixels in each sub-region can be adjusted with a radius parameter, whose value a person skilled in the art can choose as needed. Second, the mean and variance of each of the K sub-regions are computed, the sub-region with the smallest variance is selected, and its mean is taken as the final color value of the pixel. Referring to fig. 2, the pixel neighborhood is divided into four sub-regions A, B, C, and D that overlap at the center pixel; the mean and variance of the four sub-regions are computed, and the mean of the sub-region with the smallest variance becomes the pixel's final color value. Adding a Kuwahara filter filters out noise and fine detail in the rendered image while preserving its original shapes and hard edges.
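A minimal single-channel sketch of the four-quadrant variant described above follows. It is illustrative only: the function and parameter names are ours, and a production version would be vectorized and would handle color images (e.g. by computing the variance on luminance).

```python
import numpy as np

def kuwahara(gray: np.ndarray, radius: int = 2) -> np.ndarray:
    """Minimal single-channel Kuwahara filter.

    For every pixel, the four (radius+1) x (radius+1) quadrants that meet
    at the pixel are compared; the mean of the quadrant with the smallest
    variance becomes the output value, which smooths noise but keeps edges.
    """
    h, w = gray.shape
    pad = np.pad(gray.astype(np.float64), radius, mode="edge")
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            py, px = y + radius, x + radius  # position in the padded image
            quadrants = [
                pad[py - radius:py + 1, px - radius:px + 1],  # A: top-left
                pad[py - radius:py + 1, px:px + radius + 1],  # B: top-right
                pad[py:py + radius + 1, px - radius:px + 1],  # C: bottom-left
                pad[py:py + radius + 1, px:px + radius + 1],  # D: bottom-right
            ]
            variances = [q.var() for q in quadrants]
            out[y, x] = quadrants[int(np.argmin(variances))].mean()
    return out
```

Note how a hard step edge passes through unchanged (one quadrant always lies entirely on one side of the edge and has zero variance), which is exactly the edge-preserving property described above.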
In this exemplary embodiment, referring to fig. 3, in order to highlight the characteristics of the hand-drawn cartoon or cartoon character in the game scene, the stylized rendering method may further include steps S310 and S320:
s310, obtaining a rendering and coloring result of the model to be rendered based on physics; wherein the physics-based rendering shading results include one or more of: a reflectivity map, a metallization map, a roughness map and a normal map of the model to be rendered;
s320, mixing the rendering and coloring result based on physics and the coloring result without light reflection based on a preset coloring amount parameter to reduce the coloring amount of the model to be rendered; and the non-illumination diffuse reflection coloring result is a color parameter corresponding to the reflectivity map of the model to be rendered.
Hereinafter, step S310 and step S320 will be explained. Specifically, a salient feature of hand-drawn cartoons and animation is that objects such as characters carry comparatively little detail; to reproduce this feature on each rendered model, the amount of coloring applied to it can be reduced. First, the PBR (Physically Based Rendering) coloring result of the model to be rendered is computed; the PBR coloring result may include one or more of an Albedo (reflectivity) map, a metalness map, a roughness map, and a normal map. Second, the PBR coloring result of the model to be rendered is mixed with the non-illumination diffuse reflection coloring result according to a preset coloring amount parameter, which reduces the coloring amount of the model and makes it look flatter; the coloring amount parameter may be a percentage or a weight of the coloring amount, and is not specifically limited in this embodiment. An effect diagram of reducing the coloring amount of the model to be rendered is shown in fig. 4.
The physically based rendering and coloring result of the model to be rendered includes the reflectivity map, metalness map, roughness map, and normal map of the model. PBR is a rendering technique used in film and real-time rendering to simulate the most common material types in a single, physically plausible shading model; with properly configured PBR settings, modern game engines can render near-photorealistic visuals. The Albedo map captures the texture and color of the model to be rendered; the metalness map defines whether the model is metal or dielectric (non-metal); the roughness map defines how rough or smooth the surface reflection is; and the normal map adds surface bumps and detail. When the preset coloring amount parameter is a percentage, it may for example be 10% or 50%, which is not specifically limited in this example embodiment. The non-illumination diffuse reflection coloring result may be the color parameter values of the model contained in its Albedo map. Further, mixing the rendering coloring result based on physics with the non-illumination diffuse reflection coloring result based on a preset coloring amount parameter includes:
and mixing the rendering and coloring result based on physics with the color parameter corresponding to the reflectivity mapping of the model to be rendered based on a preset coloring amount parameter.
Specifically, the physically based rendering and coloring result of the model to be rendered is mixed with the color parameter corresponding to its reflectivity map based on a preset coloring amount parameter, so that the model carries no extra coloring or lighting response from the environment. When the model to be rendered needs shadow features, shadows can be added after the coloring amount of the model has been reduced.
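The mixing step amounts to a linear interpolation between the unlit albedo color and the full PBR result, weighted by the coloring amount parameter. The sketch below is our illustration (names are not the patent's); in a shader this would typically be HLSL `lerp` / GLSL `mix`.

```python
import numpy as np

def mix_shading(pbr_result, albedo_color, coloring_amount):
    """Blend the PBR shading result toward the unlit albedo color.

    coloring_amount = 1.0 keeps full PBR shading;
    coloring_amount = 0.0 yields the flat, unlit albedo color.
    """
    pbr = np.asarray(pbr_result, dtype=np.float64)
    albedo = np.asarray(albedo_color, dtype=np.float64)
    return albedo + (pbr - albedo) * coloring_amount  # standard lerp
```

Lower coloring amounts flatten the model toward its albedo while keeping its base color, which matches the reduced-shading effect described for fig. 4.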
In this exemplary embodiment, referring to fig. 5, acquiring a rendering image corresponding to a model to be rendered, and processing the rendering image based on a preset ratio to obtain a first rendering image may include step S510 and step S520:
step 510, rendering the model to be rendered to obtain a rendering image corresponding to the model to be rendered;
and S520, processing the resolution of the rendered image based on a preset proportion to obtain a first rendered image of the model to be rendered.
Hereinafter, step S510 and step S520 will be explained. Specifically, first, the model to be rendered is rendered and the resulting rendered image is obtained. Second, to reduce the cost of image processing after an image filter is added in real-time rendering, the resolution of the rendered image may be reduced by a preset ratio to obtain the first rendered image; specifically, the rendered image may be reduced to 1/2 or 1/4 of its original resolution, which is not specifically limited in this example embodiment. Third, a Kuwahara filter is added to the first rendered image; the Kuwahara filter smooths the first rendered image while retaining its main features, and with a larger kernel radius the first rendered image takes on a watercolor or oil-painting look. The effect of the second rendered image obtained by adding the filter to the first rendered image is shown in fig. 6.
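The 1/2-resolution step can be sketched as a 2×2 box downsample. This is an assumption for illustration; an engine would more likely render directly into a half-resolution target rather than average pixels on the CPU.

```python
import numpy as np

def downscale_half(img: np.ndarray) -> np.ndarray:
    """Halve each image dimension by averaging 2x2 pixel blocks."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # drop odd edge rows/cols
    img = np.asarray(img, dtype=np.float64)[:h, :w]
    blocks = img.reshape((h // 2, 2, w // 2, 2) + img.shape[2:])
    return blocks.mean(axis=(1, 3))  # average within each 2x2 block
```

Applying it twice gives the 1/4-resolution variant mentioned above.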
Further, before the Kuwahara filter is applied, another filter may be applied to adjust the first rendered image; this other filter may be a box blur filter, a Gaussian blur filter, or a noise filter, which is not specifically limited in this example embodiment.
Specifically, when the other filter is a box blur filter, the color values of the pixels covered by an image kernel are first sampled, and the final color value of the central pixel is determined using fixed image-kernel weights. The image kernel is a small pixel matrix used to realize picture effects such as blurring, sharpening, contouring, or embossing; it may be 3 × 3 or 4 × 4, and the kernel weight may correspondingly be 1/9 or 1/16, neither of which is specifically limited in this exemplary embodiment. When the image kernel is 3 × 3 and the weight is 1/9, the 9 pixels covered by the kernel are sampled, and each pixel contributes 1/9 of its own color value to the final pixel color; that is, the color values of the 9 pixels covered by the image kernel are added, and the result is divided by 9 to obtain the blurred pixel color. Applying a box blur filter before applying the Kuwahara filter to the first rendered image yields smoother and more blurred shapes, and the resulting effect of the second rendered image can be seen in fig. 7.
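The 3 × 3 box blur described above, with each of the nine kernel samples contributing 1/9 of its color, can be sketched in Python as below. Border pixels are handled by clamping coordinates, which is an assumption for illustration; the patent does not specify border handling.

```python
def box_blur(img):
    """3x3 box blur on a grayscale image (list of rows): the nine
    kernel samples are averaged, i.e. each contributes 1/9 of its
    color value. Borders are handled by clamping coordinates."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    s += img[min(max(y + dy, 0), h - 1)] \
                            [min(max(x + dx, 0), w - 1)]
            out[y][x] = s / 9.0
    return out
```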
When the other filter is a Gaussian blur filter, the principle is the same as that of the box blur filter, except that the pixel weights follow a Gaussian falloff: the pixel at the center of the image kernel carries a larger weight than the pixels at the borders and corners of the kernel. The image kernel may be 3 × 3 or 4 × 4, which is not specifically limited in this example embodiment. The color values of the pixels covered by the kernel are added, with the weight of each pixel's color value following a Gaussian distribution.
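For a 3 × 3 kernel the common discrete Gaussian weights are 4/16 at the center, 2/16 at the edges, and 1/16 at the corners, summing to 1. The sketch below assumes those standard weights and clamped borders; the patent does not fix particular weight values.

```python
# Standard 3x3 Gaussian weights (divided by 16): the centre sample
# dominates, the corners contribute least, and the weights sum to 1.
GAUSS_3X3 = [[1, 2, 1],
             [2, 4, 2],
             [1, 2, 1]]

def gaussian_blur(img):
    """3x3 Gaussian blur on a grayscale image (list of rows), with
    borders handled by clamping coordinates."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    s += GAUSS_3X3[dy + 1][dx + 1] * \
                         img[min(max(y + dy, 0), h - 1)] \
                            [min(max(x + dx, 0), w - 1)]
            out[y][x] = s / 16.0
    return out
```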
When the other filter is a noise filter, a dynamic noise filter can be realized as follows: the cosine of the dot product of a pixel's UV coordinates with a constant vector is computed, multiplied by a pseudorandom constant term, and the fractional part of the product is returned. Because the pixel coordinates are offset as time changes, the image formed by adding the noise also changes over time, thereby realizing a dynamic noise filter. Applying the noise filter before applying the Kuwahara filter to the first rendered image produces a dynamic, unstable effect, and the resulting effect of the second rendered image can be seen in fig. 8.
In step S120, a normal channel corresponding to the model to be rendered is established, and a normal image corresponding to the model to be rendered is obtained through the normal channel.
In this exemplary embodiment, after the Kuwahara filter is applied to the rendered image, a rendered image with a stylized effect is obtained, but the original details of the image are lost during stylization, especially when the Kuwahara filter is applied to a reduced-resolution image or a larger kernel radius is used. To reconstruct these details, the model to be rendered, or a subset of it, may be rendered again through an additional normal channel to generate an image of the surface normals of the model. A normal is a vector describing the surface and curvature of the model to be rendered; in real-time rendering, illumination and shading calculations on the model require normal vectors. A normal vector is stored on each vertex of the three-dimensional model: the vertex defines the position, and the normal vector defines the direction of the curved surface. The normal image corresponding to the model to be rendered may be a camera-space normal image or a world-space normal image, which is not specifically limited in this exemplary embodiment; a normal, like any vector, can be represented in different spaces. Although vectors in real-time rendering are usually represented in world space, for image effects it is more appropriate to convert the normal into camera space so that the vector direction matches the direction of the camera.
When the normal image corresponding to the model to be rendered is a camera space normal image, obtaining the normal image corresponding to the model to be rendered through the normal channel, including:
converting the coordinates of the model to be rendered from object space to clipping space, converting the normal of the model to be rendered from model space to world space, and converting the normal of the model to be rendered from world space to camera space through the normal channel, so as to obtain the normal image corresponding to the model to be rendered.
Specifically, first, in order to acquire a camera-space normal image, all model shaders must use a separate channel to output the required coordinate system; in the present exemplary embodiment, a single normal channel may be rendered using the ScriptableRendererFeature mechanism of Unity (a game engine). Secondly, the coordinates of the model to be rendered are converted from object space to clipping space through the normal channel using the TransformObjectToHClip() function included in the HLSL (High Level Shader Language) syntax; thirdly, the normal of the model to be rendered is converted from model space to world space through the TransformObjectToWorldNormal() function; and finally, the normal of the model to be rendered is converted from world space to camera space through the TransformWorldToViewDir() function, thereby obtaining the normal image corresponding to the model to be rendered. The generated camera-space normal image is shown in fig. 9.
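The world-to-camera conversion performed by TransformWorldToViewDir() amounts to multiplying the normal by the rotational part of the view matrix. The Python sketch below illustrates this with a hypothetical view matrix (a 90° rotation about the Y axis chosen purely for the example); function and constant names are assumptions, not Unity API.

```python
def mat_vec(m, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

# Hypothetical camera rotation: 90 degrees about the Y axis, standing
# in for the rotational part of the view matrix.
VIEW_ROT = [[ 0.0, 0.0, 1.0],
            [ 0.0, 1.0, 0.0],
            [-1.0, 0.0, 0.0]]

def world_to_view_normal(n):
    """Re-express a world-space normal in camera (view) space, as the
    normal channel does before writing the camera-space normal image."""
    return mat_vec(VIEW_ROT, n)
```

With this camera, a world-space +X normal ends up pointing along -Z in view space, i.e. away from the camera's viewing direction.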
In step S130, a curvature image corresponding to the normal image and a contour of the normal image are generated based on the normal image, and a stylized rendering result of the model to be rendered is obtained according to the second rendering image, the curvature image and the contour of the normal image.
When the coloring amount of the model to be rendered is low, the coloring of the model can be reinforced through curvature. A curvature image corresponding to the normal image can therefore be obtained by applying a curvature filter to the normal image, so as to extract highlights and shadows of the model to be rendered; the curvature filter provides parameters to adjust the minimum and maximum brightness of the curvature result, as well as a sampling radius that determines the degree of detail retention. To reconstruct and emphasize the details of the rendered image lost while applying the Kuwahara filter, and to make the outline of the rendered image no longer match the smoothed colors exactly, which increases the hand-drawn feel of the rendered image and gives it a more artistic effect, the outline or strokes of the rendered image can be generated from the normal image by a stroke filter. The stroke filter provides an adjustable intensity parameter and a radius parameter, where the radius parameter controls the thickness of the strokes.
In the present exemplary embodiment, generating a curvature image corresponding to the normal image and an outline of the normal image based on the normal image includes:
and adjusting the curvature of the normal image through a curvature filter to obtain a curvature image corresponding to the normal image, and performing edge tracing on the normal image through a Sobel operator to generate the contour of the normal image.
Specifically, curvature represents the degree of change of the curved surface at a given point and is calculated from normal vectors. First, therefore, a normal image needs to be obtained; two horizontal vectors and two vertical vectors are constructed between the normal vector of each pixel in the normal image and the normal vectors of its adjacent pixels (up, down, left, and right) to calculate the curvature, and the calculated curvature is adjusted by the parameters of the curvature filter. The contour of the normal image may be generated by the Sobel operator, which mainly calculates horizontal and vertical derivatives based on neighboring pixels in the positive and negative directions; that is, the image is edged according to the relative change between pixel color values, which in this example embodiment may be based on the relative change of the picture's gray value. The image boundary may be determined by an image kernel: specifically, approximate values of the gray-scale partial derivatives in the horizontal and vertical directions are calculated through a convolution factor in the horizontal direction and a convolution factor in the vertical direction, respectively, and the final color value of the pixel covered by the image kernel is obtained from the horizontal and vertical approximations. Based on the normal image shown in fig. 9, the generated contour of the normal image and the curvature image corresponding to the normal image can be seen in fig. 10.
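The Sobel step can be sketched with the standard horizontal and vertical convolution factors: the two 3 × 3 kernels approximate the gray-value partial derivatives, and their combined magnitude marks the edge strength. The function name and the clamped border handling below are illustrative assumptions.

```python
import math

# Standard Sobel convolution factors for the horizontal (x) and
# vertical (y) gray-value derivatives.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img, y, x):
    """Approximate the horizontal and vertical derivatives of a
    grayscale image (list of rows) at (y, x) with the Sobel factors,
    clamping samples at the borders, and return the edge strength."""
    gx = gy = 0.0
    for j in range(3):
        for i in range(3):
            p = img[min(max(y + j - 1, 0), len(img) - 1)] \
                   [min(max(x + i - 1, 0), len(img[0]) - 1)]
            gx += SOBEL_X[j][i] * p
            gy += SOBEL_Y[j][i] * p
    return math.hypot(gx, gy)
```

A flat region yields zero magnitude, while a vertical gray-value step produces a strong response, which is what the edge-tracing step thresholds into the contour.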
After obtaining the curvature image corresponding to the normal image and the contour of the normal image, referring to fig. 11, obtaining the stylized rendering result of the model to be rendered according to the second rendering image, the curvature image corresponding to the normal image and the contour of the normal image may include step S1110 and step S1120:
step S1110, superposing the curvature image to the second rendering image to obtain a third rendering image;
step S1120, the contour of the normal image is superposed to the third rendering image to obtain a stylized rendering result of the model to be rendered.
Hereinafter, step S1110 and step S1120 will be explained. Specifically, the curvature image corresponding to the normal image is superposed onto the second rendered image obtained after the Kuwahara filter is applied, yielding the third rendered image; the contour of the normal image is then superposed onto the third rendered image, which adds a dark edge along the contour of the third rendered image while reducing the influence on its bright areas, that is, the white parts of the third rendered image are not affected.
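A per-pixel sketch of this two-step compositing is given below. Both blend formulas are illustrative assumptions, not the patent's equations: the curvature is applied as a brightness shift around a 0.5 midpoint, and the contour darkening is scaled by (1 - value) so that pure-white pixels are left untouched, matching the behavior described above.

```python
def composite(second_px, curvature_px, edge_px):
    """Composite one grayscale pixel of the stylized result.

    Step S1110: overlay the curvature value (midpoint 0.5) on the
    filtered render to obtain the third-image value.
    Step S1120: darken along the detected contour, scaled so that a
    pure-white pixel (value 1.0) is unaffected."""
    third = min(1.0, max(0.0, second_px + (curvature_px - 0.5)))
    return third * (1.0 - edge_px * (1.0 - third))
```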
The stylized rendering method provided by this exemplary embodiment of the disclosure has at least the following advantages: on one hand, processing the rendered image of the model to be rendered improves the stylization efficiency of the model; on another hand, reducing the resolution of the rendered image before the image filter is added lowers the cost of image processing after the filter is added in real-time rendering; on yet another hand, superposing the contour effect of the normal image onto the third rendered image reduces the influence on the bright areas of the third rendered image and enhances the detail representation of the model to be rendered. An effect diagram obtained by the stylized rendering method can be seen in fig. 12.
Hereinafter, the stylized rendering method according to the exemplary embodiment of the present disclosure is further explained and explained with reference to fig. 13. Fig. 13 is a schematic flowchart of a stylized rendering method, where the stylized rendering method may include:
step S1310, obtaining a rendering and coloring result of the model to be rendered based on physics, and removing coloring of the model to be rendered according to the rendering and coloring result based on physics to obtain a target rendering model;
step S1320, obtaining a rendered image of the target rendering model, and reducing the resolution of the rendered image based on a preset proportion to obtain a first rendered image;
step S1330, adding an image filter to the first rendered image to obtain a second rendered image;
step S1340, establishing a normal channel corresponding to the target rendering model, and generating a normal image of the target rendering model through the normal channel;
step S1350, adding a curvature filter and a Sobel filter to the normal image to obtain a curvature image corresponding to the normal image and the contour of the normal image;
step S1360, superposing the second rendered image and the curvature image corresponding to the normal image to obtain a third rendered image;
step S1370, superposing the contour of the normal image onto the third rendered image to obtain a stylized rendering result of the model to be rendered.
An exemplary embodiment of the present disclosure further provides a stylized rendering apparatus, as shown in fig. 14, which may include: a second rendered image obtaining module 1410, a normal image obtaining module 1420, and a stylized rendering result obtaining module 1430. Wherein:
a second rendering image obtaining module 1410, configured to obtain a rendering image corresponding to the model to be rendered, process the rendering image based on a preset ratio to obtain a first rendering image, and add an image filter to the first rendering image to obtain a second rendering image;
a normal image obtaining module 1420, configured to establish a normal channel corresponding to the model to be rendered, and obtain a normal image corresponding to the model to be rendered through the normal channel;
and a stylized rendering result obtaining module 1430, configured to generate a curvature image corresponding to the normal image and a contour of the normal image based on the normal image, and obtain a stylized rendering result of the model to be rendered according to the second rendering image, the curvature image and the contour of the normal image.
The specific details of each module in the stylized rendering apparatus have been described in detail in the corresponding stylized rendering method, and therefore are not described herein again.
In an exemplary embodiment of the present disclosure, the stylized rendering method further includes:
acquiring a default rendering and coloring result based on physics of the model to be rendered; wherein the physics-based rendering shading results include one or more of: a reflectivity map, a metallization map, a roughness map and a normal map of the model to be rendered;
mixing the default rendering and coloring result based on physics with a non-illumination diffuse reflection coloring result based on a preset coloring quantity parameter so as to reduce the coloring quantity of the model to be rendered; and the non-illumination diffuse reflection coloring result is a color parameter corresponding to the reflectivity map of the model to be rendered.
In an exemplary embodiment of the present disclosure, the mixing the default rendering shading result based on physics with no-illumination diffuse reflection shading result based on a preset shading amount parameter includes:
and mixing the default rendering and coloring result based on physics with the color parameter included in the reflectivity map of the model to be rendered based on a preset coloring amount parameter.
In an exemplary embodiment of the present disclosure, obtaining a rendering image corresponding to a model to be rendered, and processing the rendering image based on a preset ratio to obtain a first rendering image includes:
rendering the model to be rendered to obtain a rendered image corresponding to the model to be rendered;
and processing the resolution of the rendered image based on a preset proportion to obtain a first rendered image of the model to be rendered.
In an exemplary embodiment of the present disclosure, the normal image corresponding to the model to be rendered is a camera space normal image or a world space normal image;
when the normal image corresponding to the model to be rendered is a camera space normal image, obtaining the normal image corresponding to the model to be rendered through the normal channel, including:
converting the coordinates of the model to be rendered from object space to clipping space, converting the normal of the model to be rendered from model space to world space, and converting the normal of the model to be rendered from world space to camera space through the normal channel, so as to obtain the normal image corresponding to the model to be rendered.
In an exemplary embodiment of the present disclosure, generating a curvature image corresponding to the normal image and an outline of the normal image based on the normal image includes:
and adjusting the curvature of the normal image through a curvature filter to obtain a curvature image corresponding to the normal image, and performing edge tracing on the normal image through a Sobel operator to generate the outline of the normal image.
In an exemplary embodiment of the present disclosure, obtaining a stylized rendering result of the model to be rendered according to the second rendering image, the curvature image, and the contour of the normal image includes:
superposing the curvature image to the second rendering image to obtain a third rendering image;
and superposing the contour of the normal image to the third rendering image to obtain a stylized rendering result of the model to be rendered.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the invention, the features and functionality of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Moreover, although the steps of the methods of the present invention are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
In an exemplary embodiment of the present invention, there is also provided an electronic device capable of implementing the above method.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," module "or" system.
An electronic device 1500 according to this embodiment of the invention is described below with reference to fig. 15. The electronic device 1500 shown in fig. 15 is only an example and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 15, electronic device 1500 is in the form of a general purpose computing device. Components of electronic device 1500 may include, but are not limited to: the at least one processing unit 1510, the at least one storage unit 1520, a bus 1530 connecting different system components (including the storage unit 1520 and the processing unit 1510), and a display unit 1540.
Wherein the memory unit stores program code that is executable by the processing unit 1510 to cause the processing unit 1510 to perform steps according to various exemplary embodiments of the present invention as described in the above section "exemplary methods" of the present specification. For example, the processing unit 1510 may perform step S110 as shown in fig. 1: the method comprises the steps of obtaining a rendering image corresponding to a model to be rendered, processing the rendering image based on a preset proportion to obtain a first rendering image, and adding an image filter to the first rendering image to obtain a second rendering image; s120: establishing a normal channel corresponding to the model to be rendered, and obtaining a normal image corresponding to the model to be rendered through the normal channel; s130: generating a curvature image corresponding to the normal image and the contour of the normal image based on the normal image, and obtaining a stylized rendering result of the model to be rendered according to the second rendering image, the curvature image and the contour of the normal image.
The storage unit 1520 may include readable media in the form of volatile storage units, such as a random access memory unit (RAM) 15201 and/or a cache memory unit 15202, and may further include a read-only memory unit (ROM) 15203.
Storage unit 1520 may also include a program/utility 15204 having a set (at least one) of program modules 15205, such program modules 15205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 1530 may be any bus representing one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 1500 can also communicate with one or more external devices 1600 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1500, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 1500 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interface 1550. Also, the electronic device 1500 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 1560. As shown, the network adapter 1560 communicates with the other modules of the electronic device 1500 over the bus 1530. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 1500, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiment of the present invention can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to make a computing device (which can be a personal computer, a server, a terminal device, or a network device, etc.) execute the method according to the embodiment of the present invention.
In an exemplary embodiment of the present invention, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above section "exemplary methods" of the present description, when said program product is run on the terminal device.
According to the program product for realizing the method, the portable compact disc read only memory (CD-ROM) can be adopted, the program code is included, and the program product can be operated on terminal equipment, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims (10)

1. A stylized rendering method, comprising:
the method comprises the steps of obtaining a rendering image corresponding to a model to be rendered, processing the rendering image based on a preset proportion to obtain a first rendering image, and adding an image filter to the first rendering image to obtain a second rendering image;
establishing a normal channel corresponding to the model to be rendered, and obtaining a normal image corresponding to the model to be rendered through the normal channel;
generating a curvature image corresponding to the normal image and the contour of the normal image based on the normal image, and obtaining a stylized rendering result of the model to be rendered according to the second rendering image, the curvature image and the contour of the normal image.
2. The stylized rendering method of claim 1, further comprising:
obtaining a rendering and coloring result of the model to be rendered based on physics; wherein the physics-based rendering shading results include one or more of: a reflectivity map, a metallization map, a roughness map and a normal map of the model to be rendered;
mixing the rendering and coloring result based on physics with a non-illumination diffuse reflection coloring result based on a preset coloring quantity parameter so as to reduce the coloring quantity of the model to be rendered; and the non-illumination diffuse reflection coloring result is a color parameter corresponding to the reflectivity map of the model to be rendered.
3. The stylized rendering method of claim 2, wherein the mixing of the physics-based rendering shading result with the non-illumination diffuse reflection shading result based on a preset shading amount parameter comprises:
and mixing the rendering and coloring result based on physics with the color parameter corresponding to the reflectivity mapping of the model to be rendered based on a preset coloring amount parameter.
4. The stylized rendering method of claim 1, wherein obtaining a rendered image corresponding to a model to be rendered, and processing the rendered image based on a preset proportion to obtain a first rendered image, comprises:
rendering the model to be rendered to obtain a rendered image corresponding to the model to be rendered;
and processing the resolution of the rendered image based on a preset proportion to obtain a first rendered image of the model to be rendered.
5. The stylized rendering method of claim 4, characterised in that the normal image corresponding to the model to be rendered is a camera space normal image or a world space normal image;
when the normal image corresponding to the model to be rendered is a camera space normal image, obtaining the normal image corresponding to the model to be rendered through the normal channel, including:
converting the coordinates of the model to be rendered from object space to clipping space, converting the normal of the model to be rendered from model space to world space, and converting the normal of the model to be rendered from world space to camera space through the normal channel, so as to obtain the normal image corresponding to the model to be rendered.
6. The stylized rendering method of claim 5, wherein generating a curvature image corresponding to the normal image and a contour of the normal image based on the normal image comprises:
applying a curvature filter to the normal image to obtain the curvature image corresponding to the normal image, and performing edge detection on the normal image with a Sobel operator to generate the contour of the normal image.
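The Sobel step of claim 6 convolves a single-channel view of the normal image with two 3x3 gradient kernels and takes the gradient magnitude; thresholding that magnitude yields the contour. A self-contained sketch (a real implementation would use a vectorized convolution rather than Python loops):

```python
import numpy as np

# Sobel kernels for horizontal and vertical gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(gray: np.ndarray) -> np.ndarray:
    """Gradient magnitude of a single-channel image via the Sobel operator."""
    h, w = gray.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = gray[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(SOBEL_X * patch)   # horizontal gradient
            gy = np.sum(SOBEL_Y * patch)   # vertical gradient
            out[y, x] = np.hypot(gx, gy)
    return out

# A vertical step edge produces a strong response along the boundary.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_magnitude(img)
```

Discontinuities in the normal image correspond to silhouette and crease edges of the model, which is why running Sobel on normals (rather than color) gives clean contour lines.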
7. The stylized rendering method of claim 6, wherein obtaining the stylized rendering result of the model to be rendered according to the second rendered image, the curvature image and the contour of the normal image comprises:
superimposing the curvature image onto the second rendered image to obtain a third rendered image;
and superimposing the contour of the normal image onto the third rendered image to obtain the stylized rendering result of the model to be rendered.
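The two superimposition steps of claim 7 can be sketched as layered compositing. Treating the curvature image as a multiplicative shading layer and the contour as black line work is an illustrative choice here; the claim itself only fixes the order in which the layers are applied:

```python
import numpy as np

def composite(second_render: np.ndarray,
              curvature: np.ndarray,
              contour: np.ndarray) -> np.ndarray:
    """Superimpose the curvature image, then the contour, onto the render.

    second_render: (H, W, 3) color image in [0, 1]
    curvature:     (H, W) multiplicative layer (1.0 = neutral)
    contour:       (H, W) line mask in [0, 1] (1.0 = full black line)
    """
    # Step 1: curvature layer -> third rendered image.
    third_render = np.clip(second_render * curvature[..., None], 0.0, 1.0)
    # Step 2: contour layer -> stylized rendering result.
    return third_render * (1.0 - contour[..., None])

# Neutral curvature and an empty contour leave the render unchanged.
base = np.full((4, 4, 3), 0.5)
result = composite(base, np.ones((4, 4)), np.zeros((4, 4)))
```

The ordering matters: applying the contour last keeps the line work crisp on top of the curvature-modulated shading.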
8. A stylized rendering apparatus, comprising:
a second rendered image acquisition module, configured to obtain a rendered image corresponding to a model to be rendered, process the rendered image based on a preset proportion to obtain a first rendered image, and apply an image filter to the first rendered image to obtain a second rendered image;
a normal image acquisition module, configured to establish a normal channel corresponding to the model to be rendered and obtain a normal image corresponding to the model to be rendered through the normal channel;
and a stylized rendering result acquisition module, configured to generate a curvature image corresponding to the normal image and a contour of the normal image based on the normal image, and obtain a stylized rendering result of the model to be rendered according to the second rendered image, the curvature image and the contour of the normal image.
9. A readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the stylized rendering method of any one of claims 1-7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the stylized rendering method of any of claims 1-7 via execution of the executable instructions.
CN202110586491.2A 2021-05-27 2021-05-27 Stylized rendering method and device, readable storage medium and electronic equipment Active CN113240783B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110586491.2A CN113240783B (en) 2021-05-27 2021-05-27 Stylized rendering method and device, readable storage medium and electronic equipment


Publications (2)

Publication Number Publication Date
CN113240783A true CN113240783A (en) 2021-08-10
CN113240783B CN113240783B (en) 2023-06-27

Family

ID=77139248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110586491.2A Active CN113240783B (en) 2021-05-27 2021-05-27 Stylized rendering method and device, readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113240783B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658316A (en) * 2021-10-18 2021-11-16 北京市商汤科技开发有限公司 Rendering method and device of three-dimensional model, storage medium and computer equipment
CN114119835A (en) * 2021-12-03 2022-03-01 北京冰封互娱科技有限公司 Hard surface model processing method and device and electronic equipment
CN114119847A (en) * 2021-12-05 2022-03-01 北京字跳网络技术有限公司 Graph processing method and device, computer equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5966134A (en) * 1996-06-28 1999-10-12 Softimage Simulating cel animation and shading
JP2001084404A (en) * 1999-09-14 2001-03-30 Square Co Ltd Method and device for rendering, game machine, and computer readable recording medium for storing program for rendering three-dimensional model
CN1750046A (en) * 2005-10-20 2006-03-22 浙江大学 Three-dimensional ink and wash effect rendering method based on graphic processor
US7098925B1 (en) * 2000-03-10 2006-08-29 Intel Corporation Shading of images using texture
JP2007272273A (en) * 2006-03-30 2007-10-18 Namco Bandai Games Inc Image generation system, program, and information storage medium
US20110050696A1 (en) * 2009-08-31 2011-03-03 Canon Kabushiki Kaisha Efficient radial gradient fills
CN104966312A (en) * 2014-06-10 2015-10-07 腾讯科技(深圳)有限公司 Method for rendering 3D model, apparatus for rendering 3D model and terminal equipment
CN109685869A (en) * 2018-12-25 2019-04-26 网易(杭州)网络有限公司 Dummy model rendering method and device, storage medium, electronic equipment


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
D. PRYKHODKO, LEONID GAIAZOV: "Real-Time NPR System with Multiple Styles", Computer Science *
PETER BENILOV: "Non Photorealistic Techniques with Focus and Context Volume Rendering", School of Computer Science and Statistics, Trinity College Dublin *
WEI QI: "Research and Implementation of a Non-Photorealistic Rendering Method for Trees in 3D Ink-Wash Painting Style", China Master's Theses Full-text Database, Philosophy and Humanities *
ZHOU CHONG: "Cartoon Style Rendering of Two-Dimensional Color Images", China Master's Theses Full-text Database, Information Science and Technology *
LIU YOUQUAN ET AL.: "Line-Enhanced Abstract Painting Generation for Building Images", Journal of Computer-Aided Design & Computer Graphics *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658316A (en) * 2021-10-18 2021-11-16 北京市商汤科技开发有限公司 Rendering method and device of three-dimensional model, storage medium and computer equipment
CN113658316B (en) * 2021-10-18 2022-03-08 北京市商汤科技开发有限公司 Rendering method and device of three-dimensional model, storage medium and computer equipment
CN114119835A (en) * 2021-12-03 2022-03-01 北京冰封互娱科技有限公司 Hard surface model processing method and device and electronic equipment
CN114119835B (en) * 2021-12-03 2022-11-08 北京冰封互娱科技有限公司 Hard surface model processing method and device and electronic equipment
CN114119847A (en) * 2021-12-05 2022-03-01 北京字跳网络技术有限公司 Graph processing method and device, computer equipment and storage medium
CN114119847B (en) * 2021-12-05 2023-11-07 北京字跳网络技术有限公司 Graphic processing method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN113240783B (en) 2023-06-27

Similar Documents

Publication Publication Date Title
US11257286B2 (en) Method for rendering of simulating illumination and terminal
CN113240783B (en) Stylized rendering method and device, readable storage medium and electronic equipment
Lu et al. Non-photorealistic volume rendering using stippling techniques
US9684997B2 (en) Efficient rendering of volumetric elements
CN107045729B (en) A kind of image rendering method and device
CN109448137B (en) Interaction method, interaction device, electronic equipment and storage medium
Cignoni et al. A simple normal enhancement technique for interactive non-photorealistic renderings
US9189883B1 (en) Rendering of multiple volumes
CN108805971B (en) Ambient light shielding method
CN111583379B (en) Virtual model rendering method and device, storage medium and electronic equipment
US8847963B1 (en) Systems and methods for generating skin and volume details for animated characters
Magdics et al. Post-processing NPR effects for video games
Winnemöller NPR in the Wild
Pessoa et al. RPR-SORS: Real-time photorealistic rendering of synthetic objects into real scenes
CN115100337A (en) Whole body portrait video relighting method and device based on convolutional neural network
US11804008B2 (en) Systems and methods of texture super sampling for low-rate shading
Haller et al. A loose and sketchy approach in a mediated reality environment
CN113888398B (en) Hair rendering method and device and electronic equipment
CN116363288A (en) Rendering method and device of target object, storage medium and computer equipment
KR100454070B1 (en) Method for Real-time Toon Rendering with Shadow using computer
CN115311395A (en) Three-dimensional scene rendering method, device and equipment
CN113936080A (en) Rendering method and device of virtual model, storage medium and electronic equipment
Yuan et al. GPU-based rendering and animation for Chinese painting cartoon
Garcia et al. Coherent Mark‐based Stylization of 3D Scenes at the Compositing Stage
Curtis et al. Real-time non-photorealistic animation for immersive storytelling in “Age of Sail”

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant