CN112691373A - Rendering method, device and equipment of virtual object and computer-readable storage medium
- Publication number
- CN112691373A (application CN202110040561.4A)
- Authority
- CN
- China
- Prior art keywords
- scattering
- rendered
- sub
- rendering
- parameter
- Prior art date
- Legal status
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/506—Illumination models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/55—Radiosity
Abstract
The embodiment of the application provides a rendering method, apparatus, and device for a virtual object, and a computer-readable storage medium. The method includes the following steps: displaying a virtual object model, where the virtual object model includes an object to be rendered, and the material of the entity corresponding to the object to be rendered has a multilayer structure; in response to a scattering rendering operation for the object to be rendered, displaying a scattering depth control and a scattering color control, where the scattering depth control is used to trigger adjustment of the scattering depth, and the scattering color control is used to trigger adjustment of the scattering color; and in response to an adjustment operation acting on a scattering parameter control, displaying, for the object to be rendered, sub-surface scattering information adapted to the adjusted scattering parameter, where the scattering parameter control includes one or both of the scattering depth control and the scattering color control, and the adjusted scattering parameter includes one or both of the adjusted scattering depth and the adjusted scattering color. Through the embodiment of the application, the rendering effect of sub-surface scattering of the virtual object can be improved.
Description
Technical Field
The present application relates to rendering technologies in the field of computer applications, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for rendering a virtual object.
Background
Sub-surface scattering is a light transport mechanism in which light penetrates the surface of a translucent object, interacts with and scatters within the material, and eventually leaves the surface at different locations. When rendering a virtual scene, simulating the sub-surface scattering effect improves the realism of the rendered virtual object.
Rendering of the sub-surface scattering effect of a virtual object is typically achieved in a pre-integration-based manner: the diffuse reflection component of the material with the sub-surface scattering effect on the virtual object is simulated by sampling a pre-integrated map, and the sub-surface scattering effect of that material is then rendered from the pre-integrated map. However, because the pre-integrated map is obtained by pre-calculation, it differs from real-time lighting information, so the rendered sub-surface scattering effect is poor.
Disclosure of Invention
The embodiment of the application provides a rendering method, apparatus, and device for a virtual object, and a computer-readable storage medium, which can improve the rendering effect of sub-surface scattering.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a rendering method of a virtual object, which comprises the following steps:
displaying a virtual object model, wherein the virtual object model comprises an object to be rendered, and the material of an entity corresponding to the object to be rendered has a multilayer structure;
in response to a scattering rendering operation for the object to be rendered, displaying a scattering depth control and a scattering color control, wherein the scattering depth control is used for triggering adjustment of scattering depth, and the scattering color control is used for triggering adjustment of scattering color;
in response to an adjustment operation acting on a scattering parameter control, displaying, for the object to be rendered, sub-surface scattering information adapted to the adjusted scattering parameter, wherein the scattering parameter control includes one or both of the scattering depth control and the scattering color control, and the adjusted scattering parameter includes one or both of the adjusted scattering depth and the adjusted scattering color.
An embodiment of the present application provides a rendering apparatus for a virtual object, including:
the model display module is used for displaying a virtual object model, wherein the virtual object model comprises an object to be rendered, and the material of an entity corresponding to the object to be rendered has a multilayer structure;
the control display module is used for responding to scattering rendering operation aiming at the object to be rendered, and displaying a scattering depth control and a scattering color control, wherein the scattering depth control is used for triggering adjustment of scattering depth, and the scattering color control is used for triggering adjustment of scattering color;
and the scattering rendering module is used for responding to an adjusting operation acted on a scattering parameter control, and displaying sub-surface scattering information matched with the adjusted scattering parameter aiming at the object to be rendered, wherein the scattering parameter control comprises one or two of the scattering depth control and the scattering color control, and the adjusted scattering parameter comprises one or two of the adjusted scattering depth and the adjusted scattering color.
In this embodiment of the present application, the rendering apparatus further includes an illumination calculation module, configured to display an illumination control, where the illumination control is configured to trigger setting of an illumination parameter, and the illumination parameter includes one or two of a camera parameter and an illumination environment parameter; responding to an illumination parameter setting operation acted on the illumination control, and displaying the illumination parameter; and performing illumination calculation on the object to be rendered based on the illumination parameters to obtain highlight information to be rendered, basic color information to be rendered and a sub-surface scattering object.
In this embodiment of the application, the scattering rendering module is further configured to display the adjusted scattering parameter in response to the adjustment operation acting on the scattering parameter control; and perform rendering processing of sub-surface scattering on the sub-surface scattering object based on the adjusted scattering parameter, display a fusion result of the highlight information to be rendered, the basic color information to be rendered, and the rendering processing result, and display, for the object to be rendered, the sub-surface scattering information adapted to the adjusted scattering parameter under the action of the illumination parameter, where the sub-surface scattering information is the fusion result.
In an embodiment of the present application, the scattering parameter control includes the scattering depth control and the scattering color control, and the adjusted scattering parameter includes the adjusted scattering depth and the adjusted scattering color; the scattering rendering module is further configured to determine a sub-surface scattering range of the object to be rendered based on the adjusted scattering depth; determining a sub-surface scattering color intensity of the object to be rendered based on the adjusted scattering color; and performing rendering processing of sub-surface scattering on the sub-surface scattering object by combining the sub-surface scattering range and the sub-surface scattering color intensity.
In this embodiment of the application, the scattering rendering module is further configured to obtain camera coordinate information of each to-be-rendered pixel point of the to-be-rendered object in a camera coordinate system, where the camera coordinate system is determined based on the camera parameters; determining a camera scattering frame based on the adjusted scattering depth by taking the camera coordinate information as a reference; projecting the camera scattering frame to a standard coordinate system to obtain a standard scattering frame; acquiring standard coordinate information of each pixel point to be rendered under the standard coordinate system; and determining a sub-surface scattering range of each pixel point to be rendered under the standard coordinate system based on the standard scattering frame and the standard coordinate information, so as to obtain the sub-surface scattering range which corresponds to the object to be rendered and comprises a plurality of sub-surface scattering ranges.
In this embodiment of the application, the illumination calculation module is further configured to perform illumination calculation on the object to be rendered based on the illumination parameter, so as to obtain normal information to be rendered.
In this embodiment of the application, the scattering rendering module is further configured to determine, based on the standard scattering frame and the standard coordinate information, a scattering range of the sub-surface to be corrected of each pixel point to be rendered in the standard coordinate system; and correcting the sub-surface scattering range to be corrected based on the normal information to be rendered to obtain the sub-surface scattering range.
In this embodiment of the application, the scattering rendering module is further configured to correct the sub-surface scattering range to be corrected based on the to-be-rendered normal information, so as to obtain an initial sub-surface scattering range of each to-be-rendered pixel point in the standard coordinate system; acquiring a noise sampling result corresponding to a preset noise map; and carrying out disturbance processing on the initial sub-surface scattering range by combining the noise sampling result and preset noise influence strength to obtain the sub-surface scattering range.
In this embodiment of the application, the scattering rendering module is further configured to determine a target close-range parameter corresponding to the adjusted scattering color based on a ratio of a close-range parameter threshold to a far-range parameter threshold; determine the adjusted scattering color as a target far-range parameter; and determine a mixing weight threshold, the target close-range parameter, and the target far-range parameter as the sub-surface scattering color intensity.
In this embodiment of the present application, the scattering rendering module is further configured to simulate a scattering profile of the sub-surface scattering object in combination with the sub-surface scattering color intensity; and performing post-processing on the scattering profile based on the convolution direction corresponding to the sub-surface scattering range, thereby finishing rendering processing of the sub-surface scattering object.
In this embodiment of the present application, the control display module is further configured to display a scattering depth unit control, where the scattering depth unit control is configured to trigger adjustment of a unit of scattering depth; displaying the adjusted scattering depth units in response to a scattering depth unit adjustment operation acting on the scattering depth unit control.
In this embodiment of the application, the scattering rendering module is further configured to, in response to the adjustment operation acting on the scattering parameter control, display, for the object to be rendered, the sub-surface scattering information adapted to the adjusted scattering parameter in the adjusted scattering depth unit.
In this embodiment of the present application, the rendering apparatus further includes a remaining object rendering module, configured to obtain remaining objects to be rendered of the virtual object model except the object to be rendered; and obtaining target residual rendering objects corresponding to the residual objects to be rendered.
In this embodiment of the application, the rendering apparatus further includes a model fusion module, configured to combine the target remaining rendering objects and the target rendering objects into a rendering model corresponding to the virtual object model, where the target rendering object is the object to be rendered that displays the sub-surface scattering information adapted to the adjusted scattering parameter; and displaying the rendering model, and finishing the rendering of the virtual object corresponding to the virtual object model.
An embodiment of the present application provides a rendering device for a virtual object, including:
a memory for storing executable instructions;
and the processor is used for realizing the rendering method of the virtual object provided by the embodiment of the application when the executable instruction stored in the memory is executed.
Embodiments of the present application provide a computer-readable storage medium storing executable instructions that, when executed, cause a processor to implement the rendering method of a virtual object provided in the embodiments of the present application.
The embodiment of the application has at least the following beneficial effects: in the virtual object model, for an object to be rendered that has a sub-surface scattering effect, rendering of the sub-surface scattering effect can be achieved by adjusting the scattering color and/or the scattering depth. Because the scattering color represents the scattering intensity of the object to be rendered for different colors of illumination, and the scattering depth represents the illumination scattering range of the object to be rendered, the scattering color and scattering depth that produce a good sub-surface scattering effect can be adjusted quickly and accurately, and the rendered sub-surface scattering effect is better; therefore, the rendering effect of sub-surface scattering can be improved.
Drawings
FIG. 1 is a diagram of an exemplary page for adjusting sub-surface scattering parameters;
FIG. 2 is an alternative architectural diagram of a rendering system provided by embodiments of the present application;
fig. 3 is a schematic structural diagram of a terminal in fig. 2 according to an embodiment of the present disclosure;
FIG. 4 is an alternative flowchart of a rendering method for virtual objects according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an exemplary adjustment page of scattering parameters provided by an embodiment of the present application;
FIG. 6a is a rendering diagram of an exemplary sub-surface scattering effect provided by an embodiment of the present application;
FIG. 6b is a diagram illustrating a rendering result of an exemplary process for adjusting scattering depth according to an embodiment of the present disclosure;
FIG. 7 is a diagram illustrating an exemplary result of a rendering process provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of an exemplary pre-set noise map provided by an embodiment of the present application;
FIG. 9 is a diagram of another exemplary rendering processing result provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of an exemplary simulated scattering profile provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of another exemplary adjustment page of scattering parameters provided by an embodiment of the present application;
FIG. 12 is a schematic flowchart of another alternative rendering method for virtual objects according to an embodiment of the present disclosure;
FIG. 13 is a schematic diagram illustrating an exemplary rendering process of a virtual object according to an embodiment of the present application;
FIG. 14 is a diagram illustrating a visualization result of an exemplary rendered object provided by an embodiment of the present application;
FIG. 15 is a diagram of an exemplary target rendering object provided by an embodiment of the present application;
FIG. 16 is a schematic diagram of another exemplary target rendering object provided by an embodiment of the present application;
FIG. 17 is a diagram of an exemplary rendering pipeline setup page provided by an embodiment of the present application;
fig. 18 is a schematic diagram of an exemplary rendering configuration provided in an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application will be described in further detail with reference to the attached drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) Sub-surface scattering (SSS or 3S): the distance between the position where light exits and the position where light enters exceeds one pixel, so that the illumination result of a pixel is influenced by the illumination of the current pixel point and of nearby points, forming the sub-surface scattering effect. In the embodiments of the present application, a material having the sub-surface scattering effect generally refers to a translucent material comprising a multi-layer structure, such as skin, leaves, jade ware, etc.
2) Diffuse reflection (diffuse): when a parallel incident light beam strikes a rough surface, the surface reflects the light in all directions; although the incident rays are parallel to each other, the reflected rays are scattered randomly in different directions because the normal directions at different points of the surface are non-uniform. This reflection is called diffuse reflection.
3) Control: triggerable information displayed in the form of buttons, icons, links, text, selection boxes, input boxes, tabs, and the like; the triggering mode can be contact triggering, non-contact triggering, command-receiving triggering, and the like. In addition, a control in the embodiments of the present application may be a single control or a collective name for multiple controls.
4) Operation: a manner of triggering the device to execute processing, such as a click operation, a double-click operation, a long-press operation, a sliding operation, a gesture operation, a received trigger instruction, and the like; in addition, each operation in the embodiments of the present application may be a single operation or a collective name for multiple operations.
5) Client: an application program running in the terminal for providing various services, such as a rendering client; the rendering device of the virtual object is the device running the rendering client.
6) "In response to": indicates the condition or state on which a performed operation depends; when the dependent condition or state is satisfied, the performed operation may be executed in real time or with a set delay. Unless otherwise specified, there is no restriction on the execution order of the operations performed.
7) Virtual object: the image of various people and objects that can interact in the virtual scene, or a movable object in the virtual scene; the movable object can be a virtual character, a virtual animal, an animation character, etc., such as characters, animals, plants, oil drums, walls, and stones displayed in the virtual scene. In addition, the virtual object may be an avatar representing the user in the virtual scene; the virtual scene may include multiple virtual objects, each of which has its own shape and volume and occupies part of the space in the virtual scene. A virtual object is rendered based on its virtual object model.
8) Highlight: an art term referring to the brightest part of an object when a light source irradiates the object and the light is reflected into the human eye; in the embodiments of the present application, this is also called the highlight effect or reflective effect.
It should be noted that, when rendering of the sub-surface scattering effect of the virtual object is implemented in the pre-integration manner, the rendering effect can be improved by manually adjusting the pre-integration map; however, since manual adjustment of the pre-integration map requires artistic experience, it is complex and time-consuming.
In addition to the pre-integration manner, rendering of the sub-surface scattering effect of a virtual object can also be achieved with a separable sub-surface scattering scheme in screen space; that is, a post-processing method simulates the sub-surface scattering effect in screen space by applying two blur operations, one horizontal and one vertical, to the finally generated material with the sub-surface scattering effect that stores the diffuse reflection component information. However, in this scheme the parameters to be adjusted include a blend weight and the standard deviations of the three channels of a short-distance Gaussian function and a long-distance Gaussian function; see fig. 1, a schematic diagram of an exemplary adjustment page of sub-surface scattering parameters. As shown in fig. 1, a parameter adjustment control 1-11 corresponding to the short-distance Gaussian function, a parameter adjustment control 1-12 corresponding to the long-distance Gaussian function, and a weight control 1-13 are displayed on the adjustment page 1-1; the parameter adjustment controls 1-11 and 1-12 each cover the three channels. The main defect 1-2 of the separable screen-space sub-surface scattering scheme is therefore the mixture of Gaussian parameters, which makes adjusting the sub-surface scattering parameters difficult, for two reasons: first, the parameters of the Gaussian functions are purely mathematical; second, the adjustable range has 7 degrees of freedom, which is too large, and many parameter combinations are invalid, so the rendering user must fit the effect visually by eye. This still requires artistic experience, and the adjustment is complex and time-consuming.
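To make the prior-art parameter space concrete, the following is a minimal sketch of the dual-Gaussian diffusion profile behind the separable screen-space scheme; the function name and exact normalization are assumptions, but it illustrates the 7 degrees of freedom (one blend weight plus near and far standard deviations for each of the three color channels) criticized above:

```python
import numpy as np

def dual_gaussian_profile(r, w, sigma_near, sigma_far):
    """Per-channel diffusion profile of the prior-art separable SSS scheme:
    a blend of a near Gaussian and a far Gaussian. With one shared blend
    weight w and 3 per-channel values in each of sigma_near and sigma_far,
    there are 1 + 3 + 3 = 7 tunable degrees of freedom."""
    g = lambda sigma: np.exp(-r ** 2 / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
    return w * g(np.asarray(sigma_near)) + (1.0 - w) * g(np.asarray(sigma_far))

# r is a scalar radius; sigma_near/sigma_far hold one sigma per RGB channel.
profile = dual_gaussian_profile(0.5, w=0.6,
                                sigma_near=[0.08, 0.05, 0.03],
                                sigma_far=[0.5, 0.3, 0.2])
```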
In summary, rendering of the sub-surface scattering effect suffers from the problem that rendering quality and simplicity of parameter adjustment cannot be satisfied at the same time. Based on this, embodiments of the present application provide an artist-friendly, high-quality rendering method, apparatus, and device for the sub-surface scattering effect of a virtual object, and a computer-readable storage medium, which solve this problem: on the premise of ensuring the rendering quality of the sub-surface scattering effect, the simplicity of rendering is improved and the rendering effect is improved.
An exemplary application of the rendering device for virtual objects (hereinafter, simply referred to as a rendering device) provided in the embodiments of the present application is described below, and the rendering device provided in the embodiments of the present application may be implemented as various types of user terminals such as a notebook computer, a tablet computer, a desktop computer, a set-top box, a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, and a portable game device), and may also be implemented as a server. Next, an exemplary application when the rendering apparatus is implemented as a terminal will be explained.
Referring to fig. 2, fig. 2 is an alternative architecture diagram of a rendering system provided in the embodiment of the present application; as shown in fig. 2, in order to support a rendering application, in the rendering system 100, a terminal 200 (rendering device) is connected to a server 400 through a network 300, and the network 300 may be a wide area network or a local area network, or a combination of both; in addition, the rendering system 100 further includes a database 500 for providing data support to the server 400 when the server 400 provides a data service to the terminal 200 through the network 300.
The terminal 200 is used for displaying a virtual object model on a graphical interface, wherein the virtual object model comprises an object to be rendered 200-11, and the material of an entity corresponding to the object to be rendered has a multilayer structure; in response to a scattering rendering operation for an object to be rendered, displaying a scattering depth control and a scattering color control 200-12, wherein the scattering depth control is used for triggering adjustment of scattering depth, and the scattering color control is used for triggering adjustment of scattering color; and in response to an adjusting operation acted on the scattering parameter control, displaying the sub-surface scattering information matched with the adjusted scattering parameter for the object to be rendered, wherein the scattering parameter control comprises one or two of a scattering depth control and a scattering color control, and the adjusted scattering parameter comprises one or two of an adjusted scattering depth and an adjusted scattering color.
In some embodiments, the server 400 may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a CDN (Content Delivery Network), and a big data and artificial intelligence platform. The terminal 200 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiments of the present application.
Referring to fig. 3, fig. 3 is a schematic diagram of a structure of a terminal in fig. 2 according to an embodiment of the present disclosure, where the terminal 200 shown in fig. 3 includes: at least one processor 210, memory 250, at least one network interface 220, and a user interface 230. The various components in terminal 200 are coupled together by a bus system 240. It is understood that the bus system 240 is used to enable communications among the components. The bus system 240 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 240 in fig. 3.
The Processor 210 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 230 includes one or more output devices 231 that enable the presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 230 also includes one or more input devices 232, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, and other input buttons and controls.
The memory 250 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 250 optionally includes one or more storage devices physically located remotely from processor 210.
The memory 250 includes volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 250 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 250 is capable of storing data, examples of which include programs, modules, and data structures, or a subset or superset thereof, to support various operations, as exemplified below.
An operating system 251 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 252 for communicating with other computing devices via one or more (wired or wireless) network interfaces 220, exemplary network interfaces 220 including: Bluetooth, Wireless Fidelity (Wi-Fi), Universal Serial Bus (USB), etc.;
a presentation module 253 to enable presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 231 (e.g., a display screen, speakers, etc.) associated with the user interface 230;
an input processing module 254 for detecting one or more user inputs or interactions from one of the one or more input devices 232 and translating the detected inputs or interactions.
In some embodiments, the rendering apparatus for virtual objects (hereinafter referred to as rendering apparatus) provided in the embodiments of the present application may be implemented in software, and fig. 3 illustrates the rendering apparatus 255 stored in the memory 250, which may be software in the form of programs and plug-ins, and includes the following software modules: a model display module 2551, a control display module 2552, a scatter rendering module 2553, a lighting calculation module 2554, a remaining object rendering module 2555 and a model fusion module 2556, which are logical and thus can be arbitrarily combined or further split according to the functions implemented. The functions of the respective modules will be explained below.
In other embodiments, the rendering apparatus provided in this embodiment may be implemented in hardware, and for example, the rendering apparatus provided in this embodiment may be a processor in the form of a hardware decoding processor, which is programmed to execute the rendering method of the virtual object provided in this embodiment, for example, the processor in the form of the hardware decoding processor may employ one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
In the following, a rendering method of a virtual object provided in the embodiment of the present application will be described in conjunction with an exemplary application and implementation of a terminal provided in the embodiment of the present application.
Referring to fig. 4, fig. 4 is an alternative flowchart of a rendering method of a virtual object according to an embodiment of the present application, and will be described with reference to the steps shown in fig. 4.
S401, displaying a virtual object model, wherein the virtual object model comprises an object to be rendered.
In the embodiment of the application, when a user renders a sub-surface scattering effect on a virtual object, the model corresponding to the virtual object is loaded through the rendering device, and the rendering device displays the virtual object model. Here, the virtual object model, that is, the model corresponding to the virtual object, may be a three-dimensional mesh model, and may also be a rendering model carrying rendering data (for example, map information, camera parameters, lighting environment parameters, and the like); this is not specifically limited in the embodiments of the present application. When the virtual object model is a three-dimensional mesh model, the object to be rendered is also a three-dimensional mesh model, and when the object to be rendered is rendered, the map information of the object to be rendered needs to be obtained, where the map information is used to control the highlight, roughness, basic color, and so on of the object to be rendered.
It should be noted that the virtual object model includes the object to be rendered, and the material of the entity corresponding to the object to be rendered has a multilayer structure; that is, the entity corresponding to the object to be rendered is made of a material having the sub-surface scattering effect, for example, a translucent entity such as a human face, leaves, or jade articles. In addition, the object to be rendered may be the whole virtual object model, in which case the entire virtual object model is an entity of semitransparent material; the object to be rendered may also be part of the virtual object model, in which case that part is an entity of semitransparent material. This is not specifically limited in the embodiments of the present application.
S402, responding to scattering rendering operation aiming at the object to be rendered, and displaying a scattering depth control and a scattering color control.
In the embodiment of the application, when a user performs rendering of a sub-surface scattering effect on an object to be rendered on a displayed virtual object model, for example, when a "3S rendering" button for the object to be rendered is clicked, a rendering device also receives a scattering rendering operation for the object to be rendered; at this time, the rendering device responds to the scattering rendering operation, and displays a control for adjusting the rendering parameter of the sub-surface scattering effect, namely, a scattering depth control and a scattering color control.
It should be noted that the scattering depth control is used to trigger adjustment of the scattering depth, where the scattering depth is the diffusion range of the sub-surface scattering effect, that is, the position information of the surrounding pixel points that contribute to (or influence) the illumination result of the current pixel point. The scattering color control is used to trigger adjustment of the scattering color, where the scattering color represents the scattering intensity of the object to be rendered for different colors of illumination.
And S403, responding to the adjustment operation acted on the scattering parameter control, and displaying the sub-surface scattering information matched with the adjusted scattering parameter for the object to be rendered, wherein the scattering parameter control comprises one or two of a scattering depth control and a scattering color control.
In the embodiment of the application, when a user triggers the scattering parameter control to adjust the scattering depth and/or the scattering color, the rendering device also receives an adjustment operation acted on the scattering parameter control; at this time, the rendering device displays a sub-surface scattering effect adapted to the adjusted scattering parameter on the object to be rendered in response to the adjustment operation. Wherein the sub-surface scattering effect is presented by displaying sub-surface scattering information.
It should be noted that the scattering parameter control includes one or two of a scattering depth control and a scattering color control, and the adjusted scattering parameter includes one or two of an adjusted scattering depth and an adjusted scattering color; that is, when the scattering parameter control comprises a scattering depth control and a scattering color control, the adjusted scattering parameters comprise an adjusted scattering depth and an adjusted scattering color; when the scattering parameter control comprises a scattering depth control, the adjusted scattering parameter comprises an adjusted scattering depth; when the scattering parameter control comprises a scattering color control, the adjusted scattering parameter comprises an adjusted scattering color. In addition, when the adjusted scattering parameter includes the adjusted scattering depth, it indicates that the scattering color may be a default scattering color (the scattering color determined before the current adjustment operation is received), so that at this time, the rendering device displays, for the object to be rendered, the sub-surface scattering information adapted to the adjusted scattering parameter, including: the rendering equipment displays sub-surface scattering information which is matched with the adjusted scattering depth and the default scattering color aiming at the object to be rendered; when the adjusted scattering parameter includes the adjusted scattering color, it indicates that the scattering depth may be a default scattering depth (the scattering depth determined before the current adjustment operation is received), and thus, at this time, the rendering device displays, for the object to be rendered, the sub-surface scattering information adapted to the adjusted scattering parameter, including: the rendering device displays, for the object to be rendered, sub-surface scattering information adapted to the adjusted scattering color and the default scattering depth.
It should be further noted that the rendering method of the virtual object in the embodiment of the present application is applied to a real-time rendering scene with a sub-surface scattering effect.
For example, referring to fig. 5, fig. 5 is a schematic diagram of an exemplary adjustment page of scattering parameters provided in an embodiment of the present application; as shown in fig. 5, a scattering depth control 5-11 and a scattering color control 5-12 are displayed on the adjustment page 5-1 of the scattering parameters; in the process of adjusting the scattering depth control 5-11 and the scattering color control 5-12, different sub-surface scattering effects can be displayed on the object to be rendered; the rendering of the sub-surface scattering effect of the object to be rendered can be realized by adjusting the scattering depth controls 5-11 and the scattering color controls 5-12, and a better sub-surface scattering effect can be rendered.
Referring to fig. 6a, fig. 6a is a rendering schematic diagram of an exemplary sub-surface scattering effect provided by an embodiment of the present application; as shown in fig. 6a, the rendering result 6a-1 is obtained by adjusting the scattering color and the scattering depth through the scattering depth control 5-11 and the scattering color control 5-12 in fig. 5 for the face model (object to be rendered). The skin of the face in the rendering result 6a-1 has a heavy feeling, and the quality of the rendered skin is high.
Referring to fig. 6b, fig. 6b is a schematic diagram of rendering results of an exemplary process for adjusting the scattering depth according to an embodiment of the present disclosure; as shown in fig. 6b, when the scattering color is adjusted to [236, 90, 48] through the scattering color control 5-12 in fig. 5 (a reddish color simulating the reddish light emitted from blood-vessel tissue under the skin), rendering results corresponding to different scattering depths are obtained: rendering result 6b-1 is the sub-surface scattering effect at an adjusted scattering depth of 0 mm, rendering result 6b-2 at 106 mm, and rendering result 6b-3 at 238 mm. The skin in rendering result 6b-1 is very dry, completely losing the texture of skin; as the depth increases, see rendering results 6b-2 and 6b-3, the skin gradually becomes moist and its texture is gradually highlighted.
It can be understood that, in the virtual object model, for an object to be rendered with a sub-surface scattering effect, the rendering of the sub-surface scattering effect can be realized by adjusting the scattering color and/or the scattering depth; the scattering color represents the scattering intensity of the object to be rendered for different colors of illumination, and the scattering depth represents the scattering range of the illumination corresponding to the object to be rendered, so that the scattering color and the scattering depth corresponding to the better sub-surface scattering effect of the object to be rendered can be quickly and accurately adjusted, and the rendered sub-surface scattering effect is better; therefore, the rendering effect and rendering efficiency of sub-surface scattering can be improved.
In the embodiment of the present application, S402 is followed by S404-S406; that is, after the rendering device responds to the scattering rendering operation for the object to be rendered, the rendering method of the virtual object further includes S404-S406, which are described below.
And S404, displaying the illumination control.
It should be noted that the illumination control is used to trigger the setting of illumination parameters, and the illumination parameters include one or both of camera parameters and lighting environment parameters; the camera parameters are used to control the position information and angle information of the object to be rendered in the rendered scene, and the lighting environment parameters include directional lights and point light sources.
And S405, responding to the illumination parameter setting operation acted on the illumination control, and displaying the illumination parameters.
It should be noted that, when the user triggers the illumination control to set the illumination parameters, the rendering device also receives the operation of setting the illumination parameters acting on the illumination control; at this time, the rendering device displays the set camera parameters and/or the set lighting environment parameters in response to the lighting parameter setting operation, and the display of the lighting parameters is also completed.
S406, performing illumination calculation on the object to be rendered based on the illumination parameters to obtain highlight information to be rendered, basic color information to be rendered and a sub-surface scattering object.
It should be noted that, after obtaining the illumination parameters, the rendering device performs illumination calculation on the object to be rendered, for example, IBL (Image Based Lighting) and direct illumination calculation; the obtained illumination calculation result includes the highlight information to be rendered, the basic color information to be rendered, and the sub-surface scattering object. The highlight information to be rendered is the highlight information of the object to be rendered; the basic color information to be rendered is the basic color information corresponding to the object to be rendered; and the sub-surface scattering object is a calculation result including diffuse reflection and shadow, serving as the render target (RT) of sub-surface scattering.
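As an illustration, the outputs of the lighting pass can be thought of as a set of per-pixel render targets like the following; the class and field names are assumptions for readability, not the patent's identifiers:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class LightingPassOutputs:
    """Render targets (RTs) produced by the illumination calculation
    (IBL plus direct lighting), as a sketch."""
    highlight: np.ndarray    # highlight (specular) information to be rendered
    base_color: np.ndarray   # basic color information to be rendered
    sss_object: np.ndarray   # diffuse reflection + shadow terms; the
                             # sub-surface scattering rendering object (RT)
```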
Accordingly, in the embodiment of the present application, S403 may be implemented by S4031 and S4032; that is, in response to the adjustment operation acting on the scattering parameter control, the rendering device displays, for the object to be rendered, the sub-surface scattering information adapted to the adjusted scattering parameter, including S4031 and S4032, which are described below.
S4031, in response to the adjustment operation applied to the scattering parameter control, displaying the adjusted scattering parameter.
It should be noted that, when the user triggers the scattering parameter control to adjust the scattering color and/or the scattering depth, the adjusted scattering color and/or the adjusted scattering depth are also displayed by the rendering device; i.e. the rendering device will also display the adjusted scattering parameters.
S4032, rendering processing of sub-surface scattering is carried out on the sub-surface scattering object based on the adjusted scattering parameters, fusion results of highlight information to be rendered, basic color information to be rendered and rendering processing results are displayed, and sub-surface scattering information matched with the adjusted scattering parameters under the action of the illumination parameters is displayed for the object to be rendered.
It should be noted that, after obtaining the adjusted scattering parameter, the rendering device performs rendering processing of sub-surface scattering on the sub-surface scattering object based on the adjusted scattering parameter, so as to obtain information having a sub-surface scattering effect, that is, a rendering processing result; fusing highlight information to be rendered, basic color information to be rendered and rendering processing results, and displaying sub-surface scattering information matched with the adjusted scattering parameters under the action of illumination parameters aiming at an object to be rendered; here, the sub-surface scattering information is the fusion result.
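A minimal sketch of this fusion step follows; combining the specular term additively over a diffuse term modulated by the base color is one common composition, not necessarily the exact blend used by the patent:

```python
import numpy as np

def fuse(highlight, base_color, sss_result):
    """Fuse the highlight information, basic color information, and the
    sub-surface scattering rendering result; the returned image is the
    displayed sub-surface scattering information (the fusion result)."""
    return np.asarray(base_color) * np.asarray(sss_result) + np.asarray(highlight)
```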
In the embodiment of the application, when the scattering parameter control comprises a scattering depth control and a scattering color control, the adjusted scattering parameter comprises an adjusted scattering depth and an adjusted scattering color; therefore, the rendering device in S4032 performs rendering processing of sub-surface scattering on the sub-surface scattering object based on the adjusted scattering parameter, including S40321 to S40323, and the following steps are respectively described.
S40321, determining a sub-surface scattering range of the object to be rendered based on the adjusted scattering depth.
It should be noted that, after the adjusted scattering depth is obtained by the rendering device, the scattering range corresponding to each pixel point in the object to be rendered is determined based on the adjusted scattering depth, and thus the sub-surface scattering range of the object to be rendered is obtained. In addition, when the rendering device performs rendering processing of sub-surface scattering on the sub-surface scattering object based on the default scattering depth, the scattering range corresponding to each pixel point in the object to be rendered is determined based on the default scattering depth.
S40322, and determining the intensity of the scattering color of the sub-surface of the object to be rendered based on the adjusted scattering color.
It should be noted that, visually, the scattering color can be regarded as the scattering intensity of a material with the sub-surface scattering effect for illumination of different colors; therefore, after the rendering device obtains the adjusted scattering color, it obtains the scattering intensity of the object to be rendered for the adjusted scattering color, that is, the sub-surface scattering color intensity. In addition, when the rendering device performs rendering processing of sub-surface scattering on the sub-surface scattering object based on the default scattering color, what is obtained is the scattering intensity of the object to be rendered for the default scattering color.
S40323, rendering processing of sub-surface scattering is carried out on the sub-surface scattering object by combining the sub-surface scattering range and the sub-surface scattering color intensity.
In the embodiment of the application, after the rendering device obtains the sub-surface scattering range and the sub-surface scattering color intensity corresponding to the object to be rendered, the object to be rendered can be rendered with the sub-surface scattering effect through the sub-surface scattering range and the sub-surface scattering color intensity; here, the sub-surface scattering object is a rendering processing object of a sub-surface scattering effect of the object to be rendered, and therefore, the rendering device performs rendering processing of sub-surface scattering on the sub-surface scattering object by combining the sub-surface scattering range and the sub-surface scattering color intensity, and a result with the sub-surface scattering effect corresponding to the object to be rendered can be obtained.
In the embodiment of the present application, S40321 may be implemented by S403211 to S403215; that is, the rendering apparatus determines the sub-surface scattering range of the object to be rendered, including S403211 to S403215, based on the adjusted scattering depth, and each step is described below.
S403211, camera coordinate information of each pixel point to be rendered of the object to be rendered in the camera coordinate system is obtained.
It should be noted that the object to be rendered is a model in the local coordinate system, whereas the adjusted scattering depth is a depth in the camera coordinate system; therefore, when determining the scattering range corresponding to each pixel point to be rendered in the object to be rendered, the rendering device needs to convert the coordinate information of each pixel point to be rendered from the local coordinate system into coordinate information in the camera coordinate system, thereby obtaining the camera coordinate information. The camera coordinate system is determined based on the camera parameters, and a pixel point to be rendered is any pixel point in the object to be rendered.
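A sketch of this conversion under the usual model/view matrix convention; the matrix and function names are assumptions, not the patent's identifiers:

```python
import numpy as np

def to_camera_space(point_local, model_matrix, view_matrix):
    """Convert a pixel's position from the object's local coordinate
    system to the camera coordinate system: local -> world -> camera."""
    p = np.append(point_local, 1.0)            # homogeneous coordinates
    return view_matrix @ (model_matrix @ p)    # camera coordinate information
```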
S403212 determines a camera scattering frame based on the adjusted scattering depth with the camera coordinate information as a reference.
In the embodiment of the application, the rendering equipment determines a target frame by using the adjusted scattering depth with the coordinate information of the camera as a reference, so that a camera scattering frame is obtained; here, the camera scattering frame is a scattering range of each pixel point to be rendered in the camera coordinate system.
S403213, projecting the scattering frame of the camera to a standard coordinate system to obtain the standard scattering frame.
It should be noted that the rendering processing of sub-surface scattering is performed in the standard coordinate system, whereas the camera scattering frame is the scattering range of each pixel point to be rendered in the camera coordinate system; therefore, the rendering device projects the camera scattering frame to the standard coordinate system, converting it from the camera coordinate system to the standard coordinate system and obtaining the standard scattering frame. Here, the standard scattering frame is the scattering range of each pixel point to be rendered in the standard coordinate system.
S403214, standard coordinate information of each pixel point to be rendered under the standard coordinate system is obtained.
It should be noted that, the rendering device converts the coordinate information of each pixel point to be rendered in the local coordinate system into coordinate information in the standard coordinate system, so as to obtain standard coordinate information.
S403215, determining a sub-surface scattering range of each pixel point to be rendered in the standard coordinate system based on the standard scattering frame and the standard coordinate information, and thus obtaining a sub-surface scattering range which corresponds to the object to be rendered and comprises a plurality of sub-surface scattering ranges.
It should be noted that the sub-surface scattering range is the scattering range of each pixel point to be rendered in the standard coordinate system; after the rendering device obtains the sub-surface scattering range corresponding to each pixel point to be rendered in the object to be rendered, the sub-surface scattering range corresponding to the object to be rendered is obtained, which comprises the plurality of per-pixel sub-surface scattering ranges.
For example, the sub-surface scattering range can be determined by equation (1):

    Spread = |π(Point_camera + float4(d, d, d, 0)) − Point_NDC|    (1)

where Spread is the sub-surface scattering range, d is the adjusted scattering depth, π is the perspective projection operation, Point_camera is the camera coordinate information, Point_NDC is the standard coordinate information, and float4(d, d, d, 0) is a 4-dimensional vector.
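The following sketch implements equation (1), with an explicit 4x4 perspective projection matrix standing in for the π operator; parameter names are assumptions:

```python
import numpy as np

def subsurface_spread(point_camera, point_ndc, d, proj_matrix):
    """Per-pixel sub-surface scattering range, equation (1):
    Spread = |pi(Point_camera + float4(d, d, d, 0)) - Point_NDC|."""
    offset = point_camera + np.array([d, d, d, 0.0])  # float4(d, d, d, 0)
    clip = proj_matrix @ offset                       # apply pi (projection)
    ndc_xy = clip[:2] / clip[3]                       # perspective divide
    return np.abs(ndc_xy - point_ndc[:2])             # scattering range in NDC
```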
In the embodiment of the present application, S407 is further included after S405; that is, after the rendering device displays the lighting parameters in response to the lighting parameter setting operation acting on the lighting control, the rendering method of the virtual object further includes S407, which is explained below.
S407, based on the illumination parameters, performing illumination calculation on the object to be rendered to obtain normal information to be rendered.
It should be noted that, when the rendering device performs illumination calculation on the object to be rendered based on the illumination parameters, it can obtain the normal information to be rendered in addition to the highlight information to be rendered, the basic color information to be rendered, and the sub-surface scattering object; the normal information to be rendered is the normal information of each pixel point to be rendered, and is also a rendering object of the sub-surface scattering; thus, the rendering method of the virtual object in the embodiment of the present application is a 3S (sub-surface scattering) rendering method based on multiple render targets (RTs): the sub-surface scattering object, the normal information to be rendered, an RT storing the sub-surface scattering range, and an RT storing the adjusted scattering color.
Accordingly, in the present embodiment, S403215 may be implemented by S4032151 and S4032152; that is to say, the rendering device determines, based on the standard scattering frame and the standard coordinate information, a sub-surface scattering range of each pixel point to be rendered in the standard coordinate system, including S4032151 and S4032152, and the following steps are described separately.
S4032151, determining a sub-surface scattering range to be corrected of each pixel point to be rendered under the standard coordinate system based on the standard scattering frame and the standard coordinate information.
It should be noted that the sub-surface scattering range to be corrected is the scattering range of each pixel point to be rendered under the standard coordinate system, as directly determined by the rendering device based on the standard scattering frame and the standard coordinate information; for example, it is the diffusion direction corresponding to Spread in equation (1).
S4032152, based on the normal line information to be rendered, the sub-surface scattering range to be corrected is corrected, and the sub-surface scattering range is obtained.
In the embodiment of the application, in order to improve the accuracy of the scattering range of each pixel point to be rendered, the rendering device further corrects the sub-surface scattering range to be corrected by using the normal information to be rendered; the corrected result is the sub-surface scattering range.
Illustratively, the correction of the sub-surface scattering range to be corrected can be realized by equation (2):

Dir′.xy = Dir.xy * (float2(1.0, 1.0) − Normal.xy)   (2)

where Dir′.xy is the corrected sub-surface scattering range to be corrected, float2(1.0, 1.0) is a 2-dimensional vector, Normal.xy is the normal information to be rendered, and Dir.xy is the sub-surface scattering range to be corrected (which may be the diffusion direction corresponding to Spread in equation (1)).
It can be understood that the rendering device corrects the scattering range through the normal information to be rendered corresponding to each pixel point to be rendered, so that the accuracy of the scattering range of each pixel point to be rendered is improved, and the rendering effect of the sub-surface scattering of the object to be rendered can be improved.
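For illustration, a minimal C# sketch of the normal-based correction is given below, under the reconstruction of equation (2) above; "dirXy" and "normalXy" are illustrative names.

```csharp
using UnityEngine;

// Sketch of equation (2): the more the surface normal leans into a screen
// axis, the less the scattering is allowed to spread along that axis.
public static class ScatterDirectionCorrection
{
    public static Vector2 Correct(Vector2 dirXy, Vector2 normalXy)
    {
        // Dir'.xy = Dir.xy * (float2(1.0, 1.0) - Normal.xy)
        return Vector2.Scale(dirXy, Vector2.one - normalXy);
    }
}
```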
In the embodiment of the present application, S4032152 may be implemented by S40321521-S40321523; that is, the rendering device corrects the scattering range of the sub-surface to be corrected based on the normal information to be rendered, and obtains the scattering range of the sub-surface, including S40321521-S40321523. The following describes each step.
S40321521, based on the normal information to be rendered, the sub-surface scattering range to be corrected is corrected, and the initial sub-surface scattering range of each pixel point to be rendered under the standard coordinate system is obtained.
It should be noted that the initial sub-surface scattering range is the corrected sub-surface scattering range to be corrected.
S40321522, a noise sampling result corresponding to the preset noise map is obtained.
In the embodiment of the application, a preset noise map is preset in the rendering device, or the rendering device can acquire the preset noise map; the preset noise map is used for performing dithering (perturbation) processing on the initial sub-surface scattering range, and the dithering processing may be implemented by a Dithered Sampling method. Here, the rendering device samples from the preset noise map, and the obtained sampling result is the noise sampling result.
It should be noted that the rendering device controls the post-processing convolution direction based on the initial sub-surface scattering range; when the sub-surface scattering effect is rendered with a large scattering depth, stripe-shaped noise occurs in the obtained sub-surface scattering effect. For example, referring to fig. 7, fig. 7 is a schematic diagram of an exemplary rendering processing result provided in the embodiment of the present application; as shown in fig. 7, the rendering processing result 7-1 is unclear and exhibits streak-like noise. Therefore, the rendering device also needs to dither the initial sub-surface scattering range before using it to control the post-processing convolution direction.
S40321523, combining the noise sampling result and the preset noise influence strength, and performing disturbance processing on the initial sub-surface scattering range to obtain a sub-surface scattering range.
In the embodiment of the application, a preset noise influence intensity is preset in the rendering device, or the rendering device can acquire the preset noise influence intensity, and the preset noise influence intensity is used for determining the disturbance degree of the noise sampling result on the initial sub-surface scattering range.
Illustratively, the perturbation processing can be realized by equation (3):

Dir″.xy = Dir′.xy * NoiseScale * NoiseSample.xy   (3)

where Dir″.xy is the sub-surface scattering range; NoiseScale is the preset noise influence intensity; and NoiseSample.xy is the noise sampling result, obtained by sampling the preset noise map 8-1 shown in fig. 8.
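For illustration, a hedged C# sketch of the perturbation of equation (3) follows; "noiseMap", "screenUv", and "noiseScale" are assumptions for the example, and the noise texture is sampled on the CPU here for simplicity, whereas the embodiment performs this sampling during post-processing.

```csharp
using UnityEngine;

// Sketch of equation (3): jitter the corrected spread direction with a value
// sampled from a preset noise map, scaled by the preset influence intensity.
public static class ScatterDirectionDither
{
    public static Vector2 Perturb(Vector2 dirXy, Texture2D noiseMap,
                                  Vector2 screenUv, float noiseScale)
    {
        Color sample = noiseMap.GetPixelBilinear(screenUv.x, screenUv.y);
        // Dir''.xy = Dir'.xy * NoiseScale * NoiseSample.xy
        return Vector2.Scale(dirXy, new Vector2(sample.r, sample.g)) * noiseScale;
    }
}
```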
Here, when the rendering device performs post-processing convolution direction control based on the sub-surface scattering range to achieve rendering of the sub-surface scattering effect, referring to fig. 9, fig. 9 is a schematic diagram of another exemplary rendering processing result provided in the embodiment of the present application; as shown in fig. 9, the rendering processing result 9-1 is higher in definition than the rendering processing result 7-1 in fig. 7.
It can be understood that the rendering device performs dithering processing on the post-processing convolution direction of each pixel point to be rendered through a preset noise map and preset noise influence strength, so that the problem of stripe-shaped noise caused by large adjusted scattering depth is solved, and the rendering effect of sub-surface scattering is improved.
In the embodiment of the present application, S40322 may be implemented by S403221 to S403223; that is, the rendering apparatus determines the intensity of the sub-surface scattering color of the object to be rendered based on the adjusted scattering color, including S403221 to S403223, which are described below.
S403221, determining a target close range parameter corresponding to the adjusted scattering color based on the ratio of the close range parameter threshold to the long range parameter threshold.
In this embodiment of the present application, a close-range parameter threshold, a long-range parameter threshold, and a mixing weight threshold are preset in the rendering device, or can be obtained by it; these three values are estimated parameters used for obtaining a diffusion profile (Diffusion Profile) through Gaussian functions. Here, the rendering device determines the target close-range parameter corresponding to the adjusted scattering color based on the ratio of the close-range parameter threshold to the long-range parameter threshold; for example, the product of this ratio and the adjusted scattering color is used as the target close-range parameter.

It should be noted that the close-range parameter threshold and the target close-range parameter both refer to the standard deviation of a close-range Gaussian function.
S403222, determining the adjusted scattering color as a target long-range parameter.

Here, the rendering device determines the adjusted scattering color as the target long-range parameter; in addition, the long-range parameter threshold and the target long-range parameter both refer to the standard deviation of a long-range Gaussian function.
S403223, determining the mixing weight threshold, the target close-range parameter, and the target long-range parameter as the sub-surface scattering color intensity.

It should be noted that the rendering device directly uses the mixing weight threshold as the mixing weight; the mixing weight threshold, the target close-range parameter, and the target long-range parameter serve as the parameters for obtaining the diffusion profile through the Gaussian functions, that is, the sub-surface scattering color intensity includes the mixing weight threshold, the target close-range parameter, and the target long-range parameter.
Illustratively, the close-range parameter threshold, the long-range parameter threshold, and the mixing weight threshold are shown in Table 1:
TABLE 1
The close-range parameter threshold and the long-range parameter threshold are both thresholds corresponding to the three RGB (Red, Green, Blue) channels. The process by which the rendering device obtains the target close-range parameter and the target long-range parameter from the close-range parameter threshold and the long-range parameter threshold can be realized by equations (4) to (6):

ratio = σ_near / σ_far   (4)

σ′_near = ScatterColor * ratio   (5)

σ′_far = ScatterColor   (6)

where σ_near is the close-range parameter threshold; σ_far is the long-range parameter threshold; ratio is the ratio of the close-range parameter threshold to the long-range parameter threshold (for the thresholds in Table 1, ratio = [0.25, 0.2667, 0.4]); ScatterColor is the adjusted scattering color; σ′_near is the target close-range parameter; and σ′_far is the target long-range parameter.
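For illustration, a small C# sketch of equations (4) to (6) follows; the per-channel thresholds are passed in as parameters because the contents of Table 1 are not reproduced here, and all names are illustrative.

```csharp
using UnityEngine;

// Sketch of equations (4)-(6): derive per-channel target Gaussian parameters
// from the adjusted scattering color and the preset thresholds.
public static class DiffusionProfileParams
{
    public static void FromScatterColor(Vector3 sigmaNear, Vector3 sigmaFar,
                                        Vector3 scatterColor,
                                        out Vector3 targetNear, out Vector3 targetFar)
    {
        // ratio = sigma_near / sigma_far, per RGB channel (equation (4)).
        Vector3 ratio = new Vector3(sigmaNear.x / sigmaFar.x,
                                    sigmaNear.y / sigmaFar.y,
                                    sigmaNear.z / sigmaFar.z);
        targetNear = Vector3.Scale(scatterColor, ratio); // equation (5)
        targetFar = scatterColor;                        // equation (6)
    }
}
```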
In the embodiment of the present application, S40323 may be implemented by S403231 and S403232; that is, the rendering apparatus performs a rendering process of sub-surface scattering on the sub-surface scattering object, including S403231 and S403232, in combination with the sub-surface scattering range and the sub-surface scattering color intensity, and the following steps are respectively described.
S403231, simulating a scattering profile of the sub-surface scattering object in combination with the sub-surface scattering color intensity.

It should be noted that the rendering device performs two orthogonal blur post-processing passes using the mixing weight threshold in the sub-surface scattering color intensity, the target close-range parameter, and the target long-range parameter, so as to simulate the diffusion profile with two Gaussian functions.
Illustratively, referring to fig. 10, fig. 10 is a schematic diagram of an exemplary simulated scattering profile provided by an embodiment of the present application; as shown in fig. 10, curve 10-1 is a standard diffusion profile, curve 10-2 is the diffusion profile of a Gaussian function based on the target long-range parameter, curve 10-3 is the diffusion profile of a Gaussian function based on the target close-range parameter, and curve 10-4 is the diffusion profile approximated by the two Gaussian functions based on the target close-range parameter and the target long-range parameter.
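For illustration, a hedged C# sketch of the two-Gaussian approximation follows; the blend form R(r) = w*G(sigmaNear, r) + (1 - w)*G(sigmaFar, r) is one standard formulation, assumed here because the embodiment does not spell out the exact expression.

```csharp
using UnityEngine;

// Sketch: approximate a diffusion profile with two Gaussians mixed by the
// mixing weight w, evaluated at radial distance r (per channel).
public static class DualGaussianProfile
{
    static float Gaussian(float sigma, float r)
    {
        float s2 = sigma * sigma;
        return Mathf.Exp(-(r * r) / (2f * s2)) / (2f * Mathf.PI * s2);
    }

    public static float Evaluate(float w, float sigmaNear, float sigmaFar, float r)
    {
        return w * Gaussian(sigmaNear, r) + (1f - w) * Gaussian(sigmaFar, r);
    }
}
```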
S403232, post-processing is performed on the scattering profile based on the convolution direction corresponding to the sub-surface scattering range, thereby completing rendering of the sub-surface scattering object.
In the embodiment of the application, once the rendering device obtains the scattering profile, the information describing how light diffuses and is distributed within the object to be rendered is determined; the rendering device therefore controls the convolution direction of the sub-surface scattering post-processing through the sub-surface scattering range, and, based on the scattering profile, simulates the scattering effect in which light enters the object to be rendered and exits near the incidence point with a certain attenuation, thereby completing the simulation calculation of the sub-surface scattering effect.
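For illustration, a simplified CPU-side C# sketch of this direction-controlled convolution is given below, reusing the profile sketch above; a real implementation would run as a post-processing shader pass, and "lightTex", "spreadDir", and "taps" are illustrative assumptions.

```csharp
using UnityEngine;

// Sketch: blur the diffuse lighting texture along the per-pixel spread
// direction, weighting each tap by the dual-Gaussian diffusion profile.
public static class ScatterConvolution
{
    public static Color Convolve(Texture2D lightTex, Vector2 uv, Vector2 spreadDir,
                                 float w, float sigmaNear, float sigmaFar, int taps)
    {
        Color sum = Color.black;
        float weightSum = 0f;
        for (int i = -taps; i <= taps; i++)
        {
            float t = (float)i / taps; // normalized offset along the spread direction
            float weight = DualGaussianProfile.Evaluate(w, sigmaNear, sigmaFar, t);
            Vector2 sampleUv = uv + spreadDir * t;
            sum += lightTex.GetPixelBilinear(sampleUv.x, sampleUv.y) * weight;
            weightSum += weight;
        }
        return sum / weightSum; // normalize so overall brightness is preserved
    }
}
```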
In this embodiment of the application, after the rendering device responds to the scattering rendering operation for the object to be rendered in S402, the rendering method for the virtual object further includes S408 and S409, which are described below.
And S408, displaying the scattering depth unit control.
It should be noted that the scattering depth unit control is used for triggering adjustment of the unit of the scattering depth, and the unit of the scattering depth is consistent with the construction unit of the object to be rendered. For example, for an object to be rendered built in centimeters, the parameter value corresponding to the unit of the scattering depth is fixed to 0.01; for an object built in decimeters, it is fixed to 0.1; and for an object built in meters, it is fixed to 1.
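For illustration, a small C# sketch of this unit-to-scale mapping follows; the enum and method names are assumptions for the example.

```csharp
// Sketch: map the construction unit of the object to be rendered to the
// fixed parameter value of the scattering depth unit described above.
public enum ScatteringDepthUnit { Centimeter, Decimeter, Meter }

public static class ScatteringDepthUnits
{
    public static float ToScale(ScatteringDepthUnit unit)
    {
        switch (unit)
        {
            case ScatteringDepthUnit.Centimeter: return 0.01f;
            case ScatteringDepthUnit.Decimeter:  return 0.1f;
            default:                             return 1f; // Meter
        }
    }
}
```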
And S409, responding to the scattering depth unit adjusting operation acted on the scattering depth unit control, and displaying the adjusted scattering depth unit.
In the embodiment of the application, when the user triggers, through the scattering depth unit control, the adjustment of the unit of the scattering depth, the rendering device receives the scattering depth unit adjustment operation acting on the scattering depth unit control; at this time, the rendering device acquires the adjusted scattering depth unit in response to the scattering depth unit adjustment operation, and displays the adjusted scattering depth unit.
Accordingly, in this embodiment of the application, in response to the adjusting operation acting on the scattering parameter control, the rendering device in S403 displays, for the object to be rendered, the sub-surface scattering information adapted to the adjusted scattering parameter, including: and the rendering equipment responds to the adjustment operation acted on the scattering parameter control, and displays the sub-surface scattering information matched with the adjusted scattering parameter in the adjusted scattering depth unit aiming at the object to be rendered. That is, the unit corresponding to the adjusted scattering depth in the adjusted scattering parameters is the adjusted scattering depth unit.
Exemplarily, referring to fig. 11, fig. 11 is a schematic diagram of another exemplary adjustment page of scattering parameters provided in the embodiment of the present application; as shown in fig. 11, a scattering depth control 11-11, a scattering color control 11-12, and a scattering depth unit control 11-13 are displayed on the adjustment page 11-1 of the scattering parameter; after the adjustment of the scattering depth unit is completed through the scattering depth unit control 11-13, different sub-surface scattering effects are displayed on the object to be rendered in the process of adjusting the scattering depth control 11-11 and the scattering color control 11-12.
Referring to fig. 12, fig. 12 is a schematic flowchart of another alternative rendering method for virtual objects according to an embodiment of the present application; as shown in fig. 12, in the embodiment of the present application, S401 is followed by S410 and S411; that is, after the virtual object model is displayed, the rendering method of the virtual object further includes S410 and S411, and the following steps are respectively described.
S410, obtaining the remaining objects to be rendered of the virtual object model except the object to be rendered.
It should be noted that, when the virtual object model is a three-dimensional mesh model, the remaining models in the virtual object model except the object to be rendered are also required to be rendered; therefore, the rendering device obtains the remaining models except the object to be rendered in the virtual object model, and the remaining object to be rendered is obtained.
S411, obtaining target residual rendering objects corresponding to the residual objects to be rendered.
It should be noted that, when rendering the remaining objects to be rendered, rendering of the sub-surface scattering effect is not required, and thus, the processes described in S402 and S403 are not used. Here, the target remaining rendering object is a remaining object to be rendered after completion of rendering.
In the embodiment of the application, the object to be rendered and the remaining objects to be rendered are rendered in different rendering modes, which can be realized by setting a rendering client.
With continued reference to fig. 12, in the present embodiment, S403 is followed by S412 and S413; that is, after the rendering device displays the sub-surface scattering information adapted to the adjusted scattering parameter for the object to be rendered, the rendering method of the virtual object further includes S412 and S413, which are described below.
And S412, combining the target residual rendering object and the target rendering object into a rendering model corresponding to the virtual object model, wherein the target rendering object is an object to be rendered, which displays sub-surface scattering information matched with the adjusted scattering parameter.
In the embodiment of the application, after the rendering device obtains the target residual rendering object and the target rendering object, the target residual rendering object and the target rendering object are combined, so that a rendering model corresponding to the virtual object model is obtained; and the rendering model is a virtual object model which completes rendering.
And S413, displaying the rendering model.
It should be noted that the rendering model is used for rendering the virtual object; when the rendering device displays the rendering model, the rendering of the virtual object corresponding to the virtual object model can be completed.
It can be understood that, in the embodiment of the present application, for a virtual object model, materials having a sub-surface scattering effect are rendered through S402 and S403 described in the embodiment of the present application, while materials without the sub-surface scattering effect are rendered in a manner that skips the rendering of the sub-surface scattering effect, so that the consumption of rendering the virtual object model is reduced.
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
Referring to fig. 13, fig. 13 is a schematic diagram illustrating an exemplary rendering process of a virtual object according to an embodiment of the present disclosure; as shown in fig. 13, when rendering a sub-surface scattering effect on a head model (an object to be rendered) of a three-dimensional mesh model (a virtual object model) of a game character, in the rendering process, input data 13-1 includes the head model 13-11, a map 13-12 (map information of the object to be rendered), camera parameters 13-13, and a lighting environment 13-14 (lighting environment parameters).
First, image-based lighting and direct lighting calculations are performed on the input data 13-1 to obtain a highlight head image (Specular) 13-21 (the highlight information to be rendered), a base color head image (Albedo) 13-22 (the basic color information to be rendered), a 3S post-processing object (LightTexture, storing the calculation results of diffuse reflection and shadow) 13-23 (the sub-surface scattering object), a scattering color rendering object (ColorTexture, storing the adjusted scattering color and a 3S effect mask for controlling the regions where the 3S effect is rendered) 13-24, a diffusion degree rendering object (DirTexture, storing the diffusion degree information calculated from the adjusted scattering depth through equation (1)) 13-25 (the sub-surface scattering range), and a normal information rendering object (NormalTexture) 13-26 (the normal information to be rendered).
Then, 3S calculation is performed on the basis of the 4 auxiliary rendering objects, namely the 3S post-processing object 13-23, the scattering color rendering object 13-24, the diffusion degree rendering object 13-25, and the normal information rendering object 13-26, so as to obtain a 3S effect head image 13-3. The visualization results of these four rendering objects are shown in fig. 14. Here, the normal information rendering object 13-26 is used to correct, through equation (2), the scattering direction corresponding to the diffusion degree rendering object 13-25, and the corrected scattering direction is perturbed based on fig. 8 and equation (3); in addition, the Gaussian function parameters are obtained from the scattering color rendering object 13-24 through equations (4) to (6), the diffusion profile of the 3S post-processing object 13-23 is approximated based on the mixing weight threshold and the Gaussian function parameters, and the 3S calculation is performed on this diffusion profile in the 3S post-processing convolution direction controlled by the perturbed scattering direction.
And finally, fusing the highlight head image 13-21, the basic color head image 13-22 and the 3S effect head image 13-3 to obtain a rendering result 13-4 (target rendering object) with the sub-surface scattering effect corresponding to the head model 13-11.
Referring to fig. 15, fig. 15 is a schematic diagram of an exemplary target rendering object provided by an embodiment of the present application; as shown in fig. 15, the rendering result 15-1 is an enlarged view of the rendering result 13-4 in fig. 13; in the rendering result 15-1, the facial skin of the virtual object has a strong sense of thickness and texture, the realism of the rendered skin is high, and the rendering quality is high.

Referring to fig. 16, fig. 16 is a schematic diagram of another exemplary target rendering object provided by an embodiment of the present application; as shown in fig. 16, the rendering result 16-1 shows the sub-surface scattering effect of the skin at close range in the rendering result 13-4 of fig. 13; the skin has a strong sense of thickness and texture, and the rendered skin has high realism and rendering quality.
It should be noted that, before the rendering processing flow described in fig. 13 is implemented, a setting process needs to be performed on the rendering client (for example, the "Unity" rendering engine), so that objects requiring sub-surface scattering effect rendering are rendered through the rendering processing flow described in fig. 13, while objects that do not require sub-surface scattering effect rendering are rendered through the rendering engine's default rendering manner. Here, for the head model 13-11 of the game character, a custom rendering processing object (ObjectDrawPass) is added using a rendering interface (RenderFeature); "ObjectDrawPass" is created by inheriting the rendering base class (ScriptableRenderPass) and defining a custom rendering pass (CustomRenderPass); in addition, when creating "ObjectDrawPass", the setting of multiple rendering objects is performed through the "SetRenderTarget" function of "CommandBuffer". In the embodiment of the present application, 5 rendering objects are set to store 13-21 to 13-26 in fig. 13.
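For illustration, a heavily hedged Unity URP C# sketch of such a setup follows; the class names are illustrative, the render-target creation is elided, and the actual class layout and RT formats of the embodiment are not specified here.

```csharp
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Sketch: a renderer feature that enqueues a custom pass; the pass would bind
// the auxiliary render targets of fig. 13 through a CommandBuffer before the
// head model is drawn.
public class SSSObjectDrawFeature : ScriptableRendererFeature
{
    class ObjectDrawPass : ScriptableRenderPass
    {
        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            CommandBuffer cmd = CommandBufferPool.Get("ObjectDrawPass");
            // In a full implementation: cmd.SetRenderTarget(...) with the MRT
            // array (diffuse light, scatter color, spread direction, normals),
            // followed by drawing the head model and the 3S post-processing.
            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }
    }

    ObjectDrawPass m_Pass;

    public override void Create()
    {
        m_Pass = new ObjectDrawPass { renderPassEvent = RenderPassEvent.AfterRenderingOpaques };
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        renderer.EnqueuePass(m_Pass);
    }
}
```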
Further, the camera parameters 13-13 in fig. 13 are obtained by performing the rendering-camera-related settings in the "Execute" function; the image-based lighting and direct lighting calculations on the input data 13-1 in fig. 13 are performed by rendering the objects through the "DrawRenderers" function; and the 3S calculation in fig. 13 is realized by performing the post-processing calculation of the 3S effect through the "Blit" function, using the rendering objects 13-23 to 13-26 in fig. 13.
It should also be noted that, in order to prevent the head model 13-11 from being rendered (drawn) repeatedly, a new forward renderer (ForwardRenderer) is defined in the rendering engine, and the head model 13-11 rendered in the custom "ObjectDrawPass" is removed from the Opaque Layer Mask using the layering mechanism of the rendering engine. For example, referring to fig. 17, fig. 17 is a schematic diagram of an exemplary rendering pipeline setup page provided by an embodiment of the present application; as shown in fig. 17, in the setup page 17-1, the Render List 17-111 of the Universal Render Pipeline Asset 17-11 is set so that, by default, the forward renderer data 17-1111 "SSSForwardRenderer (ForwardRendererData)" is used, that is, a new "ForwardRenderer" is defined in the rendering engine, which realizes the rendering of the remaining models (the remaining objects to be rendered) other than the head model 13-11. Referring to fig. 18, fig. 18 is a schematic diagram of an exemplary rendering configuration provided by an embodiment of the present application; as shown in fig. 18, on the settings page 18-1, for the settings of the forward renderer 18-11, the "skin 18-1111" option is removed from the Opaque Layer Mask 18-111, which realizes the removal of the head model 13-11 rendered in the custom "ObjectDrawPass".
It can be understood that, with the rendering method of the virtual object provided in the embodiment of the present application, for the 3S effect that most affects the texture of the object to be rendered (for example, skin), only 2 parameters need to be adjusted; on the premise of satisfying professional-grade rendering quality for the object to be rendered, a material effect with better texture can be rendered quickly, so that the production of opaque material resources is accelerated and the rendering efficiency of the virtual object is improved.
Continuing with the exemplary structure of the rendering device 255 provided by the embodiment of the present application implemented as software modules, in some embodiments, as shown in fig. 3, the software modules of the rendering device 255 stored in the memory 250 may include:
a model display module 2551, configured to display a virtual object model, where the virtual object model includes an object to be rendered, and a material of an entity corresponding to the object to be rendered has a multilayer structure;
a control display module 2552, configured to display, in response to a scattering rendering operation for the object to be rendered, a scattering depth control and a scattering color control, where the scattering depth control is used to trigger an adjustment of a scattering depth, and the scattering color control is used to trigger an adjustment of a scattering color;
a scattering rendering module 2553, configured to, in response to an adjustment operation acting on a scattering parameter control, display, for the object to be rendered, sub-surface scattering information adapted to an adjusted scattering parameter, where the scattering parameter control includes one or both of the scattering depth control and the scattering color control, and the adjusted scattering parameter includes one or both of an adjusted scattering depth and an adjusted scattering color.
In this embodiment of the present application, the rendering apparatus 255 further includes a lighting calculation module 2554, configured to display a lighting control, where the lighting control is configured to trigger setting of a lighting parameter, where the lighting parameter includes one or both of a camera parameter and a lighting environment parameter; responding to an illumination parameter setting operation acted on the illumination control, and displaying the illumination parameter; and performing illumination calculation on the object to be rendered based on the illumination parameters to obtain highlight information to be rendered, basic color information to be rendered and a sub-surface scattering object.
In this embodiment, the scattering rendering module 2553 is further configured to display the adjusted scattering parameter in response to the adjustment operation acting on the scattering parameter control; and performing secondary surface scattering rendering processing on the secondary surface scattering object based on the adjusted scattering parameter, displaying a fusion result of the highlight information to be rendered, the basic color information to be rendered and the rendering processing result, and displaying the secondary surface scattering information matched with the adjusted scattering parameter under the action of the illumination parameter for the object to be rendered, wherein the secondary surface scattering information is the fusion result.
In an embodiment of the present application, the scattering parameter control includes the scattering depth control and the scattering color control, and the adjusted scattering parameter includes the adjusted scattering depth and the adjusted scattering color; the scattering rendering module 2553 is further configured to determine a sub-surface scattering range of the object to be rendered based on the adjusted scattering depth; determining a sub-surface scattering color intensity of the object to be rendered based on the adjusted scattering color; and performing rendering processing of sub-surface scattering on the sub-surface scattering object by combining the sub-surface scattering range and the sub-surface scattering color intensity.
In this embodiment of the application, the scattering rendering module 2553 is further configured to obtain camera coordinate information of each to-be-rendered pixel point of the to-be-rendered object in a camera coordinate system, where the camera coordinate system is determined based on the camera parameters; determining a camera scattering frame based on the adjusted scattering depth by taking the camera coordinate information as a reference; projecting the camera scattering frame to a standard coordinate system to obtain a standard scattering frame; acquiring standard coordinate information of each pixel point to be rendered under the standard coordinate system; and determining a sub-surface scattering range of each pixel point to be rendered under the standard coordinate system based on the standard scattering frame and the standard coordinate information, so as to obtain the sub-surface scattering range which corresponds to the object to be rendered and comprises a plurality of sub-surface scattering ranges.
In this embodiment of the application, the illumination calculation module 2554 is further configured to perform illumination calculation on the object to be rendered based on the illumination parameter, so as to obtain normal information to be rendered.
In this embodiment of the application, the scattering rendering module 2553 is further configured to determine, based on the standard scattering frame and the standard coordinate information, a to-be-corrected sub-surface scattering range of each to-be-rendered pixel point in the standard coordinate system; and correcting the sub-surface scattering range to be corrected based on the normal information to be rendered to obtain the sub-surface scattering range.
In this embodiment of the application, the scattering rendering module 2553 is further configured to correct the scattering range of the sub-surface to be corrected based on the to-be-rendered normal information, so as to obtain an initial sub-surface scattering range of each to-be-rendered pixel point in the standard coordinate system; acquiring a noise sampling result corresponding to a preset noise map; and carrying out disturbance processing on the initial sub-surface scattering range by combining the noise sampling result and preset noise influence strength to obtain the sub-surface scattering range.
In this embodiment of the application, the scattering rendering module 2553 is further configured to determine a target close-range parameter corresponding to the adjusted scattering color based on a ratio of a close-range parameter threshold to a long-range parameter threshold; determine the adjusted scattering color as a target long-range parameter; and determine a mixing weight threshold, the target close-range parameter, and the target long-range parameter as the sub-surface scattering color intensity.
In this embodiment, the scattering rendering module 2553 is further configured to simulate a scattering profile of the sub-surface scattering object in combination with the sub-surface scattering color intensity; and performing post-processing on the scattering profile based on the convolution direction corresponding to the sub-surface scattering range, thereby finishing rendering processing of the sub-surface scattering object.
In this embodiment of the present application, the control display module 2552 is further configured to display a scattering depth unit control, where the scattering depth unit control is configured to trigger adjustment of a unit of scattering depth; displaying the adjusted scattering depth units in response to a scattering depth unit adjustment operation acting on the scattering depth unit control.
In this embodiment, the scattering rendering module 2553 is further configured to, in response to the adjustment operation acting on the scattering parameter control, display, for the object to be rendered, the sub-surface scattering information adapted to the adjusted scattering parameter in the adjusted scattering depth unit.
In this embodiment of the present application, the rendering apparatus 255 further includes a remaining object rendering module 2555, configured to obtain remaining objects to be rendered of the virtual object model except for the object to be rendered; and obtaining target residual rendering objects corresponding to the residual objects to be rendered.
In this embodiment of the application, the rendering apparatus 255 further includes a model fusion module 2556, configured to combine the target remaining rendering objects and the target rendering objects into a rendering model corresponding to the virtual object model, where the target rendering objects are the objects to be rendered that display the sub-surface scattering information adapted to the adjusted scattering parameter; and displaying the rendering model, and finishing the rendering of the virtual object corresponding to the virtual object model.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the rendering method of the virtual object according to the embodiment of the present application.
Embodiments of the present application provide a computer-readable storage medium storing executable instructions, which when executed by a processor, will cause the processor to perform a rendering method of a virtual object provided by embodiments of the present application, for example, the rendering method of a virtual object as shown in fig. 4.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
In summary, according to the embodiment of the present application, in the virtual object model, for an object to be rendered having a sub-surface scattering effect, rendering of the sub-surface scattering effect can be achieved by adjusting the scattering color and/or the scattering depth; the scattering color represents the scattering intensity of the object to be rendered for illumination of different colors, and the scattering depth represents the scattering range of the illumination on the object to be rendered, so that the scattering color and the scattering depth corresponding to a better sub-surface scattering effect of the object to be rendered can be adjusted quickly and accurately, and the rendered sub-surface scattering effect is better; therefore, the rendering effect and the rendering efficiency of sub-surface scattering can be improved.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.
Claims (13)
1. A method for rendering a virtual object, comprising:
displaying a virtual object model, wherein the virtual object model comprises an object to be rendered, and the material of an entity corresponding to the object to be rendered has a multilayer structure;
in response to a scattering rendering operation for the object to be rendered, displaying a scattering depth control and a scattering color control, wherein the scattering depth control is used for triggering adjustment of scattering depth, and the scattering color control is used for triggering adjustment of scattering color;
in response to an adjustment operation acting on a scattering parameter control, displaying, for the object to be rendered, sub-surface scattering information adapted to the adjusted scattering parameter, wherein the scattering parameter control includes one or both of the scattering depth control and the scattering color control, and the adjusted scattering parameter includes one or both of the adjusted scattering depth and the adjusted scattering color.
2. The method of claim 1, wherein in response to the scatter rendering operation for the object to be rendered, the method further comprises:
displaying an illumination control, wherein the illumination control is used for triggering the setting of illumination parameters, and the illumination parameters comprise one or two of camera parameters and illumination environment parameters;
responding to an illumination parameter setting operation acted on the illumination control, and displaying the illumination parameter;
based on the illumination parameters, performing illumination calculation on the object to be rendered to obtain highlight information to be rendered, basic color information to be rendered and a sub-surface scattering object;
the displaying, in response to an adjustment operation acting on a scattering parameter control, sub-surface scattering information adapted to the adjusted scattering parameter for the object to be rendered, includes:
displaying the adjusted scattering parameter in response to the adjustment operation acting on the scattering parameter control;
and performing secondary surface scattering rendering processing on the secondary surface scattering object based on the adjusted scattering parameter, displaying a fusion result of the highlight information to be rendered, the basic color information to be rendered and the rendering processing result, and displaying the secondary surface scattering information matched with the adjusted scattering parameter under the action of the illumination parameter for the object to be rendered, wherein the secondary surface scattering information is the fusion result.
3. The method of claim 2, wherein the scattering parameter control comprises the scattering depth control and the scattering color control, and the adjusted scattering parameter comprises the adjusted scattering depth and the adjusted scattering color;
the rendering processing of the sub-surface scattering on the sub-surface scattering object based on the adjusted scattering parameter includes:
determining a sub-surface scattering range of the object to be rendered based on the adjusted scattering depth;
determining a sub-surface scattering color intensity of the object to be rendered based on the adjusted scattering color;
and performing rendering processing of sub-surface scattering on the sub-surface scattering object by combining the sub-surface scattering range and the sub-surface scattering color intensity.
4. The method of claim 3, wherein determining the sub-surface scattering extent of the object to be rendered based on the adjusted scattering depth comprises:
acquiring camera coordinate information of each pixel point to be rendered of the object to be rendered under a camera coordinate system, wherein the camera coordinate system is determined based on the camera parameters;
determining a camera scattering frame based on the adjusted scattering depth by taking the camera coordinate information as a reference;
projecting the camera scattering frame to a standard coordinate system to obtain a standard scattering frame;
acquiring standard coordinate information of each pixel point to be rendered under the standard coordinate system;
and determining a sub-surface scattering range of each pixel point to be rendered under the standard coordinate system based on the standard scattering frame and the standard coordinate information, so as to obtain the sub-surface scattering range which corresponds to the object to be rendered and comprises a plurality of sub-surface scattering ranges.
5. The method of claim 4, wherein after displaying the lighting parameter in response to a lighting parameter setting operation acting on the lighting control, the method further comprises:
based on the illumination parameters, performing illumination calculation on the object to be rendered to obtain normal information to be rendered;
the determining the sub-surface scattering range of each pixel point to be rendered under the standard coordinate system based on the standard scattering frame and the standard coordinate information includes:
determining the sub-surface scattering range to be corrected of each pixel point to be rendered under the standard coordinate system based on the standard scattering frame and the standard coordinate information;
and correcting the sub-surface scattering range to be corrected based on the normal information to be rendered to obtain the sub-surface scattering range.
6. The method according to claim 5, wherein the correcting the sub-surface scattering range to be corrected based on the normal information to be rendered to obtain the sub-surface scattering range comprises:
based on the normal information to be rendered, correcting the sub-surface scattering range to be corrected to obtain an initial sub-surface scattering range of each pixel point to be rendered under the standard coordinate system;
acquiring a noise sampling result corresponding to a preset noise map;
and carrying out disturbance processing on the initial sub-surface scattering range by combining the noise sampling result and preset noise influence strength to obtain the sub-surface scattering range.
7. The method of claim 3, wherein determining a sub-surface scattering color intensity of the object to be rendered based on the adjusted scattering color comprises:
determining a target close range parameter corresponding to the adjusted scattering color based on the ratio of the close range parameter threshold to the long range parameter threshold;
determining the adjusted scattering color as a target remote parameter;
and determining a mixing weight threshold, the target close range parameter and the target remote parameter as the sub-surface scattering color intensity.
8. The method according to any one of claims 3 to 7, wherein the rendering of the sub-surface scattering object by combining the sub-surface scattering range and the sub-surface scattering color intensity comprises:
simulating a scattering profile of the sub-surface scattering object in combination with the sub-surface scattering color intensity;
and performing post-processing on the scattering profile based on the convolution direction corresponding to the sub-surface scattering range, thereby finishing rendering processing of the sub-surface scattering object.
9. The method of any of claims 1 to 7, wherein after responding to a scatter rendering operation for the object to be rendered, the method further comprises:
displaying a scattering depth unit control, wherein the scattering depth unit control is used to trigger adjustment of a unit of scattering depth;
displaying the adjusted scattering depth units in response to a scattering depth unit adjustment operation acting on the scattering depth unit control;
the displaying, in response to an adjustment operation acting on a scattering parameter control, sub-surface scattering information adapted to the adjusted scattering parameter for the object to be rendered, includes:
in response to the adjustment operation acting on the scattering parameter control, displaying, for the object to be rendered, the sub-surface scattering information adapted to the adjusted scattering parameter in the adjusted scattering depth unit.
10. The method of any of claims 1 to 7, wherein after displaying the virtual object model, the method further comprises:
acquiring the remaining objects to be rendered of the virtual object model except the objects to be rendered;
obtaining target residual rendering objects corresponding to the residual objects to be rendered;
after displaying the sub-surface scattering information adapted to the adjusted scattering parameter for the object to be rendered, the method further includes:
combining the target residual rendering objects and the target rendering objects into rendering models corresponding to the virtual object models, wherein the target rendering objects are the objects to be rendered which display the sub-surface scattering information matched with the adjusted scattering parameters;
and displaying the rendering model, and finishing the rendering of the virtual object corresponding to the virtual object model.
11. An apparatus for rendering a virtual object, comprising:
the model display module is used for displaying a virtual object model, wherein the virtual object model comprises an object to be rendered, and the material of an entity corresponding to the object to be rendered has a multilayer structure;
the control display module is used for responding to scattering rendering operation aiming at the object to be rendered, and displaying a scattering depth control and a scattering color control, wherein the scattering depth control is used for triggering adjustment of scattering depth, and the scattering color control is used for triggering adjustment of scattering color;
and the scattering rendering module is used for responding to an adjusting operation acted on a scattering parameter control, and displaying sub-surface scattering information matched with the adjusted scattering parameter aiming at the object to be rendered, wherein the scattering parameter control comprises one or two of the scattering depth control and the scattering color control, and the adjusted scattering parameter comprises one or two of the adjusted scattering depth and the adjusted scattering color.
12. An apparatus for rendering a virtual object, comprising:
a memory for storing executable instructions;
a processor for implementing the method of any one of claims 1 to 10 when executing executable instructions stored in the memory.
13. A computer-readable storage medium having stored thereon executable instructions for, when executed by a processor, implementing the method of any one of claims 1 to 10.