CN115375829A - Self-luminous rendering method and device of virtual model, storage medium and electronic device - Google Patents
Self-luminous rendering method and device of virtual model, storage medium and electronic device
- Publication number
- CN115375829A (application number CN202210860047.XA)
- Authority
- CN
- China
- Prior art keywords
- self
- luminous
- map
- parameter
- rendering
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/80—Shading
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
Abstract
The invention discloses a self-luminous rendering method and device of a virtual model, a storage medium and an electronic device. The method comprises the following steps: acquiring a first map, a second map and a third map corresponding to a self-luminous area of the target virtual model, wherein the first map is the self-luminous map of the self-luminous area, the second map is a gray scale map of the self-luminous area in the self-luminous flow direction, and the third map is a noise map of the self-luminous area; determining a first parameter based on the first map, and determining a second parameter based on the second map and the third map, wherein the first parameter is an initial self-luminous parameter of the self-luminous region, and the second parameter is a noise parameter of the self-luminous region; and rendering the self-luminous area according to the first parameter and the second parameter. The invention solves the technical problems of low control precision and poor rendering stability of a method for self-luminous flow rendering based on additionally placed maps in the related art.
Description
Technical Field
The invention relates to the technical field of computers, in particular to a self-luminous rendering method and device of a virtual model, a storage medium and an electronic device.
Background
Self-luminous rendering is a rendering mode in which the color parameters of the surface material of a virtual model fall within a High-Dynamic Range (HDR). It is one of the important methods for realizing special-effect expression of virtual models (such as virtual characters and virtual props) in virtual scenes. With the advancement of computing technology and the rising aesthetic expectations of users, the static self-luminous effect obtained by the traditional self-luminous rendering method can no longer meet the artistic requirements of application scenarios, and for this reason technicians in the related field have continuously experimented with various self-luminous flow rendering methods.
The self-luminous flow rendering method provided by the related art mainly comprises: laying out an additional set of model UV maps along the self-luminous flow direction, determining the self-luminous flow direction through a shader, and associating it with the model's self-luminous map so as to control the self-luminous flow. However, this method has drawbacks: because it depends on the number of model faces and on how the additional model UV map is laid out, the self-luminous flow precision is low and the rendering effect is poor; the additionally placed UV map causes large device resource consumption, high system complexity and poor stability; and the art production difficulty is high.
In view of the above problems, no effective solution has been proposed.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present invention and therefore may include information that does not constitute prior art known to a person of ordinary skill in the art.
Disclosure of Invention
The embodiment of the invention provides a self-luminous rendering method and device of a virtual model, a storage medium and an electronic device, and at least solves the technical problems of low control precision and poor rendering stability of a self-luminous flow rendering method based on additionally placed maps in the related art.
According to an aspect of an embodiment of the present invention, there is provided a self-luminous rendering method of a virtual model, including:
acquiring a first map, a second map and a third map corresponding to a self-luminous area of the target virtual model, wherein the first map is the self-luminous map of the self-luminous area, the second map is a gray scale map of the self-luminous area in the self-luminous flow direction, and the third map is a noise map of the self-luminous area; determining a first parameter based on the first map, and determining a second parameter based on the second map and the third map, wherein the first parameter is an initial self-luminous parameter of the self-luminous region, and the second parameter is a noise parameter of the self-luminous region; and rendering the self-luminous area according to the first parameter and the second parameter.
Optionally, determining the first parameter based on the first map comprises: and sampling the first map by adopting an initial texture sampling mode to obtain a first parameter.
Optionally, determining the second parameter based on the second map and the third map comprises: sampling the second map by adopting an initial texture sampling mode to obtain a third parameter, wherein the third parameter is a self-luminous flow parameter of the self-luminous area; determining a target texture sampling mode based on the third parameter; and sampling the third map by adopting the target texture sampling mode to obtain the second parameter.
Optionally, the determining the target texture sampling manner based on the third parameter includes: acquiring a fourth parameter, wherein the fourth parameter is used for performing displacement control on self-luminous flow of the self-luminous region; adjusting the third parameter by using the fourth parameter to obtain an adjustment result; and determining a target texture sampling mode based on the adjustment result.
Optionally, rendering the self-luminous regions according to the first and second parameters comprises: multiplying the first parameter and the second parameter to obtain the self-luminous color of the self-luminous area; and rendering the self-luminous colors through a rendering pipeline.
Optionally, the obtaining a second map corresponding to a self-luminous region of the target virtual model comprises: drawing the self-luminous flow direction of the self-luminous area to obtain self-luminous flow information; baking the self-luminous flow information to obtain a second map.
Optionally, the plotting the self-luminous flow direction of the self-luminous region to obtain the self-luminous flow information comprises: determining a first gray value and a second gray value based on a self-luminous flow direction of the self-luminous region, wherein the first gray value is used for determining a starting point of the self-luminous flow direction, and the second gray value is used for determining an end point of the self-luminous flow direction; and controlling the brush to perform linear drawing by using the first gray value and the second gray value to obtain self-luminous flow information.
Optionally, the self-luminous rendering method of the virtual model further includes one of the following: storing the first map and the second map independently; or storing the second map to the transparent (alpha) channel of the first map.
According to another aspect of the embodiments of the present invention, there is also provided a self-luminous rendering apparatus of a virtual model, including:
The apparatus comprises an acquisition module, a determination module and a rendering module. The acquisition module is used for acquiring a first map, a second map and a third map which correspond to a self-luminous area of a target virtual model, wherein the first map is the self-luminous map of the self-luminous area, the second map is a gray scale map of the self-luminous area in the self-luminous flow direction, and the third map is a noise map of the self-luminous area; the determination module is used for determining a first parameter based on the first map and a second parameter based on the second map and the third map, wherein the first parameter is an initial self-luminous parameter of the self-luminous area, and the second parameter is a noise parameter of the self-luminous area; and the rendering module is used for rendering the self-luminous area according to the first parameter and the second parameter.
Optionally, the determining module is further configured to: and sampling the first map by adopting an initial texture sampling mode to obtain a first parameter.
Optionally, the determining module is further configured to: sampling the second map by adopting an initial texture sampling mode to obtain a third parameter, wherein the third parameter is a self-luminous flow parameter of the self-luminous area; determining a target texture sampling mode based on the third parameter; and sampling the third map by adopting the target texture sampling mode to obtain the second parameter.
Optionally, the determining module is further configured to: acquiring a fourth parameter, wherein the fourth parameter is used for carrying out displacement control on self-luminous flow of the self-luminous area; adjusting the third parameter by using the fourth parameter to obtain an adjustment result; and determining a target texture sampling mode based on the adjustment result.
Optionally, the rendering module is further configured to: multiplying the first parameter and the second parameter to obtain the self-luminous color of the self-luminous area; and rendering the self-luminous colors through a rendering pipeline.
Optionally, the obtaining module is further configured to: drawing the self-luminous flow direction of the self-luminous area to obtain self-luminous flow information; baking the self-luminous flow information to obtain a second map.
Optionally, the obtaining module is further configured to: determining a first gray value and a second gray value based on a self-luminous flow direction of the self-luminous region, wherein the first gray value is used for determining a starting point of the self-luminous flow direction, and the second gray value is used for determining an end point of the self-luminous flow direction; and controlling the brush to perform linear drawing by using the first gray value and the second gray value to obtain self-luminous flow information.
Optionally, the self-luminous rendering apparatus of the virtual model further includes: a storage module, used for independently storing the first map and the second map, or for storing the second map to the transparent (alpha) channel of the first map.
According to another aspect of embodiments of the present invention, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to execute the self-luminous rendering method of the virtual model in any one of the above when running.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, comprising a memory in which a computer program is stored and a processor arranged to run the computer program to perform the self-luminous rendering method of the virtual model in any one of the above.
In at least some embodiments of the present invention, a first map, a second map and a third map corresponding to a self-luminous region of a target virtual model are obtained, wherein the first map is the self-luminous map of the self-luminous region, the second map is a gray scale map of the self-luminous flow direction of the self-luminous region, and the third map is a noise map of the self-luminous region. A first parameter is determined based on the first map and a second parameter is determined based on the second map and the third map, wherein the first parameter is an initial self-luminous parameter of the self-luminous region and the second parameter is a noise parameter of the self-luminous region. The self-luminous region is then rendered according to the first parameter and the second parameter. This achieves the purpose of performing self-luminous flow rendering of the target virtual model based on the self-luminous map, the gray scale map and the noise map of the self-luminous region, thereby achieving the technical effect of improving the control precision and rendering stability of self-luminous flow rendering without additionally placed maps, and further solving the technical problems of low control precision and poor rendering stability of the self-luminous flow rendering method based on additionally placed maps in the related art.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a mobile terminal of a self-luminous rendering method of a virtual model according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for self-luminous rendering of a virtual model according to an embodiment of the invention;
FIG. 3 is a schematic diagram of an alternative self-luminous rendering process according to an embodiment of the invention;
FIG. 4 is a schematic diagram of an alternative self-luminous flow direction rendering process in accordance with embodiments of the present invention;
FIG. 5 is a schematic illustration of an alternative noise information map according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating an alternative self-luminous rendering result according to an embodiment of the present invention;
FIG. 7 is a block diagram of a self-luminous rendering apparatus of a virtual model according to an embodiment of the present invention;
FIG. 8 is a block diagram of an alternative self-luminous rendering apparatus for a virtual model according to an embodiment of the present invention;
fig. 9 is a schematic diagram of an electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with one embodiment of the present invention, there is provided an embodiment of a method for self-luminous rendering of a virtual model. It should be noted that the steps illustrated in the flowcharts of the drawings may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different from that illustrated herein.
The self-luminous rendering method of the virtual model in one embodiment of the invention can be operated on a terminal device or a server. The terminal device may be a local terminal device. When the self-luminous rendering method of the virtual model runs on the server, the method can be implemented and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and the client device.
In an optional embodiment, various cloud applications may be run under the cloud interaction system, for example, cloud games. Taking a cloud game as an example, a cloud game refers to a game mode based on cloud computing. In the running mode of the cloud game, the entity that runs the game program is separated from the entity that presents the game picture: the storage and execution of the self-luminous rendering method of the virtual model are completed on the cloud game server, while the client device is used for receiving and sending data and presenting the game picture. For example, the client device may be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer or a palmtop computer, whereas the terminal device that performs the information processing is the cloud game server in the cloud. When a game is played, a player operates the client device to send an operation instruction to the cloud game server; the cloud game server runs the game according to the operation instruction, encodes and compresses data such as game pictures, and returns the data to the client device through the network; finally, the client device decodes the data and outputs the game pictures.
In an optional embodiment, the terminal device may be a local terminal device. Taking a game as an example, the local terminal device stores a game program and is used for presenting a game screen. The local terminal device is used for interacting with the player through a graphical user interface, that is, the game program is conventionally downloaded, installed and run on an electronic device. The manner in which the local terminal device provides the graphical user interface to the player may include a variety of ways; for example, the interface may be rendered for display on a display screen of the terminal or provided to the player through holographic projection. For example, the local terminal device may include a display screen for presenting a graphical user interface including a game screen, and a processor for running the game, generating the graphical user interface, and controlling display of the graphical user interface on the display screen.
In a possible implementation manner, an embodiment of the present invention provides a self-luminous rendering method for a virtual model, which provides a graphical user interface through a terminal device, where the terminal device may be the aforementioned local terminal device, and may also be the aforementioned client device in a cloud interaction system.
Taking a mobile terminal running in a local terminal device as an example, the mobile terminal may be a terminal device such as a smart phone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a game console, and the like. Fig. 1 is a block diagram of a hardware structure of a mobile terminal of a self-luminous rendering method of a virtual model according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more (only one shown in fig. 1) processors 102 (the processors 102 may include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processing (DSP) chip, a microcontroller unit (MCU), a field-programmable gate array (FPGA), a neural network processor (NPU), a tensor processor (TPU), an artificial intelligence (AI) processor, etc.) and a memory 104 for storing data. Optionally, the mobile terminal may further include a transmission device 106, an input/output device 108, and a display device 110 for communication functions. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be used to store a computer program, for example, a software program and a module of application software, such as a computer program corresponding to the self-luminous rendering method of the virtual model in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing, i.e., the self-luminous rendering method of the virtual model described above, by running the computer program stored in the memory 104. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The inputs in the input output Device 108 may come from a plurality of Human Interface Devices (HIDs). For example: keyboard and mouse, game pad, other special game controller (such as steering wheel, fishing rod, dance mat, remote controller, etc.). In addition to providing input functionality, some human interface devices may also provide output functionality, such as: force feedback and vibration of the gamepad, audio output of the controller, etc.
The display device 110 may be, for example, a head-up display (HUD), a touch screen type Liquid Crystal Display (LCD), and a touch display (also referred to as a "touch screen" or "touch display screen"). The liquid crystal display may enable a user to interact with a user interface of the mobile terminal. In some embodiments, the mobile terminal has a Graphical User Interface (GUI) with which a user can interact by touching finger contacts and/or gestures on a touch-sensitive surface, where the human-machine interaction function optionally includes the following interactions: executable instructions for creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, emailing, call interfacing, playing digital video, playing digital music, and/or web browsing, etc., for performing the above-described human-computer interaction functions, are configured/stored in one or more processor-executable computer program products or computer-readable storage media.
In a possible implementation manner, an embodiment of the present invention provides a self-luminous rendering method for a virtual model, which provides a graphical user interface through a terminal device, where the terminal device may be the aforementioned local terminal device, and may also be the aforementioned client device in a cloud interaction system. Fig. 2 is a flowchart of a self-luminous rendering method of a virtual model according to an embodiment of the present invention, as shown in fig. 2, the method includes the following steps:
step S21, acquiring a first map, a second map and a third map corresponding to a self-luminous area of the target virtual model, wherein the first map is the self-luminous map of the self-luminous area, the second map is a gray scale map of the self-luminous area in the self-luminous flow direction, and the third map is a noise map of the self-luminous area;
the target virtual model can be a virtual object needing rendering self-luminous effect in a virtual game scene. For example: the target virtual model can be a virtual character (including a body, an accessory, a special effect and the like of the virtual character), a virtual prop and the like.
The self-luminous region may be a part or all of a region on the surface of the target virtual model, which needs to render a self-luminous effect.
The first map corresponding to the self-luminous area is the self-luminous map of the self-luminous area, and the self-luminous map is used for determining the static self-luminous effect of the self-luminous area. The self-luminous area of the target virtual model surface can be determined through the self-luminous map, and the self-luminous color of the self-luminous area is controlled independently, without being influenced by the ambient light.
The second map corresponding to the self-luminous region is a gray scale map of the self-luminous region in the self-luminous flow direction, and gray scale information contained in the gray scale map is used for determining the self-luminous flow direction of the self-luminous region.
The third map corresponding to the self-luminous area is a noise map of the self-luminous area, and noise information contained in the noise map is used for determining the random effect of self-luminous flow in the self-luminous area.
Specifically, the obtaining of the first map, the second map, and the third map corresponding to the self-luminous region of the target virtual model further includes other method steps, which may refer to the following further description of the embodiment of the present invention, and are not repeated herein.
Step S22, determining a first parameter based on the first map, and determining a second parameter based on the second map and the third map, wherein the first parameter is an initial self-luminous parameter of the self-luminous region, and the second parameter is a noise parameter of the self-luminous region;
the first parameter is determined based on the first map, and specifically, the initial self-luminous parameter of the self-luminous region of the target virtual model is determined based on the self-luminous map corresponding to the self-luminous region. The initial self-luminous parameters are used for rendering the self-luminous areas.
The second parameter is determined based on the second map and the third map, and specifically, a noise parameter of a self-luminous region of the target virtual model is determined based on a gray map and a noise map corresponding to the self-luminous region. The noise parameter is used to render a self-luminous region.
Specifically, determining the first parameter based on the first map, and determining the second parameter based on the second map and the third map further include other method steps, which may refer to the following further description of the embodiments of the present invention, and are not repeated herein.
And S23, rendering the self-luminous area according to the first parameter and the second parameter.
And rendering the self-luminous region according to the first parameter and the second parameter, specifically, rendering the self-luminous region according to an initial self-luminous parameter and a noise parameter of the self-luminous region.
Specifically, rendering the self-light emitting region according to the first parameter and the second parameter further includes other method steps, which may refer to the further description of the embodiment of the present invention, which is not repeated herein.
In at least some embodiments of the present invention, a first map, a second map and a third map corresponding to a self-luminous region of a target virtual model are obtained, wherein the first map is the self-luminous map of the self-luminous region, the second map is a gray scale map of the self-luminous flow direction of the self-luminous region, and the third map is a noise map of the self-luminous region. A first parameter is determined based on the first map and a second parameter is determined based on the second map and the third map, wherein the first parameter is an initial self-luminous parameter of the self-luminous region and the second parameter is a noise parameter of the self-luminous region. The self-luminous region is then rendered according to the first parameter and the second parameter. This achieves the purpose of performing self-luminous flow rendering of the target virtual model based on the self-luminous map, the gray scale map and the noise map of the self-luminous region, thereby achieving the technical effect of improving the control precision and rendering stability of self-luminous flow rendering without additionally placed maps, and further solving the technical problems of low control precision and poor rendering stability of the self-luminous flow rendering method based on additionally placed maps in the related art.
The above method of the present embodiment is further described below.
Optionally, in step S22, determining the first parameter based on the first map may include performing the steps of:
step S221, sampling the first map by adopting an initial texture sampling mode to obtain a first parameter.
The initial texture sampling mode may be sampling according to the UV map of the target virtual model. By sampling the initial self-luminous map of the self-luminous area according to the UV map of the target virtual model, the initial self-luminous parameter of the self-luminous area can be obtained.
Optionally, in step S22, determining the second parameter based on the second map and the third map may include performing the steps of:
step S222, sampling the second map by adopting an initial texture sampling mode to obtain a third parameter, wherein the third parameter is a self-luminous flow parameter of a self-luminous area;
step S223, determining a target texture sampling mode based on the third parameter;
and S224, sampling the third patch by adopting a target texture sampling mode to obtain a second parameter.
By sampling the gray scale map of the self-luminous flow direction of the self-luminous area according to the UV map of the target virtual model, the self-luminous flow parameter of the self-luminous area is obtained. Based on the self-luminous flow parameter, a self-luminous flow map can be determined.
The target texture sampling mode may be sampling based on the self-luminous flow map determined by the self-luminous flow parameter. By sampling the noise map of the self-luminous area according to the self-luminous flow map, the noise parameter of the self-luminous area is obtained.
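As an illustration of steps S221 to S224, the two-stage sampling can be sketched in a fragment shader. The following minimal HLSL/Cg sketch uses hypothetical texture names (_EmissiveTex, _FlowGrayTex, _NoiseTex) standing for the first, second and third maps; it is one possible reading of the sampling chain, not the claimed implementation.

    sampler2D _EmissiveTex;   // first map: self-luminous map (hypothetical name)
    sampler2D _FlowGrayTex;   // second map: gray scale map of the flow direction
    sampler2D _NoiseTex;      // third map: one-dimensional noise map

    void SampleSelfLuminousParams(float2 modelUV, out float3 emission, out float noise)
    {
        // Step S221: initial texture sampling mode - sample the first map with the model UV.
        emission = tex2D(_EmissiveTex, modelUV).rgb;            // first parameter

        // Step S222: sample the second map with the model UV to get the flow parameter.
        float flow = tex2D(_FlowGrayTex, modelUV).r;            // third parameter

        // Steps S223/S224: the flow parameter becomes the sampling coordinate (target
        // texture sampling mode) used to look up the one-dimensional noise map.
        noise = tex2D(_NoiseTex, float2(flow, 0.5)).r;          // second parameter
    }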
Optionally, in step S223, determining the target texture sampling manner based on the third parameter may include the following steps:
step S2231, acquiring a fourth parameter, where the fourth parameter is used to perform displacement control on the self-luminous flow of the self-luminous region;
step S2232, adjusting the third parameter by using the fourth parameter to obtain an adjustment result;
in step S2233, a target texture sampling mode is determined based on the adjustment result.
The fourth parameter is a shift control parameter of the self-luminous flow in the self-luminous region. The fourth parameters may be pre-specified by a technician or input in real time during the rendering process.
Specifically, the displacement control parameter may include a time control parameter for controlling time information (including start-stop time, phase time, and the like) of the self-luminous flow rendering, a speed control parameter for controlling a rendering speed of the self-luminous flow rendering, and the like. By controlling the time and speed of the self-luminous streaming rendering, the displacement of the self-luminous streaming rendering can be controlled.
The third parameter is adjusted by using the fourth parameter to obtain the adjustment result, that is, the self-luminous flow parameter adjusted by the time control parameter and the speed control parameter. Based on the adjustment result, the target texture sampling mode is determined as sampling according to the self-luminous flow map determined by the adjusted self-luminous flow parameter.
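A minimal sketch of this displacement control follows. _Speed is a hypothetical material property, _Time.y is Unity's built-in shader time (assuming the standard Unity include), and the frac() wrap that makes the flow repeat is an assumption rather than a requirement of the method.

    float _Speed;   // hypothetical speed control parameter

    float AdjustFlow(float flow)
    {
        // Shift the self-luminous flow parameter by Time x Speed (cf. formula (1) later
        // in this description); frac() keeps the adjusted value in [0, 1) so the
        // one-dimensional noise map wraps around.
        return frac(flow + _Time.y * _Speed);
    }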
Alternatively, in step S23, rendering the self-luminous regions according to the first and second parameters may include performing the steps of:
step S231, multiplying the first parameter and the second parameter to obtain the self-luminous color of the self-luminous area;
and step S232, rendering the self-luminous color through the rendering pipeline.
The self-luminous color of the self-luminous region can be obtained by multiplying the first parameter and the second parameter. The multiplication operation may be an algorithm pre-specified by a technician, and the self-luminous color may be a color parameter for rendering the self-luminous region. Through the rendering pipeline, the self-luminous color can be rendered, and then the self-luminous effect of the target virtual model is obtained.
Optionally, by pushing the self-luminous color parameters into the HDR range, the virtual scene image can achieve a larger dynamic range of exposure than an ordinary digital image, giving the player a more pronounced perception of the light-dark contrast in the scene.
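Steps S231 and S232 can be sketched as follows, again with hypothetical names; _Intensity is an assumed scaling factor used only to push the product above 1.0 into the HDR range and is not a parameter stated in the method.

    sampler2D _EmissiveTex;   // first map (hypothetical name, as above)
    sampler2D _NoiseTex;      // third map
    float _Intensity;         // hypothetical HDR intensity multiplier

    float3 SelfLuminousColor(float2 modelUV, float adjustedFlow)
    {
        float3 emission = tex2D(_EmissiveTex, modelUV).rgb;               // first parameter
        float  noise    = tex2D(_NoiseTex, float2(adjustedFlow, 0.5)).r;  // second parameter
        // Step S231: multiply the two parameters; scaling above 1.0 keeps the color in the
        // HDR range so that the rendering pipeline (and a Bloom pass) can pick it up.
        return emission * noise * _Intensity;
    }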
Alternatively, in step S21, obtaining the second map corresponding to the self-luminous region of the target virtual model may include performing the steps of:
step S211, drawing the self-luminous flow direction of the self-luminous area to obtain self-luminous flow information;
step S212, baking the self-luminous flow information to obtain a second map.
And drawing the self-luminous flow direction of the self-luminous area to obtain self-luminous flow information. The self-luminous flow information may be gray scale information. The self-luminous flow information is baked to obtain a gray map (corresponding to the second map) of the self-luminous region.
Alternatively, in step S211, the drawing of the self-luminous flow direction of the self-luminous region to obtain the self-luminous flow information may include performing the steps of:
a step S2111 of determining a first gradation value for determining a start point of the self-luminous flow direction and a second gradation value for determining an end point of the self-luminous flow direction based on the self-luminous flow direction of the self-luminous region;
step S2112, the brush is controlled to perform linear drawing by utilizing the first gray value and the second gray value, and self-luminous flow information is obtained.
The self-luminous flow direction of the self-luminous region may be a direction previously designated by a technician according to a scene requirement. The first gradation value and the second gradation value may be determined based on a self-luminous flow direction of the self-luminous region. The first gray scale value may be used to determine a start point of the self-luminous flow direction, and the second gray scale value may be used to determine an end point of the self-luminous flow direction.
And controlling the brush to perform linear drawing by using the first gray value and the second gray value so as to obtain the self-luminous flow information. Specifically, the brush is controlled to linearly draw from the start point of the self-luminous flow to the end point of the self-luminous flow, and the resulting drawing result can be used as the self-luminous flow information. The self-luminous flow information includes information on the direction of self-luminous flow.
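As an illustration only (not the brush code of any particular software), the gray value that such a linear drawing assigns to a point p on the stroke can be expressed as a projection onto the start-to-end axis; flowStart and flowEnd are hypothetical names for the start and end points of the self-luminous flow.

    // Gray value of a point p on a stroke drawn from flowStart (first gray value, 0)
    // to flowEnd (second gray value, 1).
    float FlowGrayValue(float2 p, float2 flowStart, float2 flowEnd)
    {
        float2 dir = flowEnd - flowStart;
        // Project p onto the start-to-end axis and clamp the result to [0, 1]:
        // 0 at the flow start, 1 at the flow end, linear in between.
        return saturate(dot(p - flowStart, dir) / dot(dir, dir));
    }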
Optionally, the self-luminous rendering method of the virtual model may further include one of the following steps:
step S24, independently storing the first map and the second map respectively;
and S25, storing the second map to the transparent channel of the first map.
The self-luminous map (corresponding to the first map) and the gray scale map (corresponding to the second map) of the self-luminous region can be independently stored, and the self-luminous map and the gray scale map are respectively read to render the self-luminous region during rendering.
The gray scale map (corresponding to the second map) of the self-luminous area can also be stored in the transparent (alpha) channel of the self-luminous map (corresponding to the first map); during rendering, the self-luminous map containing the gray scale map is read to render the self-luminous area.
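When the gray scale map is packed into the alpha channel of the self-luminous map, a single texture fetch returns both values. A minimal sketch, again with a hypothetical texture name:

    sampler2D _EmissiveTex;   // first map with the gray scale map stored in its alpha channel

    void SampleCombined(float2 modelUV, out float3 emission, out float flow)
    {
        // One sample provides the self-luminous color (RGB) and the flow gray value (A),
        // saving the separate gray-map sampling operation.
        float4 texel = tex2D(_EmissiveTex, modelUV);
        emission = texel.rgb;
        flow     = texel.a;
    }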
Fig. 3 is a schematic diagram of an alternative self-luminous rendering process according to an embodiment of the present invention, which is implemented with the Unity game engine and the Blender model design software on the basis of a shader language, taking self-luminous rendering of the leg of virtual character A as an example. As shown in fig. 3, the self-luminous rendering process includes the following method steps:
step E31, drawing the self-luminous flow direction of the model in the model making software;
step E32, baking the drawn gray information graph and importing the gray information graph into an engine;
Step E33, sampling the gray information map, and using the sampling result to sample a one-dimensional noise map;
step E34, controlling the self-luminous movement by controlling the parameters of the one-dimensional noise map;
and E35, multiplying the processed noise parameters by the self-luminous maps, and inputting the calculation result into a rendering pipeline for rendering.
Specifically, in step E31, the self-luminous flow direction of the leg of virtual character A is drawn using the Blender software. Fig. 4 is a schematic diagram of an alternative self-luminous flow direction drawing process according to an embodiment of the present invention. As shown in fig. 4, the self-luminous flow direction can be drawn in the Blender software from a point where the gray value is 0 (point H0, the starting point) to a point where the gray value is 1 (point H1, the end point), and the drawing result is taken as the self-luminous flow information.
Optionally, during the drawing of the self-luminous flow direction, the gradient brush may be linear, that is, as shown in fig. 4, the gray value changes linearly between 0 and 1 along the self-luminous flow path drawn between H0 and H1.
It should be noted that, in order to maintain the continuity of the self-luminescence of the leg of virtual character A during the drawing of the self-luminous flow direction, the parameters (such as thickness and slope), the range and the algorithm of the gradient brush in the Blender software can be controlled so that the gradient brush draws continuously within the leg drawing region of virtual character A (for a continuous drawing region on the leg surface of virtual character A, the corresponding range of drawn gray values should also be continuous). The slope of the gradient brush refers to the rate at which the drawn gray value changes between 0 and 1. The algorithm of the gradient brush may be a linear algorithm, a gamma algorithm, or the like. The range of the gradient brush refers to the range of gray values associated with the brush drawing order.
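The difference between a linear and a gamma brush algorithm can be illustrated with the following sketch, where t is the normalized position along the stroke; this only illustrates the idea and is not the brush implementation of any particular software.

    // Gray value assigned at normalized stroke position t (0 = start, 1 = end).
    // gamma == 1.0 gives the linear algorithm; other values give a gamma curve.
    float BrushGrayValue(float t, float gamma)
    {
        return pow(saturate(t), gamma);
    }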
It should be noted that, in the process of drawing the self-luminous flow direction of the leg of virtual character A, the gray scale map only covers the region to be rendered corresponding to the leg of virtual character A, and therefore the drawing of the self-luminous flow direction does not affect other regions of the surface of virtual character A.
Specifically, in step E32, the self-luminous flow information drawn in step E31 is baked into a map, and a grayscale map P2 (corresponding to the second map described above) is obtained. The grayscale map P2 is imported to the Unity game engine.
It should be noted that the gray scale map obtained by baking can be stored separately in the storage space, so that the gray scale map (or the self-luminous flow information) can be replaced and changed flexibly; it can also be stored in the transparent (alpha) channel of the self-luminous map of the leg of virtual character A, which saves one map sampling operation in the subsequent rendering process.
Specifically, in step E33, the self-luminous map P1 of the leg of virtual character A (corresponding to the first map) is sampled according to the model UV to obtain the initial self-luminous parameter, denoted Emission (corresponding to the first parameter); the gray scale map P2 (corresponding to the second map) is likewise sampled according to the model UV of the leg of virtual character A to obtain the self-luminous flow parameter, denoted EmissionFlow (corresponding to the third parameter); then, the preset one-dimensional noise map P3 (corresponding to the third map) is sampled based on the self-luminous flow parameter EmissionFlow to obtain the noise parameter, denoted Noise (corresponding to the second parameter).
It should be noted that the preset one-dimensional noise map P3 may be a noise map that has been pre-processed, wherein the pre-processing includes fade-in processing, fade-out processing, and the like.
Fig. 5 is a schematic diagram of an alternative noise information map according to an embodiment of the present invention. As shown in fig. 5, the noise information map P5 carries the above noise parameter Noise; the left edge in fig. 5 corresponds to the point with gray value 0 (the self-luminous flow starting point), and the right edge in fig. 5 corresponds to the point with gray value 1 (the self-luminous flow end point).
Still as shown in fig. 5, the self-luminous pulse effect of the leg of virtual character A can be achieved by moving the noise information map P5. By modifying or replacing the noise information map P5, the self-luminous effect of the leg of virtual character A (such as the gradient of fade-in and fade-out, the noise distribution pattern, the pulse coverage, etc.) can be adjusted.
Specifically, in step E34, displacement control is performed on the noise information map P5 by controlling its displacement control parameters (including the time control parameter Time and the speed control parameter Speed), so as to achieve the self-luminous flow effect.
Optionally, the time control parameter Time may control the self-luminescence of the leg of virtual character A to flow automatically according to a specified time rule, and the speed control parameter Speed may control the speed at which the self-luminescence of the leg of virtual character A flows.
The way of adjusting the self-luminous flow parameter EmissionFlow corresponding to the noise information map P5 by using the time control parameter Time and the speed control parameter Speed may be as shown in the following formula (1):
EmissionFlow = EmissionFlow + Time × Speed    (1)
In formula (1), EmissionFlow on the right side of the equation represents the self-luminous flow parameter before adjustment, and EmissionFlow on the left side represents the self-luminous flow parameter after adjustment.
Specifically, in step E35, the noise parameter Noise is multiplied by the initial self-luminous parameter Emission and the result is input into the rendering pipeline for rendering, so that the self-luminous flow effect of the leg of virtual character A is obtained.
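Putting steps E33 to E35 together, a fragment-shader sketch in Unity's Cg/HLSL style could look as follows. The property names (_P1_Emissive, _P2_FlowGray, _P3_Noise, _Speed) are hypothetical, _Time.y assumes the standard Unity include, and the frac() wrap that makes the pulse repeat is an assumption rather than something stated in this description.

    sampler2D _P1_Emissive;   // self-luminous map of the leg of virtual character A
    sampler2D _P2_FlowGray;   // baked grayscale map (second map)
    sampler2D _P3_Noise;      // preset one-dimensional noise map (third map)
    float _Speed;             // speed control parameter

    struct v2f
    {
        float4 positionCS : SV_POSITION;
        float2 uv         : TEXCOORD0;    // model UV of the leg
    };

    float4 frag(v2f i) : SV_Target
    {
        // Step E33: sample the self-luminous map and the grayscale map with the model UV.
        float3 Emission     = tex2D(_P1_Emissive, i.uv).rgb;
        float  EmissionFlow = tex2D(_P2_FlowGray, i.uv).r;

        // Step E34 / formula (1): shift the flow coordinate by Time x Speed.
        EmissionFlow = frac(EmissionFlow + _Time.y * _Speed);

        // Sample the one-dimensional noise map with the adjusted flow coordinate.
        float Noise = tex2D(_P3_Noise, float2(EmissionFlow, 0.5)).r;

        // Step E35: multiply the noise parameter by the initial self-luminous parameter
        // and output the result to the rendering pipeline.
        return float4(Emission * Noise, 1.0);
    }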
Fig. 6 is a schematic diagram of an alternative self-luminous rendering result according to an embodiment of the present invention, and as shown in fig. 6, a self-luminous flow rendering is performed by the method provided in the embodiment of the present invention, so that a self-luminous effect of the virtual model shown in fig. 6 can be obtained.
By the self-luminous rendering method, the self-luminous flowing effect of the virtual model can be realized, the dynamic sense of a virtual scene picture corresponding to the virtual model is realized, and the user experience is further improved.
A Bloom technique can also be used in the self-luminous rendering method provided by the invention. Bloom reproduces an imaging artifact of real-world cameras and is a computer graphics effect used in video games, demonstrations and high-dynamic-range rendering: the image effects it produces (such as streaks or feathering of light) extend from the edges of bright areas into the image, creating the illusion of an extremely bright light overwhelming the camera or the naked eye capturing the scene.
It is easy to see that the method provided by the invention has the following beneficial effects: the self-luminous flow effect of the virtual model can be realized without additionally laying out a UV map, so that the dependence of self-luminous flow rendering on the number of model faces is avoided, and the control precision and rendering stability of the self-luminous flow are improved.
It is easy to see that the technical key points of the method provided by the invention are: firstly, the self-luminous flow information of the model, determined by a gray scale map, is drawn directly in the model making software; the self-luminous flow direction determined by the self-luminous flow information is drawn on the model with a gradient brush to obtain a drawing result; the drawing result is imported into a preset game engine and used as UV coordinates in a shader of the game engine to sample the noise map, obtaining a sampling result; and self-luminous rendering is performed according to the sampling result.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention or portions thereof contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a self-light emitting rendering device of a virtual model is further provided, and the device is used to implement the foregoing embodiments and preferred embodiments, and the description of the device is omitted. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware or a combination of software and hardware is also possible and contemplated.
Fig. 7 is a block diagram illustrating a self-light emitting rendering apparatus of a virtual model according to an embodiment of the present invention, as shown in fig. 7, the apparatus including: an obtaining module 71, configured to obtain a first map, a second map, and a third map corresponding to a self-luminous region of the target virtual model, where the first map is the self-luminous map of the self-luminous region, the second map is a grayscale map of the self-luminous region in a self-luminous flow direction, and the third map is a noise map of the self-luminous region; a determining module 72 for determining a first parameter based on the first map, and determining a second parameter based on the second map and the third map, wherein the first parameter is an initial self-luminous parameter of the self-luminous region, and the second parameter is a noise parameter of the self-luminous region; and a rendering module 73, configured to render the self-luminous region according to the first parameter and the second parameter.
Optionally, the determining module 72 is further configured to: and sampling the first map by adopting an initial texture sampling mode to obtain a first parameter.
Optionally, the determining module 72 is further configured to: sampling the second map by adopting an initial texture sampling mode to obtain a third parameter, wherein the third parameter is a self-luminous flow parameter of the self-luminous area; determining a target texture sampling mode based on the third parameter; and sampling the third map by adopting the target texture sampling mode to obtain the second parameter.
Optionally, the determining module 72 is further configured to: acquiring a fourth parameter, wherein the fourth parameter is used for performing displacement control on self-luminous flow of the self-luminous region; adjusting the third parameter by using the fourth parameter to obtain an adjustment result; and determining a target texture sampling mode based on the adjustment result.
Optionally, the rendering module 73 is further configured to: multiplying the first parameter and the second parameter to obtain the self-luminous color of the self-luminous area; and rendering the self-luminous colors through a rendering pipeline.
Optionally, the obtaining module 71 is further configured to: drawing the self-luminous flow direction of the self-luminous area to obtain self-luminous flow information; baking the self-luminous flow information to obtain a second map.
Optionally, the obtaining module 71 is further configured to: determining a first gray value and a second gray value based on a self-luminous flow direction of the self-luminous region, wherein the first gray value is used for determining a starting point of the self-luminous flow direction, and the second gray value is used for determining an end point of the self-luminous flow direction; and controlling the brush to perform linear drawing by using the first gray value and the second gray value to obtain self-luminous flow information.
Alternatively, fig. 8 is a block diagram of a self-light-emitting rendering apparatus of an alternative virtual model according to an embodiment of the present invention, and as shown in fig. 8, the apparatus includes, in addition to all modules shown in fig. 7: a storage module 74, configured to store the first map and the second map independently; or a transparent channel for storing the second map to the first map.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above-mentioned method embodiments when executed.
Optionally, in this embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Optionally, in this embodiment, the computer-readable storage medium may be located in any one of a group of computer terminals in a computer network, or in any one of a group of mobile terminals.
Optionally, the computer-readable storage medium is further configured to store program code for performing the following steps: acquiring a first map, a second map and a third map corresponding to a self-luminous area of the target virtual model, wherein the first map is the self-luminous map of the self-luminous area, the second map is a gray scale map of the self-luminous area in the self-luminous flow direction, and the third map is a noise map of the self-luminous area; determining a first parameter based on the first map, and determining a second parameter based on the second map and the third map, wherein the first parameter is an initial self-luminous parameter of the self-luminous region, and the second parameter is a noise parameter of the self-luminous region; and rendering the self-luminous area according to the first parameter and the second parameter.
Optionally, the computer-readable storage medium is further configured to store program code for performing the following step: sampling the first map by adopting an initial texture sampling mode to obtain a first parameter.
Optionally, the computer-readable storage medium is further configured to store program code for performing the following steps: sampling the second map by adopting an initial texture sampling mode to obtain a third parameter, wherein the third parameter is a self-luminous flow parameter of the self-luminous area; determining a target texture sampling mode based on the third parameter; and sampling the third map by adopting the target texture sampling mode to obtain a second parameter.
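For illustration only, a compact sketch (Python with NumPy; nearest-neighbour lookups stand in for the engine's texture samplers, and all names are assumptions) of the chained sampling described in these steps:

```python
import numpy as np

# Non-limiting sketch of the chain: sample the flow (second) map with the
# initial UV to get the third parameter, derive a target sampling coordinate,
# then sample the noise (third) map to get the second (noise) parameter.
def sample(texture: np.ndarray, u: float, v: float) -> float:
    h, w = texture.shape[:2]
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return float(texture[y, x])

def noise_parameter(flow_map, noise_map, u, v, displacement=0.0):
    third_param = sample(flow_map, u, v)           # self-luminous flow parameter
    target_u = (third_param + displacement) % 1.0  # target texture sampling mode
    return sample(noise_map, target_u, v)          # second (noise) parameter
```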
Optionally, the computer-readable storage medium is further configured to store program code for performing the following steps: acquiring a fourth parameter, wherein the fourth parameter is used for carrying out displacement control on self-luminous flow of the self-luminous area; adjusting the third parameter by using the fourth parameter to obtain an adjustment result; and determining a target texture sampling mode based on the adjustment result.
Optionally, the computer-readable storage medium is further configured to store program code for performing the following steps: multiplying the first parameter and the second parameter to obtain the self-luminous color of the self-luminous area; and rendering the self-luminous color through a rendering pipeline.
Optionally, the computer-readable storage medium is further configured to store program code for performing the following steps: drawing the self-luminous flow direction of the self-luminous area to obtain self-luminous flow information; baking the self-luminous flow information to obtain a second map.
Optionally, the computer-readable storage medium is further configured to store program code for performing the following steps: determining a first gray value and a second gray value based on a self-luminous flow direction of the self-luminous region, wherein the first gray value is used for determining a starting point of the self-luminous flow direction, and the second gray value is used for determining an end point of the self-luminous flow direction; and controlling the brush to perform linear drawing by using the first gray value and the second gray value to obtain self-luminous flow information.
Optionally, the computer-readable storage medium is further configured to store program code for performing one of the following steps: storing the first map and the second map independently; or storing the second map to a transparent channel of the first map.
In the computer-readable storage medium of this embodiment, a technical solution of a self-luminous rendering method of a virtual model is provided. A first map, a second map and a third map corresponding to a self-luminous area of a target virtual model are acquired, wherein the first map is the self-luminous map of the self-luminous area, the second map is a gray scale map of the self-luminous flow direction of the self-luminous area, and the third map is a noise map of the self-luminous area. A first parameter is determined based on the first map, and a second parameter is determined based on the second map and the third map, wherein the first parameter is an initial self-luminous parameter of the self-luminous area and the second parameter is a noise parameter of the self-luminous area. The self-luminous area is then rendered according to the first parameter and the second parameter. In this way, self-luminous flow rendering is achieved based on the self-luminous map, the gray scale map and the noise map of the self-luminous area on the target virtual model, which improves the control accuracy and rendering stability of self-luminous flow rendering without additionally placed maps, and thereby solves the technical problems of low control accuracy and poor rendering stability of self-luminous flow rendering methods based on additionally placed maps in the related art.
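To tie the summarized flow together, a self-contained, non-limiting sketch is given below (Python with NumPy; arrays are assumed to hold float values in [0, 1], and the wrap-and-multiply arithmetic is an illustrative interpretation rather than the patent text):

```python
import numpy as np

# Non-limiting end-to-end sketch: emissive is (H, W, 3), flow_map and
# noise_map are (H, W); time_offset plays the role of the displacement control.
def render_emissive(emissive: np.ndarray, flow_map: np.ndarray,
                    noise_map: np.ndarray, time_offset: float) -> np.ndarray:
    h, w = flow_map.shape
    # Target sampling column derived from the flow value plus the time offset.
    target_u = ((flow_map + time_offset) % 1.0 * (w - 1)).astype(int)
    rows = np.arange(h)[:, None]
    noise = noise_map[rows, target_u]        # second (noise) parameter
    return emissive * noise[..., None]       # first parameter x second parameter
```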
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, and may also be implemented by software in combination with necessary hardware. Therefore, the technical solution according to the embodiment of the present invention can be embodied in the form of a software product, which can be stored in a computer-readable storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to make a computing device (which can be a personal computer, a server, a terminal device, or a network device, etc.) execute the method according to the embodiment of the present invention.
In an exemplary embodiment of the present invention, a program product capable of implementing the above-described method of this embodiment is stored on a computer-readable storage medium. In some possible implementations, various aspects of the embodiments of the present invention may also be implemented in the form of a program product including program code; when the program product is run on a terminal device, the program code is used for causing the terminal device to perform the steps according to the various exemplary implementations of the present invention described in the above "exemplary method" section of this embodiment.
The program product for implementing the above method may employ a portable compact disc read-only memory (CD-ROM), include the program code, and be run on a terminal device such as a personal computer. However, the program product of the embodiments of the present invention is not limited thereto; in the embodiments of the present invention, the computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product described above may employ any combination of one or more computer-readable media. The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that the program code embodied on the computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, the processor may be further configured to execute the following steps by a computer program: acquiring a first map, a second map and a third map corresponding to a self-luminous area of the target virtual model, wherein the first map is the self-luminous map of the self-luminous area, the second map is a gray scale map of the self-luminous area in the self-luminous flow direction, and the third map is a noise map of the self-luminous area; determining a first parameter based on the first map, and determining a second parameter based on the second map and the third map, wherein the first parameter is an initial self-luminous parameter of the self-luminous region, and the second parameter is a noise parameter of the self-luminous region; and rendering the self-luminous area according to the first parameter and the second parameter.
Optionally, the processor may be further configured to execute the following step by a computer program: sampling the first map by adopting an initial texture sampling mode to obtain a first parameter.
Optionally, the processor may be further configured to execute the following steps by a computer program: sampling the second map by adopting an initial texture sampling mode to obtain a third parameter, wherein the third parameter is a self-luminous flow parameter of the self-luminous area; determining a target texture sampling mode based on the third parameter; and sampling the third map by adopting the target texture sampling mode to obtain a second parameter.
Optionally, the processor may be further configured to execute the following steps by a computer program: acquiring a fourth parameter, wherein the fourth parameter is used for carrying out displacement control on self-luminous flow of the self-luminous area; adjusting the third parameter by using the fourth parameter to obtain an adjustment result; and determining a target texture sampling mode based on the adjustment result.
Optionally, the processor may be further configured to execute the following steps by a computer program: multiplying the first parameter and the second parameter to obtain the self-luminous color of the self-luminous area; and rendering the self-luminous color through a rendering pipeline.
Optionally, the processor may be further configured to execute the following steps by a computer program: drawing the self-luminous flow direction of the self-luminous area to obtain self-luminous flow information; baking the self-luminous flow information to obtain a second map.
Optionally, the processor may be further configured to execute the following steps by a computer program: determining a first gray value and a second gray value based on a self-luminous flow direction of the self-luminous region, wherein the first gray value is used for determining a starting point of the self-luminous flow direction, and the second gray value is used for determining an end point of the self-luminous flow direction; and controlling the brush to perform linear drawing by using the first gray value and the second gray value to obtain self-luminous flow information.
Optionally, the processor may be further configured to execute, by a computer program, one of the following steps: storing the first map and the second map independently; or storing the second map to a transparent channel of the first map.
In the electronic device of this embodiment, a technical solution of a self-luminous rendering method of a virtual model is provided. A first map, a second map and a third map corresponding to a self-luminous area of a target virtual model are acquired, wherein the first map is the self-luminous map of the self-luminous area, the second map is a gray scale map of the self-luminous flow direction of the self-luminous area, and the third map is a noise map of the self-luminous area. A first parameter is determined based on the first map, and a second parameter is determined based on the second map and the third map, wherein the first parameter is an initial self-luminous parameter of the self-luminous area and the second parameter is a noise parameter of the self-luminous area. The self-luminous area is then rendered according to the first parameter and the second parameter. In this way, self-luminous flow rendering is achieved based on the self-luminous map, the gray scale map and the noise map of the self-luminous area on the target virtual model, which improves the control accuracy and rendering stability of self-luminous flow rendering without additionally placed maps, and thereby solves the technical problems of low control accuracy and poor rendering stability of self-luminous flow rendering methods based on additionally placed maps in the related art.
Fig. 9 is a schematic diagram of an electronic device according to an embodiment of the invention. As shown in fig. 9, the electronic device 900 is only an example and should not bring any limitation to the functions and the scope of the application of the embodiments of the present invention.
As shown in fig. 9, the electronic device 900 is embodied in the form of a general-purpose computing device. The components of the electronic device 900 may include, but are not limited to: at least one processor 910, at least one memory 920, a bus 930 connecting the various system components (including the memory 920 and the processor 910), and a display 940.
The memory 920 stores program code that can be executed by the processor 910, so as to cause the processor 910 to perform the steps according to the various exemplary embodiments of the present invention described in the above method section of the embodiments of the present invention.
The memory 920 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 9201 and/or a cache memory unit 9202, and may further include a read only memory unit (ROM) 9203, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory.
In some examples, memory 920 can also include program/utility 9204 having a set (at least one) of program modules 9205, such program modules 9205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment. The memory 920 may further include memory located remotely from the processor 910 and such remote memory may be coupled to the electronic device 900 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Optionally, the electronic apparatus 900 may also communicate with one or more external devices 1400 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic apparatus 900, and/or with any devices (e.g., router, modem, etc.) that enable the electronic apparatus 900 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interface 950. Also, the electronic device 900 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 960. As shown in fig. 9, the network adapter 960 communicates with the other modules of the electronic device 900 via the bus 930. It should be appreciated that although not shown in FIG. 9, other hardware and/or software modules may be used in conjunction with electronic device 900, which may include but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The electronic device 900 may further include: a keyboard, a cursor control device (e.g., a mouse), an input/output interface (I/O interface), a network interface, a power source, and/or a camera.
It will be understood by those skilled in the art that the structure shown in fig. 9 is only an illustration and is not intended to limit the structure of the electronic device. For example, electronic device 900 may also include more or fewer components than shown in FIG. 9, or have a different configuration than shown in FIG. 9. The memory 920 may be used to store a computer program and corresponding data, such as a computer program corresponding to the self-luminous rendering method of the virtual model and corresponding data in the embodiment of the present invention. The processor 910 executes various functional applications and data processing, i.e., the self-luminous rendering method of the virtual model described above, by running the computer program stored in the memory 920.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present invention, it should be understood that the disclosed technical contents can be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various improvements and modifications can be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (11)
1. A self-luminous rendering method of a virtual model is characterized by comprising the following steps:
acquiring a first map, a second map and a third map corresponding to a self-luminous area of a target virtual model, wherein the first map is the self-luminous map of the self-luminous area, the second map is a gray scale map of the self-luminous area in a self-luminous flow direction, and the third map is a noise map of the self-luminous area;
determining a first parameter based on the first map, and determining a second parameter based on the second map and the third map, wherein the first parameter is an initial self-luminous parameter of the self-luminous region, and the second parameter is a noise parameter of the self-luminous region;
rendering the self-luminous area according to the first parameter and the second parameter.
2. The self-luminous rendering method according to claim 1, wherein determining the first parameter based on the first map comprises:
and sampling the first map by adopting an initial texture sampling mode to obtain the first parameter.
3. The self-luminous rendering method according to claim 1, wherein determining the second parameter based on the second map and the third map comprises:
sampling the second map by adopting an initial texture sampling mode to obtain a third parameter, wherein the third parameter is a self-luminous flow parameter of the self-luminous area;
determining a target texture sampling mode based on the third parameter;
and sampling the third map by adopting the target texture sampling mode to obtain the second parameter.
4. The self-luminous rendering method according to claim 3, wherein determining the target texture sampling pattern based on the third parameter comprises:
acquiring a fourth parameter, wherein the fourth parameter is used for carrying out displacement control on self-luminous flow of the self-luminous area;
adjusting the third parameter by using the fourth parameter to obtain an adjustment result;
and determining the target texture sampling mode based on the adjustment result.
5. The self-luminous rendering method according to claim 1, wherein rendering the self-luminous regions according to the first parameter and the second parameter includes:
multiplying the first parameter and the second parameter to obtain the self-luminous color of the self-luminous area;
and rendering the self-luminous color through a rendering pipeline.
6. The self-luminous rendering method according to claim 1, wherein obtaining the second map corresponding to the self-luminous region of the target virtual model comprises:
drawing the self-luminous flow direction of the self-luminous area to obtain self-luminous flow information;
baking the self-luminous flow information to obtain the second map.
7. The self-luminous rendering method according to claim 6, wherein drawing the self-luminous flow direction of the self-luminous region to obtain the self-luminous flow information comprises:
determining a first gray scale value and a second gray scale value based on the self-luminous flow direction of the self-luminous region, wherein the first gray scale value is used for determining the starting point of the self-luminous flow direction, and the second gray scale value is used for determining the end point of the self-luminous flow direction;
and controlling a brush to perform linear drawing by using the first gray value and the second gray value to obtain the self-luminous flow information.
8. The self-luminous rendering method according to claim 6, further comprising one of:
storing the first map and the second map independently; or
storing the second map to a transparent channel of the first map.
9. A self-luminous rendering apparatus of a virtual model, comprising:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a first map, a second map and a third map corresponding to a self-luminous area of a target virtual model, the first map is the self-luminous map of the self-luminous area, the second map is a gray scale map of the self-luminous area in a self-luminous flow direction, and the third map is a noise map of the self-luminous area;
a determining module, configured to determine a first parameter based on the first map, and determine a second parameter based on the second map and the third map, wherein the first parameter is an initial self-luminescence parameter of the self-luminescence region, and the second parameter is a noise parameter of the self-luminescence region;
and the rendering module is used for rendering the self-luminous area according to the first parameter and the second parameter.
10. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to, when executed by a processor, perform a method of self-luminous rendering of a virtual model according to any one of claims 1 to 8.
11. An electronic device comprising a memory in which a computer program is stored and a processor arranged to run the computer program to perform the method of self-luminous rendering of a virtual model according to any of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210860047.XA CN115375829A (en) | 2022-07-21 | 2022-07-21 | Self-luminous rendering method and device of virtual model, storage medium and electronic device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210860047.XA CN115375829A (en) | 2022-07-21 | 2022-07-21 | Self-luminous rendering method and device of virtual model, storage medium and electronic device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115375829A true CN115375829A (en) | 2022-11-22 |
Family
ID=84062519
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210860047.XA Pending CN115375829A (en) | 2022-07-21 | 2022-07-21 | Self-luminous rendering method and device of virtual model, storage medium and electronic device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115375829A (en) |
- 2022-07-21 CN CN202210860047.XA patent/CN115375829A/en active Pending
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |