CN115131489A - Cloud layer rendering method and device, storage medium and electronic device - Google Patents


Info

Publication number
CN115131489A
CN115131489A (application CN202210689819.8A)
Authority
CN
China
Prior art keywords
cloud layer
cloud
mask
target
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210689819.8A
Other languages
Chinese (zh)
Inventor
陈卓 (Chen Zhuo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202210689819.8A
Publication of CN115131489A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/04 - Texture mapping
    • G06T 15/06 - Ray-tracing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a cloud layer rendering method and device, a storage medium and an electronic device. The method comprises the following steps: acquiring texture information of a plurality of channels in a target map, wherein the target map is used for rendering a virtual cloud layer model; generating a plurality of target cloud layer masks based on the texture information; and blending the plurality of target cloud layer masks to obtain a first cloud layer rendering result corresponding to the virtual cloud layer model. The method at least solves the technical problems of the related art, in which cloud layer rendering is costly and the rendering results have a poor virtual-reality effect.

Description

Cloud layer rendering method and device, storage medium and electronic device
Technical Field
The invention relates to the technical field of computers, in particular to a cloud layer rendering method, a cloud layer rendering device, a storage medium and an electronic device.
Background
In virtual scenes, it is often necessary to show cloud effects that change with the weather so as to enhance the sense of virtual reality. In the related art, the methods for rendering cloud effects mainly include the following. First, volumetric clouds with a vivid rendering effect are rendered from three-dimensional textures by ray tracing, but the rendering cost of this method is high and it is difficult to apply on mobile terminals. Second, clouds are rendered with a particle system and cloud maps of various forms, but this method has difficulty simulating high-density cloud layers and its real-time shadow rendering is poor. Third, a high-definition sky image produced by real photography or software rendering is used as the map of a sky sphere, but the resource footprint of this method is extremely large and it is difficult to apply on mobile terminals. Fourth, procedural material clouds are created from a single-layer noise map, but the cloud effect obtained in this way cannot support real-time shadowing.
Therefore, how to render clouds in a virtual scene so that they adapt to real-time lighting and shadow and enhance the virtual-reality effect has become one of the important issues in the related art. In view of the above problems, no effective solution has yet been proposed.
It is noted that the information disclosed in the background section above is only for enhancement of understanding of the background of the present disclosure and therefore may include information that does not constitute prior art already known to a person of ordinary skill in the art.
Disclosure of Invention
The embodiments of the invention provide a cloud layer rendering method and device, a storage medium and an electronic device, so as to at least solve the technical problems of the related art, namely the high rendering cost of cloud layer rendering and the poor virtual-reality effect of its rendering results.
According to an aspect of an embodiment of the present invention, there is provided a cloud layer rendering method including:
acquiring texture information of a plurality of channels in a target map, wherein the target map is used for rendering a virtual cloud layer model; generating a plurality of target cloud layer masks based on the texture information; and mixing the target cloud layer masks to obtain a first cloud layer rendering result corresponding to the virtual cloud layer model.
Optionally, the plurality of channels comprises a first color channel, a second color channel and a third color channel, and the texture information comprises a first noise texture, a second noise texture and a bottom shadow form texture corresponding to the virtual cloud layer model, wherein the first color channel is used for storing the first noise texture, the second color channel is used for storing the second noise texture, and the third color channel is used for storing the bottom shadow form texture.
Optionally, generating a plurality of target cloud layer masks based on the texture information comprises: generating a plurality of initial cloud layer masks based on the texture information; and performing cloud layer mask calculation on the plurality of initial cloud layer masks to obtain a plurality of target cloud layer masks.
Optionally, the plurality of initial cloud layer masks comprises a first initial cloud layer mask, a second initial cloud layer mask, a third initial cloud layer mask and a fourth initial cloud layer mask, and generating the plurality of initial cloud layer masks based on the texture information comprises: generating the first initial cloud layer mask based on the bottom shadow form texture, wherein the first initial cloud layer mask is used for determining a bottom cloud layer of the virtual cloud layer model; and generating the second initial cloud layer mask, the third initial cloud layer mask and the fourth initial cloud layer mask based on the first noise texture and the second noise texture, wherein the second initial cloud layer mask is used for determining a main noise layer of the virtual cloud layer model, the third initial cloud layer mask is used for determining an additional noise layer of the virtual cloud layer model, and the fourth initial cloud layer mask is used for determining a light-receiving coloring layer of the virtual cloud layer model.
Optionally, the plurality of target cloud layer masks comprises a first target cloud layer mask, a second target cloud layer mask, a third target cloud layer mask and a fourth target cloud layer mask, and performing cloud layer mask calculation on the plurality of initial cloud layer masks to obtain the plurality of target cloud layer masks comprises: performing cloud layer mask calculation on the first, second, third and fourth initial cloud layer masks to obtain the first, second, third and fourth target cloud layer masks, wherein the first target cloud layer mask is used for simulating how the self-shadow of the virtual cloud layer model changes with real-time illumination, the second target cloud layer mask is used for rendering the transparency of the virtual cloud layer model, the third target cloud layer mask is used for simulating the occlusion of the input illumination intensity when the virtual cloud layer model is a multi-layer cloud, and the fourth target cloud layer mask is used for blending the boundary between the virtual cloud layer model and an associated virtual model, the associated virtual model being a virtual model occluded by the virtual cloud layer model.
Optionally, blending the plurality of target cloud layer masks to obtain the first cloud layer rendering result comprises: in response to the virtual cloud layer model being a single-layer cloud, blending the first target cloud layer mask, the second target cloud layer mask and the fourth target cloud layer mask based on a first illumination intensity to obtain the first cloud layer rendering result, wherein the first illumination intensity is the original illumination intensity.
Optionally, blending the plurality of target cloud layer masks to obtain the first cloud layer rendering result comprises: in response to the virtual cloud layer model being a multi-layer cloud, blending the first, second and fourth target cloud layer masks based on a first illumination intensity to obtain a first processing result, and blending the same masks based on a second illumination intensity to obtain a second processing result, wherein the first processing result is the rendering result of the top cloud layer (the highest layer) of the multi-layer cloud, the second processing result is the rendering result of the layers other than the top layer, the first illumination intensity is the original illumination intensity, and the second illumination intensity is determined by the first illumination intensity and the third target cloud layer mask; and determining the first cloud layer rendering result from the first processing result and the second processing result.
Optionally, the cloud layer rendering method further includes: generating a fifth target cloud layer mask from the light source position, wherein the gray value of the fifth target cloud layer mask is determined by the distance to the light source position; calculating a third illumination intensity from the first illumination intensity, the current illumination color and the fifth target cloud layer mask, wherein the first illumination intensity is the original illumination intensity; and adjusting the first cloud layer rendering result to a second cloud layer rendering result based on the third illumination intensity.
According to another aspect of the embodiments of the present invention, there is also provided a cloud layer rendering apparatus, including:
an acquisition module, configured to acquire texture information of a plurality of channels in a target map, wherein the target map is used for rendering a virtual cloud layer model; a generation module, configured to generate a plurality of target cloud layer masks based on the texture information; and a rendering module, configured to blend the plurality of target cloud layer masks to obtain a first cloud layer rendering result corresponding to the virtual cloud layer model.
Optionally, the generating module is further configured to: generating a plurality of initial cloud layer masks based on the texture information; and performing cloud layer mask calculation on the plurality of initial cloud layer masks to obtain a plurality of target cloud layer masks.
Optionally, the plurality of initial cloud layer masks comprises a first initial cloud layer mask, a second initial cloud layer mask, a third initial cloud layer mask and a fourth initial cloud layer mask, and the generation module is further configured to: generate the first initial cloud layer mask based on the bottom shadow form texture, wherein the first initial cloud layer mask is used for determining a bottom cloud layer of the virtual cloud layer model; and generate the second initial cloud layer mask, the third initial cloud layer mask and the fourth initial cloud layer mask based on the first noise texture and the second noise texture, wherein the second initial cloud layer mask is used for determining a main noise layer of the virtual cloud layer model, the third initial cloud layer mask is used for determining an additional noise layer of the virtual cloud layer model, and the fourth initial cloud layer mask is used for determining a light-receiving coloring layer of the virtual cloud layer model.
Optionally, the plurality of target cloud layer masks comprises a first target cloud layer mask, a second target cloud layer mask, a third target cloud layer mask and a fourth target cloud layer mask, and the generation module is further configured to: perform cloud layer mask calculation on the first, second, third and fourth initial cloud layer masks to obtain the first, second, third and fourth target cloud layer masks, wherein the first target cloud layer mask is used for simulating how the self-shadow of the virtual cloud layer model changes with real-time illumination, the second target cloud layer mask is used for rendering the transparency of the virtual cloud layer model, the third target cloud layer mask is used for simulating the occlusion of the input illumination intensity when the virtual cloud layer model is a multi-layer cloud, and the fourth target cloud layer mask is used for blending the boundary between the virtual cloud layer model and an associated virtual model, the associated virtual model being a virtual model occluded by the virtual cloud layer model.
Optionally, the rendering module is further configured to: in response to the virtual cloud layer model being a single-layer cloud, blend the first target cloud layer mask, the second target cloud layer mask and the fourth target cloud layer mask based on a first illumination intensity to obtain the first cloud layer rendering result, wherein the first illumination intensity is the original illumination intensity.
Optionally, the rendering module is further configured to: in response to the virtual cloud layer model being a multi-layer cloud, blend the first, second and fourth target cloud layer masks based on a first illumination intensity to obtain a first processing result, and blend the same masks based on a second illumination intensity to obtain a second processing result, wherein the first processing result is the rendering result of the top cloud layer (the highest layer) of the multi-layer cloud, the second processing result is the rendering result of the layers other than the top layer, the first illumination intensity is the original illumination intensity, and the second illumination intensity is determined by the first illumination intensity and the third target cloud layer mask; and determine the first cloud layer rendering result from the first processing result and the second processing result.
Optionally, the cloud layer rendering apparatus further includes an adjustment module, configured to: generate a fifth target cloud layer mask from the light source position, wherein the gray value of the fifth target cloud layer mask is determined by the distance to the light source position; calculate a third illumination intensity from the first illumination intensity, the current illumination color and the fifth target cloud layer mask, wherein the first illumination intensity is the original illumination intensity; and adjust the first cloud layer rendering result to a second cloud layer rendering result based on the third illumination intensity.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, in which a computer program is stored, where the computer program is configured to execute the cloud rendering method in any one of the above when the computer program is run.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, comprising a memory in which a computer program is stored and a processor arranged to run the computer program to perform any one of the above cloud layer rendering methods.
In at least some embodiments of the present invention, texture information of a plurality of channels in a target map is acquired, where the target map is used to render a virtual cloud layer model; a plurality of target cloud layer masks are generated based on the texture information; and the plurality of target cloud layer masks are blended to obtain a first cloud layer rendering result corresponding to the virtual cloud layer model. This achieves the purpose of generating a cloud rendering result by blending multiple cloud layer masks derived from the map texture information, thereby achieving the technical effect of rendering a more realistic cloud effect at lower cost, and in turn solving the technical problems of the related art, namely the high rendering cost and the poor virtual-reality effect of the rendering results.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a block diagram of a hardware structure of a mobile terminal of a cloud layer rendering method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a cloud rendering method according to an embodiment of the invention;
FIG. 3 is a schematic diagram of an alternative mask acquisition process according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an alternative target cloud mask according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of another alternative target cloud mask according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of another alternative target cloud mask according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of another alternative target cloud mask according to an embodiment of the present invention;
FIG. 8 is a diagram of an alternative cloud rendering result, according to an embodiment of the invention;
fig. 9 is a block diagram of a cloud layer rendering apparatus according to an embodiment of the present invention;
FIG. 10 is a block diagram of an alternative cloud rendering apparatus according to an embodiment of the present invention;
fig. 11 is a schematic diagram of an electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with one embodiment of the present invention, an embodiment of a cloud layer rendering method is provided. It is noted that the steps illustrated in the flowcharts of the drawings may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different from that described herein.
The cloud layer rendering method in one embodiment of the invention can be run on a terminal device or a server. The terminal device may be a local terminal device. When the cloud layer rendering method is operated on a server, the method can be implemented and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and a client device.
In an optional embodiment, various cloud applications may run on the cloud interaction system, for example cloud games. Taking a cloud game as an example, a cloud game is a game mode based on cloud computing. In the running mode of a cloud game, the body that runs the game program is separated from the body that presents the game picture: the storage and execution of the cloud layer rendering method are completed on a cloud game server, while the client device receives and sends data and presents the game picture. For example, the client device may be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer or a palm computer, while the terminal device performing the information processing is the cloud game server. When playing, the player operates the client device to send an operation instruction to the cloud game server; the cloud game server runs the game according to the operation instruction, encodes and compresses data such as game pictures, returns the data to the client device through the network, and finally the client device decodes the data and outputs the game pictures.
In an alternative embodiment, the terminal device may be a local terminal device. Taking a game as an example, the local terminal device stores the game program and is used for presenting the game screen. The local terminal device interacts with the player through a graphical user interface; that is, the game program is conventionally downloaded, installed and run on the electronic device. The local terminal device may provide the graphical user interface to the player in a variety of ways; for example, the interface may be rendered on the display screen of the terminal or provided to the player by holographic projection. For example, the local terminal device may include a display screen for presenting a graphical user interface including a game screen, and a processor for running the game, generating the graphical user interface and controlling its display on the display screen.
In a possible implementation manner, an embodiment of the present invention provides a cloud layer rendering method, where a graphical user interface is provided by a terminal device, where the terminal device may be the aforementioned local terminal device, or the aforementioned client device in a cloud interaction system.
Taking a mobile terminal running on a local terminal device as an example, the mobile terminal may be a smart phone (e.g., an Android phone or an iOS phone), a tablet computer, a palm computer, a Mobile Internet Device (MID), a PAD, a game console, and the like. Fig. 1 is a block diagram of a hardware structure of a mobile terminal for a cloud layer rendering method according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more processors 102 (only one is shown in fig. 1; the processors 102 may include, but are not limited to, processing devices such as Central Processing Units (CPUs), Graphics Processing Units (GPUs), Digital Signal Processing (DSP) chips, Microcontroller Units (MCUs), programmable logic devices (FPGAs), Neural Network Processors (NPUs), Tensor Processors (TPUs), Artificial Intelligence (AI) processors, and the like) and a memory 104 for storing data. Optionally, the mobile terminal may further include a transmission device 106, an input/output device 108 and a display device 110 for communication functions. It will be understood by those of ordinary skill in the art that the structure shown in fig. 1 is only an illustration and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be used to store a computer program, for example, a software program and a module of an application software, such as a computer program corresponding to the cloud layer rendering method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the cloud layer rendering method. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet via wireless.
The inputs of the input/output device 108 may come from a plurality of Human Interface Devices (HIDs), for example: a keyboard and mouse, a gamepad, or other special game controllers (such as a steering wheel, fishing rod, dance mat or remote controller). Some human interface devices may provide output functions in addition to input functions, such as force feedback and vibration of a gamepad, or audio output of a controller.
The display device 110 may be, for example, a head-up display (HUD), a touch screen type Liquid Crystal Display (LCD), and a touch display (also referred to as a "touch screen" or "touch display screen"). The liquid crystal display may enable a user to interact with a user interface of the mobile terminal. In some embodiments, the mobile terminal has a Graphical User Interface (GUI) with which a user can interact by touching finger contacts and/or gestures on a touch-sensitive surface, where the human-machine interaction function optionally includes the following interactions: executable instructions for creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, emailing, talking interfaces, playing digital video, playing digital music, and/or web browsing, etc., and for performing the above-described human-computer interaction functions, are configured/stored in one or more processor-executable computer program products or readable storage media.
In this embodiment, a cloud layer rendering method operating on the mobile terminal is provided, and fig. 2 is a flowchart of a cloud layer rendering method according to an embodiment of the present invention, as shown in fig. 2, the method includes the following steps:
step S21, acquiring texture information of a plurality of channels in a target map, wherein the target map is used for rendering a virtual cloud layer model;
the target map may be a preset map for rendering the virtual cloud layer model, and in an actual application scene, the target map may be a noise resource map previously made by an artist. The object map may contain multiple channels (e.g., RGB channels), and texture information may be stored in each of the multiple channels.
Step S22, generating a plurality of target cloud layer masks based on the texture information;
based on texture information of multiple channels in the target map, the multiple target cloud masks (masks) may be generated, which may be used to render a virtual cloud model.
Specifically, the generating of the plurality of target cloud layer masks based on the texture information further includes other method steps, which may refer to the further description of the embodiments of the present invention, and will not be described herein again.
Step S23, blending the plurality of target cloud layer masks to obtain a first cloud layer rendering result corresponding to the virtual cloud layer model.
Blending the target cloud layer masks may involve multiple blend calculations and/or multiple result-processing passes, where the input of each blend calculation may be some or all of the target cloud layer masks, and the input of each result-processing pass may be the output of a blend calculation.
Blending the target cloud layer masks yields the first cloud layer rendering result corresponding to the virtual cloud layer model. In a practical application scene, the first cloud layer rendering result can show a convincing virtual-reality cloud effect.
Specifically, the step of performing blending processing on the multiple target cloud layer masks to obtain the first cloud layer rendering result corresponding to the virtual cloud layer model further includes other method steps, which may refer to the following further description of the embodiment of the present invention, and is not repeated herein.
In at least some embodiments of the present invention, texture information of a plurality of channels in a target map is acquired, where the target map is used to render a virtual cloud layer model; a plurality of target cloud layer masks are generated based on the texture information; and the masks are blended to obtain a first cloud layer rendering result corresponding to the virtual cloud layer model. This generates the cloud rendering result by blending multiple cloud layer masks derived from the map texture information, achieving the technical effect of rendering a more realistic cloud effect at lower cost and thereby solving the technical problems of the related art, namely the high rendering cost and the poor virtual-reality effect of the rendering results.
The above-described method of embodiments of the present invention is further described below.
Optionally, in the cloud layer rendering method, the plurality of channels comprises a first color channel, a second color channel and a third color channel, and the texture information comprises a first noise texture, a second noise texture and a bottom shadow form texture corresponding to the virtual cloud layer model, wherein the first color channel is used for storing the first noise texture, the second color channel is used for storing the second noise texture, and the third color channel is used for storing the bottom shadow form texture.
The plurality of channels of the target map may include a first color channel (e.g., the R channel of the RGB channels), a second color channel (e.g., the G channel) and a third color channel (e.g., the B channel).
The texture information may include the first noise texture, the second noise texture and the bottom shadow form texture corresponding to the virtual cloud layer model. The first and second noise textures are different noise textures; using two different noise textures enriches the cloud shapes and avoids clouds of near-identical form. The bottom shadow form texture is used to render the shadow effect of the cloud layer and increases the sense of volume of the cloud model, so as to enhance the virtual reality.
For example, when rendering the Cloud effect Cloud01 in a virtual game scene A, a Perlin Noise algorithm, a frame Noise algorithm and a Voronoi Noise algorithm may be used to generate two different noise textures, denoted Noise01 and Noise02; the bottom shadow form texture Shadow01, previously made by an artist, also needs to be obtained.
Generating the resource texture Tex from the noise texture Noise01, the noise texture Noise02 and the bottom shadow form texture Shadow01 may include: storing Noise01 into the R channel of the resource texture Tex; storing Noise02 into the G channel of Tex; and storing the bottom shadow form texture Shadow01 into the B channel of Tex.
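To make the channel-packing step concrete, the following is a minimal NumPy sketch (the patent targets game-engine material resources, so this Python code and the function name pack_cloud_texture are illustrative assumptions rather than the patented implementation):

    import numpy as np

    def pack_cloud_texture(noise01, noise02, shadow01):
        """Pack two grayscale noise maps and a bottom shadow form map
        into the R, G and B channels of one RGB texture.
        All inputs are HxW float arrays with values in [0, 1]."""
        assert noise01.shape == noise02.shape == shadow01.shape
        # R = Noise01, G = Noise02, B = Shadow01, as described above.
        return np.stack([noise01, noise02, shadow01], axis=-1).astype(np.float32)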
Optionally, in step S22, generating a plurality of target cloud layer masks based on the texture information may include performing the steps of:
step S221, generating a plurality of initial cloud layer masks based on the texture information;
step S222, cloud layer mask calculation is performed on the plurality of initial cloud layer masks to obtain a plurality of target cloud layer masks.
Still taking rendering the Cloud effect Cloud01 in the virtual game scene A as an example, a plurality of initial cloud layer masks may be generated based on the resource texture Tex, and a plurality of target cloud layer masks may then be generated through cloud layer mask calculation.
Optionally, in step S221, the plurality of initial cloud layer masks includes a first initial cloud layer mask, a second initial cloud layer mask, a third initial cloud layer mask and a fourth initial cloud layer mask, and generating the plurality of initial cloud layer masks based on the texture information may include performing the following steps:
step S2211, generating a first initial cloud layer mask based on the bottom shadow morphological texture, wherein the first initial cloud layer mask is used for determining a bottom cloud layer of the virtual cloud layer model;
step S2212, a second initial cloud layer mask, a third initial cloud layer mask and a fourth initial cloud layer mask are generated based on the first noise texture and the second noise texture, where the second initial cloud layer mask is used to determine a main noise layer of the virtual cloud layer model, the third initial cloud layer mask is used to determine an additional noise layer of the virtual cloud layer model, and the fourth initial cloud layer mask is used to determine a light-receiving and coloring layer of the virtual cloud layer model.
When rendering the Cloud effect Cloud01 in the virtual game scene A, the following four initial cloud layer masks may be generated based on the resource texture Tex: an Overcast Swirl mask (equivalent to the first initial cloud layer mask), a Main Clouds mask (equivalent to the second initial cloud layer mask), a Variation Clouds mask (equivalent to the third initial cloud layer mask) and a Shading Clouds mask (equivalent to the fourth initial cloud layer mask).
Specifically, the Overcast Swirl mask is generated based on the bottom shadow form texture Shadow01 in the texture resource Tex. The Overcast Swirl mask is used to render the bottom cloud layer of the virtual cloud layer model.
Specifically, based on the noise textures Noise01 and Noise02 in the texture resource Tex, the Main Clouds mask, the Variation Clouds mask and the Shading Clouds mask are generated: the Main Clouds mask is the main cloud noise layer obtained by multiplying Noise01 and Noise02; the Variation Clouds mask is an additional cloud noise layer obtained by remapping the product of Noise01 and Noise02 into the value interval [0.5, 1]; and the Shading Clouds mask is the light-receiving coloring layer of the cloud, obtained by offsetting, multiplying and superimposing Noise01 and Noise02 according to the real-time illumination direction in the virtual game scene A.
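The construction of the four initial masks can be sketched in the same NumPy setting; the offset amount, the 0.5 weighting and the clipping below are assumptions standing in for the engine's offset, multiply and superimpose functions, which the text does not specify numerically:

    import numpy as np

    def remap(x, lo, hi):
        # Linearly remap values from [0, 1] to [lo, hi].
        return lo + x * (hi - lo)

    def initial_cloud_masks(tex, light_offset=(8, 8)):
        """Derive the four initial masks from the packed texture Tex;
        light_offset approximates the shift along the real-time light
        direction and is a hypothetical value."""
        noise01, noise02, shadow01 = tex[..., 0], tex[..., 1], tex[..., 2]
        overcast_swirl = shadow01                        # bottom cloud layer
        main_clouds = noise01 * noise02                  # main noise layer
        variation_clouds = remap(main_clouds, 0.5, 1.0)  # softer extra layer
        # Offset one noise layer toward the light, multiply, superimpose.
        shifted = np.roll(noise01, shift=light_offset, axis=(0, 1))
        shading_clouds = np.clip(0.5 * (shifted * noise02 + main_clouds), 0.0, 1.0)
        return overcast_swirl, main_clouds, variation_clouds, shading_clouds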
It should be noted that the value interval of the Variation Clouds mask is narrower than that of the Main Clouds mask, and it can be used to render the softer parts of the virtual cloud model. The Shading Clouds mask takes into account the influence of the real-time illumination direction (which can reflect weather changes) in the virtual game scene A on the cloud effect, and can be used to render the bottom self-shadowing of the virtual cloud layer model under real-time illumination.
It should be noted that the Main Clouds mask generated from Noise01 and Noise02 yields a more irregular noise pattern, which can be used to simulate the "fluffy floc" cloud effect of a real scene. In addition, other cloud layer masks used in practical application scenes can be further calculated from the Main Clouds mask.
It should be noted that the Shading Clouds mask is generated from Noise01 and Noise02 to simulate the cloud self-shadowing effect; that is, the shading relationship in the Shading Clouds mask is associated with the shape of the noise textures. The Overcast Swirl mask is used to render the bottom cloud layer of the virtual cloud layer model and may be optional in a practical application scene. Fig. 3 is a schematic diagram of an optional mask acquisition process according to an embodiment of the present invention. As shown in fig. 3, the texture resource Tex for rendering the Cloud effect Cloud01 (with Noise01, Noise02 and Shadow01 stored in its channels) may be obtained from preset material resources in a game engine (here, UE4).
Still as shown in fig. 3, in the game engine, the four initial cloud layer masks for rendering the Cloud effect Cloud01 may be generated through multiplication, superposition, offset and other functions: the Overcast Swirl mask, the Main Clouds mask, the Variation Clouds mask and the Shading Clouds mask.
It should be noted that, during the calculation of the four initial cloud layer masks, the textures in the channels of the texture resource Tex are independently controllable, which improves the diversity of the four generated masks.
Optionally, in step S222, the plurality of target cloud layer masks comprises a first target cloud layer mask, a second target cloud layer mask, a third target cloud layer mask and a fourth target cloud layer mask, and performing cloud layer mask calculation on the plurality of initial cloud layer masks to obtain the plurality of target cloud layer masks includes the following steps:
step S2221, cloud layer mask calculation is carried out on the first initial cloud layer mask, the second initial cloud layer mask, the third initial cloud layer mask and the fourth initial cloud layer mask to obtain a first target cloud layer mask, a second target cloud layer mask, a third target cloud layer mask and a fourth target cloud layer mask;
the first target cloud layer mask is used for simulating the effect of the virtual cloud layer model that the self-shadow changes along with real-time illumination, the second target cloud layer mask is used for rendering the transparency of the virtual cloud layer model, the third target cloud layer mask is used for simulating the shielding of input illumination intensity when the virtual cloud layer model is a multilayer cloud, the fourth target cloud layer mask is used for transiting the boundary between the virtual cloud layer model and the virtual association model, and the virtual association model is a virtual model shielded by the virtual cloud layer model.
Still taking rendering the Cloud effect Cloud01 in the virtual game scene A as an example, cloud layer mask calculation is performed on the four initial cloud layer masks to obtain four target cloud layer masks.
Fig. 4 is a schematic diagram of an alternative target cloud mask according to an embodiment of the invention. Based on the Shading Clouds mask, a cloud shading mask (as shown in fig. 4) simulating how the self-shadow of the virtual cloud model changes with real-time illumination can be calculated through an offset calculation function; it is denoted Mask1 (equivalent to the first target cloud layer mask).
Fig. 5 is a schematic diagram of another alternative target cloud mask according to an embodiment of the invention. Based on the Main Clouds mask obtained by multiplying Noise01 and Noise02, a cloud transparency mask (as shown in fig. 5) is determined, denoted Mask2 (equivalent to the second target cloud layer mask).
Fig. 6 is a schematic diagram of another alternative target cloud mask according to an embodiment of the invention. Based on the Variation Clouds mask, the transparency Mask2 is remapped to a narrower value range; for example, if the value range of Mask2 is [0, 1], its values are mapped to [0.5, 1]. This yields a cloud soft-shadow mask (as shown in fig. 6), denoted Mask3 (equivalent to the third target cloud layer mask).
Fig. 7 is a schematic diagram of another alternative target cloud mask according to an embodiment of the invention. Based on the Overcast Swirl mask, a blend mask (shown in fig. 7) that is not affected by the transparency mask and is inverse to it is calculated, denoted Mask4 (equivalent to the fourth target cloud layer mask).
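A hedged sketch of the Mask1 to Mask4 calculation in the same NumPy setting; the offset amount and the exact way Mask4 combines the inverse transparency with the Overcast Swirl layer are assumptions, since the text leaves the concrete blend calculation open:

    import numpy as np

    def target_cloud_masks(overcast_swirl, main_clouds, shading_clouds,
                           shadow_offset=(4, 4)):
        # Mask1: self-shadow following real-time light, via an offset of
        # the Shading Clouds mask (the offset amount is hypothetical).
        mask1 = np.roll(shading_clouds, shift=shadow_offset, axis=(0, 1))
        # Mask2: cloud transparency, taken from the Main Clouds layer.
        mask2 = main_clouds
        # Mask3: soft shadow, the transparency remapped into [0.5, 1],
        # mirroring the Variation Clouds remap described above.
        mask3 = 0.5 + 0.5 * mask2
        # Mask4: blend mask, inverse to the transparency and tied to the
        # Overcast Swirl bottom layer (the combination is assumed).
        mask4 = np.clip((1.0 - mask2) * overcast_swirl, 0.0, 1.0)
        return mask1, mask2, mask3, mask4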
It should be noted that the specific cloud layer mask calculation described above may be a blend calculation based on some or all of the Overcast Swirl, Main Clouds, Variation Clouds and Shading Clouds masks.
It should be noted that the above process of generating Mask1, Mask2, Mask3 and Mask4 from the Overcast Swirl, Main Clouds, Variation Clouds and Shading Clouds masks is one optional calculation process; it does not fix the correspondence between the initial masks and Mask1 to Mask4, and other calculation methods may be chosen in practical application scenes.
Optionally, in step S23, blending the plurality of target cloud layer masks to obtain the first cloud layer rendering result may include the following steps:
step S231, in response to the virtual cloud layer model being a single-layer cloud, performing blending processing on the first target cloud layer mask, the second target cloud layer mask, and the fourth target cloud layer mask based on a first illumination intensity to obtain a first cloud layer rendering result, where the first illumination intensity is an original illumination intensity.
The first illumination intensity is the original illumination intensity, related to the real-time illumination conditions of the scene when the virtual cloud layer model is created in the practical application scene.
Still taking rendering the Cloud effect Cloud01 in the virtual game scene A as an example, when the cloud model corresponding to Cloud01 is a single-layer cloud, Mask1, Mask2 and Mask4 among the target cloud layer masks are blended to obtain the corresponding single-layer cloud rendering result (equivalent to the first cloud layer rendering result).
Specifically, the single-layer cloud rendering result may be represented by the output cloud layer coloring MaskC1, and MaskC1 may be obtained by blending the transparency Mask2 and the cloud shading Mask1 with the blend Mask4 as the coefficient, as shown in the following formula (1):
MaskC1 = Mask4 × Mask2 + (1 - Mask4) × Mask1    formula (1)
In the above formula (1), the target cloud layer masks Mask1, Mask2 and Mask4 are associated with the original illumination intensity.
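Formula (1) is a linear interpolation between the shading and transparency masks. A one-line sketch follows; scaling by the input light intensity is one reading of the stated association of the masks with it, not something the formula itself spells out:

    def shade_cloud_layer(mask1, mask2, mask4, light_intensity):
        # Formula (1): Mask4 blends transparency (Mask2) with shading (Mask1).
        return (mask4 * mask2 + (1.0 - mask4) * mask1) * light_intensity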
Optionally, in step S23, the blending the plurality of target cloud layer masks to obtain the first cloud layer rendering result may include the following steps:
step S232, in response to the virtual cloud layer model being a multi-layer cloud, blending the first target cloud layer mask, the second target cloud layer mask and the fourth target cloud layer mask based on a first illumination intensity to obtain a first processing result, and blending the same three masks based on a second illumination intensity to obtain a second processing result, wherein the first processing result is the rendering result of the top cloud layer (the highest layer) of the multi-layer cloud, the second processing result is the rendering result of the layers other than the top layer, the first illumination intensity is the original illumination intensity, and the second illumination intensity is determined by the first illumination intensity and the third target cloud layer mask;
in step S233, a first cloud layer rendering result is determined using the first processing result and the second processing result.
Still taking rendering the Cloud effect Cloud01 in the virtual game scene A as an example, when the cloud model corresponding to Cloud01 is a two-layer cloud, the output cloud layer coloring MaskD1 (equivalent to the first processing result) of the higher cloud layer of the two may be calculated as shown in the following formula (2):
MaskD1 = Mask4 × Mask2 + (1 - Mask4) × Mask1    formula (2)
In the above formula (2), the target cloud layer masks Mask1, Mask2 and Mask4 are associated with the original illumination intensity.
In the two-layer cloud, when calculating the output cloud layer coloring MaskD2 of the lower cloud layer, the occlusion of the illumination intensity by the higher cloud layer needs to be considered. The original illumination intensity is multiplied by the cloud soft-shadow Mask3 to obtain the input illumination intensity for the lower cloud layer.
Based on the input illumination intensity corresponding to the lower cloud layer, the output cloud layer coloring MaskD2 (corresponding to the second processing result) of the lower cloud layer can be obtained by calculating according to the following formula (3):
MaskD2 = Mask4 × Mask2 + (1 - Mask4) × Mask1    formula (3)
In the above formula (3), the target cloud layer masks Mask1, Mask2 and Mask4 are associated with the input illumination intensity of the lower cloud layer.
Fig. 8 is a diagram illustrating an alternative cloud rendering result according to an embodiment of the present invention. The output cloud layer coloring MaskD1 of the higher cloud layer and the output cloud layer coloring MaskD2 of the lower cloud layer are superimposed to obtain the cloud rendering result of the double cloud layer (as shown in fig. 8).
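Continuing the sketch, the two-layer case reuses shade_cloud_layer from the previous block; the patent does not name the superposition operator, so a clamped sum is assumed here:

    import numpy as np

    def shade_double_cloud(mask1, mask2, mask3, mask4, original_intensity):
        # Formula (2): the top layer is lit by the original intensity.
        mask_d1 = shade_cloud_layer(mask1, mask2, mask4, original_intensity)
        # Formula (3): the lower layer receives light attenuated by the
        # soft-shadow mask of the layer above it.
        mask_d2 = shade_cloud_layer(mask1, mask2, mask4,
                                    original_intensity * mask3)
        # Superimpose the two shaded layers (the operator is assumed).
        return np.clip(mask_d1 + mask_d2, 0.0, 1.0)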
It is easy to notice that adjusting the light intensity through the cloud soft-shadow mask, so as to simulate the occlusion of lower cloud layers by higher ones in a multi-layer cloud, enhances the virtual-reality effect of the rendering result of the virtual cloud layer model.
Optionally, the cloud layer rendering method may further include the following steps:
step S24, generating a fifth target cloud layer mask by using the position of the light source, wherein the gray scale corresponding to the fifth target cloud layer mask is determined by the distance relative to the position of the light source;
step S25, calculating to obtain a third illumination intensity by adopting the first illumination intensity, the current illumination color and a fifth target cloud layer mask, wherein the first illumination intensity is the original illumination intensity;
in step S26, the first cloud layer rendering result is adjusted to a second cloud layer rendering result based on the third illumination intensity.
In a practical application scenario, the light source position may be the position of a light source in the virtual game scene (such as the sun, the moon, the stars or a luminous flying object). A circular gray-gradient mask (the farther from the center of the circle, the greater the gray value) is generated with the light source position as the center, and is denoted Mask5 (equivalent to the fifth target cloud layer mask). The current illumination color may be the color of the light source in the virtual game scene, and the first illumination intensity may be its original illumination intensity. The real-time illumination intensity of the light source (equivalent to the third illumination intensity) can then be calculated from the first illumination intensity, the current illumination color and the circular gray-gradient Mask5.
Based on the third illumination intensity, the first cloud layer rendering result may be adjusted to a second cloud layer rendering result. When the first cloud layer rendering result is the output cloud layer coloring of the virtual cloud layer model, the output cloud layer coloring may be multiplied by the third illumination intensity to obtain the adjusted output cloud layer coloring. Specifically, for example: in a certain virtual game scene, the center of a circular mask MaskY is determined by the position of the sun, and MaskY can be generated by controlling its radius, coloring intensity and edge gray threshold. The circular mask MaskY may be used to calculate the real-time illumination intensity of the sun in the virtual game scene. The cloud rendering result of the scene is adjusted by this real-time illumination intensity, and the adjusted result may visually appear as: clouds near the sun are brighter, and clouds farther from the sun are darker.
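A sketch of Mask5 and the intensity adjustment in the same NumPy setting; treating the third intensity as the original intensity falling off with distance from the light source and tinting with the light color is one plausible reading, which the text does not pin down:

    import numpy as np

    def radial_light_mask(height, width, light_px, radius):
        """Mask5: circular gray gradient centered on the light source;
        the gray value grows with distance from the center."""
        ys, xs = np.mgrid[0:height, 0:width]
        dist = np.hypot(ys - light_px[0], xs - light_px[1]) / radius
        return np.clip(dist, 0.0, 1.0)

    def adjust_by_light(cloud_shading, mask5, original_intensity, light_color):
        # Third intensity: bright near the light source, dimmer far away.
        intensity3 = original_intensity * (1.0 - mask5)            # (H, W)
        # Tint the (H, W) shading result with the RGB light color.
        return cloud_shading[..., None] * intensity3[..., None] * np.asarray(light_color)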
It is easy to note that the light source in the virtual game scene (including its position, illumination color, and so on) may change in real time, and the light source information may be obtained in real time from a real-time weather system associated with the virtual game scene, thereby enhancing the virtual display effect of cloud rendering. Through the circular gray-gradient mask, a gradient of illumination intensity can be displayed on the sky sphere of the virtual game scene, enhancing the sense of volume in the coloring of the virtual cloud layer model.
It should be noted that, with the method provided by the embodiments of the present invention, the cloud shadow effect can be strengthened by increasing the coloring parameter of the Overcast Swirl mask, which is suitable for rendering dense dark-cloud effects in a virtual game scene.
It is easy to note that, compared with the cloud rendering methods provided by the related art, the method of the embodiments of the present invention renders the virtual cloud layer model from a single texture resource with lower performance consumption, which makes it convenient to apply on mobile terminals.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method according to the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a cloud layer rendering apparatus is further provided, and the apparatus is used to implement the foregoing embodiments and preferred embodiments, and details of which have been already described are not described again. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 9 is a block diagram of a cloud layer rendering apparatus according to an embodiment of the present invention. As shown in fig. 9, the apparatus includes: an acquisition module 91, configured to acquire texture information of a plurality of channels in a target map, where the target map is used to render a virtual cloud layer model; a generation module 92, configured to generate a plurality of target cloud layer masks based on the texture information; and a rendering module 93, configured to blend the plurality of target cloud layer masks to obtain a first cloud layer rendering result corresponding to the virtual cloud layer model.
Optionally, the generation module 92 is further configured to: generate a plurality of initial cloud layer masks based on the texture information; and perform cloud layer mask calculation on the plurality of initial cloud layer masks to obtain the plurality of target cloud layer masks.
Optionally, the plurality of initial cloud layer masks comprises a first initial cloud layer mask, a second initial cloud layer mask, a third initial cloud layer mask and a fourth initial cloud layer mask, and the generation module 92 is further configured to: generate the first initial cloud layer mask based on the bottom shadow form texture, wherein the first initial cloud layer mask is used for determining a bottom cloud layer of the virtual cloud layer model; and generate the second initial cloud layer mask, the third initial cloud layer mask and the fourth initial cloud layer mask based on the first noise texture and the second noise texture, wherein the second initial cloud layer mask is used for determining a main noise layer of the virtual cloud layer model, the third initial cloud layer mask is used for determining an additional noise layer of the virtual cloud layer model, and the fourth initial cloud layer mask is used for determining a light-receiving coloring layer of the virtual cloud layer model.
Optionally, the plurality of target cloud masks comprises: first target cloud layer mask, second target cloud layer mask, third target cloud layer mask and fourth target cloud layer mask, above-mentioned generation module 92, still is used for: cloud layer mask calculation is carried out on the first initial cloud layer mask, the second initial cloud layer mask, the third initial cloud layer mask and the fourth initial cloud layer mask to obtain a first target cloud layer mask, a second target cloud layer mask, a third target cloud layer mask and a fourth target cloud layer mask; the first target cloud layer mask is used for simulating the effect of the virtual cloud layer model that the self-shadow changes along with real-time illumination, the second target cloud layer mask is used for rendering the transparency of the virtual cloud layer model, the third target cloud layer mask is used for simulating the shielding of the input illumination intensity when the virtual cloud layer model is a multi-layer cloud, the fourth target cloud layer mask is used for transiting the boundary between the virtual cloud layer model and the virtual association model, and the virtual association model is a virtual model shielded by the virtual cloud layer model.
Optionally, the rendering module 93 is further configured to: in response to the virtual cloud layer model being a single-layer cloud, perform blending processing on the first target cloud layer mask, the second target cloud layer mask and the fourth target cloud layer mask based on a first illumination intensity to obtain the first cloud layer rendering result, where the first illumination intensity is the original illumination intensity.
Optionally, the rendering module 93 is further configured to: in response to the virtual cloud layer model being a multi-layer cloud, perform blending processing on the first target cloud layer mask, the second target cloud layer mask and the fourth target cloud layer mask based on the first illumination intensity to obtain a first processing result, and perform blending processing on the first target cloud layer mask, the second target cloud layer mask and the fourth target cloud layer mask based on a second illumination intensity to obtain a second processing result, where the first processing result is a rendering result of the top-layer cloud with the highest height in the multi-layer cloud, the second processing result is a rendering result of the layers of cloud other than the top-layer cloud, the first illumination intensity is the original illumination intensity, and the second illumination intensity is determined by the first illumination intensity and the third target cloud layer mask; and determine the first cloud layer rendering result by using the first processing result and the second processing result.
Optionally, fig. 10 is a block diagram of an optional cloud layer rendering apparatus according to an embodiment of the present invention. As shown in fig. 10, in addition to all the modules shown in fig. 9, the apparatus includes: an adjusting module 94, configured to generate a fifth target cloud layer mask by using the light source position, where the gray level corresponding to the fifth target cloud layer mask is determined by the distance relative to the light source position; calculate a third illumination intensity by using the first illumination intensity, the current illumination color and the fifth target cloud layer mask, where the first illumination intensity is the original illumination intensity; and adjust the first cloud layer rendering result to a second cloud layer rendering result based on the third illumination intensity.
It should be noted that the above modules may be implemented by software or hardware; for the latter, this may be implemented in the following manner, but is not limited thereto: the modules are all located in the same processor; alternatively, the modules are located in different processors in any combination.
Optionally, in this embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing computer programs, such as a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Optionally, in this embodiment, the computer-readable storage medium may be located in any one of a group of computer terminals in a computer network, or in any one of a group of mobile terminals.
Alternatively, in the present embodiment, the above-mentioned computer-readable storage medium may be configured to store a computer program for executing the steps of:
acquiring texture information of a plurality of channels in a target map, wherein the target map is used for rendering a virtual cloud layer model; generating a plurality of target cloud layer masks based on the texture information; and mixing the target cloud layer masks to obtain a first cloud layer rendering result corresponding to the virtual cloud layer model.
Optionally, the plurality of channels includes a first color channel, a second color channel, and a third color channel, and the texture information includes a first noise texture, a second noise texture, and a bottom shadow morphological texture corresponding to the virtual cloud layer model, where the first color channel is used for storing the first noise texture, the second color channel is used for storing the second noise texture, and the third color channel is used for storing the bottom shadow morphological texture.
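For illustration, reading the three packed channels might look like the sketch below; the file name, the use of Pillow and NumPy, and the mapping of the first/second/third color channels to R/G/B are assumptions for the example:

import numpy as np
from PIL import Image

# Load the packed target map; each color channel stores one texture.
target_map = np.asarray(Image.open("cloud_target_map.png"), dtype=np.float32) / 255.0

first_noise_texture   = target_map[..., 0]  # first color channel  (R)
second_noise_texture  = target_map[..., 1]  # second color channel (G)
bottom_shadow_texture = target_map[..., 2]  # third color channel  (B)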
Optionally, generating a plurality of target cloud layer masks based on the texture information comprises: generating a plurality of initial cloud layer masks based on the texture information; and performing cloud layer mask calculation on the plurality of initial cloud layer masks to obtain a plurality of target cloud layer masks.
Optionally, the plurality of initial cloud layer masks includes a first initial cloud layer mask, a second initial cloud layer mask, a third initial cloud layer mask and a fourth initial cloud layer mask, and generating the plurality of initial cloud layer masks based on the texture information includes: generating the first initial cloud layer mask based on the bottom shadow morphological texture, wherein the first initial cloud layer mask is used for determining a bottom cloud layer of the virtual cloud layer model; and generating the second initial cloud layer mask, the third initial cloud layer mask and the fourth initial cloud layer mask based on the first noise texture and the second noise texture, wherein the second initial cloud layer mask is used for determining a main noise layer of the virtual cloud layer model, the third initial cloud layer mask is used for determining an additional noise layer of the virtual cloud layer model, and the fourth initial cloud layer mask is used for determining a light-receiving coloring layer of the virtual cloud layer model.
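A hedged sketch of this step follows. The patent only fixes which textures feed which initial mask, so the UV scrolling and the particular weighted combinations below are illustrative assumptions:

import numpy as np

def generate_initial_masks(noise1, noise2, bottom_shadow, scroll=(0, 0)):
    """Build the four initial cloud layer masks (combinations assumed)."""
    # First initial mask: the bottom cloud layer, taken directly from
    # the bottom shadow morphological texture.
    mask1 = bottom_shadow

    # Scroll the two noise textures so the cloud can drift over time.
    n1 = np.roll(noise1, scroll[0], axis=1)
    n2 = np.roll(noise2, scroll[1], axis=0)

    mask2 = 0.7 * n1 + 0.3 * n2            # main noise layer
    mask3 = np.clip(2.0 * n1 * n2, 0, 1)   # additional noise layer
    mask4 = 0.5 * (n1 + n2)                # light-receiving coloring layer
    return mask1, mask2, mask3, mask4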
Optionally, the plurality of target cloud layer masks includes a first target cloud layer mask, a second target cloud layer mask, a third target cloud layer mask and a fourth target cloud layer mask, and performing cloud layer mask calculation on the plurality of initial cloud layer masks to obtain the plurality of target cloud layer masks includes: performing cloud layer mask calculation on the first initial cloud layer mask, the second initial cloud layer mask, the third initial cloud layer mask and the fourth initial cloud layer mask to obtain the first target cloud layer mask, the second target cloud layer mask, the third target cloud layer mask and the fourth target cloud layer mask, wherein the first target cloud layer mask is used for simulating an effect that the self-shadow of the virtual cloud layer model changes with real-time illumination, the second target cloud layer mask is used for rendering the transparency of the virtual cloud layer model, the third target cloud layer mask is used for simulating the shielding of the input illumination intensity when the virtual cloud layer model is a multi-layer cloud, and the fourth target cloud layer mask is used for transitioning the boundary between the virtual cloud layer model and a virtual association model, the virtual association model being a virtual model shielded by the virtual cloud layer model.
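The mask calculation itself is not spelled out here, so the following is only one plausible reading: the self-shadow mask compares the density field with a copy offset toward the light, and the remaining masks are smoothstep remappings. The offsets, thresholds, and the smoothstep choice are all assumptions:

import numpy as np

def smoothstep(e0, e1, x):
    t = np.clip((x - e0) / (e1 - e0), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def compute_target_masks(mask1, mask2, mask3, mask4, light_offset=(4, 2)):
    """Derive the four target cloud layer masks (formulas assumed)."""
    # Self-shadow that follows real-time illumination: offset the main
    # noise layer along the light direction and compare densities.
    shifted = np.roll(np.roll(mask2, light_offset[0], axis=1),
                      light_offset[1], axis=0)
    target1 = np.clip(mask2 - shifted + 0.5, 0.0, 1.0)

    target2 = smoothstep(0.3, 0.7, mask1 * mask2)  # cloud transparency
    target3 = smoothstep(0.2, 0.9, mask3)          # light shielding (multi-layer)
    target4 = smoothstep(0.0, 0.4, mask4)          # soft boundary transition
    return target1, target2, target3, target4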
Optionally, blending the plurality of target cloud layer masks to obtain the first cloud layer rendering result includes: in response to the virtual cloud layer model being a single-layer cloud, performing blending processing on the first target cloud layer mask, the second target cloud layer mask and the fourth target cloud layer mask based on a first illumination intensity to obtain the first cloud layer rendering result, wherein the first illumination intensity is an original illumination intensity.
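For a single-layer cloud, the blend could then be as simple as the sketch below; treating the combination as per-pixel multiplies is an assumption:

def blend_single_layer(target1, target2, target4, first_intensity,
                       cloud_color=1.0):
    """Blend for a single-layer cloud under the original illumination."""
    rgb = cloud_color * first_intensity * target1  # self-shadowed lit cloud
    alpha = target2 * target4                      # transparency with soft edges
    return rgb, alpha                              # composite over the sky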
Optionally, blending the plurality of target cloud layer masks to obtain the first cloud layer rendering result includes: in response to the virtual cloud layer model being a multi-layer cloud, performing blending processing on the first target cloud layer mask, the second target cloud layer mask and the fourth target cloud layer mask based on a first illumination intensity to obtain a first processing result, and performing blending processing on the first target cloud layer mask, the second target cloud layer mask and the fourth target cloud layer mask based on a second illumination intensity to obtain a second processing result, wherein the first processing result is a rendering result of the top-layer cloud with the highest height in the multi-layer cloud, the second processing result is a rendering result of the layers of cloud other than the top-layer cloud, the first illumination intensity is an original illumination intensity, and the second illumination intensity is determined by the first illumination intensity and the third target cloud layer mask; and determining the first cloud layer rendering result by using the first processing result and the second processing result.
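Reusing blend_single_layer from the previous sketch, the multi-layer case might look as follows; modelling the second illumination intensity as the first intensity attenuated by the third target mask, and combining the two results with an "over" composite, are assumptions:

def blend_multi_layer(target1, target2, target3, target4, first_intensity):
    """Blend for a multi-layer cloud (attenuation model assumed)."""
    # Top-layer cloud: lit by the original (first) illumination intensity.
    top_rgb, top_a = blend_single_layer(target1, target2, target4,
                                        first_intensity)

    # Lower layers: the third target mask shields part of the input light.
    second_intensity = first_intensity * (1.0 - target3)
    low_rgb, low_a = blend_single_layer(target1, target2, target4,
                                        second_intensity)

    # Determine the first cloud layer rendering result from both results.
    rgb = top_rgb * top_a + low_rgb * low_a * (1.0 - top_a)
    alpha = top_a + low_a * (1.0 - top_a)
    return rgb, alpha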
Optionally, the cloud layer rendering method further includes: generating a fifth target cloud layer mask by using the light source position, wherein the gray level corresponding to the fifth target cloud layer mask is determined by the distance relative to the light source position; calculating a third illumination intensity by using the first illumination intensity, the current illumination color and the fifth target cloud layer mask, wherein the first illumination intensity is the original illumination intensity; and adjusting the first cloud layer rendering result to a second cloud layer rendering result based on the third illumination intensity.
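The light-source adjustment can be pictured as below; the radial falloff used for the fifth mask and the product used for the third illumination intensity are assumptions:

import numpy as np

def light_source_adjustment(first_intensity, light_color, light_pos_uv,
                            height, width, falloff=1.5):
    """Fifth target mask from the light position, plus the third intensity."""
    ys, xs = np.mgrid[0:height, 0:width]
    uv = np.stack([xs / width, ys / height], axis=-1)

    # Gray level decreases with distance from the light source position.
    dist = np.linalg.norm(uv - np.asarray(light_pos_uv), axis=-1)
    mask5 = np.exp(-falloff * dist)

    # Third intensity from the first intensity, the current light color,
    # and the fifth target cloud layer mask.
    third_intensity = first_intensity * np.asarray(light_color) * mask5[..., None]
    return mask5, third_intensity

The first cloud layer rendering result would then be re-lit with third_intensity to yield the second cloud layer rendering result.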
In at least some embodiments of the present invention, texture information of multiple channels in a target map is obtained, where the target map is used to render a virtual cloud layer model; a plurality of target cloud layer masks are generated based on the texture information; and the plurality of target cloud layer masks are blended to obtain a first cloud layer rendering result corresponding to the virtual cloud layer model. In this way, a cloud layer rendering result is generated by blending multiple cloud layer masks derived from the texture information of a single map, which achieves the technical effect of rendering a more realistic cloud layer effect at lower cost, and thereby solves the technical problems of the cloud layer rendering methods provided in the related art, namely high rendering cost and a poor virtual reality effect of the rendering result.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present invention can be embodied in the form of a software product, which can be stored in a computer-readable storage medium (such as a CD-ROM, a USB flash disk, or a removable hard disk) or on a network, and includes several instructions to make a computing device (which may be a personal computer, a server, a terminal device, or a network device) execute the method according to the embodiments of the present invention.
In an exemplary embodiment of the present invention, a program product capable of implementing the above-described method of this embodiment is stored on a computer-readable storage medium. In some possible implementations, various aspects of the embodiments of the present invention may also be implemented in the form of a program product including program code which, when the program product is run on a terminal device, causes the terminal device to perform the steps according to the various exemplary implementations of the present invention described in the "exemplary method" section above of this embodiment.
The program product for implementing the above method according to the embodiment of the present invention may employ a portable compact disc read only memory (CD-ROM) and include program codes, and may be run on a terminal device, such as a personal computer. However, the program product of the embodiments of the invention is not limited thereto, and in the embodiments of the invention, the computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product described above may employ any combination of one or more computer-readable media. The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that the program code embodied on the computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
acquiring texture information of a plurality of channels in a target map, wherein the target map is used for rendering a virtual cloud layer model; generating a plurality of target cloud layer masks based on the texture information; and mixing the target cloud layer masks to obtain a first cloud layer rendering result corresponding to the virtual cloud layer model.
Optionally, in this embodiment, the optional implementations of the foregoing steps and their beneficial effects are the same as those described above for the computer-readable storage medium, and details are not repeated here.
Fig. 11 is a schematic diagram of an electronic device according to an embodiment of the invention. As shown in fig. 11, the electronic device 1100 is only an example and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 11, the electronic device 1100 is embodied in the form of a general-purpose computing device. The components of the electronic device 1100 may include, but are not limited to: at least one processor 1110, at least one memory 1120, a bus 1130 connecting the various system components (including the memory 1120 and the processor 1110), and a display 1140.
The memory 1120 stores program code that may be executed by the processor 1110, so that the processor 1110 performs the steps according to the various exemplary embodiments of the present invention described in the method section above.
The memory 1120 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 11201 and/or a cache memory unit 11202, may further include a read-only memory unit (ROM) 11203, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
In some examples, memory 1120 may also include a program/utility 11204 having a set (at least one) of program modules 11205, such program modules 11205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment. The memory 1120 may further include memory remotely located from the processor 1110, and such remote memory may be connected to the electronic device 1100 through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Bus 1130 may be one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
Display 1140 may, for example, be a touch screen Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of electronic device 1100.
Optionally, the electronic device 1100 may also communicate with one or more external devices 1400 (e.g., a keyboard, a pointing device, or a Bluetooth device), with one or more devices that enable a user to interact with the electronic device 1100, and/or with any device (e.g., a router or a modem) that enables the electronic device 1100 to communicate with one or more other computing devices. Such communication can occur via an input/output (I/O) interface 1150. Also, the electronic device 1100 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 1160. As shown in fig. 11, the network adapter 1160 communicates with the other modules of the electronic device 1100 via the bus 1130. It should be appreciated that, although not shown in fig. 11, other hardware and/or software modules may be used in conjunction with the electronic device 1100, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
The electronic device 1100 may further include: a keyboard, a cursor control device (e.g., a mouse), an input/output interface (I/O interface), a network interface, a power source, and/or a camera.
It will be understood by those skilled in the art that the structure shown in fig. 11 is only an illustration and is not intended to limit the structure of the electronic device. For example, electronic device 1100 may also include more or fewer components than shown in FIG. 11, or have a different configuration than shown in FIG. 11. The memory 1120 can be used for storing a computer program and corresponding data, such as a computer program and corresponding data corresponding to the cloud layer rendering method in the embodiment of the present invention. The processor 1110 executes various functional applications and data processing by executing computer programs stored in the memory 1120, that is, implements the cloud layer rendering method described above.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present invention, it should be understood that the disclosed technical contents can be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the part of the technical solution of the present invention that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and refinements without departing from the principle of the present invention, and these improvements and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (11)

1. A cloud layer rendering method, comprising:
acquiring texture information of a plurality of channels in a target map, wherein the target map is used for rendering a virtual cloud layer model;
generating a plurality of target cloud layer masks based on the texture information;
and mixing the target cloud layer masks to obtain a first cloud layer rendering result corresponding to the virtual cloud layer model.
2. The cloud rendering method of claim 1, wherein the plurality of channels comprises: a first color channel, a second color channel, and a third color channel, and the texture information comprises: a first noise texture, a second noise texture, and a bottom shadow morphological texture corresponding to the virtual cloud layer model, wherein the first color channel is used for storing the first noise texture, the second color channel is used for storing the second noise texture, and the third color channel is used for storing the bottom shadow morphological texture.
3. The cloud rendering method of claim 2, wherein generating the plurality of target cloud masks based on the texture information comprises:
generating a plurality of initial cloud layer masks based on the texture information;
and performing cloud layer mask calculation on the plurality of initial cloud layer masks to obtain a plurality of target cloud layer masks.
4. The cloud rendering method of claim 3, wherein said plurality of initial cloud masks comprises: a first initial cloud layer mask, a second initial cloud layer mask, a third initial cloud layer mask, and a fourth initial cloud layer mask, the generating the plurality of initial cloud layer masks based on the texture information comprising:
generating the first initial cloud layer mask based on the bottom shadow morphological texture, wherein the first initial cloud layer mask is used to determine a bottom cloud layer of the virtual cloud layer model;
generating the second initial cloud layer mask, the third initial cloud layer mask and the fourth initial cloud layer mask based on the first noise texture and the second noise texture, wherein the second initial cloud layer mask is used for determining a main noise layer of the virtual cloud layer model, the third initial cloud layer mask is used for determining an additional noise layer of the virtual cloud layer model, and the fourth initial cloud layer mask is used for determining a light-receiving coloring layer of the virtual cloud layer model.
5. The cloud rendering method of claim 4, wherein said plurality of target cloud masks comprises:
a first target cloud layer mask, a second target cloud layer mask, a third target cloud layer mask and a fourth target cloud layer mask, and performing cloud layer mask calculation on the plurality of initial cloud layer masks to obtain the plurality of target cloud layer masks, wherein:
performing cloud layer mask calculation on the first initial cloud layer mask, the second initial cloud layer mask, the third initial cloud layer mask and the fourth initial cloud layer mask to obtain a first target cloud layer mask, a second target cloud layer mask, a third target cloud layer mask and a fourth target cloud layer mask;
the first target cloud layer mask is used for simulating an effect that the self-shadow of the virtual cloud layer model changes with real-time illumination, the second target cloud layer mask is used for rendering the transparency of the virtual cloud layer model, the third target cloud layer mask is used for simulating the shielding of the input illumination intensity when the virtual cloud layer model is a multi-layer cloud, and the fourth target cloud layer mask is used for transitioning the boundary between the virtual cloud layer model and a virtual association model, the virtual association model being a virtual model shielded by the virtual cloud layer model.
6. The cloud rendering method of claim 5, wherein blending the target cloud masks to obtain the first cloud rendering result comprises:
and in response to the virtual cloud layer model being a single-layer cloud, performing blending processing on the first target cloud layer mask, the second target cloud layer mask and the fourth target cloud layer mask based on a first illumination intensity to obtain the first cloud layer rendering result, wherein the first illumination intensity is an original illumination intensity.
7. The cloud rendering method of claim 5, wherein blending the target cloud masks to obtain the first cloud rendering result comprises:
in response to the virtual cloud layer model being a multi-layer cloud, performing blending processing on the first target cloud layer mask, the second target cloud layer mask and the fourth target cloud layer mask based on a first illumination intensity to obtain a first processing result, and performing blending processing on the first target cloud layer mask, the second target cloud layer mask and the fourth target cloud layer mask based on a second illumination intensity to obtain a second processing result, wherein the first processing result is a rendering result of a top-layer cloud with the highest height in the multi-layer cloud, the second processing result is a rendering result of the layers of cloud other than the top-layer cloud in the multi-layer cloud, the first illumination intensity is an original illumination intensity, and the second illumination intensity is determined by the first illumination intensity and the third target cloud layer mask;
and determining the first cloud layer rendering result by using the first processing result and the second processing result.
8. The cloud rendering method of claim 1, wherein the cloud rendering method further comprises:
generating a fifth target cloud layer mask by using the light source position, wherein the gray scale corresponding to the fifth target cloud layer mask is determined by the distance relative to the light source position;
calculating to obtain a third illumination intensity by adopting the first illumination intensity, the current illumination color and the fifth target cloud layer mask, wherein the first illumination intensity is the original illumination intensity;
adjusting the first cloud layer rendering result to a second cloud layer rendering result based on the third illumination intensity.
9. A cloud layer rendering apparatus, comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring texture information of a plurality of channels in a target map, and the target map is used for rendering a virtual cloud layer model;
a generation module to generate a plurality of target cloud layer masks based on the texture information;
and the rendering module is used for performing mixing processing on the target cloud layer masks to obtain a first cloud layer rendering result corresponding to the virtual cloud layer model.
10. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to execute the cloud rendering method of any of claims 1 to 8 when executed.
11. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and the processor is configured to execute the computer program to perform the cloud rendering method of any of claims 1 to 8.
CN202210689819.8A 2022-06-17 2022-06-17 Cloud layer rendering method and device, storage medium and electronic device Pending CN115131489A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210689819.8A CN115131489A (en) 2022-06-17 2022-06-17 Cloud layer rendering method and device, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210689819.8A CN115131489A (en) 2022-06-17 2022-06-17 Cloud layer rendering method and device, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN115131489A true CN115131489A (en) 2022-09-30

Family

ID=83377782

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210689819.8A Pending CN115131489A (en) 2022-06-17 2022-06-17 Cloud layer rendering method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN115131489A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024103849A1 (en) * 2022-11-15 2024-05-23 网易(杭州)网络有限公司 Method and device for displaying three-dimensional model of game character, and electronic device

Similar Documents

Publication Publication Date Title
CN110196746B (en) Interactive interface rendering method and device, electronic equipment and storage medium
CN110384924A (en) The display control method of virtual objects, device, medium and equipment in scene of game
CN109448137A (en) Exchange method, interactive device, electronic equipment and storage medium
US9483873B2 (en) Easy selection threshold
CN115375822A (en) Cloud model rendering method and device, storage medium and electronic device
CN115131489A (en) Cloud layer rendering method and device, storage medium and electronic device
CN114820915A (en) Method and device for rendering shading light, storage medium and electronic device
CN109493428B (en) Optimization method and device for three-dimensional virtual model, electronic equipment and storage medium
CN117252982A (en) Material attribute generation method and device for virtual three-dimensional model and storage medium
CN115375797A (en) Layer processing method and device, storage medium and electronic device
CN115501590A (en) Display method, display device, electronic equipment and storage medium
CN115713589A (en) Image generation method and device for virtual building group, storage medium and electronic device
CN114299203A (en) Processing method and device of virtual model
CN114742970A (en) Processing method of virtual three-dimensional model, nonvolatile storage medium and electronic device
CN115089964A (en) Method and device for rendering virtual fog model, storage medium and electronic device
CN116112657B (en) Image processing method, image processing device, computer readable storage medium and electronic device
CN115375829A (en) Self-luminous rendering method and device of virtual model, storage medium and electronic device
CN116778047A (en) Method and device for generating aurora animation, storage medium and electronic device
CN114299207A (en) Virtual object rendering method and device, readable storage medium and electronic device
CN116452704A (en) Method and device for generating lens halation special effect, storage medium and electronic device
CN117482501A (en) Method and device for generating scene resource model, storage medium and electronic device
CN114299211A (en) Information processing method, information processing apparatus, readable storage medium, and electronic apparatus
CN117036573A (en) Method and device for rendering virtual model, storage medium and electronic equipment
CN114419214A (en) Method and device for switching weather types in game, storage medium and electronic device
CN117745892A (en) Particle generation performance control method, device, storage medium, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination