CN118079373A - Model rendering method and device, storage medium and electronic device - Google Patents


Info

Publication number
CN118079373A
Authority
CN
China
Prior art keywords
parameter
preset
color
target model
sampling result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410252155.8A
Other languages
Chinese (zh)
Inventor
赵阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202410252155.8A
Publication of CN118079373A
Legal status: Pending

Landscapes

  • Image Generation (AREA)

Abstract

The application discloses a model rendering method and device, a storage medium, and an electronic device. A graphical user interface is provided through a terminal device, and the content displayed on the graphical user interface includes a virtual scene. The method includes: sampling a preset map according to preset texture coordinates to obtain color parameters of a target model; converting the scene depth of the target model under the current view angle to obtain the transparency of the target model, where the current view angle represents the view angle corresponding to the virtual camera in the virtual scene, and the scene depth represents the distance information from the pixel points in the target model to the virtual camera; and rendering and outputting the target model according to the color parameters and the transparency. The application solves the technical problems of high performance consumption and poor rendering quality of the fog-of-war effect in games.

Description

Model rendering method and device, storage medium and electronic device
Technical Field
The present disclosure relates to the field of games, and in particular, to a model rendering method and apparatus, a storage medium, and an electronic device.
Background
At present, when making a game with a first-person or third-person view angle, in some cases, such as a special scene event or being hit by an enemy skill, a fog-of-war effect is often used to obstruct the player's view and convey the feeling of blocked vision. However, in the related art, this is usually computed with a post-process material combined with the available world position information, so the fog effect in the game consumes considerable performance and the rendering effect is poor.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
At least some embodiments of the present disclosure provide a model rendering method and apparatus, a storage medium, and an electronic device, so as to at least solve the technical problems of high performance consumption and poor rendering effect of the fog effect in games.
According to one embodiment of the present disclosure, there is provided a model rendering method, providing a graphical user interface through a terminal device, wherein content displayed on the graphical user interface includes a virtual scene, the method including: sampling a preset map according to preset texture coordinates to obtain color parameters of a target model; converting the scene depth of the target model under the current view angle to obtain the transparency of the target model, wherein the current view angle is used for representing the view angle corresponding to the virtual camera in the virtual scene, and the scene depth is used for representing the distance information from the pixel point in the target model to the virtual camera; and rendering and outputting the target model according to the color parameters and the transparency.
According to one embodiment of the present disclosure, there is also provided a model rendering apparatus for providing a graphical user interface through a terminal device, wherein content displayed on the graphical user interface includes a virtual scene, the apparatus including: the sampling module is used for sampling the preset mapping according to the preset texture coordinates to obtain color parameters of the target model; the conversion module is used for converting the scene depth of the target model under the current view angle to obtain the transparency of the target model, wherein the current view angle is used for representing the view angle corresponding to the virtual camera in the virtual scene, and the scene depth is used for representing the distance information from the pixel point in the target model to the virtual camera; and the rendering module is used for rendering and outputting the target model according to the color parameters and the transparency.
According to one embodiment of the present disclosure, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to perform the model rendering method in any one of the above when run.
There is further provided, in accordance with an embodiment of the present disclosure, an electronic device including a memory having a computer program stored therein, and a processor configured to run the computer program to perform the model rendering method of any one of the above.
In at least some embodiments of the present disclosure, the color parameters of a target model are obtained by sampling a preset map according to preset texture coordinates; the scene depth of the target model under the current view angle is converted to obtain the transparency of the target model, where the current view angle represents the view angle corresponding to the virtual camera in the virtual scene, and the scene depth represents the distance information from the pixel points in the target model to the virtual camera; the target model is then rendered and output according to the color parameters and the transparency. It is easy to notice that the target model covering the front of the virtual camera is rendered based on the color parameters obtained by sampling the preset map and the transparency obtained by converting the scene depth under the current view angle, so as to obtain the fog effect. This reduces the performance consumption of the fog effect and makes the fog change with the player's view angle, simulating the real-world situation in which nearby areas are clearly visible while distant areas are obscured by fog and cannot be seen clearly. Performance consumption is reduced, the rendering fidelity of the in-game fog effect is improved, and the sense of realism of the game is further improved, thereby solving the technical problems of high performance consumption and poor rendering effect of the fog effect in games.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate embodiments of the present disclosure and, together with the description, serve to explain the present disclosure. In the drawings:
Fig. 1 is a hardware configuration block diagram of a mobile terminal of a model rendering method according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a model rendering method according to one embodiment of the present disclosure;
FIG. 3 is an effect diagram of scene depth according to one embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a screen UV based arc masking algorithm according to one embodiment of the present disclosure;
FIG. 5 is a schematic diagram of screen UV information according to one embodiment of the present disclosure;
FIG. 6 is a schematic diagram of screen UV information corresponding to a first coordinate difference value according to one embodiment of the present disclosure;
FIG. 7 is a schematic diagram of screen UV information corresponding to a second coordinate difference value according to one embodiment of the present disclosure;
FIG. 8 is a schematic diagram of radius length correspondence of a circular arc in accordance with one embodiment of the present disclosure;
FIG. 9 is a schematic diagram of screen UV information for calculating a complete root according to one embodiment of the present disclosure;
FIG. 10 is a schematic illustration of a fog-of-war effect without the arc masking algorithm in accordance with one embodiment of the present disclosure;
FIG. 11 is a schematic illustration of a fog-of-war effect with the arc masking algorithm in accordance with one embodiment of the present disclosure;
FIG. 12 is a schematic illustration of the depth information processing of a fog-of-war material in accordance with one embodiment of the present disclosure;
FIG. 13 is a schematic illustration of the color information processing of a fog-of-war material in accordance with one embodiment of the present disclosure;
FIG. 14 is a schematic diagram of controlling the map UV tiling, UV offset, and UV displacement speed in accordance with one embodiment of the present disclosure;
FIG. 15 is a schematic diagram of controlling the map UV tiling according to one embodiment of the present disclosure;
FIG. 16 is a schematic diagram of controlling the map UV displacement speed according to one embodiment of the present disclosure;
FIG. 17 is a schematic diagram of creating a blueprint for the fog effect in accordance with one embodiment of the present disclosure;
FIG. 18 is a schematic diagram of creating a dynamic material for the fog effect in a blueprint according to one embodiment of the present disclosure;
FIG. 19 is a block diagram of a model rendering device according to one embodiment of the present disclosure;
fig. 20 is a schematic diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
In order that those skilled in the art will better understand the present disclosure, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by one of ordinary skill in the art based on the embodiments in this disclosure without inventive effort shall fall within the scope of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some of the terms or terminology appearing in the description of the embodiments of the application are explained as follows:
UE4: the illusion engine, a game engine full scale Unreal, UE4 for short, is commonly used for game making.
Game engine: a game engine refers to the core components of certain pre-written, editable computer game systems or interactive real-time graphics applications. These systems provide game designers with the various tools required to write games, allowing them to make game programs easily and quickly without starting from scratch. Game engines commonly used in the market currently include Unity and UE4.
UE4 blueprint: the blueprint of the UE4 is a visual script programming tool for creating and controlling the logic and functions of the game. Blueprints use nodes and connections to represent code logic without writing complex program code, allowing developers to create various functions of a game through simple drag and connect operations.
Fog of war: a design approach that increases combat strategy by limiting the player's vision and information acquisition. It makes the player more engaged and challenged, and increases the depth and complexity of the game.
UV: generally indicated by TexCoord in the engine. Texture coordinates generally have two axes, U and V, and are therefore referred to as UV coordinates. U represents the distribution along the horizontal coordinate and V represents the distribution along the vertical coordinate. In three-dimensional modeling, "UV" can be understood as the "skin" of a three-dimensional model, which is unfolded onto a two-dimensional plane, painted, and then applied to the object. The colors of an image can be divided into RGB, an abbreviation for red, green, and blue, and a colored image can be divided into the three RGB channels representing the proportions of red, green, and blue. In the engine, a UV map has only the red and green channels: U is the R channel of the UV map, and V is the G channel of the UV map.
Buffer of the UE4 engine: in UE4, a Buffer is used for storing and processing various data in the graphics rendering process. Each Buffer contains a specific type of information, such as depth, normal, or color, used for controlling and rendering objects and effects in the scene.
Depth Buffer: stores the distance information from each pixel in the scene to the camera, used for occlusion and depth testing. Normal Buffer: stores the normal information of each pixel, used for computing illumination and shading effects. Color Buffer: stores the final pixel color, used to output the final rendering result.
Material function: a material function is equivalent to a "fragment" of a material that can be saved as a module and reused across multiple materials.
Floating point value: float floating point value, one-dimensional parameter.
Translucent material: Translucent is a semi-transparent material type used for rendering translucent objects. It allows light to partially penetrate the object and uses different techniques to simulate the propagation of light. It is somewhat more complex in the rendering pipeline than the Opaque material type and may have some impact on performance.
Input: an input node in a material function exposes a parameter so that different parameter values can be supplied to it externally.
Output: an output node in a material function exposes a parameter so that it can be output externally to different output interfaces.
Append: this expression combines channels, creating a vector with more channels than the originals. For example, two separate Constant values can be appended into a two-channel Constant2Vector value. This operation is suitable for rearranging channels within a single texture or combining multiple grayscale textures into one RGB color texture.
Time: a time node to add elapsed time to a material (e.g., panner (translation), cosine, or other time-dependent operations).
In one possible implementation, after careful practice and study in the field of in-game fog-of-war production described in the background, the inventor identified the technical problems of high performance consumption and poor rendering effect of the in-game fog effect. Based on these technical problems, an embodiment of the disclosure provides a model rendering method. The adopted technical concept is as follows: the color parameters of a target model are obtained by sampling a preset map, the transparency is obtained by converting the scene depth under the current view angle, and the target model covering the front of the virtual camera is rendered based on the color parameters and the transparency to obtain the fog effect. This reduces the performance consumption of the fog effect, makes the fog change with the player's view angle, and simulates the real-world situation in which nearby areas are clearly visible while distant areas are obscured by fog and cannot be seen clearly. Performance consumption is reduced, the rendering fidelity of the in-game fog effect is improved, and the realism of the game's fog-of-war effect is increased, thereby solving the technical problems of high performance consumption and poor rendering effect of the in-game fog effect.
The model rendering method in one embodiment of the present disclosure may be run on a local terminal device or a server. When the model rendering method is run on a server, the method can be implemented and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and the client device.
In an alternative embodiment, various cloud applications may run under the cloud interaction system, for example, cloud games. Taking a cloud game as an example, a cloud game refers to a game mode based on cloud computing. In the running mode of a cloud game, the body that runs the game program is separated from the body that presents the game picture: the storage and running of the model rendering method are completed on a cloud game server, while the client device is used for receiving and sending data and presenting the game picture. For example, the client device may be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer, or a palmtop computer, while the cloud game server that performs the information processing is in the cloud. When playing the game, the player operates the client device to send an operation instruction to the cloud game server; the cloud game server runs the game according to the operation instruction, encodes and compresses data such as the game pictures, and returns the data to the client device through the network; finally, the client device decodes the data and outputs the game pictures.
In an alternative embodiment, taking a game as an example, the local terminal device stores the game program and is used to present the game screen. The local terminal device interacts with the player through the graphical user interface; that is, the game program is conventionally downloaded, installed, and run on the electronic device. The local terminal device may provide the graphical user interface to the player in a variety of ways; for example, it may be rendered and displayed on a display screen of the terminal, or provided to the player by holographic projection.
The local terminal device may be, for example, a mobile terminal, a computer terminal or similar computing device, and may include a display screen for presenting a graphical user interface including game visuals, and a processor for running the game, generating the graphical user interface, and controlling the display of the graphical user interface on the display screen.
Taking the mobile terminal as an example, the mobile terminal can be a smart phone, a tablet computer, a palm computer, a mobile internet device, a PAD, a game machine and other terminal devices. Fig. 1 is a hardware configuration block diagram of a mobile terminal of a model rendering method according to an embodiment of the present disclosure. As shown in fig. 1, a mobile terminal may include one or more processors 102 (only one shown in fig. 1) and memory 104 for storing data. Optionally, the mobile terminal may further include a transmission device 106, an input/output device 108, and a display device 110.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and not limiting of the structure of the mobile terminal described above. For example, the mobile terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
According to one embodiment of the present disclosure, an embodiment of a model rendering method is provided. It should be noted that the steps shown in the flowcharts of the figures may be performed in a computer system, such as one executing a set of computer-executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that given herein.
In a possible implementation manner, the embodiment of the disclosure provides a model rendering method in which a graphical user interface is provided through a terminal device; the terminal device may be the aforementioned local terminal device or the aforementioned client device in the cloud interaction system. Fig. 2 is a flowchart of a model rendering method according to one embodiment of the present disclosure. A graphical user interface is provided through a terminal device, and the content displayed on the graphical user interface includes a virtual scene. As shown in fig. 2, the method includes the following steps:
step S202, sampling the preset map according to the preset texture coordinates to obtain color parameters of the target model.
Specifically, the preset texture coordinates may be used to represent texture coordinates, produced by a preset material function, that control the UV information used to sample the map.
The preset map may be used to represent the map in the map sampler. Generally, different fog-effect maps can be drawn in advance by an artist as required; the fog texture, shape, edges, and other information in the fog-effect map are not specifically limited here and can be drawn according to the actual situation.
The target model described above may be used to represent a square box or a flat patch to be rendered.
The color parameter may be used to represent the final emissive color of the fog obtained by performing color information processing on the preset map. In general, it may be green, yellow, or the like; the self-luminous color is not specifically limited here.
In an alternative embodiment, during rendering of the fog effect, the preset map needs to be sampled according to the preset texture coordinates to ensure that the map is correctly mapped onto the model; that is, the color parameters of the target model can be obtained by performing color information processing on the preset map.
Step S204, converting the scene depth of the target model under the current view angle to obtain the transparency of the target model, wherein the current view angle is used for representing the view angle corresponding to the virtual camera in the virtual scene, and the scene depth is used for representing the distance information from the pixel point in the target model to the virtual camera.
Specifically, the above-mentioned scene depth may be used to represent the distance between each pixel point in the target model and the virtual camera, and by calculating the depth value of each pixel point, the positional relationship of each object in the virtual scene under the view angle of the camera may be determined.
The transparency may be used to represent black-and-white information between 0 and 1, from near to far, obtained by converting the scene depth.
Generally, this black-and-white information is input into the transparency channel of the material, where black represents 0 (transparent) and white represents 1 (opaque), so that the near, black portion of the final target model is transparent, the far, white portion is opaque, and the middle, gray portion is a semi-transparent transition. The scene depth shown in fig. 3 can finally be obtained; fig. 3 is an effect diagram of scene depth according to one embodiment of the present disclosure.
In an alternative embodiment, in the process of rendering the fog effect, the scene depth of the virtual scene under the current view angle needs to be converted into black-and-white information between 0 and 1, from near to far, and the transparency of the target model is expressed through this black-and-white information between 0 and 1, as in the sketch below.
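As a rough illustration of this near-to-far mapping (not taken from the patent figures), the sketch below clamps the ratio of scene depth to a visible-distance value into the 0-1 range; the names scene_depth and visible_distance and the numeric values are illustrative assumptions.

```python
def depth_to_opacity(scene_depth: float, visible_distance: float) -> float:
    """Map a pixel's scene depth to a 0-1 opacity value.

    Near pixels (small depth) come out close to 0 (transparent, black),
    far pixels come out close to 1 (opaque, white). A minimal sketch
    assuming a simple divide-and-clamp; the actual material graph also
    applies an arc mask, as described later in the text.
    """
    ratio = scene_depth / max(visible_distance, 1e-6)  # avoid division by zero
    return min(max(ratio, 0.0), 1.0)                   # clamp to [0, 1]


# Example: with a visible distance of 1000 units
print(depth_to_opacity(100.0, 1000.0))   # 0.1  -> almost fully transparent
print(depth_to_opacity(1500.0, 1000.0))  # 1.0  -> fully opaque fog
```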
As shown in fig. 12, in the depth information processing box, the SceneDepth node corresponds to the scene depth described above, and the scene depth output by the SceneDepth node is used as the A input of the Divide node. Meanwhile, the distance parameter and the radian parameter are mixed based on the screen-UV arc masking algorithm. In another alternative embodiment, in the process of converting the scene depth of the virtual scene under the current view angle, the ratio between the scene depth and the mixed distance parameter needs to be calculated to obtain the transparency. The distance parameter corresponds to Distance in the figure and the radian parameter to Radian in the figure; the Multiply node performs the product operation, the Lerp node performs the mixing operation, and the Divide node performs the division operation. By converting the scene depth of the virtual scene under the current view angle, the final transparency, such as the transparency input of the M_EC_FogOfWar node, can be obtained.
Step S206, rendering and outputting the target model according to the color parameters and the transparency.
Specifically, in an alternative embodiment, after the color parameters and the transparency are obtained, the target model needs to be rendered based on them; that is, the square box or flat patch covering the front of the virtual camera is rendered, so as to obtain a fog effect with a self-luminous color and transparency.
In conclusion, the color parameters of the target model are obtained by sampling the preset map according to the preset texture coordinates; the scene depth of the target model under the current view angle is converted to obtain the transparency of the target model, where the current view angle represents the view angle corresponding to the virtual camera in the virtual scene and the scene depth represents the distance information from the pixel points in the target model to the virtual camera; and the target model is rendered and output according to the color parameters and the transparency. It is easy to notice that the target model covering the front of the virtual camera is rendered based on the color parameters obtained by sampling the preset map and the transparency obtained by converting the scene depth under the current view angle, so as to obtain the fog effect. This reduces the performance consumption of the fog effect, makes the fog change with the player's view angle, and simulates the real-world situation in which nearby areas are clearly visible while distant areas are obscured by fog and cannot be seen clearly. Performance consumption is reduced, the rendering fidelity of the in-game fog effect is improved, and the sense of realism of the game is further improved, thereby solving the technical problems of high performance consumption and poor rendering effect of the fog effect in games.
Optionally, converting the scene depth of the target model under the current view angle to obtain the transparency of the target model includes: acquiring the screen position of the target model in screen space to obtain screen texture coordinates; generating an arc mask based on the screen texture coordinates; mixing a distance parameter and a radian parameter based on the arc mask to obtain a mixed parameter, where the radian parameter is used to represent the radian of the arc mask and the distance parameter is used to scale down the scene depth; and obtaining the ratio of the scene depth to the mixed parameter to obtain the transparency.
Specifically, the screen texture coordinates described above may be used to represent the position coordinates of the currently rendered pixel in screen space.
In general, the position coordinates of the target model in screen space can be obtained through a screen position expression, as shown in fig. 4. Fig. 4 is a schematic diagram of the screen-UV-based arc masking algorithm according to one embodiment of the disclosure, where the screen position corresponds to the ScreenPosition node in fig. 4. The current screen texture coordinates can be output through the ViewportUV output of the ScreenPosition node, and ViewportUV is used as the A input of the Subtract node.
The arc mask can be used for representing arc mask parameters obtained by processing the screen texture coordinates.
As shown in fig. 4, the screen texture coordinates ViewportUV output by the ScreenPosition node are sequentially subjected to difference processing, product processing, and dot-product processing, so as to obtain the arc mask output by the Sqrt node, where the difference processing is performed by the Subtract node, the product processing by the Multiply (x2) node, and the dot-product processing by the Dot node.
The Distance parameter may be a parameter preset by the game developer, for example, 1000; the distance parameter is not limited here. In the present application, a Distance floating point value is added as a parameter for controlling the distance.
The radian parameter may be a parameter preset by the game developer, for example, 0.7; the radian parameter is not limited here. In the present application, a Radian floating point value is added as a parameter for controlling the radian of the arc mask.
In an alternative embodiment, in the process of converting the scene depth of the target model under the current view angle to obtain the transparency of the target model, the screen position of the target model in screen space needs to be acquired to obtain the corresponding screen texture coordinates; an arc mask is then generated based on the screen texture coordinates; the distance parameter and the radian parameter are mixed based on the arc mask to obtain a mixed parameter; the scene depth is then divided by the obtained mixed parameter, and the resulting ratio is used as the transparency.
Optionally, generating the arc mask based on the screen texture coordinates includes: obtaining a difference value between a screen texture coordinate and a preset coordinate to obtain a first coordinate difference value; obtaining a product of the first coordinate difference value and a preset value to obtain a second coordinate difference value; obtaining a dot product of the first coordinate difference value and the second coordinate difference value to obtain a target length; the square root of the target length is obtained, resulting in an arc mask.
Specifically, the preset coordinates may be used to represent texture coordinates preset by the game developer, and may be (0.5, 0.5); the preset coordinates are not limited to this.
The preset value may be used to represent a preset parameter value for performing the product operation, and may be 2, where the preset value is not limited only.
The target length described above may be used to represent the radius length of the circular arc obtained after the dot product.
In an alternative embodiment, in the process of generating the arc mask based on the screen texture coordinates, the difference between the screen texture coordinates and the preset coordinates is obtained and used as the first coordinate difference; the product of the first coordinate difference and the preset value is then obtained and used as the second coordinate difference; dot-product processing is then performed on the first coordinate difference and the second coordinate difference to obtain the target length; and square-root processing is performed on the target length to obtain the arc mask.
As shown in fig. 4, the preset coordinates correspond to (0.5, 0.5) in fig. 4. The screen texture coordinates at the A input and the preset coordinates at the B input are subjected to difference processing by the Subtract node to obtain the first coordinate difference. The first coordinate difference is used as the A input of the Multiply (x2) node and multiplied by the preset value 2 to obtain the second coordinate difference. The second coordinate difference is used as the A input and the B input of the Dot node, and dot-product processing is performed to obtain the target length. The target length is used as the input of the Sqrt node, and square-root processing is performed on it to obtain the arc mask.
The ScreenPosition (screen position) expression outputs the screen-space position of the currently rendered pixel. The Subtract node accepts two inputs, subtracts the second input from the first input, and outputs the difference. The Dot expression computes the dot product, which can be described as the length of one vector projected onto another, or as the cosine between two vectors multiplied by their magnitudes; Dot requires that the two vector inputs have the same number of channels. The Sqrt expression outputs the square root of the input value; if applied to a vector, each component is processed separately.
In addition, the process of obtaining the screen-based UV information using the screen UV expression includes the following conversion steps:
Fig. 5 is a schematic diagram of the screen UV information according to one embodiment of the present disclosure. The screen UV information corresponding to the first coordinate difference is obtained by subtracting the two-dimensional vector (0.5, 0.5) from the screen UV information in fig. 5, as shown in fig. 6; fig. 6 is a schematic diagram of the screen UV information corresponding to the first coordinate difference according to one embodiment of the present disclosure. The screen UV information corresponding to the second coordinate difference is obtained by multiplying the screen UV information in fig. 6 by 2, as shown in fig. 7; fig. 7 is a schematic diagram of the screen UV information corresponding to the second coordinate difference according to one embodiment of the present disclosure. After Dot (dot product) is performed on the screen UV information in fig. 7, the radius length of a circular arc is obtained, as shown in fig. 8; fig. 8 is a schematic diagram corresponding to the radius length of the circular arc according to one embodiment of the present disclosure, which may be a circle with a radius of 1. The final screen UV information is obtained by taking the square root of the radius length in fig. 8, as shown in fig. 9; fig. 9 is a schematic diagram of the square-rooted screen UV information according to one embodiment of the present disclosure.
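The following sketch mirrors the figure-level description above (subtract (0.5, 0.5), multiply by 2, dot the result with itself, then take the square root); it is an illustrative reconstruction under those assumptions, not the patent's material graph itself, and the function name is hypothetical.

```python
import math

def arc_mask(u: float, v: float) -> float:
    """Arc mask from screen UV, following the figure-level description:
    shift the UV origin to the screen center, scale to [-1, 1], then take
    the length of that vector (dot product with itself followed by a
    square root). A sketch under these assumptions, not engine code.
    """
    du, dv = u - 0.5, v - 0.5              # Subtract (0.5, 0.5): first coordinate difference
    du2, dv2 = du * 2.0, dv * 2.0          # Multiply by 2: second coordinate difference
    target_length = du2 * du2 + dv2 * dv2  # Dot product of the vector with itself
    return math.sqrt(target_length)        # Sqrt: distance from the screen center


# Screen center -> 0 (no masking); screen edge midpoint -> 1; corner -> about 1.41
print(arc_mask(0.5, 0.5), arc_mask(1.0, 0.5), arc_mask(1.0, 1.0))
```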
Further, the square-rooted screen UV information is output as a mask to the Alpha input of the Lerp node, and the SceneDepth is divided by the Lerp output, so as to obtain an arc-shaped fog-of-war effect, as shown in fig. 10 and fig. 11. Fig. 10 is a schematic illustration of the fog-of-war effect without the arc masking algorithm according to one embodiment of the present disclosure, and fig. 11 is a schematic illustration of the fog-of-war effect with the arc masking algorithm according to one embodiment of the present disclosure. Comparing fig. 10 and fig. 11, the fog-of-war boundary in fig. 10 is a horizontal line, while the fog of war in fig. 11 is displayed with a certain arc.
Optionally, mixing the distance parameter and the radian parameter based on the arc mask to obtain a mixed parameter includes: obtaining the product of the distance parameter and the radian parameter to obtain a target parameter; determining a mixing ratio of the distance parameter and the target parameter based on the arc mask; and mixing the distance parameter and the target parameter according to the mixing proportion to obtain a mixing parameter.
Specifically, in the process of mixing the distance parameter and the radian parameter based on the arc mask to obtain the mixed parameter, the product of the distance parameter and the radian parameter is first computed to obtain the target parameter; the mixing ratio between the distance parameter and the target parameter is then determined based on the arc mask; the distance parameter and the target parameter are mixed according to the mixing ratio to obtain the mixed parameter; and the scene depth is then divided by the mixed parameter.
Fig. 12 is a schematic illustration of the depth information processing of the fog-of-war material in accordance with one embodiment of the present disclosure. As shown in fig. 12, the depth information of the scene acquired by SceneDepth is first divided by a floating point value named Distance, because the scene depth values are relatively large. Distance is used as the A input of the Lerp node; a Radian floating point value is added as a parameter controlling the radian of the arc mask, and its value multiplied by the Distance parameter is used as the B input of the Lerp node. Specifically, the Distance parameter of the Distance node is used as the A input of the Multiply node and the radian parameter of the Radian node as its B input; the target parameter obtained by multiplying the A and B inputs is used as the B input of the Lerp node; the Distance parameter of the Distance node is used as the A input of the Lerp node; the arc mask is used as the Alpha input of the Lerp node; and the mixing ratio between the distance parameter and the target parameter is determined in the Lerp node, which mixes the two according to that ratio.
Typically, the third input of the Lerp node is a mask parameter, and interpolation is then performed between the two input values. One can think of it as transitioning between two textures according to a mask, like a layer mask in Photoshop. The intensity of the mask, Alpha, determines the weight contributed by the two input values. If Alpha is 0.0, the first input value is used; if Alpha is 1.0, the second input value is used; if Alpha is between 0.0 and 1.0, the output is an interpolation between the two inputs. It should be noted that the mixing is performed channel by channel. So if Alpha is an RGB color, Alpha's red channel defines the interpolation between the red channels of A and B, independently of Alpha's green channel, which defines the interpolation between the green channels of A and B.
The Divide node has two inputs, divides the first input by the second input, and outputs a value.
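Putting the pieces of fig. 12 together, a possible numerical sketch of the depth-information branch is given below. The parameter values (Distance = 1000, Radian = 0.7) are only the examples mentioned in the text, and the function and variable names are illustrative assumptions rather than the patent's own code.

```python
def lerp(a: float, b: float, alpha: float) -> float:
    """Linear interpolation, analogous to the Lerp material node."""
    return a + (b - a) * alpha


def fog_opacity(scene_depth: float, arc_mask: float,
                distance: float = 1000.0, radian: float = 0.7) -> float:
    """Transparency of the fog model for one pixel, as sketched from fig. 12.

    The Lerp blends Distance (A) with Distance * Radian (B) using the arc
    mask as Alpha, and the SceneDepth is divided by the blended distance.
    A sketch under these assumptions; the real material graph may clamp or
    post-process the result differently.
    """
    blended_distance = lerp(distance, distance * radian, arc_mask)
    return scene_depth / blended_distance


# At the screen center (arc_mask = 0) the full Distance is used; toward the
# screen edges (arc_mask near 1) the effective distance shrinks, so the fog
# becomes opaque closer to the camera, producing the arc-shaped boundary.
print(fog_opacity(700.0, 0.0))  # 0.7 at the center
print(fog_opacity(700.0, 1.0))  # 1.0 at the edge
```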
Optionally, sampling the preset map according to the preset texture coordinates to obtain color parameters of the target model, including: sampling the preset mapping according to the preset texture coordinates to obtain an initial sampling result; adjusting the contrast of the initial sampling result based on the contrast parameter to obtain a first sampling result; adjusting the brightness and the color saturation of the first sampling result based on the intensity parameter to obtain a second sampling result; and adjusting the color of the second sampling result based on the preset color to obtain a color parameter.
Specifically, the initial sampling result may be used to represent the information obtained by initially sampling the fog information in the preset map.
The contrast parameter, the intensity parameter and the preset color may be preset by a game developer, and the contrast parameter, the intensity parameter and the preset color are not particularly limited and may be adjusted according to practical situations.
In an alternative embodiment, in the process of sampling the preset map according to the preset texture coordinates to obtain the color parameters of the target model, the preset map may be sampled according to the preset texture coordinates to obtain an initial sampling result; the contrast of the initial sampling result is then adjusted to obtain a first sampling result; the brightness and color saturation of the first sampling result are then adjusted to obtain a second sampling result; and the color of the second sampling result is then adjusted based on the preset color to obtain the color parameters.
Fig. 13 is a schematic illustration of the color information processing of the fog-of-war material in accordance with one embodiment of the present disclosure. As shown in fig. 13, in the color information processing box, the channel values of a four-dimensional vector are set via the VectorParameter in the TexUVControl node, and the UV information in the MF_UVControl node is controlled based on the set values: the R channel controls the U-direction tiling U_Tiling, the G channel controls the V-direction tiling V_Tiling, the B channel controls the U-direction speed U_Speed, and the A channel controls the V-direction speed V_Speed, so as to obtain the preset texture coordinates, such as the UVs input of the Tex node. The preset map, such as Apply View MipBias in the Tex node, is then sampled with the preset texture coordinates; the Tex node outputs the RGB values of the sampled preset map, and these RGB values are processed by the subsequent nodes to obtain the color parameters of the final target model, such as the Emissive Color input of the M_EC_FogOfWar node.
In the Tex node, the preset map corresponding to Apply View MipBias is sampled according to the preset texture coordinates corresponding to UVs to obtain an initial sampling result with RGB color channels. The initial sampling result is used as the In input of the CheapContrast node and the contrast parameter of the Contrast (S) node as its Contrast input; contrast adjustment is performed on the initial sampling result in the CheapContrast node to obtain the first sampling result, which is used as the A input of the first Multiply node. The intensity parameter of the TexIntensity node is used as the B input of that Multiply node; the brightness and color saturation of the first sampling result are adjusted in the Multiply node to obtain the second sampling result, which is used as the A input of the second Multiply node. The preset color of the Color node is used as the B input of the second Multiply node, and the color parameters are obtained by adjusting the color of the second sampling result in that Multiply node.
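A compact sketch of this color chain is shown below. The contrast step is written as the commonly cited lerp-based remapping attributed to the CheapContrast material function; this is an assumption about its behavior, not the engine's exact graph, and the parameter values and function names are illustrative only.

```python
def saturate(x: float) -> float:
    """Clamp a value to the [0, 1] range, as the material graph does implicitly."""
    return min(max(x, 0.0), 1.0)


def cheap_contrast(value: float, contrast: float) -> float:
    """Assumed lerp-based contrast remap: contrast = 0 leaves the value
    unchanged, larger values push darks toward 0 and brights toward 1."""
    low, high = -contrast, 1.0 + contrast
    return saturate(low + (high - low) * value)


def fog_emissive_color(sampled_rgb, contrast, intensity, tint_rgb):
    """Sampled map RGB -> contrast adjust -> intensity multiply -> color tint.
    A per-channel sketch of the chain described for fig. 13."""
    out = []
    for channel, tint in zip(sampled_rgb, tint_rgb):
        c = cheap_contrast(channel, contrast)  # CheapContrast node
        c = c * intensity                      # Multiply by TexIntensity
        out.append(c * tint)                   # Multiply by the Color parameter
    return out


# Example: a mid-gray fog texel tinted green (illustrative values only)
print(fog_emissive_color([0.5, 0.5, 0.5], contrast=0.2, intensity=1.5,
                         tint_rgb=[0.1, 0.8, 0.2]))
```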
Typically, a VectorParameter is used as a color, so a color picker can be used to set its value. The MF_UVControl function is a node, frequently used for special effects, for controlling the map's UV tiling, UV offset, UV displacement speed, and the like.
Optionally, adjusting the contrast of the initial sampling result based on the contrast parameter to obtain a first sampling result, including: and adjusting the contrast of the initial sampling result based on the contrast parameter by using a preset material function to obtain a first sampling result.
Specifically, the predetermined material function may be used to represent a predetermined function for performing contrast adjustment.
In an alternative embodiment, in the process of adjusting the contrast of the initial sampling result based on the contrast parameter to obtain the first sampling result, the contrast of the initial sampling result may be adjusted based on the contrast parameter by using the material function corresponding to CheapContrast (a material function officially provided by the UE4 engine for controlling the contrast of a map at low performance cost), so as to obtain the first sampling result.
Optionally, adjusting the brightness and the color saturation of the first sampling result based on the intensity parameter to obtain a second sampling result, including: and obtaining a product of the first sampling result and the intensity parameter to obtain a second sampling result.
Specifically, in the process of adjusting the brightness and color saturation of the first sampling result based on the intensity parameter to obtain the second sampling result, the first sampling result may be multiplied by the intensity parameter to obtain the second sampling result; as shown in fig. 13, this product is computed on the first sampling result in the Multiply node.
Optionally, adjusting the color of the second sampling result based on the preset color to obtain a color parameter, including: and obtaining the product of the second sampling result and the preset color to obtain the color parameter.
Specifically, in the process of adjusting the color of the second sampling result based on the preset color to obtain the self-luminous color, the second sampling result may be multiplied by the preset color to obtain the color parameter; as shown in fig. 13, the color of the second sampling result is adjusted in the second Multiply node to obtain, for example, a green self-luminous color.
Optionally, sampling the preset map according to the preset texture coordinates to obtain color parameters of the target model, including: determining a current function value of a preset function corresponding to the target model; responding to the current function value of the preset function as the preset function value, and controlling the preset texture coordinates to sample the preset map to obtain self-luminous color; and determining the self-luminous color as the preset color in response to the current function value of the preset function not being the preset function value.
Specifically, the preset function may be used to indicate whether the map texture is enabled.
The preset function value may be used to represent the function value corresponding to the map texture being enabled.
In an alternative embodiment, in the process of sampling the preset map according to the preset texture coordinates to obtain the color parameters of the target model, it is determined whether the map texture is enabled. In response to the current function value of the preset function being the preset function value, the preset texture coordinates are controlled to sample the preset map to obtain the self-luminous color; otherwise, if the current function value of the preset function is not the preset function value, that is, it is determined that the map texture is not enabled, the self-luminous color is determined to be the preset color.
As shown in fig. 13, the current state of the static switch is determined by the UseTexture node: if True is selected, the map texture is enabled, and if False is selected, the map texture is not enabled.
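The branch can be expressed as a simple conditional; the sketch below is a generic illustration of the UseTexture switch described above, with hypothetical function and variable names.

```python
def emissive_color(use_texture: bool, sampled_color, preset_color):
    """Static-switch behaviour sketched from the UseTexture description:
    True  -> use the color sampled from the preset map,
    False -> fall back to the flat preset color."""
    return sampled_color if use_texture else preset_color


print(emissive_color(True,  [0.1, 0.8, 0.2], [0.0, 1.0, 0.0]))  # sampled map color
print(emissive_color(False, [0.1, 0.8, 0.2], [0.0, 1.0, 0.0]))  # preset color
```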
Optionally, the method further includes: obtaining the channel values of a plurality of channels of a preset image; determining tiling information and direction displacement speed information based on the channel values of the plurality of channels, where the tiling information is used to represent how many times the preset map is laid out across its texture coordinates, and the direction displacement speed information is used to represent the scrolling speed of the texture coordinates of the preset map; and superimposing the tiling information and the direction displacement speed information to obtain the preset texture coordinates.
Specifically, the channel values may be used to represent values corresponding to a plurality of pixel channels in a preset image, and at least include R-channel pixel values, G-channel pixel values, B-channel pixel values, a-channel pixel values, and the like.
In an alternative embodiment, before the preset map is sampled according to the preset texture coordinates, the preset texture coordinates need to be obtained. Specifically, the channel values of a plurality of channels in the preset image are obtained; the tiling information and the direction displacement speed information of the map UV are then determined based on the channel values of the plurality of channels; and the tiling information and the direction displacement speed information are then superimposed using the MF_UVControl function to obtain the preset texture coordinates.
As shown in fig. 13, a plurality of channel values of the preset image are output at the TexUVControl node, and the corresponding map UV information is determined based on each channel value: the U-direction tiling U_Tiling is controlled through the R channel, the V-direction tiling V_Tiling is controlled through the G channel, the U-direction displacement speed U_Speed is controlled through the B channel, and the V-direction displacement speed V_Speed is controlled through the A channel. The U-direction tiling U_Tiling and the V-direction tiling V_Tiling are combined, the U-direction displacement speed U_Speed and the V-direction displacement speed V_Speed are combined, and the preset texture coordinates are obtained by superimposing the tiling result and the direction displacement speed result.
It should be noted that, in the present application, since only the tiling information and the direction displacement speed information of the map UV are adjusted, the U-direction offset and the V-direction offset are not adjusted; however, the parameters corresponding to the U-direction offset and the V-direction offset in the MF_UVControl function are reserved in case the U-direction offset and the V-direction offset need to be adjusted later.
FIG. 14 is a schematic diagram of controlling the map UV tiling, UV offset, and UV displacement speed according to one embodiment of the present disclosure. As shown in fig. 14, to control the map UV tiling, Input U_Tiling is used as the A input of an Append node and Input V_Tiling as its B input; the A and B inputs are channel-combined through the Append node, the channel combination result is used as the B input of a Multiply node, TexCoord is used as the A input of the Multiply node, the A and B inputs are multiplied through the Multiply node, and the product is used as the A input of the first Add node.
To control the map UV displacement speed, Input U_Speed is used as the A input of an Append node and Input V_Speed as its B input; the A and B inputs are channel-combined through the Append node, the channel combination result is used as the A input of a Multiply node, Time is used as the B input of the Multiply node, the A and B inputs are multiplied through the Multiply node, and the product is used as the B input of the first Add node. The A and B inputs are superimposed in the first Add node, and the superposition result is used as the A input of the second Add node.
To control the map UV offset, Input U_Offset is used as the A input of an Append node and Input V_Offset as its B input; the A and B inputs are channel-combined through the Append node, and the channel combination result is used as the B input of the second Add node. The A and B inputs are superimposed in the second Add node, and the superposition result is output as the preset texture coordinates, as in the sketch below.
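The node chain above amounts to the familiar tile-and-pan formula for UV coordinates; the sketch below reproduces it outside the engine, with Time supplied as an ordinary argument and all names being illustrative assumptions.

```python
def uv_control(u: float, v: float, time: float,
               u_tiling: float = 1.0, v_tiling: float = 1.0,
               u_speed: float = 0.0, v_speed: float = 0.0,
               u_offset: float = 0.0, v_offset: float = 0.0):
    """Sketch of the MF_UVControl-style chain from fig. 14:
    UV' = TexCoord * tiling + Time * speed + offset, per axis.
    Append corresponds to pairing the U and V values, Multiply and Add
    to the arithmetic below."""
    out_u = u * u_tiling + time * u_speed + u_offset
    out_v = v * v_tiling + time * v_speed + v_offset
    return out_u, out_v


# Tile the fog texture twice in U and scroll it slowly in V over time.
print(uv_control(0.25, 0.75, time=10.0, u_tiling=2.0, v_speed=0.05))
```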
Optionally, determining the tiling information based on the channel values of the plurality of channels includes: determining a tiling parameter corresponding to the first coordinate and a tiling parameter corresponding to the second coordinate based on the channel values of the plurality of channels; and channel-combining the tiling parameter corresponding to the first coordinate and the tiling parameter corresponding to the second coordinate to obtain the tiling information.
Specifically, the tiling parameter corresponding to the first coordinate may be used to represent the U-direction tiling parameter of the map UV.
The tiling parameter corresponding to the second coordinate may be used to represent the V-direction tiling parameter of the map UV.
In an alternative embodiment, in the process of determining the tiling information based on the channel values of the plurality of channels, the tiling parameters corresponding to the first and second coordinates need to be determined based on the channel values of the plurality of channels; that is, the U-direction tiling and the V-direction tiling of the map UV are determined, and the tiling parameter corresponding to the first coordinate and the tiling parameter corresponding to the second coordinate are then channel-combined to obtain the tiling information.
Fig. 15 is a schematic diagram of controlling the map UV tiling according to one embodiment of the present disclosure. As shown in fig. 15, since the TexCoord parameter is temporarily not adjusted in the present application, it defaults to 1; Input U_Tiling is used as the A input of the Append node and Input V_Tiling as its B input, the A and B inputs are channel-combined through the Append node, and the channel combination result is used as the tiling information.
Optionally, determining the direction displacement speed information based on the channel values of the plurality of channels includes: determining the direction displacement speed corresponding to the first coordinate and the direction displacement speed corresponding to the second coordinate based on the channel values of the plurality of channels; and channel-combining the direction displacement speed corresponding to the first coordinate and the direction displacement speed corresponding to the second coordinate to obtain the direction displacement speed information.
Specifically, the above-described directional displacement velocity corresponding to the first coordinate may be used to represent the U-directional displacement velocity in the map UV.
The direction displacement speed corresponding to the second coordinate can be used for representing the V direction displacement speed in the map UV.
In an alternative embodiment, in the process of determining the direction displacement speed information based on the channel values of the plurality of channels, the direction displacement speed corresponding to the first coordinate and the direction displacement speed corresponding to the second coordinate need to be determined based on the channel values of the plurality of channels, that is, the U direction displacement speed and the V direction displacement speed in the map UV are determined, and then the channel combination is performed on the direction displacement speed corresponding to the first coordinate and the direction displacement speed corresponding to the second coordinate, so as to obtain the direction displacement speed information.
FIG. 16 is a schematic diagram of controlling the UV displacement (panning) of a map according to one embodiment of the present disclosure. As shown in fig. 16, since the Time parameter is not adjusted for the time being in the present application, it defaults to 1; the input U_Speed is taken as the A input of the Append node and the input V_Speed as its B input, the A input and the B input are channel-combined by the Append node, and the channel combination result is taken as the above-described direction displacement speed information.
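On the assumption that the tiling and speed values are combined as above, the animated texture coordinates could be assembled roughly as follows; the formula UV = TexCoord * Tiling + Time * Speed is an inference from the description of superposing tiling and displacement speed, not a verbatim reproduction of the material graph, and all names are illustrative.

```cpp
#include <array>

using Float2 = std::array<float, 2>;

// Combine U_Speed and V_Speed into the direction displacement speed
// information, analogous to the Append node above (illustrative names).
Float2 AppendSpeed(float USpeed, float VSpeed) {
    return {USpeed, VSpeed};
}

// Superpose tiling and displacement speed onto the base texture coordinates:
// the tiling scales the UV, and Time * Speed scrolls it over time, which is
// one common way such a "preset texture coordinate" is assembled (an
// assumption here, not a statement of the project's exact node wiring).
Float2 AnimatedUV(Float2 TexCoord, Float2 Tiling, Float2 Speed, float TimeSeconds) {
    return {
        TexCoord[0] * Tiling[0] + TimeSeconds * Speed[0],
        TexCoord[1] * Tiling[1] + TimeSeconds * Speed[1]
    };
}
```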
In summary, the color information processing for the fog-of-war material can be achieved through the following steps:
A parameterized map is sampled, and the map can be replaced as required; the UVControl node is used to control the UV information of the map, so that the map is mapped correctly onto the model; a CheapContrast node together with a Contrast(S) floating-point value is used to adjust the contrast of the map, which can make its shading more pronounced or softer; a Multiply node together with a floating-point value named TexIntensity is used to control the intensity multiplier of the map, which can increase or decrease its brightness and color saturation; a further Multiply node superposes this result with a VectorParameter named Color, so that the overall color of the map can be adjusted; and a static switch UseTexture controls whether the map sampling is used at all, so that the map effect can be applied or bypassed flexibly.
Through the above steps, the various aspects of the map can be controlled and adjusted to achieve the desired effect; parameters can be tuned and the map replaced as required during this process, making the final result more flexible and customizable.
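The color pipeline summarized above can be sketched in C++ as follows; CheapContrastApprox is a hypothetical stand-in for the engine's CheapContrast material function (the exact node math may differ), and the function as a whole is a minimal sketch of the described steps rather than the project's actual shader.

```cpp
#include <algorithm>
#include <array>

using Float3 = std::array<float, 3>;  // RGB sample of the map

// Hypothetical stand-in for a CheapContrast-style function: pushes values
// away from 0.5 by the contrast amount, then clamps to [0, 1].
static float CheapContrastApprox(float Value, float Contrast) {
    float Result = (Value - 0.5f) * (1.0f + Contrast) + 0.5f;
    return std::clamp(Result, 0.0f, 1.0f);
}

// Color processing for the fog-of-war material as summarized above:
// contrast adjustment, intensity multiplier, tint color, and a UseTexture
// switch that bypasses the sampled map when disabled.
Float3 FogColor(const Float3& Sampled, float Contrast, float TexIntensity,
                const Float3& TintColor, bool bUseTexture) {
    Float3 Out = TintColor;  // with the switch off, only the preset color is used
    if (bUseTexture) {
        for (int i = 0; i < 3; ++i) {
            float C = CheapContrastApprox(Sampled[i], Contrast);  // contrast step
            Out[i] = C * TexIntensity * TintColor[i];             // intensity + tint
        }
    }
    return Out;
}
```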
Optionally, the method further comprises: creating a blueprint object based on the target model; determining that the blueprint object is in a hidden state in the virtual scene, and the target model is in a display state in the virtual scene; creating a dynamic material of the target model; dynamically adjusting dynamic materials based on the visible distance of the target model; and adjusting the relative position relation between the target model and the center point position of the blueprint object based on the preset position parameter.
Specifically, the blueprint object described above may be used to represent the blueprint that adds the fog-of-war effect to the virtual scene, that is, an object processed by the blueprint editor.
The dynamic material described above may be used to represent a material through which the visible distance of the fog-of-war effect is dynamically adjusted.
The preset position parameter may be used to represent a position parameter preset by a game developer, where the preset position parameter is not specifically limited and may be adjusted according to actual situations.
In an alternative embodiment, in order to dynamically adjust the visible distance of the fog of war, a blueprint object may be created based on the target model. To display the target model in the virtual scene, the blueprint object is set to a hidden state in the virtual scene while the target model is set to a display state. Meanwhile, a dynamic material of the target model may be created and dynamically adjusted based on the visible distance of the target model, and the relative positional relationship between the target model and the center point of the blueprint object is adjusted based on a preset position parameter, thereby realizing dynamic adjustment of the visible distance of the fog of war. The visible distance can then be changed in real time by program logic according to the state of the game. For example, when a player is in a certain combat situation or a particular environment, the visible distance of the fog of war may be reduced so that the player can only see nearby areas, increasing the tension and strategic depth of the game.
Fig. 17 is a schematic diagram of creating a blueprint for the fog-of-war effect according to one embodiment of the present disclosure. As shown in fig. 17, the blueprint object based on the fog-of-war effect is created through the ConstructionScript node, and the created blueprint object is shown in the Viewport.
Fig. 18 is a schematic diagram of creating the dynamic material of the fog-of-war effect in the blueprint according to one embodiment of the present disclosure. As shown in fig. 18, the Set Actor Hidden In Game node sets the blueprint to a hidden state in the virtual scene, and the Set Visibility node sets the fog-of-war effect to a display state in the virtual scene; the Create Dynamic Material Instance node creates the dynamic material of the fog-of-war effect. Meanwhile, the visible distance from the Distance node is used as the Value input of the Set Scalar Parameter Value node, so that the dynamic material is adjusted dynamically based on the visible distance of the fog-of-war effect; the preset position parameter is set in Self and used as the input of the Set Actor Relative Location node, and the relative positional relationship between the fog-of-war effect and the center point of the blueprint is adjusted based on this preset position parameter.
In general, the box or patch model has already been assigned the fog-of-war material, and blueprint logic is then written for it to meet the requirements of the game project. When the blueprint class is created, the ConstructionScript node executes the following logic first. The Set Actor Hidden In Game node sets the blueprint to a hidden state in the virtual scene, and the Set Visibility node sets the fog-of-war effect to a display state in the virtual scene, which is how these nodes are used for debugging. Because the project requires the parameters of the patch material to be adjustable at runtime, the material of the model must be created as a dynamic material in the blueprint logic so that its parameters can be controlled dynamically; the Create Dynamic Material Instance node is used for this. The parameter to be controlled is the scalar parameter named Distance in the material, which is driven by a floating-point value in the blueprint through the Set Scalar Parameter Value node. Finally, the Set Actor Relative Location node sets the model to a position relative to the center point of the blueprint class.
ConstructionScript is a special blueprint graph in Unreal Engine for performing operations when a blueprint instance is constructed and initialized in the editor. Through the ConstructionScript, a developer can perform custom setup and initialization when the blueprint is created, so that the required initial state is obtained at run time.
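For readers more comfortable with code than with blueprint graphs, the ConstructionScript logic described above can be approximated in Unreal Engine C++ roughly as follows; the class name AFogOfWarActor, the FogMesh component, and the members VisibleDistance and PresetRelativeLocation are assumptions introduced for illustration, the usual UCLASS/GENERATED_BODY boilerplate is omitted, and the component-level SetRelativeLocation call stands in for the Set Actor Relative Location node.

```cpp
// Rough C++ approximation of the blueprint ConstructionScript logic
// (illustrative only; class, component and member names are assumptions).
#include "GameFramework/Actor.h"
#include "Components/StaticMeshComponent.h"
#include "Materials/MaterialInstanceDynamic.h"

class AFogOfWarActor : public AActor
{
public:
    virtual void OnConstruction(const FTransform& Transform) override;

    UStaticMeshComponent* FogMesh = nullptr;   // patch/box model carrying the fog material
    float VisibleDistance = 1000.0f;           // visible distance of the fog of war
    FVector PresetRelativeLocation = FVector::ZeroVector;  // preset position parameter
};

void AFogOfWarActor::OnConstruction(const FTransform& Transform)
{
    AActor::OnConstruction(Transform);
    if (FogMesh == nullptr)
    {
        return;
    }

    // Hide the blueprint actor in the game while keeping the fog mesh visible.
    SetActorHiddenInGame(true);
    FogMesh->SetVisibility(true);

    // Create a dynamic material instance so its parameters can be driven at
    // runtime, then feed the visible distance into the scalar parameter
    // named "Distance" in the fog material.
    UMaterialInstanceDynamic* FogMID =
        UMaterialInstanceDynamic::Create(FogMesh->GetMaterial(0), this);
    FogMesh->SetMaterial(0, FogMID);
    FogMID->SetScalarParameterValue(TEXT("Distance"), VisibleDistance);

    // Place the fog mesh relative to the blueprint's center point using the
    // preset position parameter.
    FogMesh->SetRelativeLocation(PresetRelativeLocation);
}
```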
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present disclosure, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present disclosure.
The embodiment also provides a model rendering device, which is used for implementing the above embodiments and preferred implementations; what has already been described is not repeated here. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the following embodiments are preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 19 is a block diagram of a model rendering apparatus according to one embodiment of the present disclosure. A graphical user interface is provided through a terminal device, and the content displayed by the graphical user interface includes a virtual scene. As shown in fig. 19, the apparatus includes:
the sampling module 1902 is configured to sample a preset map according to a preset texture coordinate to obtain a color parameter of the target model;
The conversion module 1904 is configured to convert a scene depth of the target model under a current view angle to obtain transparency of the target model, where the current view angle is used to represent a view angle corresponding to a virtual camera in the virtual scene, and the scene depth is used to represent distance information from a pixel point in the target model to the virtual camera;
and a rendering module 1906, configured to render and output the target model according to the color parameter and the transparency.
Optionally, the conversion module includes: the acquisition module is used for acquiring the screen position of the target model in the screen space corresponding to the graphical user interface to obtain screen texture coordinates; the generating module is used for generating an arc mask based on the screen texture coordinates; the mixing module is used for mixing the distance parameter and the radian parameter based on the arc mask to obtain a mixing parameter, wherein the radian parameter is used for representing the radian of the arc mask, and the distance parameter is used for reducing the scene depth; and the ratio acquisition module is used for acquiring the ratio of the scene depth to the mixing parameter to obtain the transparency.
Optionally, the generating module includes: the difference value acquisition module is used for acquiring the difference value between the screen texture coordinate and the preset coordinate to obtain a first coordinate difference value; the product acquisition module is used for acquiring the product of the first coordinate difference value and a preset value to obtain a second coordinate difference value; the dot product acquisition module is used for acquiring dot products of the first coordinate difference value and the second coordinate difference value to obtain a target length; and the square root acquisition module is used for acquiring the square root of the target length to obtain the arc-shaped mask.
Optionally, the mixing module includes: the product acquisition module is used for acquiring the product of the distance parameter and the radian parameter to obtain a target parameter; the mixing proportion determining module is used for determining the mixing proportion of the distance parameter and the target parameter based on the arc mask; and the parameter mixing module is used for mixing the distance parameter and the target parameter according to the mixing proportion to obtain a mixing parameter.
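To make the transparency conversion concrete, the following sketch chains the modules above together; the preset center coordinate and scale values are assumptions (the original only fixes the sequence of operations), and no clamping is applied to the final ratio here.

```cpp
#include <array>
#include <cmath>

using Float2 = std::array<float, 2>;

// Arc mask from the screen texture coordinates: difference to a preset
// coordinate, scaled difference, dot product (target length), square root.
float ArcMask(Float2 ScreenUV, Float2 PresetCoord, Float2 PresetScale) {
    Float2 D1 = {ScreenUV[0] - PresetCoord[0], ScreenUV[1] - PresetCoord[1]};
    Float2 D2 = {D1[0] * PresetScale[0], D1[1] * PresetScale[1]};
    float Length = D1[0] * D2[0] + D1[1] * D2[1];  // dot product -> target length
    return std::sqrt(Length);
}

// Transparency of the target model: blend the distance parameter with
// distance * radian according to the arc mask, then divide the scene depth
// by the blended value. Any saturation of the result is assumed to happen
// downstream in the material pipeline.
float FogTransparency(float SceneDepth, float Distance, float Radian, float Mask) {
    float Target = Distance * Radian;                       // target parameter
    float Blended = Distance + (Target - Distance) * Mask;  // mix by the mask
    return SceneDepth / Blended;                            // transparency
}
```

Under these assumptions, a larger scene depth yields a larger ratio, which matches the described behaviour that distant areas are more heavily obscured by the fog than nearby ones.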
Optionally, the sampling module includes: the initial sampling result obtaining module is used for sampling the preset map according to the preset texture coordinates to obtain an initial sampling result; the first sampling result obtaining module is used for adjusting the contrast of the initial sampling result based on the contrast parameter to obtain a first sampling result; the second sampling result obtaining module is used for adjusting the brightness and the color saturation of the first sampling result based on the intensity parameter to obtain a second sampling result; and the color adjustment module is used for adjusting the color of the second sampling result based on the preset color to obtain a color parameter.
Optionally, the first sampling result obtaining module includes: and the contrast adjusting module is used for adjusting the contrast of the initial sampling result based on the contrast parameter by utilizing a preset material function to obtain a first sampling result.
Optionally, the second sampling result obtaining module includes: and the first product module is used for obtaining the product of the first sampling result and the intensity parameter to obtain a second sampling result.
Optionally, the color adjustment module includes: and the second product module is used for obtaining the product of the second sampling result and the preset color to obtain the color parameter.
Optionally, the sampling module further includes: the function value determining module is used for determining the current function value of the preset function corresponding to the target model; the first determining module is used for responding to the fact that the current function value of the preset function is the preset function value, and controlling the preset texture coordinates to sample the preset map to obtain self-luminous colors; and the second determining module is used for determining that the self-luminous color is a preset color in response to the fact that the current function value of the preset function is not the preset function value.
Optionally, the apparatus further comprises: the channel value acquisition module is used for acquiring channel values of a plurality of channels of a preset image; the information determining module is used for determining tiling information and direction displacement speed information based on the channel values of the plurality of channels, wherein the tiling information is used for representing the number of preset maps arranged on the texture coordinates of the preset map, and the direction displacement speed information is used for representing the scrolling speed of the texture coordinates of the preset map; and the information superposition module is used for superposing the tiling information and the direction displacement speed information to obtain the preset texture coordinates.
Optionally, the information determining module includes: the tiling determining module is used for determining a tiling parameter corresponding to the first coordinate and a tiling parameter corresponding to the second coordinate based on the channel values of the plurality of channels; and the first channel combination module is used for performing channel combination on the tiling parameter corresponding to the first coordinate and the tiling parameter corresponding to the second coordinate to obtain the tiling information.
Optionally, the information determining module further includes: the direction displacement speed determining module is used for determining the direction displacement speed corresponding to the first coordinate and the direction displacement speed corresponding to the second coordinate based on the channel values of the channels; and the second channel combination module is used for carrying out channel combination on the direction displacement speed corresponding to the first coordinate and the direction displacement speed corresponding to the second coordinate to obtain direction displacement speed information.
Optionally, the apparatus further comprises: the blueprint creation module is used for creating a blueprint object based on the target model; the state determining module is used for determining that the blueprint object is in a hidden state in the virtual scene and the target model is in a display state in the virtual scene; the material creation module is used for creating dynamic materials of the target model; the dynamic adjustment module is used for dynamically adjusting the dynamic materials based on the visible distance of the target model; and the relative position adjustment module is used for adjusting the relative position relation between the target model and the center point position of the blueprint object based on the preset position parameter.
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; or the above modules may be located in different processors in any combination.
Embodiments of the present disclosure also provide a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may include, but is not limited to: a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other various media capable of storing a computer program.
Alternatively, in this embodiment, the above-mentioned computer-readable storage medium may be located in any one of the computer terminals in the computer terminal group in the computer network, or in any one of the mobile terminals in the mobile terminal group.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may be configured to store a computer program for performing the steps of:
S1, sampling a preset map according to preset texture coordinates to obtain color parameters of a target model;
S2, converting the scene depth of the target model under the current view angle to obtain the transparency of the target model, wherein the current view angle is used for representing the view angle corresponding to the virtual camera in the virtual scene, and the scene depth is used for representing the distance information from the pixel point in the target model to the virtual camera;
and S3, rendering and outputting the target model according to the color parameters and the transparency.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: acquiring a screen position of the target model in a screen space corresponding to the graphical user interface to obtain screen texture coordinates; generating an arc mask based on the screen texture coordinates; mixing a distance parameter and a radian parameter based on the arc mask to obtain a mixing parameter, wherein the radian parameter is used for representing the radian of the arc mask, and the distance parameter is used for reducing the scene depth; and obtaining the ratio of the scene depth to the mixing parameter to obtain the transparency.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: obtaining a difference value between a screen texture coordinate and a preset coordinate to obtain a first coordinate difference value; obtaining a product of the first coordinate difference value and a preset value to obtain a second coordinate difference value; obtaining a dot product of the first coordinate difference value and the second coordinate difference value to obtain a target length; the square root of the target length is obtained, resulting in an arc mask.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: obtaining the product of the distance parameter and the radian parameter to obtain a target parameter; determining a mixing ratio of the distance parameter and the target parameter based on the arc mask; and mixing the distance parameter and the target parameter according to the mixing ratio to obtain a mixing parameter.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: sampling the preset map according to the preset texture coordinates to obtain an initial sampling result; adjusting the contrast of the initial sampling result based on the contrast parameter to obtain a first sampling result; adjusting the brightness and the color saturation of the first sampling result based on the intensity parameter to obtain a second sampling result; and adjusting the color of the second sampling result based on the preset color to obtain a color parameter.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: and adjusting the contrast of the initial sampling result based on the contrast parameter by using a preset material function to obtain a first sampling result.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: and obtaining a product of the first sampling result and the intensity parameter to obtain a second sampling result.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: and obtaining the product of the second sampling result and the preset color to obtain the color parameter.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: determining a current function value of a preset function corresponding to the target model; responding to the current function value of the preset function as the preset function value, and controlling the preset texture coordinates to sample the preset map to obtain self-luminous color; and determining the self-luminous color as the preset color in response to the current function value of the preset function not being the preset function value.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: obtaining channel values of a plurality of channels of a preset image; determining tiling information and direction displacement speed information based on the channel values of the plurality of channels, wherein the tiling information is used for representing the number of preset maps arranged on the texture coordinates of the preset map, and the direction displacement speed information is used for representing the scrolling speed of the texture coordinates of the preset map; and superposing the tiling information and the direction displacement speed information to obtain the preset texture coordinates.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: determining a tiling parameter corresponding to the first coordinate and a tiling parameter corresponding to the second coordinate based on the channel values of the plurality of channels; and performing channel combination on the tiling parameter corresponding to the first coordinate and the tiling parameter corresponding to the second coordinate to obtain the tiling information.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: determining a direction displacement speed corresponding to the first coordinate and a direction displacement speed corresponding to the second coordinate based on the channel values of the plurality of channels; and performing channel combination on the direction displacement speed corresponding to the first coordinate and the direction displacement speed corresponding to the second coordinate to obtain the direction displacement speed information.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: creating a blueprint object based on the target model; determining that the blueprint object is in a hidden state in the virtual scene, and the target model is in a display state in the virtual scene; creating a dynamic material of the target model; dynamically adjusting dynamic materials based on the visible distance of the target model; and adjusting the relative position relation between the target model and the center point position of the blueprint object based on the preset position parameter.
In the computer-readable storage medium of this embodiment, a technical solution of a model rendering method is provided: a preset map is sampled according to preset texture coordinates to obtain color parameters of a target model; the scene depth of the target model under the current view angle is converted to obtain the transparency of the target model, wherein the current view angle is used for representing the view angle corresponding to the virtual camera in the virtual scene, and the scene depth is used for representing the distance information from the pixel points in the target model to the virtual camera; and the target model is rendered and output according to the color parameters and the transparency. It is easy to see that the color parameters are obtained by sampling the preset map and the transparency is obtained by converting the scene depth under the current view angle, and the target model placed in front of the virtual camera is rendered based on the color parameters and the transparency to produce the fog-of-war effect. This reduces the performance consumption of the fog-of-war effect; at the same time, the effect changes with the player's view angle, simulating the real-world situation in which nearby areas are clearly visible while distant areas are obscured by fog. The fidelity of the fog-of-war rendering in the game is thereby improved and the realism of the game is increased, which solves the technical problems of high performance consumption and poor rendering effect of the fog-of-war effect in games.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a computer readable storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present application, a computer-readable storage medium stores thereon a program product capable of implementing the method described above in this embodiment. In some possible implementations, aspects of the disclosed embodiments may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the disclosure as described in the "exemplary methods" section of the disclosure, when the program product is run on the terminal device.
A program product for implementing the above-described method according to an embodiment of the present disclosure may employ a portable compact disc read-only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the embodiments of the present disclosure is not limited thereto, and in the embodiments of the present disclosure, the computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Any combination of one or more computer readable media may be employed by the program product described above. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that the program code embodied on the computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Embodiments of the present disclosure also provide an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
S1, sampling a preset map according to preset texture coordinates to obtain color parameters of a target model;
S2, converting the scene depth of the target model under the current view angle to obtain the transparency of the target model, wherein the current view angle is used for representing the view angle corresponding to the virtual camera in the virtual scene, and the scene depth is used for representing the distance information from the pixel point in the target model to the virtual camera;
and S3, rendering and outputting the target model according to the color parameters and the transparency.
Optionally, the above processor may be further configured to perform the following steps by a computer program: acquiring a screen position of the target model in a screen space corresponding to the graphical user interface to obtain screen texture coordinates; generating an arc mask based on the screen texture coordinates; mixing a distance parameter and a radian parameter based on the arc mask to obtain a mixing parameter, wherein the radian parameter is used for representing the radian of the arc mask, and the distance parameter is used for reducing the scene depth; and obtaining the ratio of the scene depth to the mixing parameter to obtain the transparency.
Optionally, the above processor may be further configured to perform the following steps by a computer program: obtaining a difference value between a screen texture coordinate and a preset coordinate to obtain a first coordinate difference value; obtaining a product of the first coordinate difference value and a preset value to obtain a second coordinate difference value; obtaining a dot product of the first coordinate difference value and the second coordinate difference value to obtain a target length; the square root of the target length is obtained, resulting in an arc mask.
Optionally, the above processor may be further configured to perform the following steps by a computer program: obtaining the product of the distance parameter and the radian parameter to obtain a target parameter; determining a mixing ratio of the distance parameter and the target parameter based on the arc mask; and mixing the distance parameter and the target parameter according to the mixing ratio to obtain a mixing parameter.
Optionally, the above processor may be further configured to perform the following steps by a computer program: sampling the preset map according to the preset texture coordinates to obtain an initial sampling result; adjusting the contrast of the initial sampling result based on the contrast parameter to obtain a first sampling result; adjusting the brightness and the color saturation of the first sampling result based on the intensity parameter to obtain a second sampling result; and adjusting the color of the second sampling result based on the preset color to obtain a color parameter.
Optionally, the above processor may be further configured to perform the following steps by a computer program: and adjusting the contrast of the initial sampling result based on the contrast parameter by using a preset material function to obtain a first sampling result.
Optionally, the above processor may be further configured to perform the following steps by a computer program: and obtaining a product of the first sampling result and the intensity parameter to obtain a second sampling result.
Optionally, the above processor may be further configured to perform the following steps by a computer program: and obtaining the product of the second sampling result and the preset color to obtain the color parameter.
Optionally, the above processor may be further configured to perform the following steps by a computer program: determining a current function value of a preset function corresponding to the target model; responding to the current function value of the preset function as the preset function value, and controlling the preset texture coordinates to sample the preset map to obtain self-luminous color; and determining the self-luminous color as the preset color in response to the current function value of the preset function not being the preset function value.
Optionally, the above processor may be further configured to perform the following steps by a computer program: obtaining channel values of a plurality of channels of a preset image; determining tiling information and direction displacement speed information based on the channel values of the plurality of channels, wherein the tiling information is used for representing the number of preset maps arranged on the texture coordinates of the preset map, and the direction displacement speed information is used for representing the scrolling speed of the texture coordinates of the preset map; and superposing the tiling information and the direction displacement speed information to obtain the preset texture coordinates.
Optionally, the above processor may be further configured to perform the following steps by a computer program: determining a tiling parameter corresponding to the first coordinate and a tiling parameter corresponding to the second coordinate based on the channel values of the plurality of channels; and performing channel combination on the tiling parameter corresponding to the first coordinate and the tiling parameter corresponding to the second coordinate to obtain the tiling information.
Optionally, the above processor may be further configured to perform the following steps by a computer program: determining a direction displacement speed corresponding to the first coordinate and a direction displacement speed corresponding to the second coordinate based on the channel values of the plurality of channels; and performing channel combination on the direction displacement speed corresponding to the first coordinate and the direction displacement speed corresponding to the second coordinate to obtain the direction displacement speed information.
Optionally, the above processor may be further configured to perform the following steps by a computer program: creating a blueprint object based on the target model; determining that the blueprint object is in a hidden state in the virtual scene, and the target model is in a display state in the virtual scene; creating a dynamic material of the target model; dynamically adjusting dynamic materials based on the visible distance of the target model; and adjusting the relative position relation between the target model and the center point position of the blueprint object based on the preset position parameter.
In the electronic device of this embodiment, a technical solution of a model rendering method is provided: a preset map is sampled according to preset texture coordinates to obtain color parameters of a target model; the scene depth of the target model under the current view angle is converted to obtain the transparency of the target model, wherein the current view angle is used for representing the view angle corresponding to the virtual camera in the virtual scene, and the scene depth is used for representing the distance information from the pixel points in the target model to the virtual camera; and the target model is rendered and output according to the color parameters and the transparency. It is easy to see that the color parameters are obtained by sampling the preset map and the transparency is obtained by converting the scene depth under the current view angle, and the target model placed in front of the virtual camera is rendered based on the color parameters and the transparency to produce the fog-of-war effect. This reduces the performance consumption of the fog-of-war effect; at the same time, the effect changes with the player's view angle, simulating the real-world situation in which nearby areas are clearly visible while distant areas are obscured by fog. The fidelity of the fog-of-war rendering in the game is thereby improved and the realism of the game is increased, which solves the technical problems of high performance consumption and poor rendering effect of the fog-of-war effect in games.
Fig. 20 is a schematic diagram of an electronic device according to an embodiment of the disclosure. As shown in fig. 20, the electronic device 2000 is only one example, and should not impose any limitation on the functionality and scope of use of the embodiments of the present disclosure.
As shown in fig. 20, the electronic apparatus 2000 is embodied in the form of a general purpose computing device. The components of the electronic device 2000 may include, but are not limited to: the at least one processor 2010, the at least one memory 2020, a bus connecting the different system components (including the memory 2020 and the processor 2010), and a display 2040.
Wherein the memory 2020 stores program code that can be executed by the processor 2010 to cause the processor 2010 to perform steps according to various exemplary implementations of the present disclosure described in the method section above of the embodiments of the present application.
The memory 2020 may include readable media in the form of volatile memory units such as random access memory unit (RAM) 20201 and/or cache memory unit 20202, and may further include read only memory unit (ROM) 20203, and may also include nonvolatile memory such as one or more magnetic storage devices, flash memory, or other nonvolatile solid state memory.
In some examples, memory 2020 may also include a program/utility 20204 having a set (at least one) of program modules 20205, such program modules 20205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. The memory 2020 may further include memory located remotely from the processor 2010, which may be connected to the electronic device 2000 through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The bus may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
The display 2040 may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the electronic device 2000.
Optionally, the electronic apparatus 2000 may also be in communication with one or more external devices 2100 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic apparatus 2000, and/or any device (e.g., router, modem, etc.) that enables the electronic apparatus 2000 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 2050. Also, the electronic device 2000 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the internet via the network adapter 2060. As shown in fig. 20, the network adapter 2060 communicates with other modules of the electronic device 2000 via the bus. It should be appreciated that although not shown in fig. 20, other hardware and/or software modules may be used in connection with the electronic device 2000, which may include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The electronic device 2000 may further include: a keyboard, a cursor control device (e.g., a mouse), an input/output interface (I/O interface), a network interface, a power supply, and/or a camera.
It will be appreciated by those skilled in the art that the configuration shown in fig. 20 is merely illustrative and is not intended to limit the configuration of the electronic device described above. For example, the electronic device 2000 may include more or fewer components than shown in fig. 20, or have a different configuration from that shown in fig. 20. The memory 2020 may be used for storing a computer program and corresponding data, such as the computer program and data corresponding to the model rendering method in an embodiment of the present disclosure. The processor 2010 executes the computer program stored in the memory 2020, thereby performing various functional applications and data processing, that is, implementing the model rendering method described above.
The foregoing embodiment numbers of the present disclosure are merely for description and do not represent advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present disclosure, the descriptions of the various embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of the units, for example, may be a logic function division, and may be implemented in another manner, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present disclosure, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present disclosure, and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present disclosure, which are intended to be comprehended within the scope of the present disclosure.

Claims (16)

1. A model rendering method, comprising:
Sampling a preset map according to preset texture coordinates to obtain color parameters of a target model;
Converting the scene depth of the target model under the current view angle to obtain the transparency of the target model, wherein the current view angle is used for representing the view angle corresponding to a virtual camera in a virtual scene, and the scene depth is used for representing the distance information from a pixel point in the target model to the virtual camera;
and rendering and outputting the target model according to the color parameters and the transparency.
2. The method of claim 1, wherein converting the scene depth of the object model at the current perspective to obtain the transparency of the object model comprises:
acquiring a screen position of the target model in a screen space to obtain screen texture coordinates;
Generating an arc mask based on the screen texture coordinates;
Mixing a distance parameter and a radian parameter based on the arc mask to obtain a mixing parameter, wherein the radian parameter is used for representing the radian of the arc mask, and the distance parameter is used for reducing the scene depth;
and obtaining the ratio of the scene depth to the mixing parameter to obtain the transparency.
3. The method of claim 2, wherein generating the arc mask based on the screen texture coordinates comprises:
obtaining a difference value between the screen texture coordinates and preset coordinates to obtain a first coordinate difference value;
obtaining a product of the first coordinate difference value and a preset value to obtain a second coordinate difference value;
Obtaining a dot product of the first coordinate difference value and the second coordinate difference value to obtain a target length;
And obtaining the square root of the target length to obtain the arc mask.
4. The method of claim 3, wherein mixing the distance parameter and the radian parameter based on the arc mask to obtain the mixing parameter comprises:
Obtaining the product of the distance parameter and the radian parameter to obtain a target parameter;
Determining a mixing ratio of the distance parameter and the target parameter based on the arc mask;
And mixing the distance parameter and the target parameter according to the mixing ratio to obtain the mixing parameter.
5. The method of claim 1, wherein sampling the pre-determined map according to the pre-determined texture coordinates to obtain the color parameters of the object model comprises:
Sampling the preset map according to the preset texture coordinates to obtain an initial sampling result;
Adjusting the contrast of the initial sampling result based on the contrast parameter to obtain a first sampling result;
adjusting the brightness and the color saturation of the first sampling result based on the intensity parameter to obtain a second sampling result;
and adjusting the color of the second sampling result based on a preset color to obtain the color parameter.
6. The method of claim 5, wherein adjusting the contrast of the initial sampling result based on the contrast parameter results in a first sampling result, comprising:
and adjusting the contrast of the initial sampling result based on the contrast parameter by using a preset material function to obtain the first sampling result.
7. The method of claim 5, wherein adjusting the brightness and color saturation of the first sample based on the intensity parameter results in a second sample, comprising:
and obtaining the product of the first sampling result and the intensity parameter to obtain the second sampling result.
8. The method of claim 5, wherein adjusting the color of the second sampling result based on a preset color to obtain the color parameter comprises:
And obtaining the product of the second sampling result and the preset color to obtain the color parameter.
9. The method of claim 1, wherein sampling the pre-determined map according to pre-determined texture coordinates to obtain the color parameters of the object model comprises:
determining a current function value of a preset function corresponding to the target model;
Responding to the current function value of the preset function as a preset function value, and controlling the preset texture coordinates to sample the preset map to obtain self-luminous color;
and determining that the self-luminous color is the preset color in response to the current function value of the preset function not being the preset function value.
10. The method according to claim 1, wherein the method further comprises:
Obtaining channel values of a plurality of channels of a preset image;
based on the channel values of the plurality of channels, determining tiling information and direction displacement speed information, wherein the tiling information is used for representing the quantity of the preset maps arranged on the texture coordinates of the preset map, and the direction displacement speed information is used for representing the scrolling speed of the texture coordinates of the preset map;
and superposing the tiling information and the direction displacement speed information to obtain the preset texture coordinates.
11. The method of claim 10, wherein determining the tiling information based on the channel values of the plurality of channels comprises:
Based on the channel values of the plurality of channels, determining a tiling parameter corresponding to the first coordinate and a tiling parameter corresponding to the second coordinate;
And carrying out channel combination on the tiling parameter corresponding to the first coordinate and the tiling parameter corresponding to the second coordinate to obtain the tiling information.
12. The method of claim 10, wherein determining the direction displacement speed information based on the channel values of the plurality of channels comprises:
determining a direction displacement speed corresponding to the first coordinate and a direction displacement speed corresponding to the second coordinate based on the channel values of the plurality of channels;
and carrying out channel combination on the direction displacement speed corresponding to the first coordinate and the direction displacement speed corresponding to the second coordinate to obtain the direction displacement speed information.
13. The method according to claim 1, wherein the method further comprises:
Creating a blueprint object based on the target model;
Determining that the blueprint object is in a hidden state in the virtual scene, and the target model is in a display state in the virtual scene;
creating a dynamic material of the target model;
dynamically adjusting the dynamic material based on the visible distance of the target model;
and adjusting the relative position relation between the target model and the center point position of the blueprint object based on a preset position parameter.
14. A model rendering apparatus, characterized by comprising:
the sampling module is used for sampling the preset mapping according to the preset texture coordinates to obtain color parameters of the target model;
The conversion module is used for converting the scene depth of the target model under the current view angle to obtain the transparency of the target model, wherein the current view angle is used for representing the view angle corresponding to the virtual camera in the virtual scene, and the scene depth is used for representing the distance information from the pixel point in the target model to the virtual camera;
And the rendering module is used for rendering and outputting the target model according to the color parameters and the transparency.
15. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program, wherein the computer program is arranged to perform the method of any of claims 1 to 13 when being run by a processor.
16. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the method of any of claims 1 to 13.