CN115761105A - Illumination rendering method and device, electronic equipment and storage medium
- Publication number: CN115761105A (application CN202211510885.0A)
- Authority: CN (China)
- Legal status: Pending
Abstract
The present disclosure provides an illumination rendering method, apparatus, electronic device, and storage medium. The illumination rendering method includes: obtaining model initial illumination data corresponding to a target object model, the model initial illumination data being pre-configured by a user; acquiring current position information of the target object model in a target scene and current lens information of a virtual camera, and processing the model initial illumination data based on the current position information and the current lens information to obtain model target illumination data; generating first illumination information corresponding to the target object model based on the model target illumination data and the model initial illumination data; and acquiring second illumination information of a scene light source of the target scene, and performing illumination rendering on the target object model based on the first illumination information and the second illumination information. The method and apparatus can improve the rendering effect while guaranteeing the smoothness of the rendered picture.
Description
Technical Field
The present disclosure relates to the field of rendering technologies, and in particular, to an illumination rendering method, an illumination rendering apparatus, an electronic device, and a storage medium.
Background
With the continuous development of computer technology, 3D games have become mainstream, and players place ever higher demands on their visual quality. In the related art, to improve the visual effect of each character in a game, a separate light source is set for each character model during rendering, in addition to the scene light source.
However, although this approach can improve the rendered display effect of each character model, because each character model has its own light, the lights of different character models interfere with each other when the models come close, which degrades the overall visual effect. In addition, as the number of lights grows, the computational pressure on the rendering pipeline increases during rendering, which in turn harms the fluency of the game.
Disclosure of Invention
The embodiments of the present disclosure provide at least an illumination rendering method, an illumination rendering apparatus, an electronic device, and a storage medium, which can improve the rendering effect of cartoon-style rendering.
The embodiment of the disclosure provides an illumination rendering method, which includes:
obtaining model initial illumination data corresponding to a target object model, wherein the model initial illumination data is generated by user pre-configuration;
acquiring current position information of the target object model in a target scene and current lens information of a virtual camera, and processing the model initial illumination data based on the current position information and the current lens information to obtain model target illumination data;
generating first illumination information corresponding to the target object model based on the model target illumination data and the model initial illumination data;
and acquiring second illumination information of a scene light source of the target scene, and performing illumination rendering on the target object model based on the first illumination information and the second illumination information.
In the embodiments of the present disclosure, since the first illumination information corresponding to the target object model is generated from model initial illumination data pre-configured by the user, no independent light source needs to be set for the target object model, yet an independent illumination effect on the target object model can still be achieved. Different models therefore do not influence each other when they come close, which improves the model rendering effect. In addition, because the model initial illumination data can be shared among models, part of the data only needs to be computed once when generating the first illumination information of each model, which reduces the computational pressure on the rendering pipeline; that is, the rendering effect can be improved while the fluency of the rendered picture is guaranteed.
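As a concrete illustration of the four steps above, the following minimal Python sketch walks through one frame for one model. All numeric values, the linear intensity falloff, and the additive fusion at the end are assumptions made for illustration, not details fixed by the present disclosure.

```python
import numpy as np

# Step 1: user pre-configured, camera-relative data (assumed layout).
init_dir_cam = np.array([0.0, 0.5, -1.0])
init_dir_cam = init_dir_cam / np.linalg.norm(init_dir_cam)  # unit vector, camera space
preset_distance = 3.0                     # preset offset from the model
init_color = np.array([1.0, 0.9, 0.8])    # linear RGB
init_intensity = 2.0

# Step 2: process with the model's current position and the camera lens.
model_pos = np.array([10.0, 0.0, 4.0])    # current model position, world space
cam_to_world = np.eye(4)                  # camera world-space transformation matrix
target_dir = cam_to_world[:3, :3] @ init_dir_cam       # direction into world space
target_pos = model_pos + target_dir * preset_distance  # offset from the model

# Step 3: first illumination information for this model only, with an
# assumed linear falloff of intensity over distance.
dist = np.linalg.norm(target_pos - model_pos)
target_intensity = max(init_intensity - 0.5 * dist, 0.0)
first_illumination = (target_dir, target_pos, init_color, target_intensity)

# Step 4: fuse with the scene light source's contribution (stand-in value).
scene_term = np.array([0.2, 0.2, 0.25])
shaded = np.clip(init_color * target_intensity + scene_term, 0.0, 1.0)
print(shaded)
```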
In one possible embodiment, the model initial illumination data comprises initial illumination direction data and initial illumination position data; the processing the model initial illumination data based on the current position information and the current lens information to obtain model target illumination data comprises:
processing the initial illumination direction data based on the current lens orientation information of the virtual camera to obtain target illumination direction data;
determining target illumination position data according to the target illumination direction data, the current position information of the target object model and the initial illumination position data;
and obtaining the model target illumination data based on the target illumination direction data and the target illumination position data.
In the embodiments of the present disclosure, the initial illumination direction data is processed using the current lens orientation information of the virtual camera, and the initial illumination position data is adjusted using the current position information of the target object model, so that the adjusted illumination direction is relative to the virtual camera and the adjusted illumination position is relative to the target object model rather than to the scene. Because the rendering shader of each model only receives the model initial illumination data and the model target illumination data of that model itself, and generates the first illumination information for that model alone without involving any other model, it can be ensured that the illumination of different models does not interfere when the models approach each other.
In a possible embodiment, the initial illumination direction data is expressed as a unit vector direction relative to the virtual camera, and the processing the initial illumination direction data based on the current lens orientation information of the virtual camera to obtain the target illumination direction data includes:
determining current direction information of the unit vector direction in a camera space based on current lens orientation information of the virtual camera;
and converting the current direction information of the unit vector direction in the camera space into a world space according to the world space transformation matrix of the virtual camera to obtain the illumination direction data of the world space, and taking the illumination direction data of the world space as the target illumination direction data.
In the embodiments of the present disclosure, since the initial illumination direction data is expressed as a unit vector direction relative to the virtual camera, the current direction information of that unit vector in camera space can be determined once the current lens orientation information of the virtual camera is obtained, and the illumination direction data in world space can then be determined through the world-space transformation matrix of the virtual camera to obtain the target illumination direction data. This improves the efficiency of determining the target illumination direction data.
In a possible embodiment, the initial illumination position data is represented as a preset distance relative to the virtual camera, and the determining target illumination position data according to the target illumination direction data, the current position information of the target object model and the initial illumination position data comprises:
and offsetting the preset distance by taking the current position information of the target object model as a starting point according to the direction indicated by the target illumination direction data to obtain illumination position data of a world space, and taking the illumination position data of the world space as the target illumination position data.
In the embodiments of the present disclosure, since the initial illumination position data is represented as a preset distance relative to the virtual camera, once the target illumination direction data is obtained, the illumination position data in world space may be obtained by offsetting the preset distance from the current position of the target object model along the direction indicated by the target illumination direction data. This improves the efficiency of determining the target illumination position data.
In one possible embodiment, the model initial illumination data further comprises initial illumination color data and initial illumination intensity data; generating first illumination information corresponding to the target object model based on the model target illumination data and the model initial illumination data includes:
processing the initial illumination intensity according to the current position information of the target object model and the target illumination position data to obtain target illumination intensity data based on the surface of the target object model;
and generating first illumination information corresponding to the target object model based on the target illumination intensity data, the initial illumination color data and the target illumination direction data.
In the embodiment of the disclosure, the processed target illumination intensity data, the processed target illumination direction data, and the processed initial illumination color data are combined to generate the corresponding first illumination information of the target object model, so that the illumination effect of the light source corresponding to the target object model can be realized.
In a possible implementation manner, the processing the initial illumination intensity according to the current position information of the target object model and the target illumination position data to obtain target illumination intensity data based on the surface of the target object model includes:
determining the distance between the target object model and the illumination position indicated by the target illumination position data according to the current position information of the target object model and the target illumination position data;
determining a linear attenuation amount of the initial illumination intensity based on a distance between the target object model and the illumination location;
and processing the initial illumination intensity according to the linear attenuation amount to obtain the target illumination intensity.
In the embodiments of the present disclosure, attenuating the initial illumination intensity makes the illumination received by the target object model closer to a real illumination effect, which further improves the rendering effect of the model.
In one possible embodiment, the performing illumination rendering on the target object model based on the first illumination information and the second illumination information includes:
performing first illumination rendering on the target object model based on the first illumination information to obtain a first illumination rendering result;
performing second illumination rendering on the target object model based on the second illumination information to obtain a second illumination rendering result;
and fusing the first illumination rendering result and the second illumination rendering result to obtain an illumination rendering result of the target object model.
In the embodiments of the present disclosure, the target object model is rendered separately with the first illumination information and the second illumination information, and the two rendering results are fused, so that each model in the scene achieves an independent rendering effect and the overall rendering effect of multiple models in the scene is improved.
The embodiment of the present disclosure provides an illumination rendering device, including:
the data acquisition module is used for acquiring model initial illumination data corresponding to a target object model, and the model initial illumination data is generated by user pre-configuration;
the data processing module is used for acquiring current position information of the target object model in a target scene and current lens information of the virtual camera, and processing the model initial illumination data based on the current position information and the current lens information to obtain model target illumination data;
the information generation module is used for generating first illumination information corresponding to the target object model based on the model target illumination data and the model initial illumination data;
and the model rendering module is used for acquiring second illumination information of a scene light source of the target scene and performing illumination rendering on the target object model based on the first illumination information and the second illumination information.
In one possible embodiment, the model initial illumination data comprises initial illumination direction data and initial illumination position data; the data processing module is specifically configured to:
processing the initial illumination direction data based on the current lens orientation information of the virtual camera to obtain target illumination direction data;
determining target illumination position data according to the target illumination direction data, the current position information of the target object model and the initial illumination position data;
and obtaining the model target illumination data based on the target illumination direction data and the target illumination position data.
In a possible implementation, the initial illumination direction data is expressed as a unit vector direction relative to the virtual camera, and the data processing module is specifically configured to:
determining current direction information of the unit vector direction in a camera space based on current lens orientation information of the virtual camera;
and converting the current direction information of the unit vector direction in the camera space into a world space according to the world space transformation matrix of the virtual camera to obtain illumination direction data of the world space, and taking the illumination direction data of the world space as the target illumination direction data.
In a possible implementation, the initial illumination position data is represented by a preset distance relative to the virtual camera, and the data processing module is specifically configured to:
and offsetting the preset distance by taking the current position information of the target object model as a starting point according to the direction indicated by the target illumination direction data to obtain illumination position data of the world space, and taking the illumination position data of the world space as the target illumination position data.
In one possible embodiment, the model initial illumination data further comprises initial illumination color data and initial illumination intensity data; the information generation module is specifically configured to:
processing the initial illumination intensity according to the current position information of the target object model and the target illumination position data to obtain target illumination intensity data based on the surface of the target object model;
and generating first illumination information corresponding to the target object model based on the target illumination intensity data, the initial illumination color data and the target illumination direction data.
In a possible implementation manner, the information generating module is specifically configured to:
determining the distance between the target object model and the illumination position indicated by the target illumination position data according to the current position information of the target object model and the target illumination position data;
determining a linear attenuation amount of the initial illumination intensity based on a distance between the target object model and the illumination location;
and processing the initial illumination intensity according to the linear attenuation amount to obtain the target illumination intensity.
In a possible implementation, the model rendering module is specifically configured to:
performing first illumination rendering on the target object model based on the first illumination information to obtain a first illumination rendering result;
performing second illumination rendering on the target object model based on the second illumination information to obtain a second illumination rendering result;
and fusing the first illumination rendering result and the second illumination rendering result to obtain an illumination rendering result of the target object model.
An embodiment of the present disclosure provides an electronic device, including: a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor, the processor and the memory communicate with each other via the bus when the electronic device is running, and the machine-readable instructions are executed by the processor to perform the illumination rendering method according to any of the above embodiments.
An embodiment of the present disclosure provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the illumination rendering method described in any one of the foregoing embodiments.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required by the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope, since those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 illustrates a flow diagram of a lighting rendering method provided by some embodiments of the present disclosure;
FIG. 2 illustrates a flow diagram of a method of processing initial lighting data of a model provided by some embodiments of the present disclosure;
FIG. 3 illustrates a flowchart of a method for lighting rendering a target object model based on lighting information provided by some embodiments of the present disclosure;
fig. 4 illustrates a schematic structural diagram of an illumination rendering apparatus provided by some embodiments of the present disclosure;
fig. 5 illustrates a schematic diagram of an electronic device provided by some embodiments of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making any creative effort, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an associative relationship, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of a, B, and C, and may mean including any one or more elements selected from the group consisting of a, B, and C.
Online games are highly popular among users because of the realism of their game scenes and their strong watchability and operability. Research shows that, to meet users' demand for better visual effects, a light source is usually set for each character model during rendering, in addition to the scene light source. However, although this approach can highlight the dynamic visual effect of a character in the scene, because each character model has its own light, the lights of different character models interfere with each other when the models come close, which degrades the overall visual effect. In addition, as the number of lights grows, the computational pressure on the rendering pipeline increases during rendering, so the performance overhead becomes larger, which in turn harms the fluency of the game.
Based on this research, the present disclosure provides an illumination rendering method: first, model initial illumination data corresponding to a target object model is obtained, the model initial illumination data being pre-configured by a user; then, current position information of the target object model in a target scene and current lens information of a virtual camera are acquired, and the model initial illumination data is processed based on the current position information and the current lens information to obtain model target illumination data; next, first illumination information corresponding to the target object model is generated based on the model target illumination data and the model initial illumination data; and finally, second illumination information of a scene light source of the target scene is obtained, and illumination rendering is performed on the target object model based on the first illumination information and the second illumination information.
In the embodiments of the present disclosure, since the first illumination information corresponding to the target object model is generated from model initial illumination data pre-configured by the user, no independent light source needs to be set for the target object model, yet an independent illumination effect on the target object model can still be achieved. Different models therefore do not influence each other when they come close, which improves the model rendering effect. In addition, because the model initial illumination data can be shared among models, part of the data only needs to be computed once when generating the first illumination information of each model, which reduces the computational pressure on the rendering pipeline; that is, the rendering effect can be improved while the fluency of the rendered picture is guaranteed.
To facilitate understanding of the present embodiment, a detailed description is first given of an execution subject of the illumination rendering method provided by the embodiment of the present disclosure. The execution subject of the illumination rendering method provided by the embodiment of the disclosure is an electronic device. The electronic device may be a terminal device or a server. The terminal device may be a mobile device, a user terminal, a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. The server can be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, and can also be a cloud server for providing basic cloud computing services such as cloud service, a cloud database, cloud computing, cloud storage, big data, an artificial intelligence platform and the like. In other embodiments, the illumination rendering method may also be implemented by the processor calling computer readable instructions stored in the memory.
The following describes the illumination rendering method provided in the embodiment of the present application in detail with reference to the accompanying drawings. Referring to fig. 1, a flowchart of an illumination rendering method provided in the embodiment of the present disclosure is shown, where the illumination rendering method includes the following steps S101 to S104:
s101, obtaining model initial illumination data corresponding to a target object model, wherein the model initial illumination data is generated by user pre-configuration.
Illustratively, the target object may be a virtual object in a virtual scene, that is, an entity in that scene, including but not limited to at least one of a virtual character, an animal, a plant, furniture, or a building. In the embodiments of the present disclosure, the target object is a dynamic object that can be controlled in the virtual scene.
A virtual scene is a digital scene drawn by a computer using digital communication technology; it may be two-dimensional or three-dimensional, and virtualization technology can realistically simulate information such as the material forms and spatial relationships found in the real world. A three-dimensional virtual scene can display the form of objects more attractively and present a virtual world more intuitively. For example, the objects in a three-dimensional virtual scene may include at least one of terrain, houses, trees, characters, and the like.
A virtual scene can be presented as a three-dimensional simulated environment through computer 3D graphics and displayed on a screen, and every target object in the virtual scene can be described by three-dimensional scene data. For example, three-dimensional scene data may be loaded into a three-dimensional scene to present the simulated environment, where the three-dimensional scene data may include at least one of model data, texture data, lighting data, terrain data, raster volume data, and the like.
In the embodiment of the present disclosure, the virtual scene is a scene for a player to control a target object to complete game logic, and the virtual scene may be a simulated environment of a real world, or a semi-simulated semi-fictional virtual environment, or a purely fictional virtual environment. The virtual environment may be sky, land, ocean, etc., wherein the land includes environmental elements such as desert, city, etc.
In addition, in the embodiments of the present disclosure, the illumination data includes, in addition to the scene light source, model initial illumination data corresponding to each model, and the model initial illumination data is generated by user pre-configuration. For example, a user may configure the corresponding model initial illumination data for each model in an Excel table, and when the data is to be used, it may be imported into the corresponding project.
Optionally, the model initial illumination data includes initial illumination direction data, initial illumination position data, initial illumination color data, initial illumination intensity data, and the like.
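For illustration, such per-model data might be held in a structure like the following Python sketch; the field layout, names, and example values are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ModelInitialIllumination:
    """Per-model, user pre-configured illumination data (assumed layout)."""
    direction: np.ndarray   # unit vector direction relative to the virtual camera
    distance: float         # preset distance relative to the virtual camera
    color: np.ndarray       # initial illumination color, linear RGB
    intensity: float        # initial illumination intensity

# e.g. one row of the user's table, imported for a character model
hero_light = ModelInitialIllumination(
    direction=np.array([0.3, 0.6, -0.74]),
    distance=2.5,
    color=np.array([1.0, 0.85, 0.7]),
    intensity=1.8,
)
```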
It can be understood that, before the target object model is rendered, a scene file for establishing the three-dimensional model needs to be acquired, a virtual scene is built according to the scene file, and the target object model is then displayed and processed in that virtual scene.
It should be noted that, in the embodiments of the present disclosure, when performing three-dimensional model processing, a three-dimensional model design tool may first be used to build the model, and a three-dimensional model development tool may then be used to process it further, finally obtaining the required three-dimensional model.
Further, after the three-dimensional model of the target object is built, it can be exported for further processing by the three-dimensional model development tool. The embodiments of the present disclosure mainly describe the process of rendering the three-dimensional model during the game, after the model has been established.
S102, obtaining current position information of the target object model in a target scene and current lens information of a virtual camera, and processing the model initial illumination data based on the current position information and the current lens information to obtain model target illumination data.
The target scene may be the virtual scene described above. It can be understood that, because the target object is dynamically controlled in the virtual scene, its position in the scene may change, and the illumination it receives differs at different positions; likewise, when the light source in the virtual scene changes, the ambient illumination effect changes accordingly. In addition, as the position of the virtual object changes, the visual content presented on the screen also differs.
Therefore, in some embodiments, it is necessary to obtain current position information of the target object model in a target scene and current lens information of the virtual camera, and process the model initial illumination data based on the current position information and the current lens information to obtain model target illumination data, so that the target illumination data matches with a current state of the target object model.
For example, the initial illumination direction data and the initial illumination position data in the model initial illumination data may be processed based on the current position information and the current lens information to obtain model target illumination data relative to the model and the virtual camera.
In one possible implementation, referring to fig. 2, regarding step S102, when the model initial illumination data is processed based on the current position information and the current lens information to obtain model target illumination data, the following steps S1021 to S1023 may be included:
and S1021, processing the initial illumination direction data based on the current lens orientation information of the virtual camera to obtain target illumination direction data.
It can be understood that, since the initial illumination direction data is defined in camera space, it needs to be processed to obtain the illumination direction data in world space for subsequent illumination rendering. In some embodiments, the initial illumination direction data may be expressed as a unit vector direction relative to the virtual camera. Therefore, when the current lens orientation information of the virtual camera is acquired, the current direction information of the unit vector in camera space can be determined; this current direction information is then converted from camera space into world space according to the world-space transformation matrix of the virtual camera, and the resulting world-space illumination direction data is taken as the target illumination direction data. This improves the efficiency of determining the target illumination direction data.
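A minimal Python sketch of this conversion follows, assuming the camera's world-space transform is a 4x4 matrix whose upper-left 3x3 block is its rotation; the function name and the example camera are hypothetical.

```python
import numpy as np

def camera_dir_to_world(init_dir_cam: np.ndarray, cam_to_world: np.ndarray) -> np.ndarray:
    """Rotate a camera-space unit direction into world space.

    Only the rotation part of the 4x4 transform applies to a direction
    (homogeneous w = 0), so the camera's translation is ignored.
    """
    d = init_dir_cam / np.linalg.norm(init_dir_cam)  # keep it a unit vector
    return cam_to_world[:3, :3] @ d

# Example: a camera yawed 90 degrees about the up axis, placed at z = 5
yaw = np.pi / 2
cam_to_world = np.array([
    [np.cos(yaw),  0.0, np.sin(yaw), 0.0],
    [0.0,          1.0, 0.0,         0.0],
    [-np.sin(yaw), 0.0, np.cos(yaw), 5.0],
    [0.0,          0.0, 0.0,         1.0],
])
print(camera_dir_to_world(np.array([0.0, 0.0, -1.0]), cam_to_world))
```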
And S1022, determining target illumination position data according to the target illumination direction data, the current position information of the target object model and the initial illumination position data.
For example, the initial illumination position data is represented as a preset distance relative to the virtual camera. Therefore, after the target illumination direction data is obtained, the illumination position data in world space may be obtained by offsetting the preset distance from the current position of the target object model along the direction indicated by the target illumination direction data, and the world-space illumination position data is taken as the target illumination position data. This improves the efficiency of determining the target illumination position data. In this embodiment, the initial illumination position data acts as an offset value in world space.
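As a hedged Python sketch with hypothetical names, this offset is a single vector operation:

```python
import numpy as np

def target_light_position(model_pos: np.ndarray,
                          target_dir: np.ndarray,
                          preset_distance: float) -> np.ndarray:
    # Start from the model's current world position and offset it by the
    # preset distance along the target illumination direction.
    return model_pos + target_dir * preset_distance

# e.g. a light placed 2.5 units from the model along the transformed direction
print(target_light_position(np.array([10.0, 0.0, 4.0]),
                            np.array([0.0, 1.0, 0.0]), 2.5))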
And S1023, obtaining the model target illumination data based on the target illumination direction data and the target illumination position data.
It can be understood that after the illumination direction data of the world space and the illumination position data of the world space are obtained, the model target illumination data can be generated.
In the embodiments of the present disclosure, the initial illumination direction data is processed using the current lens orientation information of the virtual camera, and the initial illumination position data is adjusted using the current position information of the target object model, so that the adjusted illumination direction is relative to the virtual camera and the adjusted illumination position is relative to the target object model rather than to the scene. Because the rendering shader of each model only receives the model initial illumination data and the model target illumination data of that model itself, and generates the first illumination information for that model alone without involving any other model, it can be ensured that the illumination of different models does not interfere when the models approach each other.
S103, generating first illumination information corresponding to the target object model based on the model target illumination data and the model initial illumination data.
It is to be understood that the model initial illumination data comprises initial illumination color data and initial illumination intensity data in addition to the initial illumination direction data and the initial illumination position data. The initial illumination direction data and the initial illumination position data are processed as described above to obtain the target illumination direction data and the target illumination position data. The target illumination direction data, the target illumination position data, the initial illumination color data, and the initial illumination intensity data therefore combine into a simulated light source corresponding to the target object model. It should be noted that the simulated light source acts only on the target object model and does not affect other models.
However, since there is a distance between the illumination position of the simulated light source and the target object model, to improve the realism of the illumination, the initial illumination intensity needs to be processed according to the current position information of the target object model and the target illumination position data to obtain target illumination intensity data based on the surface of the target object model; the target illumination direction data, the target illumination position data, and the target illumination intensity data are then combined to obtain the first illumination information corresponding to the target object model.
In one possible implementation, the rendering shader may determine the linear attenuation amount of the initial illumination intensity based on the distance between the surface of the target object model and the illumination position, and thereby obtain the target illumination intensity received at the model surface. That is, the rendering shader may determine, according to the current position information of the target object model and the target illumination position data, the distance between the target object model and the illumination position indicated by the target illumination position data; then determine the linear attenuation amount of the initial illumination intensity based on that distance; and process the initial illumination intensity according to the linear attenuation amount to obtain the target illumination intensity.
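A minimal sketch of such an attenuation follows; the falloff coefficient is an assumption, since the disclosure only states that the attenuation amount is linear in the distance.

```python
import numpy as np

def attenuated_intensity(surface_pos: np.ndarray,
                         light_pos: np.ndarray,
                         init_intensity: float,
                         falloff: float = 0.5) -> float:
    # Linear attenuation with distance; the falloff coefficient is an
    # assumed parameter, since only the linearity is specified.
    dist = float(np.linalg.norm(light_pos - surface_pos))
    return max(init_intensity - falloff * dist, 0.0)

print(attenuated_intensity(np.zeros(3), np.array([0.0, 0.0, 2.0]), 1.8))  # 0.8
```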
S104, second illumination information of a scene light source of the target scene is obtained, and illumination rendering is carried out on the target object model based on the first illumination information and the second illumination information.
In computer graphics, rendering refers to a process of projecting each object model in a three-dimensional scene into a digital image in two dimensions according to set environment, material, illumination and rendering parameters. The illumination rendering refers to illumination calculation processing performed on the target object model in the rendering process, so that the pixel points on the surface of the target object model have an illumination effect.
A light source in a virtual scene is a set of illumination data that can realistically simulate the illumination effect of a real light source. As with real-world light sources, when a colored light source in a virtual scene is projected onto the surface of a colored object, the final rendered color depends on the reflection and absorption of light. Unlike real-world light sources, a light source in a virtual scene may be a virtual light-source node without shape or contour. A real-world light source is an object that emits light by itself, such as the sun, an electric lamp, or a burning substance.
In some possible embodiments, the target object model may be rendered simultaneously based on the first illumination information and the second illumination information, resulting in an illumination rendering result. In yet other possible embodiments, referring to fig. 3, when performing illumination rendering on the target object model based on the first illumination information and the second illumination information, the following steps S1041 to S1043 may be further included:
s1041, performing first illumination rendering on the target object model based on the first illumination information to obtain a first illumination rendering result.
And S1042, performing second illumination rendering on the target object model based on the second illumination information to obtain a second illumination rendering result.
And S1043, fusing the first illumination rendering result and the second illumination rendering result to obtain an illumination rendering result of the target object model.
In the embodiments of the present disclosure, the target object model is rendered separately with the first illumination information and the second illumination information, and the two rendering results are fused, so that each model in the scene achieves an independent rendering effect and the overall rendering effect of multiple models in the scene is improved.
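The fusion operator is not specified here; as one plausible reading, the following Python sketch blends the two passes additively with clamping. All names are hypothetical.

```python
import numpy as np

def fuse_renders(first_result: np.ndarray, second_result: np.ndarray) -> np.ndarray:
    """Fuse the per-model light pass with the scene light pass.

    Additive blending with clamping is one plausible choice for combining
    two light contributions; the disclosure only requires that the two
    rendering results be fused.
    """
    return np.clip(first_result + second_result, 0.0, 1.0)

# e.g. two HxWx3 images rendered with the first and second illumination info
a = np.random.rand(4, 4, 3) * 0.7
b = np.random.rand(4, 4, 3) * 0.5
print(fuse_renders(a, b).shape)
```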
Those skilled in the art will understand that, in the methods of the present disclosure, the order in which the steps are written does not imply a strict execution order or any limitation on implementation; the specific execution order of the steps should be determined by their functions and possible inherent logic.
Based on the same technical concept, an illumination rendering device corresponding to the illumination rendering method is further provided in the embodiment of the present disclosure, and as the principle of solving the problem of the device in the embodiment of the present disclosure is similar to that of the illumination rendering method in the embodiment of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 4, a schematic diagram of an illumination rendering apparatus 400 according to an embodiment of the present disclosure is shown; the apparatus includes:
a data obtaining module 401, configured to obtain model initial illumination data corresponding to a target object model, where the model initial illumination data is generated by user pre-configuration;
a data processing module 402, configured to obtain current position information of the target object model in a target scene and current lens information of a virtual camera, and process the model initial illumination data based on the current position information and the current lens information to obtain model target illumination data;
an information generating module 403, configured to generate first illumination information corresponding to the target object model based on the model target illumination data and the model initial illumination data;
a model rendering module 404, configured to obtain second illumination information of a scene light source of the target scene, and perform illumination rendering on the target object model based on the first illumination information and the second illumination information.
In one possible embodiment, the model initial illumination data comprises initial illumination direction data and initial illumination position data; the data processing module 402 is specifically configured to:
processing the initial illumination direction data based on the current lens orientation information of the virtual camera to obtain target illumination direction data;
determining target illumination position data according to the target illumination direction data, the current position information of the target object model and the initial illumination position data;
and obtaining the model target illumination data based on the target illumination direction data and the target illumination position data.
In a possible implementation, the initial illumination direction data is expressed as a unit vector direction relative to the virtual camera, and the data processing module 402 is specifically configured to:
determining current direction information of the unit vector direction in a camera space based on current lens orientation information of the virtual camera;
and converting the current direction information of the unit vector direction in the camera space into a world space according to the world space transformation matrix of the virtual camera to obtain illumination direction data of the world space, and taking the illumination direction data of the world space as the target illumination direction data.
In a possible implementation, the initial illumination position data is represented by a preset distance relative to the virtual camera, and the data processing module 402 is specifically configured to:
and offsetting the preset distance by taking the current position information of the target object model as a starting point according to the direction indicated by the target illumination direction data to obtain illumination position data of the world space, and taking the illumination position data of the world space as the target illumination position data.
In one possible embodiment, the model initial illumination data further comprises initial illumination color data and initial illumination intensity data; the information generating module 403 is specifically configured to:
processing the initial illumination intensity according to the current position information of the target object model and the target illumination position data to obtain target illumination intensity data based on the surface of the target object model;
and generating first illumination information corresponding to the target object model based on the target illumination intensity data, the initial illumination color data and the target illumination direction data.
In a possible implementation manner, the information generating module 403 is specifically configured to:
determining the distance between the target object model and the illumination position indicated by the target illumination position data according to the current position information of the target object model and the target illumination position data;
determining a linear amount of attenuation of the initial illumination intensity based on a distance between the target object model and the illumination location;
and processing the initial illumination intensity according to the linear attenuation amount to obtain the target illumination intensity.
In a possible implementation, the model rendering module 404 is specifically configured to:
performing first illumination rendering on the target object model based on the first illumination information to obtain a first illumination rendering result;
performing second illumination rendering on the target object model based on the second illumination information to obtain a second illumination rendering result;
and fusing the first illumination rendering result and the second illumination rendering result to obtain an illumination rendering result of the target object model.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Based on the same technical concept, the embodiments of the present disclosure also provide an electronic device. Referring to fig. 5, a schematic structural diagram of an electronic device 500 provided in the embodiments of the present disclosure includes a processor 501, a memory 502, and a bus 503. The memory 502 is used for storing execution instructions and includes an internal memory 5021 and an external memory 5022; the internal memory 5021 temporarily stores operation data of the processor 501 and data exchanged with the external memory 5022, such as a hard disk, and the processor 501 exchanges data with the external memory 5022 through the internal memory 5021.
In this embodiment, the memory 502 is specifically used for storing application program codes for executing the scheme of the present application, and is controlled by the processor 501 to execute. That is, when the electronic device 500 is running, the processor 501 and the memory 502 communicate via the bus 503, so that the processor 501 executes the application program code stored in the memory 502, thereby executing the method described in any of the foregoing embodiments.
The memory 502 may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), and the like.
The processor 501 may be an integrated circuit chip having signal processing capabilities. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps, and logic blocks disclosed in the embodiments of the present disclosure may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or any conventional processor.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 500. In other embodiments of the present application, the electronic device 500 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the illumination rendering method in the above method embodiments are performed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute steps of the illumination rendering method in the foregoing method embodiments, which may be referred to specifically for the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, it is embodied in a software product, such as a software development kit (SDK).
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working process of the system and the apparatus described above may refer to the corresponding process in the foregoing method embodiment, and details are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solutions of the present disclosure, which are essential or part of the technical solutions contributing to the prior art, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present disclosure. And the aforementioned storage medium includes: u disk, removable hard disk, read only memory, random access memory, magnetic or optical disk, etc. for storing program codes.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used for illustrating the technical solutions of the present disclosure and not for limiting the same, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive of the technical solutions described in the foregoing embodiments or equivalent technical features thereof within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure, and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (10)
1. A lighting rendering method, comprising:
obtaining model initial illumination data corresponding to a target object model, wherein the model initial illumination data is generated by user pre-configuration;
acquiring current position information of the target object model in a target scene and current lens information of a virtual camera, and processing the model initial illumination data based on the current position information and the current lens information to obtain model target illumination data;
generating first illumination information corresponding to the target object model based on the model target illumination data and the model initial illumination data;
and acquiring second illumination information of a scene light source of the target scene, and performing illumination rendering on the target object model based on the first illumination information and the second illumination information.
2. The method of claim 1, wherein the model initial illumination data comprises initial illumination direction data and initial illumination position data; and the processing the model initial illumination data based on the current position information and the current lens information to obtain model target illumination data comprises:
processing the initial illumination direction data based on the current lens orientation information of the virtual camera to obtain target illumination direction data;
determining target illumination position data according to the target illumination direction data, the current position information of the target object model and the initial illumination position data;
and obtaining the model target illumination data based on the target illumination direction data and the target illumination position data.
3. The method of claim 2, wherein the initial illumination direction data is expressed as a unit vector direction relative to the virtual camera, and wherein the processing the initial illumination direction data based on current lens orientation information of the virtual camera to obtain target illumination direction data comprises:
determining current direction information of the unit vector direction in camera space based on the current lens orientation information of the virtual camera;
and converting the current direction information of the unit vector direction in camera space into world space according to the world space transformation matrix of the virtual camera, to obtain illumination direction data in world space, and taking the illumination direction data in world space as the target illumination direction data.
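A hedged reading of this conversion in Python/numpy follows; the 4x4 column-vector matrix convention and the re-normalisation step are choices of this sketch rather than recitations of the claim. Because a direction has no position, only the rotation block of the camera's world-space transformation matrix participates.

```python
import numpy as np

def camera_dir_to_world(dir_cs, cam_to_world):
    # dir_cs: the user-configured unit vector in camera space.
    # cam_to_world: the virtual camera's 4x4 world-space transformation matrix
    # (column-vector convention assumed). Directions ignore translation, so
    # only the upper-left 3x3 rotation block is applied.
    dir_ws = cam_to_world[:3, :3] @ np.asarray(dir_cs, dtype=float)
    return dir_ws / np.linalg.norm(dir_ws)  # re-normalise against numeric drift

# Example: a camera yawed 90 degrees about +Y maps camera-space "forward"
# (0, 0, -1) to world-space (-1, 0, 0).
yaw90 = np.array([[0.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [-1.0, 0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
print(camera_dir_to_world([0.0, 0.0, -1.0], yaw90))  # -> [-1.  0.  0.]
```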
4. The method according to claim 2 or 3, wherein the initial illumination position data is expressed as a preset distance relative to the virtual camera, and wherein determining the target illumination position data according to the target illumination direction data, the current position information of the target object model, and the initial illumination position data comprises:
and offsetting by the preset distance, taking the current position information of the target object model as a starting point, along the direction indicated by the target illumination direction data, to obtain illumination position data in world space, and taking the illumination position data in world space as the target illumination position data.
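In code, this offset reduces to a single multiply-add; the sketch below assumes the target illumination direction is already a unit vector in world space, and the example values are hypothetical.

```python
import numpy as np

def target_light_position(model_pos, light_dir_ws, preset_distance):
    # Walk `preset_distance` world units from the model's current position
    # along the (unit) target illumination direction.
    return np.asarray(model_pos, dtype=float) + np.asarray(light_dir_ws, dtype=float) * preset_distance

# Example: model at the origin, light pre-configured 3 units along +Y.
print(target_light_position([0.0, 0.0, 0.0], [0.0, 1.0, 0.0], 3.0))  # -> [0. 3. 0.]
```

Because the starting point tracks the model and the direction tracks the camera, the dedicated light follows both, which is what keeps per-model lights from bleeding onto neighbouring models.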
5. The method of claim 2, wherein the model initial illumination data further comprises initial illumination color data and initial illumination intensity data, and wherein generating the first illumination information corresponding to the target object model based on the model target illumination data and the model initial illumination data comprises:
processing the initial illumination intensity data according to the current position information of the target object model and the target illumination position data to obtain target illumination intensity data based on the surface of the target object model;
and generating first illumination information corresponding to the target object model based on the target illumination intensity data, the initial illumination color data and the target illumination direction data.
6. The method according to claim 5, wherein processing the initial illumination intensity data according to the current position information of the target object model and the target illumination position data to obtain the target illumination intensity data based on the surface of the target object model comprises:
determining the distance between the target object model and the illumination position indicated by the target illumination position data according to the current position information of the target object model and the target illumination position data;
determining a linear amount of attenuation of the initial illumination intensity based on the distance between the target object model and the illumination position;
and processing the initial illumination intensity data according to the linear attenuation amount to obtain the target illumination intensity data.
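The claim fixes only that the attenuation is linear in the model-to-light distance; the concrete falloff form max(0, 1 - d/range) and the default range in this Python sketch are assumptions of the illustration.

```python
import numpy as np

def attenuated_intensity(model_pos, light_pos, initial_intensity, max_range=10.0):
    # Linear falloff with model-to-light distance d: full strength at d = 0,
    # zero at d >= max_range. The falloff form and default range are assumed.
    d = float(np.linalg.norm(np.asarray(light_pos, dtype=float) - np.asarray(model_pos, dtype=float)))
    return initial_intensity * max(0.0, 1.0 - d / max_range)

print(attenuated_intensity([0.0, 0.0, 0.0], [0.0, 3.0, 0.0], 1.0))  # -> 0.7
```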
7. The method of claim 1, wherein performing illumination rendering on the target object model based on the first illumination information and the second illumination information comprises:
performing first illumination rendering on the target object model based on the first illumination information to obtain a first illumination rendering result;
performing second illumination rendering on the target object model based on the second illumination information to obtain a second illumination rendering result;
and fusing the first illumination rendering result and the second illumination rendering result to obtain an illumination rendering result of the target object model.
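The fusion operator itself is left open by the claim; a clamped additive blend is perhaps the simplest candidate, sketched here alongside an equally hypothetical weighted mix.

```python
import numpy as np

def fuse_render_results(first_pass, second_pass, weight=None):
    # first_pass / second_pass: per-pixel RGB results of the two renders.
    # With no weight, fuse additively and clamp; otherwise blend linearly.
    a = np.asarray(first_pass, dtype=float)
    b = np.asarray(second_pass, dtype=float)
    if weight is None:
        return np.clip(a + b, 0.0, 1.0)
    return weight * a + (1.0 - weight) * b

print(fuse_render_results([0.4, 0.2, 0.1], [0.3, 0.3, 0.3]))  # -> [0.7 0.5 0.4]
```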
8. An illumination rendering apparatus, comprising:
the data acquisition module is used for acquiring model initial illumination data corresponding to a target object model, wherein the model initial illumination data is generated by user pre-configuration;
the data processing module is used for acquiring current position information of the target object model in a target scene and current lens information of the virtual camera, and processing the model initial illumination data based on the current position information and the current lens information to obtain model target illumination data;
the information generation module is used for generating first illumination information corresponding to the target object model based on the model target illumination data and the model initial illumination data;
and the model rendering module is used for acquiring second illumination information of a scene light source of the target scene and performing illumination rendering on the target object model based on the first illumination information and the second illumination information.
9. An electronic device, comprising: a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate over the bus, and the machine-readable instructions, when executed by the processor, perform the illumination rendering method of any one of claims 1-7.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the illumination rendering method of any one of claims 1-7.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211510885.0A | 2022-11-29 | 2022-11-29 | Illumination rendering method and device, electronic equipment and storage medium |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN115761105A | 2023-03-07 |
Family ID: 85340167
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202211510885.0A | Illumination rendering method and device, electronic equipment and storage medium | 2022-11-29 | 2022-11-29 |
Country Status (1)

| Country | Link |
|---|---|
| CN | CN115761105A |
Cited By (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116347003A | 2023-05-30 | 2023-06-27 | 湖南快乐阳光互动娱乐传媒有限公司 | Virtual lamplight real-time rendering method and device |
| CN116347003B | 2023-05-30 | 2023-08-11 | 湖南快乐阳光互动娱乐传媒有限公司 | Virtual lamplight real-time rendering method and device |
Similar Documents

| Publication | Title |
|---|---|
| CN109427088B | Rendering method for simulating illumination and terminal |
| CN112215934B | Game model rendering method and device, storage medium and electronic device |
| CN111369655B | Rendering method, rendering device and terminal equipment |
| US20230120253A1 | Method and apparatus for generating virtual character, electronic device and readable storage medium |
| CN114022607B | Data processing method, device and readable storage medium |
| CN112184873B | Fractal graph creation method, fractal graph creation device, electronic equipment and storage medium |
| CN113436343A | Picture generation method and device for virtual studio, medium and electronic equipment |
| CN108043027B | Storage medium, electronic device, game screen display method and device |
| CN112843704B | Animation model processing method, device, equipment and storage medium |
| CN114119818A | Rendering method, device and equipment of scene model |
| WO2023098358A1 | Model rendering method and apparatus, computer device, and storage medium |
| CN116704103A | Image rendering method, device, equipment, storage medium and program product |
| CN117101127A | Image rendering method and device in virtual scene, electronic equipment and storage medium |
| CN115526976A | Virtual scene rendering method and device, storage medium and electronic equipment |
| CN115761105A | Illumination rendering method and device, electronic equipment and storage medium |
| CN113648652B | Object rendering method and device, storage medium and electronic equipment |
| WO2024082897A1 | Illumination control method and apparatus, and computer device and storage medium |
| CN117974856A | Rendering method, computing device and computer-readable storage medium |
| US10754498B2 | Hybrid image rendering system |
| CN115063330A | Hair rendering method and device, electronic equipment and storage medium |
| CN115845369A | Cartoon style rendering method and device, electronic equipment and storage medium |
| CN113313798B | Cloud picture manufacturing method and device, storage medium and computer equipment |
| CN112473135B | Real-time illumination simulation method, device and equipment for mobile game and storage medium |
| CN116958390A | Image rendering method, device, equipment, storage medium and program product |
| CN116152420A | Rendering method and device |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |