CN113240787A - Shadow rendering method and device and electronic equipment - Google Patents

Shadow rendering method and device and electronic equipment

Info

Publication number
CN113240787A
CN113240787A (application number CN202110637669.1A)
Authority
CN
China
Prior art keywords
shadow
rendering
light source
depth information
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110637669.1A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202110637669.1A
Publication of CN113240787A
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects
    • G06T 15/60 Shadow generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a shadow rendering method, apparatus, electronic device, and storage medium. The shadow rendering method comprises: acquiring depth information of a shadow-forming object, the shadow-forming object being an object that produces a shadow; determining whether the depth information meets a preset condition; if the depth information meets the preset condition, rendering the shadow produced by the shadow-forming object with a first rendering method to obtain a first rendered shadow; and if the depth information does not meet the preset condition, rendering the shadow produced by the shadow-forming object with a second rendering method, different from the first, to obtain a second rendered shadow. According to the embodiments of the present application, rendering efficiency can be improved while the rendering effect is preserved.

Description

Shadow rendering method and device and electronic equipment
Technical Field
The present disclosure relates to the field of graphics rendering technologies, and in particular, to a shadow rendering method, apparatus, electronic device, and storage medium.
Background
In the physical world, when light is blocked by an opaque object during propagation, a dark region forms behind the occluding object; projected onto another object, this dark region forms a shadow. Shadows convey information such as the spatial relationships between objects in a scene and the position of the light source. Therefore, in three-dimensional scene rendering, to increase the realism and sense of depth of a scene and enrich its rendering effect, real illumination must be simulated to render shadows.
In the prior art, general shadow algorithms such as the shadow map (ShadowMap) algorithm can draw shadows, but when rendering the shadow of a distant object, their rendering precision drops and only a blurred shadow outline is obtained, while the demands on device performance remain high.
Disclosure of Invention
The embodiment of the disclosure at least provides a shadow rendering method, a shadow rendering device, electronic equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a shadow rendering method, including:
acquiring depth information of a shadow forming object, wherein the shadow forming object is an object generating a shadow;
determining whether the depth information meets a preset condition;
under the condition that the depth information meets the preset condition, rendering the shadow generated by the shadow forming object by adopting a first rendering method to obtain a first rendered shadow;
under the condition that the depth information does not meet the preset condition, rendering the shadow generated by the shadow forming object by adopting a second rendering method to obtain a second rendered shadow; the second rendering method is different from the first rendering method.
In the embodiments of the present disclosure, the rendering method is chosen according to whether the depth information of the shadow-forming object meets the preset condition, so that a suitable rendering method can be selected for shadow-forming objects at different depths. This not only preserves the rendering effect for objects at different depths, but also improves rendering efficiency while the rendering effect is preserved.
In a possible implementation, the depth information is determined to meet the preset condition when it is greater than a preset threshold; the rendering precision of the first rendering method is lower than that of the second rendering method.
In the embodiments of the present disclosure, when the depth information exceeds the preset threshold, that is, when the shadow-forming object is far from the screen, the method with lower rendering precision is used. Because the object is far from the screen, the lower precision does not noticeably affect the visual result, while rendering efficiency is improved.
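The threshold test above can be sketched as a small selector; the function name, method labels, and threshold value are illustrative, not part of the disclosure:

```python
def select_rendering_method(depth: float, threshold: float) -> str:
    """Choose a renderer by screen distance: a distant shadow-forming object
    tolerates lower precision, so the cheaper first method is used."""
    if depth > threshold:       # preset condition met: object is far away
        return "low_precision"  # first rendering method
    return "high_precision"     # second rendering method (e.g. shadow map)
```

A per-pixel or per-object depth value would be fed in from the depth buffer of the current frame.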
According to the first aspect, in a possible implementation manner, the determining whether the depth information meets a preset condition includes:
determining a level of detail (LOD) of the shadow-forming object based on the depth information;
and determining, based on the depth information and the level of detail, whether the depth information meets the preset condition.
In the embodiments of the present disclosure, the decision considers not only the depth information itself but also the LOD related to it, so that the selected rendering method better matches the current depth information, further improving rendering precision and rendering efficiency.
According to the first aspect, in a possible implementation manner, the rendering the shadow generated by the shadow object by using a first rendering method to obtain a first rendered shadow includes:
acquiring current time information and a preset basic shadow model; the base shadow model is used to indicate shadows created by the shadow object;
determining light source parameter information based on the current time information;
performing deformation processing on a preset basic shadow model based on the light source parameter information to obtain a space shadow model;
performing coordinate conversion on the space shadow model to obtain a target shadow map;
rendering the shadow generated by the shadow object based on the target shadow map to obtain the first rendering shadow.
In the embodiments of the present disclosure, the light source parameter information is determined from the current time information, and the preset base shadow model is deformed accordingly to obtain the final shadow map. The resulting shadow map is therefore consistent with the current time; that is, its form changes as time changes, producing a dynamic shadow effect.
According to the first aspect, in a possible embodiment, the light source is the sun or the moon; the determining light source parameter information based on the current time information includes:
and determining the light source parameter information based on the current time information and a preset correspondence table between time information and light source parameter information.
In the embodiments of the present disclosure, determining the light source parameter information through the preset correspondence table makes it possible to obtain the light source parameter information more efficiently.
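A correspondence-table lookup of this kind might look as follows; the table values (hours of the day mapped to sun elevation, azimuth, and intensity) are invented for illustration and are not taken from the patent:

```python
import bisect

# Hypothetical correspondence table: hour of day -> (elevation_deg, azimuth_deg, intensity)
SUN_TABLE = [
    (6,  (0.0,   90.0, 0.2)),
    (9,  (35.0, 120.0, 0.8)),
    (12, (60.0, 180.0, 1.0)),
    (15, (35.0, 240.0, 0.8)),
    (18, (0.0,  270.0, 0.2)),
]

def light_params_for_time(hour: float):
    """Return the entry for the nearest earlier keyframe (step lookup);
    hours before the first keyframe clamp to the first entry."""
    hours = [h for h, _ in SUN_TABLE]
    i = bisect.bisect_right(hours, hour) - 1
    i = max(0, min(i, len(SUN_TABLE) - 1))
    return SUN_TABLE[i][1]
```

A real implementation might interpolate between neighboring keyframes instead of stepping, so the shadow moves smoothly.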
According to the first aspect, in a possible implementation, the light source parameter information includes a light source position and an illumination direction, and performing deformation processing on the preset base shadow model based on the light source parameter information to obtain a spatial shadow model comprises:
determining a stretch deformation ratio according to the light source position;
determining the displacement direction of the vertices of the base shadow model according to the illumination direction, the vertices comprising a plurality of preset points on the edge of an end face of the base shadow model;
and deforming the base shadow model based on the stretch deformation ratio and the displacement direction to obtain the spatial shadow model.
In the embodiments of the present disclosure, the stretch deformation ratio is determined from the light source position, and the vertex displacement direction from the illumination direction, so that the resulting spatial shadow model better matches the shadow produced by the current light source, improving the realism of the shadow rendering effect.
According to the first aspect, in a possible implementation, the higher the light source position, the smaller the stretch deformation ratio, and the lower the light source position, the larger the stretch deformation ratio; and/or vertices lying farther along the illumination direction are displaced outward (convex), while vertices closer to the light source are displaced inward (concave).
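The two deformation rules can be sketched as below. The 1/tan(elevation) stretch and the dot-product displacement test are plausible choices consistent with the text above, not the patent's exact formulas:

```python
import math

def stretch_ratio(elevation_deg: float, min_elev: float = 5.0) -> float:
    """Shadow stretch grows as the light source drops toward the horizon.
    A physically motivated choice is 1/tan(elevation); the elevation is
    clamped to avoid the infinitely long shadow at the horizon."""
    e = math.radians(max(elevation_deg, min_elev))
    return 1.0 / math.tan(e)

def displace_vertex(vertex, light_dir, amount):
    """Move a 2D end-face vertex along the horizontal illumination direction:
    vertices on the far (down-light) side bulge outward, vertices facing the
    light sink inward, per the convex/concave rule."""
    x, y = vertex
    dx, dy = light_dir
    # sign of the dot product decides convex (+) vs concave (-) displacement
    side = 1.0 if (x * dx + y * dy) > 0 else -1.0
    return (x + side * amount * dx, y + side * amount * dy)
```

With this choice, a sun at 45 degrees gives a stretch ratio of 1 (shadow length equals object height), and the ratio grows as the sun sinks.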
According to the first aspect, in a possible implementation manner, the coordinate transforming the spatial shadow model to obtain a target shadow map includes:
reading the depth information of each pixel of the spatial shadow model in screen space;
converting the depth information of each pixel into a position coordinate in world space;
and rotating the horizontal components of the position coordinate by the angle between the illumination direction and the positive horizontal axis to obtain the target shadow map.
In the embodiments of the present disclosure, the spatial shadow model is converted into the target shadow map by combining the coordinates with the illumination direction, so that the resulting shadow map matches the real shadow effect.
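The horizontal rotation step can be illustrated as follows, assuming world positions of the form (x, y, z) with y vertical and an illumination direction given in the horizontal (x, z) plane; reconstructing world positions from per-pixel depth (via the inverse view-projection matrix) is assumed to have happened already:

```python
import math

def rotate_horizontal(pos, light_dir_xz):
    """Rotate the horizontal (x, z) components of a world-space position by
    the angle between the illumination direction and the positive x axis;
    the vertical component y is left unchanged."""
    x, y, z = pos
    angle = math.atan2(light_dir_xz[1], light_dir_xz[0])
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * z, y, s * x + c * z)
```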
According to the first aspect, in a possible implementation, the light source parameters further include illumination intensity, light source color, and ambient color; the rendering the shadow produced by the shadow object based on the target shadow map comprises:
rendering the shadow generated by the shadow object based on the target shadow map and the illumination intensity, light source color and environment color.
In the embodiments of the present disclosure, because the illumination intensity, light source color, and ambient color are incorporated during rendering, the rendered shadow and scene appear more realistic and vivid.
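One simple way to combine these terms is a per-channel multiply-and-blend; the function name and the blend are assumptions for illustration, not the patent's actual shading:

```python
def shade_shadow(base_color, light_color, ambient_color, intensity, shadow_mask):
    """Blend lit and shadowed color per RGB channel: where shadow_mask = 1
    the surface is fully shadowed and only ambient light remains."""
    lit = tuple(b * l * intensity for b, l in zip(base_color, light_color))
    amb = tuple(b * a for b, a in zip(base_color, ambient_color))
    return tuple(l * (1 - shadow_mask) + a * shadow_mask for l, a in zip(lit, amb))
```

In a shader this would run per fragment, with `shadow_mask` sampled from the target shadow map.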
In a second aspect, an embodiment of the present disclosure provides a shadow rendering apparatus, including:
the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring the depth information of a shadow forming object, and the shadow forming object refers to an object generating a shadow;
the judging module is used for judging whether the depth information meets a preset condition or not;
the first rendering module is used for rendering the shadow produced by the shadow-forming object with a first rendering method when the depth information meets the preset condition, to obtain a first rendered shadow;
the second rendering module is used for rendering the shadow produced by the shadow-forming object with a second rendering method, different from the first rendering method, when the depth information does not meet the preset condition, to obtain a second rendered shadow.
In a possible implementation, when the depth information is greater than a preset threshold, it is determined that the depth information meets the preset condition; the rendering precision of the first rendering method is lower than that of the second rendering method.
According to the second aspect, in a possible implementation manner, the determining module is specifically configured to:
determining a level of detail (LOD) of the shadow-forming object based on the depth information;
and determining, based on the depth information and the level of detail, whether the depth information meets the preset condition.
According to a second aspect, in a possible implementation, the first rendering module is specifically configured to:
acquiring current time information and a preset basic shadow model; the base shadow model is used to indicate shadows created by the shadow object;
determining light source parameter information based on the current time information;
performing deformation processing on a preset basic shadow model based on the light source parameter information to obtain a space shadow model;
performing coordinate conversion on the space shadow model to obtain a target shadow map;
rendering the shadow generated by the shadow object based on the target shadow map to obtain the first rendering shadow.
According to a second aspect, in a possible embodiment, the light source is the sun or the moon; the first rendering module is specifically configured to:
and determining the light source parameter information based on the current time information and a preset time information and light source parameter information corresponding table.
According to the second aspect, in a possible implementation, the light source parameter information includes a light source position and an illumination direction; the first rendering module is specifically configured to:
determining a stretch deformation ratio according to the light source position;
determining the displacement direction of the vertices of the base shadow model according to the illumination direction, the vertices comprising a plurality of preset points on the edge of an end face of the base shadow model;
and deforming the base shadow model based on the stretch deformation ratio and the displacement direction to obtain the spatial shadow model.
According to the second aspect, in a possible implementation, the higher the light source position, the smaller the stretch deformation ratio, and the lower the light source position, the larger the stretch deformation ratio; and/or vertices lying farther along the illumination direction are displaced outward (convex), while vertices closer to the light source are displaced inward (concave).
According to a second aspect, in a possible implementation, the first rendering module is specifically configured to:
reading the depth information of each pixel of the spatial shadow model in screen space;
converting the depth information of each pixel into a position coordinate in world space;
and rotating the horizontal components of the position coordinate by the angle between the illumination direction and the positive horizontal axis to obtain the target shadow map.
According to a second aspect, in a possible embodiment, the light source parameters further comprise illumination intensity, light source color and ambient color; the first rendering module is specifically configured to:
rendering the shadow generated by the shadow object based on the target shadow map and the illumination intensity, light source color and environment color.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is operating, the machine-readable instructions when executed by the processor performing the shadow rendering method of the first aspect.
In a fourth aspect, the disclosed embodiments provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the shadow rendering method according to the first aspect.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required by the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. The following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without inventive effort.
FIG. 1 is a schematic diagram illustrating an execution subject of a shadow rendering method provided by an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of a shadow rendering method provided by an embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating a method for rendering a shadow generated by the shadow object by using a first rendering method according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating a relationship between a light source and a shadow provided by an embodiment of the present disclosure;
FIG. 5 is a diagram illustrating a spatial rendering model of a shadow provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram illustrating a basic shadow model provided by an embodiment of the present disclosure;
FIG. 7 is a schematic diagram illustrating the position of the sun in the sky dome provided by an embodiment of the present disclosure;
FIG. 8 is a flowchart illustrating a method for performing a transformation process on a preset base shadow model according to an embodiment of the present disclosure;
FIG. 9 is a schematic diagram illustrating a deformation process performed on a preset basic shadow model according to an embodiment of the disclosure;
FIG. 10 is a flowchart illustrating a method for coordinate transformation of the spatial shadow model according to an embodiment of the present disclosure;
FIG. 11 is a schematic diagram illustrating a rendered scene provided by an embodiment of the present disclosure;
FIG. 12 is a schematic diagram illustrating a shadow rendering apparatus according to an embodiment of the present disclosure;
FIG. 13 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions are described clearly and completely below with reference to the drawings. The described embodiments are only a part of the embodiments of the present disclosure, not all of them. The components of the embodiments, as generally described and illustrated in the figures, can be arranged and designed in a wide variety of configurations. Thus, the following detailed description is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments. All other embodiments derivable by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an associative relationship, meaning that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, "including at least one of A, B, C" may mean including any one or more elements selected from the group consisting of A, B, and C.
In the physical world, when light is blocked by an opaque object during propagation, a dark region forms behind the occluding object; projected onto another object, this dark region forms a shadow. Shadows convey information such as the spatial relationships between objects in a scene and the position of the light source. Therefore, in three-dimensional scene rendering, to increase the realism and sense of depth of a scene and enrich its rendering effect, real illumination must be simulated to render shadows.
Research shows that general shadow algorithms in the prior art, such as the ShadowMap algorithm, can draw shadows, but when rendering the shadow of a distant object their rendering precision drops and only a blurred shadow outline is obtained, while the demands on device performance remain high.
The present disclosure provides a shadow rendering method comprising: acquiring depth information of a shadow-forming object, the shadow-forming object being an object that produces a shadow; determining whether the depth information meets a preset condition; if the depth information meets the preset condition, rendering the shadow produced by the shadow-forming object with a first rendering method to obtain a first rendered shadow; and if the depth information does not meet the preset condition, rendering the shadow with a second rendering method, different from the first, to obtain a second rendered shadow.
In the embodiments of the present disclosure, the rendering method is chosen according to whether the depth information of the shadow-forming object meets the preset condition, so that a suitable rendering method can be selected for shadow-forming objects at different depths. This not only preserves the rendering effect for objects at different depths, but also improves rendering efficiency while the rendering effect is preserved.
Referring to fig. 1, a schematic diagram of the execution subject of a shadow rendering method according to an embodiment of the present disclosure is shown. The execution subject of the method is an electronic device 100, which may include a terminal and a server. For example, the method may be applied to a terminal such as the smart phone 10, desktop computer 20, or notebook computer 30 shown in fig. 1, or to a smart speaker, smart watch, tablet computer, and the like not shown in fig. 1; no limitation is imposed here. The method may also be applied to the server 40, or to an implementation environment consisting of a terminal and the server 40. The server 40 may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud storage, big data, and artificial intelligence platforms.
In other embodiments, the electronic device 100 may also include an AR (Augmented Reality) device, a VR (Virtual Reality) device, an MR (Mixed Reality) device, and the like. For example, the AR device may be a mobile phone or a tablet computer with an AR function, or may be AR glasses, which is not limited herein.
In some embodiments, the server 40 may communicate with the smart phone 10, the desktop computer 20, and the notebook computer 30 via the network 50. Network 50 may include various types of connections, such as wire, wireless communication links, or fiber optic cables, to name a few.
In addition, the shadow rendering method may be software running in a terminal or a server, such as an application having a shadow rendering function. In some possible implementations, the shadow rendering method may be implemented by a processor calling computer readable instructions stored in a memory.
Referring to fig. 2, a flowchart of a shadow rendering method provided by the embodiment of the present disclosure is shown, where the shadow rendering method includes the following steps S101 to S104:
s101, acquiring depth information of a shadow forming object, wherein the shadow forming object is an object generating a shadow.
The shadow-forming object may be, for example, a tree, a house, or any other opaque object in a scene (e.g., a game scene); no limitation is imposed here. In the embodiments of the present application, a tree is used as the example of a shadow-forming object.
The depth information, i.e., the screen distance, is the distance between the shadow-forming object and the screen. Since the object consists of many pixels, acquiring the depth information of the object means acquiring the depth information of each of its pixels.
S102, judging whether the depth information meets a preset condition or not; if yes, go to step S103; if not, go to step S104.
In some embodiments, if the depth information is greater than a preset threshold, it is determined to meet the preset condition; otherwise it does not. That is, shadow-forming objects in the scene are divided by screen distance: the shadows of objects near the screen are rendered with one method, and the shadows of objects far from the screen with another.
It is understood that the above embodiment relies on depth information alone; in other embodiments, whether the depth information meets the preset condition may also involve the Level of Detail (LOD). LOD technology allocates rendering resources according to the position and importance of an object model's nodes in the display environment, reducing the face count and detail of unimportant objects to achieve efficient rendering.
For example, in game development, LOD is a common way to reduce face count (face count and vertex count are important measures of GPU load); its primary objective is to balance game content against performance, keeping the content as rich as possible while preserving runtime efficiency. LOD determines the rendering resources allocated to objects in the game frame according to the model's position and importance: the higher the LOD level (LOD0, LOD1, LOD2, ...), the fewer the details, faces, and vertices of the model, yielding more efficient rendering. Generally, distant views (low in detail) in a game frame use a high LOD, and detail-rich near views use a low LOD. Thus the LOD level is related to the depth information.
Therefore, in other embodiments, determining whether the depth information meets the preset condition may include: determining the level of detail (LOD) of the shadow-forming object based on the depth information; and determining, based on the depth information and the LOD, whether the preset condition is met. For example, if the current depth information exceeds the preset threshold, it is further checked whether the object's LOD level exceeds a preset LOD level (e.g., LOD greater than 2); only then is the current depth information judged to meet the preset condition.
That is, in this embodiment the decision considers not only the depth information but also the LOD related to it, so that the selected rendering method better matches the current depth information, further improving rendering precision and rendering efficiency.
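A combined check of this kind can be sketched as follows, with an invented depth-to-LOD schedule standing in for the engine's actual LOD assignment; all thresholds are illustrative:

```python
def lod_for_depth(depth: float) -> int:
    """Hypothetical LOD schedule: farther objects get a higher (coarser) LOD."""
    if depth < 100:
        return 0
    if depth < 300:
        return 1
    if depth < 600:
        return 2
    return 3

def meets_condition(depth: float, depth_threshold: float = 300.0,
                    lod_threshold: int = 2) -> bool:
    """Condition from the text: depth above the threshold AND LOD level
    above the preset LOD level (e.g. LOD greater than 2)."""
    return depth > depth_threshold and lod_for_depth(depth) > lod_threshold
```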
S103, rendering the shadow generated by the shadow forming object by adopting a first rendering method to obtain a first rendered shadow.
It can be understood that if the depth information meets the preset condition, the shadow to be rendered is far from the screen, and the first rendering method, matched to this case, can be used. In some embodiments the rendering precision of the first rendering method is lower than that of the second, which preserves the rendering effect of near objects while improving rendering efficiency.
S104, rendering the shadow generated by the shadow forming object by adopting a second rendering method to obtain a second rendered shadow; the second rendering method is different from the first rendering method.
It can be understood that if the depth information does not meet the preset condition, the shadow to be rendered is close to the screen; in that case, to preserve the rendering effect, a second rendering method with higher rendering precision is used. The second rendering method may be a common shadow map algorithm, a patch (quad) shadow method, rendering the shadow to an additional RT (RenderTexture), or the like.
For the step S103, when the shadow generated by the shadow object is rendered by the first rendering method to obtain a first rendered shadow, the method may include the following steps S1031 to S1035:
S1031, acquiring current time information and a preset basic shadow model; the base shadow model is used to indicate shadows created by the shadow object.
In three-dimensional scene rendering, a realistic illumination effect must be simulated to render shadows, which increases the realism and depth of the scene and enriches its expressiveness. In an outdoor natural scene the light source is the sun or the moon, so the illumination effect, and with it the shadow effect, varies over time. To give the rendered shadows a dynamic, time-varying effect, the current time information is therefore acquired first. In some embodiments, the current time information may be obtained from the system clock (calendar date and time) of the electronic device, or in response to user input; no limitation is imposed here.
In addition, for scenes that need to render a large number of identical objects (such as a jungle), GPU Instancing can be used to render the shadows in large batches; GPU Instancing reduces draw calls when rendering many identical objects. A constraint of this technique is that the shadows of all shadow forming objects must share the same model. Therefore, in this embodiment, a common basic shadow model can be defined for the shadow forming objects to be rendered and invoked directly during rendering, improving rendering efficiency. The basic shadow model can be regarded as a model of the spatial distribution of each shadow forming object's shadow.
The determination of the basic shadow model is described in detail below with reference to fig. 4, 5, and 6. Fig. 4 shows the relationship between the shadow and the light source. Neglecting the infinitely long projection produced when the light source (sun or moon) approaches the horizontal direction, the shadow Y of the shadow forming object is, in the horizontal plane, an ellipse that has been rotated and scaled; and because the ground surface is not necessarily flat, the shadow Y is also distributed over a certain range in the vertical direction. Thus, as shown in fig. 5, the abstracted spatial shadow rendering model is a deformed cylinder. This cylinder is then further abstracted into a unitized standard basic shadow model, which in some embodiments may be a standard cylinder of diameter 1 and height 1, as shown in fig. 6.
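As an illustration of the standard basic shadow model of fig. 6 (a cylinder of diameter 1 and height 1), the two rings of end-face edge vertices later used for displacement can be generated as follows. This is a minimal Python sketch; the segment count, origin-centred placement, and y-up axis convention are assumptions not fixed by the disclosure.

```python
import math


def unit_cylinder(segments=16):
    """Edge vertices of a unit base shadow model: a cylinder of diameter 1
    and height 1 centred on the origin, returned as top and bottom rings.
    Illustrative mesh only; the segment count is an assumption."""
    top, bottom = [], []
    for i in range(segments):
        a = 2.0 * math.pi * i / segments
        x, z = 0.5 * math.cos(a), 0.5 * math.sin(a)  # radius 0.5 -> diameter 1
        top.append((x, 0.5, z))                      # top end face at y = +0.5
        bottom.append((x, -0.5, z))                  # bottom end face at y = -0.5
    return top, bottom
```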
S1032, determining light source parameter information based on the current time information.
For example, a correspondence table between time information and light source parameter information may be established in advance, and the light source parameter information may then be determined by looking up the current time information in this preset table. This improves the efficiency of acquiring light source parameter information.
In some embodiments, the time information and light source parameter information correspondence table may be obtained by:
(1) respectively acquiring a plurality of different time information and longitude and latitude information;
(2) determining light source parameter information corresponding to the time information based on the time information and the longitude and latitude information;
(3) and constructing a corresponding table of the time and light source parameter information according to the different time information and the light source parameter information respectively corresponding to the different time information.
For example, the direction coordinates of the sun and/or moon on the sky dome and the illumination intensity information may be calculated from a TOD (Time of Day) simulation formula together with the calendar date and time.
Referring to fig. 7, assume that the sun S moves on the dome surface of a unit sphere. The dome position R of the sun can be calculated from the time point and the latitude and longitude information, and since the sun is set to always face the dome centre O, the sun direction RO is obtained. By computing the sun position for every time point across 24 hours and mapping the time onto preset sun intensity and color curves, the variation of the sun's illumination intensity over 24 hours is obtained. In this way, basic data such as the light source position, illumination direction, illumination intensity, and illumination color, all of which affect the form and appearance of the shadow, can be derived. The moon direction and illumination values can be obtained in the same way, and the correspondence table between time and light source parameter information is thereby established.
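As a rough illustration of this TOD calculation, the dome position R and the direction RO can be approximated as follows. This is a greatly simplified Python sketch, not the disclosure's formula: solar declination is a parameter defaulting to zero (equinox), and the azimuth model is crude; a production TOD system would derive declination from the calendar date.

```python
import math


def sun_direction(hour, latitude_deg, declination_deg=0.0):
    """Direction RO from the sun's dome position R toward the dome centre O
    on a unit sphere (y up). Simplified: declination defaults to 0."""
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    h = math.radians(15.0 * (hour - 12.0))  # hour angle, 15 degrees per hour
    # Standard solar-elevation relation, here with a simplified azimuth.
    sin_elev = (math.sin(lat) * math.sin(dec)
                + math.cos(lat) * math.cos(dec) * math.cos(h))
    elev = math.asin(max(-1.0, min(1.0, sin_elev)))
    azim = h  # crude azimuth: east at sunrise, west at sunset
    # Dome position R on the unit sphere.
    rx = math.cos(elev) * math.sin(azim)
    ry = math.sin(elev)
    rz = math.cos(elev) * math.cos(azim)
    # RO points from the sun toward the centre O.
    return (-rx, -ry, -rz)
```

At noon on the equator the light points straight down; at 6 a.m. the sun sits on the horizon, which is exactly the regime where shadows stretch longest.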
And S1033, performing deformation processing on the preset basic shadow model based on the light source parameter information to obtain a spatial shadow model.
Referring to fig. 8, in some embodiments, the light source parameter information includes a light source position and an illumination direction, and the transforming process performed on the preset basic shadow model based on the light source parameter information to obtain the spatial shadow model may include the following steps S10331 to S10333:
s10331, a tensile deformation ratio is determined according to the light source position.
S10332, determining the displacement direction of the vertex of the basic shadow model according to the illumination direction; the vertices include a plurality of preset points on the edges of the end faces of the base shadow model.
S10333, based on the stretching deformation ratio and the displacement direction, deforming the basic shadow model to obtain the spatial shadow model.
Illustratively, referring to fig. 9, the tensile deformation ratio may be determined from the light source position (e.g., the height of the sun on the sky dome). The higher the light source, the smaller the tensile deformation ratio; the lower the light source (the closer to the horizon), the larger the ratio, which simulates the elongation of the shadow. In fig. 9, a is the top surface of the basic shadow model and b is the effect after stretching it. The displacement direction of the vertices of the basic shadow model is then determined from the illumination direction (the angle of the light source to the horizontal): vertices farther from the light source along the illumination direction are displaced outward (convex), while vertices closer to the light source are displaced inward (concave). In fig. 9, c shows the effect after processing the vertices; the final result is a cylinder deformed into a horizontal pear shape.
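Steps S10331 to S10333 can be sketched as follows. This is illustrative Python only: the reciprocal ratio rule, the clamping guard min_height, the cap max_ratio, and the projection-based displacement rule are hypothetical choices, since the disclosure states only the qualitative behaviour (higher sun, smaller ratio; far side convex, near side concave).

```python
def stretch_ratio(sun_height, min_height=0.05, max_ratio=20.0):
    """Tensile deformation ratio from the light source height above the
    horizon (sun_height in (0, 1], e.g. the y of the dome position).
    Higher sun -> smaller ratio; near the horizon -> larger ratio,
    capped so the shadow never becomes infinitely long."""
    h = max(sun_height, min_height)
    return min(1.0 / h, max_ratio)


def displace_vertex(vertex, light_dir_xz, ratio):
    """Displace one end-face edge vertex of the base model. The component
    aligned with the horizontal light direction is scaled: far-side
    vertices (positive alignment) bulge outward (convex), near-side
    vertices are pulled inward (concave), giving the pear-shaped profile."""
    x, y, z = vertex
    dot = x * light_dir_xz[0] + z * light_dir_xz[1]  # signed alignment
    scale = ratio if dot > 0 else 1.0 / ratio
    return (x + dot * (scale - 1.0) * light_dir_xz[0],
            y,
            z + dot * (scale - 1.0) * light_dir_xz[1])
```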
S1034, performing coordinate conversion on the space shadow model to obtain a target shadow map.
For example, referring to fig. 10, in some embodiments, when the spatial shadow model is subjected to coordinate transformation to obtain the target shadow map, the following steps S10341 to S10343 may be included:
S10341, reading the depth information of each pixel point of the spatial shadow model in the screen space.
S10342, converting the depth information of each pixel point into position coordinates in the actual space.
S10343, rotating the horizontal coordinate in the position coordinate according to the included angle between the illumination direction and the forward direction of the horizontal direction to obtain the target shadow map.
For example, the depth of each pixel of the pear-shaped model obtained in step S10333 may first be read from the screen space and converted, pixel by pixel, into position coordinates in the final object space. Since a standard cylinder is used, the horizontal coordinate range (-0.5, 0.5) can be shifted directly to (0, 1) to serve as the original UV coordinates. The UV distribution in the 2D plane is then calculated: assume the horizontal shadow direction is (1, 0), i.e. the +X direction. Using the included angle delta between the light source's horizontal direction and the +X direction obtained in the previous step, the horizontal position coordinates are rotated by delta from their original direction, and the UV mapping is scaled according to the vertical height of the light source (the mapping scale can be controlled by a parameter), yielding the final sampling UV of the target shadow map.
Here, UV coordinates treat every image file as a two-dimensional plane, with U along the horizontal axis and V along the vertical axis; any pixel of the image can be located by this planar two-dimensional UV coordinate system.
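The coordinate conversion of steps S10341 to S10343 can be sketched for a single point already converted to horizontal position coordinates. This is a hedged Python sketch; the rotation convention and the single vertical_scale parameter (standing in for the parameter-controlled mapping scale) are assumptions.

```python
import math


def shadow_uv(x, z, light_yaw_rad, vertical_scale=1.0):
    """Sampling UV for the target shadow map from a horizontal position
    (x, z) in (-0.5, 0.5). The coordinates are rotated by the angle delta
    between the light's horizontal direction and +X, scaled, then shifted
    from (-0.5, 0.5) to (0, 1)."""
    c, s = math.cos(-light_yaw_rad), math.sin(-light_yaw_rad)
    rx = x * c - z * s  # rotate so the shadow extends along +X
    rz = x * s + z * c
    u = rx * vertical_scale + 0.5  # remap (-0.5, 0.5) -> (0, 1)
    v = rz * vertical_scale + 0.5
    return u, v
```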
S1035, rendering the shadow generated by the shadow object based on the target shadow map to obtain the first rendering shadow.
In some embodiments, the light source parameters further include illumination intensity, light source color, and ambient color; rendering the shadow generated by the shadow forming object based on the target shadow map then comprises rendering it based on the target shadow map together with the illumination intensity, light source color, and ambient color. The rendered shadow thus better matches the current natural environment, making the rendered scene more realistic and vivid and improving user experience.
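The disclosure does not specify how the sampled shadow is combined with the illumination intensity, light source color, and ambient color. One plausible blend, shown purely as an assumption, is a linear interpolation between the lit surface color and an ambient-tinted shadowed color:

```python
def shade_shadow(base_color, shadow_strength, light_intensity,
                 light_color, ambient_color):
    """Blend the sampled shadow into the surface colour (RGB tuples in
    [0, 1]). shadow_strength is the value sampled from the target shadow
    map; the ambient colour keeps shadowed areas from going fully black.
    Hypothetical blend, not the disclosure's formula."""
    out = []
    for base, lc, ac in zip(base_color, light_color, ambient_color):
        lit = base * lc * light_intensity  # fully lit contribution
        shadowed = base * ac               # ambient-only contribution
        # Linear interpolation between lit and shadowed by shadow strength.
        out.append(lit * (1.0 - shadow_strength) + shadowed * shadow_strength)
    return tuple(out)
```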
It should be noted that, because the cylindrical decal shadow (the first rendering method) has low precision, it is better suited to distant views; a conventional shadow map (the second rendering method) can therefore be used at close range, switching to the cylindrical decal shadow beyond a certain distance. The result is vegetation shadows that are high precision up close while supporting an extremely long viewing distance. In addition, the scheme can be applied to small and medium regular model shadows that must be rendered in large quantities, such as the dynamic shadows of distant houses.
Fig. 11 is a schematic view of a rendered scene provided in an embodiment of the present application. As shown in fig. 11, the scene contains a large number of identical shadow forming objects (trees) A. Based on the shadow rendering method, trees far from the screen are rendered with the first rendering method to obtain shadow B1, and trees near the screen are rendered with the second rendering method to obtain shadow B2; that is, the shadows of trees with different depth information are rendered by different methods, which guarantees the rendering quality of the scene while improving rendering efficiency. Moreover, because the illumination parameters change over time and the shadows change with them, a dynamic, time-varying vegetation shadow effect is achieved, solving the difficulty that the common dynamic shadow map algorithm has in displaying ultra-long view distances on mobile platforms.
It will be understood by those skilled in the art that, in the method of the present invention, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation; the specific execution order of the steps should be determined by their function and possible internal logic.
Based on the same technical concept, the embodiment of the present disclosure further provides a shadow rendering apparatus corresponding to the shadow rendering method. Since the principle by which the apparatus solves the problem is similar to that of the shadow rendering method, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 12, there is shown a schematic diagram of a shadow rendering apparatus 500 according to an embodiment of the present disclosure, the apparatus including:
an obtaining module 501, configured to obtain depth information of a shadow object, where the shadow object is an object that generates a shadow;
a determining module 502, configured to determine whether the depth information meets a preset condition;
a first rendering module 503, configured to render, by using a first rendering method, a shadow generated by the shadow object to obtain a first rendered shadow when the depth information meets the preset condition;
a second rendering module 504, configured to render, by using a second rendering method, a shadow generated by the shadow object to obtain a second rendered shadow when the depth information does not meet the preset condition; the second rendering method is different from the first rendering method.
In a possible implementation manner, in a case that the depth information is greater than a preset threshold, it is determined that the depth information meets the preset condition; the rendering precision of the first rendering method is lower than that of the second rendering method.
In a possible implementation manner, the determining module 502 is specifically configured to:
determining the level of detail (LOD) of the shadow forming object based on the depth information;
and judging whether the depth information meets the preset condition based on the depth information and the LOD.
In a possible implementation, the first rendering module 503 is specifically configured to:
acquiring current time information and a preset basic shadow model; the base shadow model is used to indicate shadows created by the shadow object;
determining light source parameter information based on the current time information;
performing deformation processing on a preset basic shadow model based on the light source parameter information to obtain a space shadow model;
performing coordinate conversion on the space shadow model to obtain a target shadow map;
rendering the shadow generated by the shadow object based on the target shadow map to obtain the first rendering shadow.
In one possible embodiment, the light source is the sun or the moon; the first rendering module 503 is specifically configured to:
and determining the light source parameter information based on the current time information and a preset time information and light source parameter information corresponding table.
In a possible embodiment, the light source parameter information includes a light source position and an illumination direction; the first rendering module 503 is specifically configured to:
determining a tensile deformation ratio according to the light source position;
determining the displacement direction of the vertex of the basic shadow model according to the illumination direction; the vertices comprise a plurality of preset points on the edge of the end face of the base shadow model;
and carrying out deformation processing on the basic shadow model based on the stretching deformation ratio and the displacement direction to obtain the space shadow model.
In a possible embodiment, the higher the light source position, the smaller the tensile deformation ratio, and the lower the light source position, the larger the tensile deformation ratio; and/or the displacement direction of vertices farther from the light source along the illumination direction is convex, and the displacement direction of vertices closer to the light source is concave.
In a possible implementation, the first rendering module 503 is specifically configured to:
reading depth information of each pixel point of the space shadow model in a screen space;
converting the depth information of each pixel point into position coordinates in an actual space;
and rotating the horizontal coordinate in the position coordinate according to the included angle between the illumination direction and the forward direction of the horizontal direction to obtain the target shadow map.
In a possible embodiment, the light source parameters further include illumination intensity, light source color, and ambient color; the first rendering module 503 is specifically configured to:
rendering the shadow generated by the shadow object based on the target shadow map and the illumination intensity, light source color and environment color.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Based on the same technical concept, the embodiment of the disclosure also provides an electronic device. Referring to fig. 13, a schematic structural diagram of an electronic device 700 provided in the embodiment of the present disclosure includes a processor 701, a memory 702, and a bus 703. The memory 702 is used for storing execution instructions and includes a memory 7021 and an external memory 7022; the memory 7021 is also referred to as an internal memory and temporarily stores operation data in the processor 701 and data exchanged with an external memory 7022 such as a hard disk, and the processor 701 exchanges data with the external memory 7022 via the memory 7021.
In this embodiment, the memory 702 is specifically configured to store application program codes for executing the scheme of the present application, and is controlled by the processor 701 to execute. That is, when the electronic device 700 is operated, the processor 701 and the memory 702 communicate with each other through the bus 703, so that the processor 701 executes the application program code stored in the memory 702, thereby executing the method described in any of the foregoing embodiments.
The Memory 702 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 701 may be an integrated circuit chip having signal processing capabilities. The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 700. In other embodiments of the present application, the electronic device 700 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program performs the steps of the shadow rendering method in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute steps of the shadow rendering method in the foregoing method embodiments, which may be referred to specifically for the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented by hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used for illustrating the technical solutions of the present disclosure and not for limiting the same, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive of the technical solutions described in the foregoing embodiments or equivalent technical features thereof within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure, and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. A method of shadow rendering, comprising:
acquiring depth information of a shadow forming object, wherein the shadow forming object is an object generating a shadow;
judging whether the depth information meets a preset condition or not;
under the condition that the depth information meets the preset condition, rendering the shadow generated by the shadow forming object by adopting a first rendering method to obtain a first rendered shadow;
under the condition that the depth information does not meet the preset condition, rendering the shadow generated by the shadow forming object by adopting a second rendering method to obtain a second rendered shadow; the second rendering method is different from the first rendering method.
2. The method according to claim 1, wherein in case the depth information is greater than a preset threshold, determining that the depth information meets the preset condition; the rendering precision of the first rendering method is smaller than that of the second rendering method.
3. The method of claim 1, wherein the determining whether the depth information meets a preset condition comprises:
determining the level of detail (LOD) of the shadow forming object based on the depth information;
and judging whether the depth information meets the preset condition based on the depth information and the LOD.
4. The method of claim 1, wherein the rendering the shadow generated by the shadow object by using the first rendering method to obtain a first rendered shadow comprises:
acquiring current time information and a preset basic shadow model; the base shadow model is used to indicate shadows created by the shadow object;
determining light source parameter information based on the current time information;
performing deformation processing on a preset basic shadow model based on the light source parameter information to obtain a space shadow model;
performing coordinate conversion on the space shadow model to obtain a target shadow map;
rendering the shadow generated by the shadow object based on the target shadow map to obtain the first rendering shadow.
5. The method of claim 4, wherein the light source is the sun or moon; the determining light source parameter information based on the current time information includes:
and determining the light source parameter information based on the current time information and a preset time information and light source parameter information corresponding table.
6. The method of claim 4, wherein the light source parameter information comprises a light source position and an illumination direction; the method for performing deformation processing on a preset basic shadow model based on the light source parameter information to obtain a spatial shadow model comprises the following steps:
determining a tensile deformation ratio according to the light source position;
determining the displacement direction of the vertex of the basic shadow model according to the illumination direction; the vertices comprise a plurality of preset points on the edge of the end face of the base shadow model;
and carrying out deformation processing on the basic shadow model based on the stretching deformation ratio and the displacement direction to obtain the space shadow model.
7. The method according to claim 6, wherein the higher the light source position, the smaller the tensile deformation ratio, and the lower the light source position, the larger the tensile deformation ratio; and/or the displacement direction of vertices farther from the light source along the illumination direction is convex, and the displacement direction of vertices closer to the light source is concave.
8. The method of claim 4, wherein the coordinate transforming the spatial shadow model to obtain a target shadow map comprises:
reading depth information of each pixel point of the space shadow model in a screen space;
converting the depth information of each pixel point into position coordinates in an actual space;
and rotating the horizontal coordinate in the position coordinate according to the included angle between the illumination direction and the forward direction of the horizontal direction to obtain the target shadow map.
9. The method of claim 4, wherein the light source parameters further include illumination intensity, light source color, and ambient color; the rendering the shadow produced by the shadow object based on the target shadow map comprises:
rendering the shadow generated by the shadow object based on the target shadow map and the illumination intensity, light source color and environment color.
10. A shadow rendering device, comprising:
the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring the depth information of a shadow forming object, and the shadow forming object refers to an object generating a shadow;
the judging module is used for judging whether the depth information meets a preset condition or not;
the first rendering module is used for rendering the shadow generated by the shadow forming object by adopting a first rendering method under the condition that the depth information meets the preset condition to obtain a first rendered shadow;
the second rendering module is used for rendering the shadow generated by the shadow forming object by adopting a second rendering method under the condition that the depth information does not meet the preset condition to obtain a second rendered shadow; the second rendering method is different from the first rendering method.
11. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is operating, the machine-readable instructions when executed by the processor performing the shadow rendering method of any of claims 1-9.
12. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the shadow rendering method of any of claims 1-9.
CN202110637669.1A 2021-06-08 2021-06-08 Shadow rendering method and device and electronic equipment Pending CN113240787A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110637669.1A CN113240787A (en) 2021-06-08 2021-06-08 Shadow rendering method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113240787A true CN113240787A (en) 2021-08-10

Family

ID=77137272

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110637669.1A Pending CN113240787A (en) 2021-06-08 2021-06-08 Shadow rendering method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113240787A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117437340A (en) * 2023-11-13 2024-01-23 北京石境科技有限公司 Batch rendering method and system for factory equipment models
CN117437340B (en) * 2023-11-13 2024-07-16 北京石境科技有限公司 Batch rendering method and system for factory equipment models

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101136108A (en) * 2007-09-26 2008-03-05 腾讯科技(深圳)有限公司 Shadows plotting method and rendering device thereof
CN108010118A (en) * 2017-11-28 2018-05-08 网易(杭州)网络有限公司 Virtual objects processing method, virtual objects processing unit, medium and computing device
CN111105491A (en) * 2019-11-25 2020-05-05 腾讯科技(深圳)有限公司 Scene rendering method and device, computer readable storage medium and computer equipment
CN111988598A (en) * 2020-09-09 2020-11-24 江苏普旭软件信息技术有限公司 Visual image generation method based on far and near view layered rendering

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YANG CHAO; ZHONG HUIXIANG; LI HUIYING; LI WENHUI: "Scene Shadow Rendering Method Based on Shadow_Volume", Journal of Jilin University (Science Edition), vol. 52, no. 2, 26 March 2014 (2014-03-26), pages 291 - 294 *

Similar Documents

Publication Publication Date Title
CN109603155B (en) Method and device for acquiring merged map, storage medium, processor and terminal
CN111369655A (en) Rendering method and device and terminal equipment
CN112884873B (en) Method, device, equipment and medium for rendering virtual object in virtual environment
CN112370784A (en) Virtual scene display method, device, equipment and storage medium
WO2022111609A1 (en) Grid encoding method and computer system
CN115512025A (en) Method and device for detecting model rendering performance, electronic device and storage medium
CN113077541B (en) Virtual sky picture rendering method and related equipment
CN117333637B (en) Modeling and rendering method, device and equipment for three-dimensional scene
CN110838167B (en) Model rendering method, device and storage medium
CN117218273A (en) Image rendering method and device
US10754498B2 (en) Hybrid image rendering system
CN115761105A (en) Illumination rendering method and device, electronic equipment and storage medium
CN113240787A (en) Shadow rendering method and device and electronic equipment
CN110766779A (en) Method and device for generating lens halo
CN113592999B (en) Rendering method of virtual luminous body and related equipment
CN114581592A (en) Highlight rendering method and device, computer equipment and storage medium
CN115526976A (en) Virtual scene rendering method and device, storage medium and electronic equipment
CN115845369A (en) Cartoon style rendering method and device, electronic equipment and storage medium
CN115758502A (en) Carving processing method and device of spherical model and computer equipment
CN115965735A (en) Texture map generation method and device
CN109166176A (en) The generation method and device of three-dimensional face images
CN115063330A (en) Hair rendering method and device, electronic equipment and storage medium
CN111862338B (en) Display method and device for simulated eyeglass wearing image
CN114529648A (en) Model display method, device, apparatus, electronic device and storage medium
CN114529656A (en) Shadow map generation method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination