CN114219885A - Real-time shadow rendering method and device for mobile terminal

Info

Publication number: CN114219885A
Application number: CN202111513153.2A
Authority: CN (China)
Prior art keywords: shadow, rendering, sub, bounding box, space
Other languages: Chinese (zh)
Inventors: 扈红柯, 李建良, 郭子文, 何雨泉
Assignees (current and original): Beijing Yunyou Interactive Network Technology Co., Ltd.; Online Tuyoo Beijing Technology Co., Ltd.
Filing date: 2021-12-13
Publication date: 2022-03-22
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/04: Texture mapping
    • G06T15/50: Lighting effects
    • G06T15/60: Shadow generation


Abstract

The method supports local close-ups of objects in a three-dimensional scene. By traversing a specific target object, it obtains the sub-geometries involved in self-shadowing and generates their minimum bounding box, thereby culling irrelevant geometry in the scene and producing a compact projection matrix. This reduces the share of the shadow map occupied by empty space, improves shadow rendering efficiency, and suits 3D applications on mobile terminals.

Description

Real-time shadow rendering method and device for mobile terminal
Technical Field
The present application relates to the field of computer graphics rendering technologies, and in particular, to a real-time shadow rendering method and apparatus for a mobile terminal, a computing device, and a computer-readable storage medium.
Background
In 3D applications on a mobile terminal, a specific target such as a game character shown in close-up must be rendered with high-quality real-time shadow effects. The prior art generally uses shadow mapping and its quality-enhanced variant, cascaded shadow mapping, yet the rendered shadows often suffer from aliasing, blurring, and distortion. Moreover, shadow map resolution on a mobile terminal cannot be very high, which further aggravates the shadow quality problem in such situations. A shadow generation technique is therefore needed that achieves high quality with modest hardware requirements, so as to suit mobile application scenarios.
Disclosure of Invention
In view of the above, embodiments of the present application provide a real-time shadow rendering method and apparatus for a mobile terminal, a computing device and a computer-readable storage medium, so as to solve technical defects in the prior art.
According to a first aspect of embodiments of the present application, there is provided a real-time shadow rendering method for a mobile terminal, including:
determining a target object to be rendered;
traversing the target object to obtain the sub-geometries involved in self-shadowing, and generating a bounding box for each sub-geometry according to its spatial and size information;
merging the bounding boxes of the sub-geometries to form a minimum bounding box;
calculating, according to the minimum bounding box and the light position and direction, a transformation matrix of the shadow depth map required for casting the shadow, and rendering a high-precision depth texture of the shadow;
and rendering a shadow effect according to the transformation matrix of the shadow depth map and the high-precision depth texture.
According to a second aspect of embodiments of the present application, there is provided a real-time shadow rendering apparatus for a mobile terminal, including:
a determination module for determining a target object to be rendered;
a generation module for traversing the target object, obtaining the sub-geometries involved in self-shadowing, and generating a bounding box for each sub-geometry according to its spatial and size information;
a merging module for merging the bounding boxes of the sub-geometries to form a minimum bounding box;
a computing module for calculating, according to the minimum bounding box and the light position and direction, a transformation matrix of the shadow depth map required for casting the shadow, and rendering a high-precision depth texture of the shadow;
and a rendering module for rendering the shadow effect according to the transformation matrix of the shadow depth map and the high-precision depth texture.
According to a third aspect of embodiments herein, there is provided a computing device comprising a memory, a processor and computer instructions stored on the memory and executable on the processor, the processor implementing the steps of the real-time shadow rendering method for a mobile terminal when executing the instructions.
According to a fourth aspect of embodiments herein, there is provided a computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the real-time shadow rendering method for a mobile terminal.
In the embodiments of the present application, when the target is rendered, the target object is traversed to obtain the sub-geometries involved in self-shadowing, and their minimum bounding box is generated in real time. This shrinks the projection matrix needed when generating the depth texture, reduces the share of the shadow map occupied by empty space, and improves shadow rendering efficiency, achieving a high-quality self-shadow rendering effect for target objects in three-dimensional application scenes on mobile terminals.
Drawings
FIG. 1 is a block diagram of a computing device provided by an embodiment of the present application;
FIG. 2 is a flowchart of a real-time shadow rendering method for a mobile terminal according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a close-up of a target object in a three-dimensional scene of a mobile terminal according to an embodiment of the present application;
FIG. 4 is a schematic diagram of self-shadow on a target object according to an embodiment of the present application;
FIG. 5 is a schematic diagram of forming a minimum bounding box for the sub-geometries of a target object according to an embodiment of the present application;
FIG. 6 is a schematic diagram of merging multiple target objects according to an embodiment of the present application;
FIG. 7a is a schematic illustration of a shadow rendered on a target object according to the prior art;
FIG. 7b is a schematic diagram of the self-shadow rendered on a target object by the real-time shadow rendering method according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a real-time shadow rendering apparatus for a mobile terminal according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth to provide a thorough understanding of the present application. The application can, however, be implemented in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from its spirit; the application is therefore not limited to the specific implementations disclosed below.
The terminology used in the one or more embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the present application. As used in one or more embodiments of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used herein in one or more embodiments of the present application to describe various information, these information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first aspect may be termed a second aspect, and, similarly, a second aspect may be termed a first aspect, without departing from the scope of one or more embodiments of the present application. The word "if," as used herein, may be interpreted as "responsive to a determination," depending on the context.
In the present application, a real-time shadow rendering method and apparatus for a mobile terminal, a computing device and a computer-readable storage medium are provided, which are described in detail in the following embodiments one by one.
FIG. 1 shows a block diagram of a computing device 100 according to an embodiment of the present application. The components of the computing device 100 include, but are not limited to, a memory 110 and a processor 120. The processor 120 is coupled to the memory 110 via a bus 130, and a database 150 is used to store data.
Computing device 100 also includes an access device 140 that enables computing device 100 to communicate via one or more networks 160. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the Internet. The access device 140 may include one or more of any type of wired or wireless network interface (e.g., a Network Interface Card (NIC)), such as an IEEE 802.11 Wireless Local Area Network (WLAN) interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the present application, the above-mentioned components of the computing device 100 and other components not shown in fig. 1 may also be connected to each other, for example, by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 1 is for purposes of example only and is not limiting as to the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 100 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), a mobile phone (e.g., smartphone), a wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 100 may also be a mobile or stationary server.
In the prior art, mainstream game engines generate scene shadows by rendering a shadow map in light space; limited by hardware resources and shadow map precision on mobile terminals, the rendered shadow quality is not high. Although cascaded shadows can be generated by further partitioning the scene space, the cascaded shadow technique is unsuitable for local self-shadows and for situations demanding very high shadow precision, and cannot meet the quality requirement. Cascaded Shadow Mapping (CSM) is a technique for improving shadow map precision and an effective way to render shadows for large scenes: regions at different distances from the viewpoint require shadow maps of different resolutions (the nearer the region, the higher the resolution needed), so the light space is divided into several sub-regions, and a shadow map is rendered for each sub-region; a standard split computation is sketched below. In a local close-up, however, shadows rendered with the cascaded shadow technique still often suffer from aliasing, blurring, and distortion.
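For background only (this split computation is standard CSM practice, not part of the method of the present application), a minimal C# sketch of how a cascaded scheme typically chooses its split distances, blending uniform and logarithmic partitions; the lambda parameter is illustrative:

using UnityEngine;

public static class CascadeSplits
{
    // Returns the far distance of each cascade between the near and far planes.
    // lambda = 0 gives uniform splits, lambda = 1 gives logarithmic splits.
    // Assumes near > 0.
    public static float[] Compute(float near, float far, int cascadeCount, float lambda)
    {
        var splits = new float[cascadeCount];
        for (int i = 1; i <= cascadeCount; i++)
        {
            float t = i / (float)cascadeCount;
            float logSplit = near * Mathf.Pow(far / near, t); // logarithmic term
            float uniSplit = near + (far - near) * t;         // uniform term
            splits[i - 1] = Mathf.Lerp(uniSplit, logSplit, lambda);
        }
        return splits;
    }
}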
In the embodiment of the present application, in order to solve the above problem, a real-time shadow rendering method and apparatus for a mobile terminal, a computing device, and a computer-readable storage medium are provided. Wherein the processor 120 may perform the steps of the real-time shadow rendering method for a mobile terminal shown in fig. 2. A flow chart of a real-time shadow rendering method for a mobile terminal is shown in fig. 2, comprising steps 202 to 210.
Step 202: a target object to be rendered is determined.
In a specific embodiment, certain objects in a game scene, such as game characters, props, and scenery, are often shown in local close-up, and a high-quality shadow rendering effect is usually required for an excellent user experience. As shown in fig. 3, when a game character is in close-up, elements such as the character's clothes, accessories, and hair produce shadow effects on the body. In this step, at the start of rendering, the target object to be shown in local close-up is determined, such as one or more game characters, pieces of equipment, or the like.
Step 204: and traversing the target object to obtain sub-geometric bodies generated from the shadow, and generating respective bounding boxes according to the space and size information of each sub-geometric body.
In a specific embodiment, a target object in the scene is composed of several sub-geometries; as shown in fig. 4, the target object T is composed of sub-geometries a, b, and c. Those skilled in the art should understand that a target object in practical applications is usually composed of many complex sub-geometries; the example in fig. 4 merely illustrates the implementation of this embodiment and does not limit target objects to compositions of simple geometries.
The target object is traversed to obtain the sub-geometries involved in self-shadowing, where self-shadow is the shadow effect produced by the target object occluding itself, as shown in fig. 4.
In a specific embodiment, the sub-geometries involved in self-shadowing include both the sub-geometries that occlude the light and the sub-geometries onto which the resulting shadow is cast. As shown in fig. 4, in the target object T the sub-geometry occluding the light is geometry a, and the sub-geometry onto which the shadow is cast is geometry b. In this embodiment, the sub-geometries obtained by traversing the target object T therefore include geometries a and b.
Further, a bounding box is generated for each sub-geometry from the information obtained during the traversal. As shown in fig. 5, a hexahedral bounding box is computed for each sub-geometry with the AABB (axis-aligned bounding box) algorithm; a sketch of this collection step follows. Those skilled in the art will appreciate that there are many kinds of bounding box algorithms; the AABB bounding box in fig. 5 is merely an example, not an exhaustive list, and is not described in detail here.
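A minimal C# sketch of the traversal and collection (assuming a Unity-style scene where each sub-geometry carries a Renderer component; the class and method names are illustrative, not taken from the patent):

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;

public static class SubGeometryBounds
{
    // Traverse the target hierarchy and record the world-space AABB of each
    // renderer that participates in shadow casting.
    public static List<Bounds> Collect(Transform target)
    {
        var result = new List<Bounds>();
        foreach (var renderer in target.GetComponentsInChildren<Renderer>())
        {
            if (renderer.shadowCastingMode != ShadowCastingMode.Off)
                result.Add(renderer.bounds); // Renderer.bounds is already an AABB
        }
        return result;
    }
}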
In another specific embodiment, when there are multiple target objects to be shown in local close-up, the target objects are merged, as shown in fig. 6.
Specifically, the multiple target objects are combined using their minimum bounding boxes, and the merge converts the shadows cast between objects into self-shadow on a single combined target.
Step 206: the bounding boxes of the sub-geometries are merged to form a minimum bounding box.
In a specific embodiment, the bounding boxes of the sub-geometries are merged in real time during rendering to form the minimum bounding box. As shown in fig. 5, with AABB bounding boxes the resulting minimum bounding box is an axis-aligned cuboid. Those skilled in the art should further understand that, depending on the bounding volume algorithm applied to the sub-geometries, the minimum bounding volume may instead be a sphere, a capsule, or another polyhedron; details are not repeated here.
In this step, the resulting minimum bounding box filters out most of the geometry on the target object that is not involved in self-shadowing, improving rendering efficiency; a sketch of the merge follows.
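A minimal C# sketch of the merge (assuming Unity's Bounds type, whose Encapsulate method grows an AABB to contain another; the names are illustrative):

using System.Collections.Generic;
using UnityEngine;

public static class BoundsMerge
{
    // Merge the per-sub-geometry boxes into one minimum bounding box.
    // Assumes the list holds at least one box.
    public static Bounds Merge(IReadOnlyList<Bounds> boxes)
    {
        Bounds merged = boxes[0];
        for (int i = 1; i < boxes.Count; i++)
            merged.Encapsulate(boxes[i]); // expand to enclose each sub-box
        return merged;
    }
}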
Step 208: and calculating a transformation matrix of a shadow depth map required by the shadow casting according to the minimum bounding box, the light position and the direction, and rendering to obtain the high-precision depth texture of the shadow.
In a specific embodiment, the calculation of the transformation matrix of the shadow depth map mainly comprises the following steps:
step 2082: the depth map spatial transform matrix m _ shadaesmaatrix is set.
m_ShadowSpaceMatrix.SetRow(0,new Vector4(0.5f,0.0f,0.0f,0.5f));
m_ShadowSpaceMatrix.SetRow(1,new Vector4(0.0f,0.5f,0.0f,0.5f));
m_ShadowSpaceMatrix.SetRow(2,new Vector4(0.0f,0.0f,0.5f,0.5f+db));
m_ShadowSpaceMatrix.SetRow(3,new Vector4(0.0f,0.0f,0.0f,1.0f));
The coordinates (u, v) used to sample the depth map lie in the [0, 1] interval, so the matrix maps the clip-space interval [-1, 1] to [0, 1] by computing x * 0.5 + 0.5, with the db term in the third row applying a depth offset (bias).
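Written out (with column-vector convention, x, y, z the light-space clip coordinates, and d_b the depth bias), the mapping is:

\begin{pmatrix} u \\ v \\ d \\ 1 \end{pmatrix}
=
\begin{pmatrix}
0.5 & 0 & 0 & 0.5 \\
0 & 0.5 & 0 & 0.5 \\
0 & 0 & 0.5 & 0.5 + d_b \\
0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix},
\qquad x, y, z \in [-1, 1] \;\Rightarrow\; u, v \in [0, 1].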
Step 2084: set the light-source space projection matrix m_ShadowProjMat.
First, obtain the world coordinates of each corner of the minimum bounding box:
var aabbBounds=GetAABBCorners(m_Bounds);
and converting the world coordinates of each node of the minimum bounding box into coordinates of an illumination space, and comparing to obtain the minimum and maximum values in the xyz three directions.
[The original publication shows the corresponding code as an image (Figure BDA0003406367090000091); it transforms each corner into light space and tracks the per-axis minima and maxima.]
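A minimal C# sketch of what this step likely computes (the original code is only available as an image; the implementation of GetAABBCorners and the use of the light camera's worldToCameraMatrix as the world-to-light-space transform are assumptions based on the surrounding text):

using UnityEngine;

public static class LightSpaceExtents
{
    // The eight world-space corners of an axis-aligned bounding box.
    public static Vector3[] GetAABBCorners(Bounds b)
    {
        Vector3 min = b.min, max = b.max;
        return new[]
        {
            new Vector3(min.x, min.y, min.z), new Vector3(max.x, min.y, min.z),
            new Vector3(min.x, max.y, min.z), new Vector3(max.x, max.y, min.z),
            new Vector3(min.x, min.y, max.z), new Vector3(max.x, min.y, max.z),
            new Vector3(min.x, max.y, max.z), new Vector3(max.x, max.y, max.z),
        };
    }

    // Transform each corner into light space and track per-axis minima/maxima.
    public static void MinMax(Vector3[] corners, Matrix4x4 worldToLight,
                              out Vector3 min, out Vector3 max)
    {
        min = new Vector3(float.MaxValue, float.MaxValue, float.MaxValue);
        max = new Vector3(float.MinValue, float.MinValue, float.MinValue);
        foreach (var c in corners)
        {
            Vector3 p = worldToLight.MultiplyPoint(c);
            min = Vector3.Min(min, p);
            max = Vector3.Max(max, p);
        }
    }
}

The xMin/xMax, yMin/yMax, and zMin/zMax values used below then correspond to the components of min and max.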
The extents along the x, y, and z directions are computed from these per-axis minima and maxima, with a boundScale coefficient applied to dynamically adjust the range over which the shadow is generated.
float xSize=(xMax-xMin)/2*boundScale;
float ySize=(yMax-yMin)/2*boundScale;
float zSize=(zMax-zMin);
float nearPlane=0.1f;
The parameters of the light-source camera, such as its orthographic size, near and far planes, and projection matrix, are then set.
camera.orthographicSize=ySize;
camera.nearClipPlane=nearPlane;
camera.farClipPlane=zSize+nearPlane;
camera.projectionMatrix=Matrix4x4.Ortho(-xSize,xSize,-ySize,ySize,nearPlane,zSize+nearPlane);
The world space position of the light source camera is calculated and set.
Vector3 cameraPosition=new Vector3((xMax+xMin)/2,Mathf.Lerp(yMin,yMax,yOffset),zMin-nearPlane);
cameraPosition=lightTransform.localToWorldMatrix.MultiplyPoint(cameraPosition); // light space -> world space
camera.transform.SetPositionAndRotation(cameraPosition,lightTransform.rotation);
A projection matrix of the light source space is obtained.
var m_ShadowProjMat=camera.projectionMatrix;
Step 2086: set the world-space-to-light-source-space matrix m_WorldToCameraMatrix.
m_WorldToCameraMatrix=camera.worldToCameraMatrix;
Step 2088: multiply the depth-map space transformation matrix, the light-source space projection matrix, and the world-space-to-light-source-space matrix to obtain the transformation matrix of the shadow depth map.
m_ShadowMatrix=m_ShadowSpaceMatrix*m_ShadowProjMat*m_WorldToCameraMatrix;
Furthermore, with the light-source camera configured as above (position, projection, and so on), the high-precision depth texture of the shadow is rendered directly.
Step 210: and rendering the shadow effect according to the transformation matrix of the shadow depth map and the high-precision depth texture.
In this step, the point coordinates of the world space are converted into the depth texture map space using the transformation matrix of the shadow depth map, and then the high-precision depth texture is sampled to calculate the shadow and rendered.
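A minimal HLSL sketch of this step (assuming URP-style shadow-sampling macros as in the hard-shadow example further below; the uniform name _ShadowMatrix, holding the m_ShadowMatrix computed in step 2088, is illustrative):

float4x4 _ShadowMatrix;
TEXTURE2D_SHADOW(ShadowMap);
SAMPLER_CMP(sampler_ShadowMap);

float SampleSelfShadow(float3 positionWS)
{
    // World space -> shadow depth-map space ([0,1] uv plus biased depth).
    float4 shadowCoord = mul(_ShadowMatrix, float4(positionWS, 1.0));
    shadowCoord.xyz /= shadowCoord.w; // w stays 1 for an orthographic light
    // Hardware comparison sampling: 1 when lit, 0 when in shadow.
    return SAMPLE_TEXTURE2D_SHADOW(ShadowMap, sampler_ShadowMap, shadowCoord.xyz);
}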
In a particular embodiment, a shadow filtering algorithm is used to implement soft shadows when rendering. In this embodiment, the sampling coordinates are offset over the shadow map by a Poisson-disk random distribution: 8 shadow values are sampled and averaged.
[The original publication shows the corresponding shader code as images (Figures BDA0003406367090000111 and BDA0003406367090000121); it averages 8 comparison samples taken at Poisson-disk offsets around the shadow coordinate.]
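A minimal HLSL sketch of such an 8-tap filter (the original shader is only available as an image; the Poisson-disk offsets and the texelSize parameter are illustrative, and ShadowMap/sampler_ShadowMap are declared as in the sketch above):

static const float2 poissonDisk[8] =
{
    float2(-0.326f, -0.406f), float2(-0.840f, -0.074f),
    float2(-0.696f,  0.457f), float2(-0.203f,  0.621f),
    float2( 0.962f, -0.195f), float2( 0.473f, -0.480f),
    float2( 0.519f,  0.767f), float2( 0.185f, -0.893f)
};

float SampleSoftShadow(float3 shadowCoord, float2 texelSize)
{
    float attenuation = 0.0;
    [unroll]
    for (int i = 0; i < 8; i++)
    {
        // Jitter the sampling coordinate by a Poisson-disk offset.
        float2 uv = shadowCoord.xy + poissonDisk[i] * texelSize;
        attenuation += SAMPLE_TEXTURE2D_SHADOW(ShadowMap, sampler_ShadowMap,
                                               float3(uv, shadowCoord.z));
    }
    return attenuation / 8.0; // average of the 8 comparison results
}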
The soft shadow produced in this way is softer and more realistic.
In another specific embodiment, the soft shadow rendering is achieved by sampling 4 shadow values and averaging.
[The original publication shows this variant's shader code as an image (Figure BDA0003406367090000131); it averages 4 Poisson-offset samples instead of 8.]
Furthermore, the method also supports hard shadow rendering: a single shadow value is sampled directly to render a hard shadow.
attenuation=SAMPLE_TEXTURE2D_SHADOW(ShadowMap,sampler_ShadowMap,shadowCoord.xyz);
The above embodiment of the real-time shadow rendering method for a mobile terminal supports local close-ups of objects in a three-dimensional scene. By traversing the specific target object, it obtains the sub-geometries involved in self-shadowing and generates their minimum bounding box, thereby culling irrelevant geometry in the scene, producing a compact projection matrix, and shrinking the projection matrix needed when generating the depth texture. This reduces the share of the shadow map occupied by empty space and improves shadow rendering efficiency, making the method suitable for 3D applications on mobile terminals. FIG. 7a shows a self-shadow effect rendered with the conventional CSM technique, and FIG. 7b shows the self-shadow effect rendered with the method of the embodiments of the present application; for local close-ups, the latter is significantly better.
Corresponding to the above method embodiment, the present application further provides an embodiment of a real-time shadow rendering apparatus for a mobile terminal, and fig. 8 shows a schematic structural diagram of a real-time shadow rendering apparatus for a mobile terminal according to an embodiment of the present application. As shown in fig. 8, the apparatus includes:
a determination module for determining a target object to be rendered;
a generation module for traversing the target object, obtaining the sub-geometries involved in self-shadowing, and generating a bounding box for each sub-geometry according to its spatial and size information;
a merging module for merging the bounding boxes of the sub-geometries to form a minimum bounding box;
a computing module for calculating, according to the minimum bounding box and the light position and direction, a transformation matrix of the shadow depth map required for casting the shadow, and rendering a high-precision depth texture of the shadow;
and a rendering module for rendering the shadow effect according to the transformation matrix of the shadow depth map and the high-precision depth texture.
The above is a schematic scheme of a real-time shadow rendering apparatus for a mobile terminal according to the embodiment. It should be noted that the technical solution of the real-time shadow rendering apparatus for a mobile terminal and the technical solution of the real-time shadow rendering method for a mobile terminal belong to the same concept, and details of the technical solution of the real-time shadow rendering apparatus for a mobile terminal, which are not described in detail, can be referred to the description of the technical solution of the real-time shadow rendering method for a mobile terminal.
There is also provided in an embodiment of the present application a computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, the processor implementing the steps of the real-time shadow rendering method for a mobile terminal when executing the instructions.
The above is an illustrative scheme of a computing device of the present embodiment. It should be noted that the technical solution of the computing device and the technical solution of the real-time shadow rendering method for a mobile terminal belong to the same concept, and details that are not described in detail in the technical solution of the computing device can be referred to the description of the technical solution of the real-time shadow rendering method for a mobile terminal.
An embodiment of the present application also provides a computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the real-time shadow rendering method for a mobile terminal as described above.
The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium belongs to the same concept as the technical solution of the real-time shadow rendering method for the mobile terminal, and details that are not described in detail in the technical solution of the storage medium can be referred to the description of the technical solution of the real-time shadow rendering method for the mobile terminal.
The foregoing description of specific embodiments of the present application has been presented. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include any entity or device capable of carrying the computer program code: a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media exclude electrical carrier signals and telecommunication signals.
It should be noted that, for the sake of simplicity, the above-mentioned method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are intended only to aid in the explanation of the application. Alternative embodiments are not exhaustive and do not limit the invention to the precise embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and its practical applications, to thereby enable others skilled in the art to best understand and utilize the application. The application is limited only by the claims and their full scope and equivalents.

Claims (10)

1. A real-time shadow rendering method for a mobile terminal, comprising:
determining a target object to be rendered;
traversing the target object to obtain the sub-geometries involved in self-shadowing, and generating a bounding box for each sub-geometry according to its spatial and size information;
merging the bounding boxes of the sub-geometries to form a minimum bounding box;
calculating, according to the minimum bounding box and the light position and direction, a transformation matrix of the shadow depth map required for casting the shadow, and rendering a high-precision depth texture of the shadow;
and rendering a shadow effect according to the transformation matrix of the shadow depth map and the high-precision depth texture.
2. The method of claim 1, wherein the sub-geometries involved in self-shadowing comprise sub-geometries that occlude the light and sub-geometries onto which the shadows caused by the occlusion are cast.
3. The method of claim 1, wherein the minimum bounding box includes, but is not limited to, a minimum bounding volume formed using one of the following algorithms: axis-aligned bounding box (AABB), bounding sphere, bounding capsule, oriented bounding box (OBB), fixed-direction hull (FDH), or another polyhedron used to cull unwanted geometry.
4. The method of claim 1, wherein the calculating, according to the minimum bounding box and the light position and direction, of the transformation matrix of the shadow depth map required for casting the shadow comprises:
setting a depth map spatial transformation matrix;
setting a light source space projection matrix;
setting a world space-to-light source space matrix;
and multiplying the depth map space transformation matrix, the light source space projection matrix and the world space-to-light source space matrix to obtain the transformation matrix of the shadow depth map.
5. The method of claim 1, wherein the rendering of the high-precision depth texture of the shadow comprises:
setting the parameters of the light-source camera according to the corner coordinates of the minimum bounding box, and rendering the high-precision depth texture of the light-source space.
6. The method of claim 1, wherein, when there are multiple target objects, the multiple target objects are merged using their minimum bounding boxes.
7. The method of claim 1, wherein the rendering of the shadow effect comprises:
rendering both soft-shadow and hard-shadow effects, with a shadow filtering algorithm used for the soft shadows.
8. A real-time shadow rendering apparatus for a mobile terminal, comprising:
a determination module for determining a target object to be rendered;
a generation module for traversing the target object, obtaining the sub-geometries involved in self-shadowing, and generating a bounding box for each sub-geometry according to its spatial and size information;
a merging module for merging the bounding boxes of the sub-geometries to form a minimum bounding box;
a computing module for calculating, according to the minimum bounding box and the light position and direction, a transformation matrix of the shadow depth map required for casting the shadow, and rendering a high-precision depth texture of the shadow;
and a rendering module for rendering the shadow effect according to the transformation matrix of the shadow depth map and the high-precision depth texture.
9. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1-7 when executing the instructions.
10. A computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 7.
CN202111513153.2A (filed 2021-12-13) Real-time shadow rendering method and device for mobile terminal, Pending

Priority Application: CN202111513153.2A, priority and filing date 2021-12-13
Publication: CN114219885A, published 2022-03-22
Family ID: 80701121
Country status: CN (CN114219885A)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination