CN108038897B - Shadow map generation method and device - Google Patents

Shadow map generation method and device

Publication number: CN108038897B (application publication: CN108038897A)
Application number: CN201711277706.2A
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 吕天胜
Assignee (original and current): Beijing Pixel Software Technology Co Ltd
Priority/filing date: 2017-12-06
Grant publication date: 2021-06-04
Legal status: Active (granted)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/10 - Geometric effects
    • G06T 15/20 - Perspective computation
    • G06T 15/205 - Image-based rendering
    • G06T 15/50 - Lighting effects
    • G06T 15/60 - Shadow generation


Abstract

The invention provides a shadow map generation method and device. The method comprises the following steps: creating a texture map according to a preset number of shadow images and a preset image size, and dividing the texture map into a plurality of texture regions of the same size; dividing the camera space under the current view angle into parallel slices according to the preset number of shadow images, and generating, based on the light direction in the current game scene, a view projection matrix corresponding to each view space under that light direction; rendering in the corresponding texture region, in turn according to each view space's view projection matrix, a shadow depth image of the objects projected into that view space under the current light direction; and performing a matrix transformation on the view projection matrix of each view space to obtain a conversion matrix and a texture coordinate range corresponding to each texture region. Shadow images of different levels in the shadow map generated by the method can be sampled simultaneously, which improves rendering efficiency and the transition between levels.

Description

Shadow map generation method and device
Technical Field
The invention relates to the technical field of game scene processing, in particular to a shadow map generating method and device.
Background
To simulate environmental changes as realistically as possible, three-dimensional games often require real-time shadow rendering to improve image quality.
In the prior art, a commonly used real-time shadow rendering scheme generates several different shadow maps for the objects in a game scene and, during shadow rendering, selects among them for sampling according to the depth values of the objects. However, with the shadow maps generated by this scheme, shadow maps of different levels cannot be sampled simultaneously, which lowers shadow rendering efficiency, and the level transition is poor for pixels that lie between shadow maps of adjacent levels.
Disclosure of Invention
In order to overcome the above disadvantages in the prior art, an object of the present invention is to provide a method and an apparatus for generating a shadow map, where shadow images at different levels of the shadow map generated by the method can be sampled simultaneously, so as to improve the rendering efficiency and level transition effect of real-time shadow rendering.
In terms of a method, a preferred embodiment of the present invention provides a shadow map generation method, the method comprising:
creating a texture map according to a preset number of shadow images and a preset image size, and dividing the texture map into a plurality of texture regions of the same size, wherein the total number of texture regions is equal to the preset number of shadow images;
dividing, in parallel, the camera space under the current view angle according to the preset number of shadow images, and generating, based on the light direction in the current game scene, a view projection matrix corresponding to each view space under the current light direction;
rendering and generating, in the corresponding texture region and in turn according to the view projection matrix of each view space, a shadow depth image of the objects projected into the corresponding view space under the current light direction;
and performing a matrix transformation on the view projection matrix of each view space to obtain a conversion matrix corresponding to each texture region and a texture coordinate range of each shadow depth image in the texture map, so as to generate a corresponding shadow texture map.
In terms of an apparatus, a preferred embodiment of the present invention provides a shadow map generating apparatus, the apparatus including:
the texture creating module is used for creating a texture map according to a preset number of shadow images and a preset image size and dividing the texture map into a plurality of texture regions of the same size, wherein the total number of texture regions is equal to the preset number of shadow images;
the matrix generation module is used for dividing, in parallel, the camera space under the current view angle according to the preset number of shadow images and generating, based on the light direction in the current game scene, a view projection matrix corresponding to each view space under the current light direction;
the shadow rendering module is used for rendering and generating a shadow depth image of an object projected to the corresponding view space in the current light direction in the corresponding texture area according to the view projection matrix of each view space in sequence;
and the matrix transformation module is used for carrying out matrix transformation on the view projection matrix of each view space to obtain a transformation matrix corresponding to each texture area and a texture coordinate range of each shadow depth image in the texture map so as to generate a corresponding shadow texture map.
Compared with the prior art, the shadow map generation method and device provided by the preferred embodiments of the invention have the following beneficial effects: shadow images of different levels in the shadow map generated by the method can be sampled simultaneously, which improves the rendering efficiency and level transition effect of real-time shadow rendering. First, the method creates a texture map according to a preset number of shadow images and a preset image size, and divides the texture map into a plurality of texture regions of the same size, the total number of texture regions being equal to the preset number of shadow images. The method then divides the camera space under the current view angle in parallel according to the preset number of shadow images, and generates, based on the light direction in the current game scene, a view projection matrix corresponding to each view space obtained by the division under the current light direction. Next, it renders in the corresponding texture region, in turn according to the view projection matrix of each view space, a shadow depth image of the objects projected into that view space under the current light direction. Finally, it performs a matrix transformation on the view projection matrix of each view space to obtain a conversion matrix corresponding to each texture region and the texture coordinate range of each shadow depth image in the texture map, so as to generate a corresponding shadow texture map. One texture map therefore contains shadow images of different levels, ensuring that shadow images of different levels can be sampled simultaneously and improving the rendering efficiency and level transition effect of real-time shadow rendering.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments are briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope of the claims of the present invention, and it is obvious for those skilled in the art that other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a block diagram of a computing device according to a preferred embodiment of the present invention.
FIG. 2 is a flowchart illustrating a shadow map generation method according to a preferred embodiment of the present invention.
FIG. 3 is a block diagram of the shadow map generating apparatus shown in FIG. 1 according to a preferred embodiment of the present invention.
FIG. 4 is a block diagram of the shadow rendering module shown in FIG. 3.
Reference numerals: 10 - computing device; 11 - memory; 12 - processor; 13 - communication unit; 14 - graphics card unit; 100 - shadow map generating apparatus; 110 - texture creation module; 120 - matrix generation module; 130 - shadow rendering module; 140 - matrix transformation module; 131 - classification submodule; 132 - rendering submodule.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Fig. 1 is a block diagram of a computing device 10 according to a preferred embodiment of the invention. In the embodiment of the present invention, the computing device 10 can generate the corresponding shadow maps including the shadow images at different levels for the object that needs to be subjected to the real-time shadow rendering in the current game scene, so as to ensure that the shadow images at different levels in the shadow maps can be simultaneously sampled when the object is subjected to the real-time shadow rendering, thereby improving the rendering efficiency and the level transition effect of the real-time shadow rendering. In the present embodiment, the computing device 10 may be, but is not limited to, a Personal Computer (PC), a tablet PC, a Personal Digital Assistant (PDA), a server with an image processing function, or the like.
In this embodiment, the computing device 10 may include a shadow map generating apparatus 100, a memory 11, a processor 12, a communication unit 13, and a graphics card unit 14. The memory 11, the processor 12, the communication unit 13 and the graphics card unit 14 are electrically connected to each other, directly or indirectly, to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The shadow map generating apparatus 100 includes at least one software function module which can be stored in the memory 11 in the form of software or firmware, and the processor 12 performs the various function applications and data processing by running the software programs and modules stored in the memory 11.
In this embodiment, the memory 11 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), or the like. The memory 11 is used for storing a program, and the processor 12 executes the program after receiving an execution instruction. Further, the software programs and modules in the memory 11 may also include an operating system, which may include various software components and/or drivers for managing system tasks (e.g., memory management, storage device control, power management, etc.), and may communicate with various hardware or software components to provide an operating environment for other software components.
In this embodiment, the processor 12 may be an integrated circuit chip having signal processing capabilities. The Processor 12 may be a general-purpose Processor including a Central Processing Unit (CPU), a Network Processor (NP), and the like. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
In this embodiment, the communication unit 13 is configured to establish a communication connection between the computing device 10 and another external device through a network, and perform data transmission through the network.
In this embodiment, the graphics card unit 14 is used for performing arithmetic processing on graphics data so as to relieve the computational load on the processor 12. The core component of the graphics card unit 14 is a GPU (Graphics Processing Unit), which is configured to convert and drive the graphics data required by the computing device 10 and to control a display to present it.
In this embodiment, the computing device 10 generates shadow maps including shadow images of different levels for objects in the game scene by the shadow map generating apparatus 100 stored in the memory 11, so as to perform efficient shadow rendering and good level transition effect on the objects in the game scene that need real-time shadow rendering by the shadow maps.
It will be appreciated that the configuration shown in FIG. 1 is merely a structural schematic of computing device 10, and that computing device 10 may include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
Fig. 2 is a flowchart illustrating a shadow map generating method according to a preferred embodiment of the invention. In the embodiment of the present invention, the shadow map generation method is applied to the computing device 10, and the specific flow and steps of the shadow map generation method shown in fig. 2 are described in detail below.
In the embodiment of the present invention, the shadow map generating method includes the following steps:
step S210, creating a texture map according to the number of the preset shadow images and the preset image size, and dividing the texture map into a plurality of texture regions with the same size.
In this embodiment, the preset number of shadow images may represent how many shadow images of different levels need to be generated by the computing device 10, and the preset size of the image may represent a length and a width corresponding to each shadow image. Wherein, the computing device 10 may obtain the preset number of shadow images and the preset size of images from other external devices communicating with the computing device 10 through a network; the computing device 10 may also receive the number of shadow images and the size of the shadow images from the game designer by providing an external input device. In this embodiment, the preset number of shadow images and the preset image size may be stored in the memory 11.
In this embodiment, the total number of texture regions into which the texture map is divided equals the preset number of shadow images, and the preset image size includes an image width and an image length. The step of creating a texture map according to the preset number of shadow images and the preset image size and dividing the texture map into a plurality of texture regions of the same size comprises the following steps:
calculating the length of the texture map to be created according to the preset number of shadow images and the image length in the preset image size;
and creating a texture map of matching size according to the calculated length and the image width in the preset image size, and dividing the texture map into a plurality of texture regions along its length direction at intervals of the image length.
Each texture region is used to draw the corresponding shadow image, so that shadow images of different levels can be included in a single texture map.
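As a minimal sketch of this layout step (the struct and function names below, such as TextureRegion and CreateShadowAtlasLayout, are illustrative and not taken from the patent), the atlas length and the equally sized regions could be computed as follows:

#include <vector>

// One texture region per preset shadow image, laid out side by side along the
// length (horizontal) direction of the texture map.
struct TextureRegion {
    int x, y;          // top-left corner of the region in pixels
    int width, height; // equals the preset shadow image size
};

struct ShadowAtlasLayout {
    int atlasWidth;
    int atlasHeight;
    std::vector<TextureRegion> regions;
};

ShadowAtlasLayout CreateShadowAtlasLayout(int shadowImageCount,
                                          int imageWidth, int imageHeight) {
    ShadowAtlasLayout layout;
    // Length of the texture map to be created = preset count * preset image length.
    layout.atlasWidth  = shadowImageCount * imageWidth;
    layout.atlasHeight = imageHeight;
    // Split the map along its length into equally sized texture regions.
    for (int i = 0; i < shadowImageCount; ++i) {
        layout.regions.push_back({i * imageWidth, 0, imageWidth, imageHeight});
    }
    return layout;
}

For example, three 1024 x 1024 shadow images would yield a 3072 x 1024 texture map with regions starting at x = 0, 1024 and 2048.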
Step S220, dividing the camera space under the current view angle in parallel according to the preset number of shadow images, and generating, based on the light direction in the current game scene, a view projection matrix corresponding to each view space obtained by the division under the current light direction.
In this embodiment, the camera space is the shooting space that the virtual camera can observe under the current view angle, the light direction is the illumination direction of the light source in the game scene, and the view spaces are the different range intervals obtained by dividing the shooting range of the virtual camera by the computing device 10; the shadow depth image drawn for the objects in each view space is at a different level from those of the other view spaces. The view projection matrix is the matrix used to convert the world coordinates of an object into the corresponding view coordinates for its view space, and different view spaces correspond to different view projection matrices.
In this embodiment, the step of performing parallel segmentation on the camera space under the current view angle according to the number of the preset shadow images includes:
and according to the distance information between each view point corresponding to the camera space and each position in the camera space, performing parallel segmentation on the camera space from near to far according to the preset shadow image number, so that each view space obtained by segmentation is sequentially increased from near to far.
The viewpoint corresponding to the camera space is the virtual camera, the number of the divided view spaces is the same as the number of the preset shadow images, the view space closest to the virtual camera is the first-level view space, the view space adjacent to the first-level view space is the second-level view space, and the like is performed until the last divided view space. Where each view space corresponds to a level of shadow depth image. For example, the first level view space corresponds to a first level shadow image and the second level view space corresponds to a second level shadow image.
In this embodiment, the view projection matrices of the view spaces are all obtained for the same light direction, while adjacent view spaces correspond to different view projection matrices because their division positions differ. The view projection matrix of a given view space changes as the light direction changes.
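The following sketch shows one way such per-space view projection matrices could be built. It assumes a directional light, that the eight world-space corners of each view space are available from the near-to-far split above, and an up vector of (0, 1, 0) (i.e. the light is not vertical); the orthographic fit and the GLM-based helpers are assumptions, since the patent does not prescribe a particular construction:

#include <array>
#include <cfloat>
#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build one light-space view projection matrix per view space (slice).
std::vector<glm::mat4> BuildLightViewProjections(
        const glm::vec3& lightDir,                                  // normalized light direction
        const std::vector<std::array<glm::vec3, 8>>& sliceCorners)  // world-space corners per view space
{
    std::vector<glm::mat4> viewProj;
    viewProj.reserve(sliceCorners.size());
    for (const auto& corners : sliceCorners) {
        // Look at the slice center from "behind" along the light direction.
        glm::vec3 center(0.0f);
        for (const glm::vec3& c : corners) center += c;
        center /= 8.0f;
        const glm::mat4 lightView =
            glm::lookAt(center - lightDir, center, glm::vec3(0.0f, 1.0f, 0.0f));

        // Bound the slice in light space to get the orthographic extents.
        glm::vec3 mn(FLT_MAX), mx(-FLT_MAX);
        for (const glm::vec3& c : corners) {
            const glm::vec3 p = glm::vec3(lightView * glm::vec4(c, 1.0f));
            mn = glm::min(mn, p);
            mx = glm::max(mx, p);
        }
        // Light space looks down -Z, so near/far come from the negated z extents.
        const glm::mat4 lightProj = glm::ortho(mn.x, mx.x, mn.y, mx.y, -mx.z, -mn.z);
        viewProj.push_back(lightProj * lightView);
    }
    return viewProj;
}

Because each matrix depends on lightDir, these matrices would be rebuilt whenever the light direction changes, matching the note above.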
Step S230, rendering and generating, in the corresponding texture region and in turn according to the view projection matrices of the view spaces, a shadow depth image of the objects projected into the corresponding view space under the current light direction.
In this embodiment, after rendering the shadow depth image corresponding to one view space, the computing device 10 renders the shadow depth image corresponding to the next view space. Wherein, the step of rendering and generating a shadow depth image of an object projected to a corresponding view space in a current light direction in a corresponding texture area in sequence according to the view projection matrix of each view space comprises:
classifying objects projected into each view space in the game scene according to the sequence from near to far;
and, according to the correspondence between the texture regions and the view spaces, drawing in each texture region in turn, based on the corresponding view projection matrix, the shadow image of the objects classified into the matching view space, so as to obtain the corresponding shadow depth image.
The computing device 10 classifies objects in the game scene by identifying, from near to far with the virtual camera as the starting point, the objects projected into each view space. The texture regions of the texture map correspond one-to-one, from left to right, with the view spaces, and each texture region draws, according to the corresponding view projection matrix, the shadow depth image of the objects projected into its matching view space, so that shadow depth images of different levels are drawn into the same texture map.
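A minimal sketch of this render loop is given below. It assumes the shadow atlas is the depth attachment of an OpenGL framebuffer object, and Object and DrawDepthOnly are placeholders for the engine's own renderable type and depth-only draw call; the patent does not name a graphics API, so all of this is illustrative:

#include <vector>
#include <glad/glad.h>   // or whatever GL loader the engine already uses
#include <glm/glm.hpp>

struct Object;  // placeholder for the engine's renderable type
// Placeholder for the engine's own depth-only draw call (not defined by the patent).
void DrawDepthOnly(const Object& obj, const glm::mat4& lightViewProj);

// Draw each view space's objects into the texture region that matches it.
void RenderShadowCascades(GLuint shadowAtlasFbo, int regionWidth, int regionHeight,
                          const std::vector<glm::mat4>& lightViewProj,
                          const std::vector<std::vector<const Object*>>& objectsPerSpace) {
    glBindFramebuffer(GL_FRAMEBUFFER, shadowAtlasFbo);
    glClear(GL_DEPTH_BUFFER_BIT);  // glClear ignores the viewport, so this clears the whole atlas once
    for (size_t i = 0; i < lightViewProj.size(); ++i) {
        // Texture region i occupies [i * regionWidth, (i + 1) * regionWidth) horizontally.
        glViewport(static_cast<GLint>(i) * regionWidth, 0, regionWidth, regionHeight);
        for (const Object* obj : objectsPerSpace[i]) {
            DrawDepthOnly(*obj, lightViewProj[i]);  // render depth with this view space's matrix
        }
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

Restricting the viewport to region i is what keeps each level's depth image inside its own texture region.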
Step S240, performing matrix transformation on the view projection matrix of each view space to obtain a transformation matrix corresponding to each texture region and a texture coordinate range of each shadow depth image in the texture map, so as to generate a corresponding shadow texture map.
In this embodiment, the shadow texture map is the texture map on which all levels of shadow depth images have been drawn, together with the conversion matrix corresponding to each texture region and the texture coordinate range of each shadow depth image in the texture map. After the computing device 10 has drawn the shadow depth images of every level on the texture map, it performs a matrix transformation on the view projection matrix of each view space so that each resulting conversion matrix matches a texture region of the texture map; a pixel of an object in the game scene can then be transformed by each conversion matrix in turn to judge whether its texture coordinate on the texture map falls within the corresponding texture region. For example, the world coordinate of a pixel is transformed by the conversion matrix of the texture region corresponding to the first-level view space to obtain a texture coordinate; if that texture coordinate lies within the texture coordinate range of the first-level texture region, the computing device 10 may perform shadow rendering using the shadow depth image in that region. If it lies outside that range, the computing device 10 transforms the world coordinate of the pixel with the conversion matrix of the texture region corresponding to the next-level view space, and again judges whether the transformed texture coordinate falls within the texture coordinate range of the texture region corresponding to the conversion matrix used.
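Mirrored on the CPU for illustration (the actual test would normally run in the pixel shader), the per-pixel level selection just described could look like the sketch below; conversion and uvRange stand for the per-level conversion matrices and texture coordinate ranges produced by the matrix transformation described in the following paragraphs (see the sketch after the texture coordinate example), and all names are illustrative rather than taken from the patent:

#include <vector>
#include <glm/glm.hpp>

// Walk the levels from first to last and return the first one whose texture
// coordinate range contains the transformed pixel, or -1 if none does.
int SelectShadowLevel(const glm::vec3& worldPos,
                      const std::vector<glm::mat4>& conversion,
                      const std::vector<glm::vec4>& uvRange,  // (uMin, vMin, uMax, vMax) per region
                      glm::vec2& outUV) {
    for (size_t i = 0; i < conversion.size(); ++i) {
        const glm::vec4 t = conversion[i] * glm::vec4(worldPos, 1.0f);
        const glm::vec2 uv = glm::vec2(t) / t.w;  // w is 1 for an orthographic (directional) light
        if (uv.x >= uvRange[i].x && uv.x <= uvRange[i].z &&
            uv.y >= uvRange[i].y && uv.y <= uvRange[i].w) {
            outUV = uv;                    // sample the shadow depth image of level i here
            return static_cast<int>(i);
        }
    }
    return -1;  // falls outside every level's texture region
}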
In this embodiment, the step of performing matrix transformation on the view projection matrix of each view space to obtain a transformation matrix corresponding to each texture region, and a texture coordinate range of each shadow depth image in the texture map includes:
performing scaling and displacement processing on the view projection matrix of each view space according to the preset number of shadow images and a coordinate adjustment strategy, so as to obtain the conversion matrix corresponding to each texture region;
and obtaining the texture coordinate range of the shadow depth image corresponding to each texture region according to the conversion matrix corresponding to each texture region and the distribution of the texture regions in the texture map.
In this embodiment, the coordinate adjustment strategy is used to adjust the coordinate range corresponding to each view projection matrix. Through this strategy, the computing device 10 may map the coordinate range of each view projection matrix to the coordinate range of the corresponding texture region on the texture map, thereby obtaining the texture coordinate range of the shadow depth image in each texture region. For example, if a texture map has three texture regions, the texture coordinate range of the first-level shadow depth image is the rectangle with corners (0,0), (0.3333,0), (0.3333,1) and (0,1); that of the second-level shadow depth image is the rectangle with corners (0.3333,0), (0.6666,0), (0.6666,1) and (0.3333,1); and that of the third-level shadow depth image is the rectangle with corners (0.6666,0), (1,0), (1,1) and (0.6666,1).
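Under the assumption of OpenGL-style clip coordinates (x and y in [-1, 1]), one plausible realization of this scaling-plus-displacement and of the resulting texture coordinate ranges is sketched below; the bias values and all names are assumptions, since the patent leaves the concrete coordinate adjustment strategy open:

#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

struct LevelMapping {
    glm::mat4 conversion;  // world space -> texture coordinates in the atlas
    glm::vec4 uvRange;     // (uMin, vMin, uMax, vMax) of the texture region
};

std::vector<LevelMapping> BuildConversionMatrices(
        const std::vector<glm::mat4>& lightViewProj) {
    const int n = static_cast<int>(lightViewProj.size());
    std::vector<LevelMapping> levels(n);
    // Map NDC x, y from [-1, 1] to [0, 1] (and z likewise for the depth comparison).
    const glm::mat4 bias =
        glm::translate(glm::mat4(1.0f), glm::vec3(0.5f)) *
        glm::scale(glm::mat4(1.0f), glm::vec3(0.5f));
    for (int i = 0; i < n; ++i) {
        // Squeeze u into this level's horizontal slot [i/n, (i+1)/n] of the atlas.
        const glm::mat4 pack =
            glm::translate(glm::mat4(1.0f), glm::vec3(float(i) / n, 0.0f, 0.0f)) *
            glm::scale(glm::mat4(1.0f), glm::vec3(1.0f / n, 1.0f, 1.0f));
        levels[i].conversion = pack * bias * lightViewProj[i];
        levels[i].uvRange = glm::vec4(float(i) / n, 0.0f, float(i + 1) / n, 1.0f);
    }
    return levels;
}

With three texture regions this reproduces the ranges listed above: u in [0, 0.3333], [0.3333, 0.6666] and [0.6666, 1], with v in [0, 1] for every level.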
Fig. 3 is a block diagram of the shadow map generating apparatus 100 shown in fig. 1 according to a preferred embodiment of the present invention. In the embodiment of the present invention, the shadow map generating apparatus 100 includes a texture creating module 110, a matrix generating module 120, a shadow rendering module 130, and a matrix transforming module 140.
The texture creating module 110 is configured to create a texture map according to the number of the preset shadow images and the preset image size, and divide the texture map into a plurality of texture regions with the same size.
In this embodiment, the total number of the texture regions is equal to the number of the preset shadow images, the preset image size includes an image width and an image length, and the step of creating a texture map by the texture creating module 110 according to the number of the preset shadow images and the preset image size and dividing the texture map into a plurality of texture regions with the same size includes:
calculating the length of the texture map to be created according to the preset number of shadow images and the image length in the preset image size;
and creating a texture map of matching size according to the calculated length and the image width in the preset image size, and dividing the texture map into a plurality of texture regions along its length direction at intervals of the image length.
The texture creating module 110 may execute step S210 shown in fig. 2, and the detailed description may refer to the above detailed description of step S210.
The matrix generating module 120 is configured to perform parallel segmentation on the camera space at the current view angle according to the number of preset shadow images, and generate a view projection matrix corresponding to each view space obtained through segmentation in the current light direction based on the light direction in the current game scene.
In this embodiment, the way for the matrix generation module 120 to perform parallel segmentation on the camera space under the current view angle according to the number of the preset shadow images includes:
and according to the distance information between each view point corresponding to the camera space and each position in the camera space, performing parallel segmentation on the camera space from near to far according to the preset shadow image number, so that each view space obtained by segmentation is sequentially increased from near to far.
The matrix generation module 120 may execute step S220 shown in fig. 2, and the detailed description may refer to the above detailed description of step S220.
The shadow rendering module 130 is configured to render and generate a shadow depth image of an object projected to a corresponding view space in the current light direction in a corresponding texture region according to the view projection matrices of the view spaces in sequence.
Optionally, please refer to fig. 4, which is a block diagram illustrating the shadow rendering module 130 shown in fig. 3. In this embodiment, the shadow rendering module 130 may include a classification sub-module 131 and a rendering sub-module 132.
The classification submodule 131 is configured to classify the objects projected into each view space in the game scene in order from near to far.
The rendering submodule 132 is configured to, according to a correspondence between each texture region and each view space, sequentially perform shadow image rendering processing on the objects classified into the corresponding view space in each texture region based on the corresponding view projection matrix, so as to obtain a corresponding shadow depth image.
Referring to fig. 3 again, the matrix transformation module 140 is configured to perform matrix transformation on the view projection matrix of each view space to obtain a conversion matrix corresponding to each texture region and the texture coordinate range of each shadow depth image in the texture map, so as to generate a corresponding shadow texture map.
In this embodiment, the manner in which the matrix transformation module 140 performs matrix transformation on the view projection matrix of each view space to obtain the conversion matrix corresponding to each texture region and the texture coordinate range of each shadow depth image in the texture map includes:
performing scaling and displacement processing on the view projection matrix of each view space according to the preset number of shadow images and a coordinate adjustment strategy, so as to obtain the conversion matrix corresponding to each texture region;
and obtaining the texture coordinate range of the shadow depth image corresponding to each texture region according to the conversion matrix corresponding to each texture region and the distribution of the texture regions in the texture map.
The matrix transformation module 140 may perform step S240 in fig. 2, and the detailed description may refer to the above detailed description of step S240.
In summary, with the shadow map generation method and apparatus provided by the preferred embodiments of the present invention, shadow images of different levels in the generated shadow map can be sampled simultaneously, improving the rendering efficiency and level transition effect of real-time shadow rendering. First, the method creates a texture map according to a preset number of shadow images and a preset image size, and divides the texture map into a plurality of texture regions of the same size, the total number of texture regions being equal to the preset number of shadow images. The method then divides the camera space under the current view angle in parallel according to the preset number of shadow images, and generates, based on the light direction in the current game scene, a view projection matrix corresponding to each view space obtained by the division under the current light direction. Next, it renders in the corresponding texture region, in turn according to the view projection matrix of each view space, a shadow depth image of the objects projected into that view space under the current light direction. Finally, it performs a matrix transformation on the view projection matrix of each view space to obtain a conversion matrix corresponding to each texture region and the texture coordinate range of each shadow depth image in the texture map, so as to generate a corresponding shadow texture map. One texture map therefore contains shadow images of different levels, ensuring that they can be sampled simultaneously and improving the rendering efficiency and level transition effect of real-time shadow rendering.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method of shadow map generation, the method comprising:
creating a texture map according to a preset number of shadow images and a preset image size, and dividing the texture map into a plurality of texture regions of the same size, wherein the total number of the texture regions is equal to the preset number of shadow images;
dividing, in parallel, a camera space under a current view angle according to the preset number of shadow images, and generating, based on a light direction in a current game scene, a view projection matrix corresponding to each view space under the current light direction;
rendering and generating a shadow depth image of an object projected to a corresponding view space in the current light direction in a corresponding texture area according to the view projection matrix of each view space in sequence;
and performing matrix transformation on the view projection matrix of each view space to obtain a conversion matrix corresponding to each texture region and a texture coordinate range of each shadow depth image in the texture map, so as to generate a corresponding shadow texture map, wherein the shadow texture map is the texture map on which all the shadow depth images have been drawn and which comprises the conversion matrix corresponding to each texture region and the texture coordinate range of each shadow depth image, and the conversion matrix is used for converting world coordinates of object pixel points in a game scene into texture coordinates in the corresponding texture region of the texture map.
2. The method according to claim 1, wherein the preset image size comprises an image width and an image length, and the step of creating a texture map according to the preset number of shadow images and the preset image size and dividing the texture map into a plurality of texture regions of the same size comprises:
calculating the length of the texture map to be created according to the preset number of shadow images and the image length in the preset image size;
and creating a texture map of matching size according to the calculated length and the image width in the preset image size, and dividing the texture map into a plurality of texture regions along its length direction at intervals of the image length.
3. The method according to claim 1, wherein the step of performing parallel segmentation on the camera space at the current view angle according to the preset number of shadow images comprises:
and according to the distance information between each view point corresponding to the camera space and each position in the camera space, performing parallel segmentation on the camera space from near to far according to the preset shadow image number, so that each view space obtained by segmentation is sequentially increased from near to far.
4. The method of claim 3, wherein the step of rendering and generating, in the corresponding texture region and in turn according to the view projection matrix of each view space, a shadow depth image of the object projected into the corresponding view space under the current light direction comprises:
classifying objects projected into each view space in the game scene according to the sequence from near to far;
and, according to the correspondence between the texture regions and the view spaces, drawing in each texture region in turn, based on the corresponding view projection matrix, the shadow image of the objects classified into the matching view space, so as to obtain the corresponding shadow depth image.
5. The method according to claim 1, wherein the step of performing matrix transformation on the view projection matrix of each view space to obtain a transformation matrix corresponding to each texture region, and the texture coordinate range of each shadow depth image in the texture map comprises:
performing scaling and displacement processing on the view projection matrix of each view space according to the preset number of shadow images and a coordinate adjustment strategy, so as to obtain the conversion matrix corresponding to each texture region;
and obtaining the texture coordinate range of the shadow depth image corresponding to each texture region according to the conversion matrix corresponding to each texture region and the distribution of the texture regions in the texture map.
6. A shadow map generating apparatus, characterized in that the apparatus comprises:
the texture creating module is used for creating a texture map according to a preset number of shadow images and a preset image size and dividing the texture map into a plurality of texture regions of the same size, wherein the total number of the texture regions is equal to the preset number of shadow images;
the matrix generation module is used for dividing, in parallel, the camera space under the current view angle according to the preset number of shadow images and generating, based on the light direction in the current game scene, a view projection matrix corresponding to each view space under the current light direction;
the shadow rendering module is used for rendering and generating a shadow depth image of an object projected to the corresponding view space in the current light direction in the corresponding texture area according to the view projection matrix of each view space in sequence;
the matrix transformation module is used for performing matrix transformation on the view projection matrix of each view space to obtain a transformation matrix corresponding to each texture region and a texture coordinate range of each shadow depth image in the texture map, so as to generate a corresponding shadow texture map, wherein the shadow texture map is the texture map on which all the shadow depth images have been drawn and which comprises the transformation matrix corresponding to each texture region and the texture coordinate range of each shadow depth image, and the transformation matrix is used for transforming world coordinates of object pixel points in a game scene into texture coordinates in the corresponding texture region of the texture map.
7. The apparatus of claim 6, wherein the predetermined image size comprises an image width and an image length, and the texture creating module creates a texture map according to a predetermined number of shadow images and a predetermined image size, and divides the texture map into a plurality of texture regions of the same size by:
calculating the length of the texture map to be created according to the preset number of shadow images and the image length in the preset image size;
and creating a texture map of matching size according to the calculated length and the image width in the preset image size, and dividing the texture map into a plurality of texture regions along its length direction at intervals of the image length.
8. The apparatus of claim 6, wherein the manner in which the matrix generation module performs parallel segmentation on the camera space under the current view angle according to the preset number of shadow images comprises:
dividing the camera space in parallel, from near to far, into the preset number of shadow images according to distance information between the viewpoint corresponding to the camera space and each position in the camera space, so that the view spaces obtained by the division increase in extent from near to far.
9. The apparatus of claim 8, wherein the shadow rendering module comprises:
the classification submodule is used for classifying objects projected into each view space in the game scene according to the sequence from near to far;
and the rendering submodule is used for sequentially performing shadow image rendering processing on the objects classified into the corresponding view spaces in each texture region based on the corresponding view projection matrix according to the corresponding relation between each texture region and each view space to obtain corresponding shadow depth images.
10. The apparatus of claim 6, wherein the manner in which the matrix transformation module performs matrix transformation on the view projection matrix of each view space to obtain the transformation matrix corresponding to each texture region and the texture coordinate range of each shadow depth image in the texture map comprises:
performing scaling and displacement processing on the view projection matrix of each view space according to the preset number of shadow images and a coordinate adjustment strategy, so as to obtain the transformation matrix corresponding to each texture region;
and obtaining the texture coordinate range of the shadow depth image corresponding to each texture region according to the transformation matrix corresponding to each texture region and the distribution of the texture regions in the texture map.
CN201711277706.2A 2017-12-06 Shadow map generation method and device - Active - CN108038897B

Priority Application

Application Number: CN201711277706.2A
Priority Date / Filing Date: 2017-12-06
Title: Shadow map generation method and device

Publications

Publication Number | Publication Date
CN108038897A | 2018-05-15
CN108038897B | 2021-06-04

Family

ID=62095780

Country Status

CN: CN108038897B (granted)




Legal Events

PB01 - Publication
SE01 - Entry into force of request for substantive examination
GR01 - Patent grant