WO2023185317A1 - Lighting rendering method, apparatus, medium, device and program product for virtual terrain - Google Patents

Lighting rendering method, apparatus, medium, device and program product for virtual terrain

Info

Publication number
WO2023185317A1
WO2023185317A1 (PCT/CN2023/077124)
Authority
WO
WIPO (PCT)
Prior art keywords
probe
probe point
points
target
current
Prior art date
Application number
PCT/CN2023/077124
Other languages
English (en)
French (fr)
Inventor
廖诚
文聪
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2023185317A1 publication Critical patent/WO2023185317A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects
    • G06T 15/506 Illumination models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tesselation

Definitions

  • the present application relates to the field of computers, and in particular to lighting rendering of virtual terrain.
  • the light map method is usually used to perform pixel-by-pixel lighting rendering of the virtual terrain.
  • However, this method usually occupies a large amount of memory and storage space and also requires a large amount of computation, which leads to low lighting rendering efficiency for virtual terrain.
  • embodiments of the present application provide a virtual terrain lighting rendering method, device, medium, equipment and program product to at least solve the technical problem of low lighting rendering efficiency of virtual terrain.
  • a method for lighting rendering of virtual terrain, including: obtaining a candidate probe point set of a target virtual terrain sub-block, wherein the probe points in the candidate probe point set are used to perform lighting rendering on the target virtual terrain sub-block; determining, in the candidate probe point set, the probe points corresponding to each vertex in the target virtual terrain sub-block to obtain a first index relationship set, wherein each index relationship in the first index relationship set represents a vertex and a probe point that have a corresponding relationship; obtaining the spherical harmonic basis coefficient of each probe point in the candidate probe point set, and determining, based on the spherical harmonic basis coefficients of the probe points, the degree of difference of every two probe points in the candidate probe point set; merging the probe points in the candidate probe point set according to the degree of difference to obtain a target probe point set, and, according to the first index relationship set, establishing a corresponding relationship between the vertices that had a corresponding relationship with the pre-merge probe points and the merged probe points to obtain a second index relationship set; and performing lighting rendering on the target virtual terrain sub-block according to the spherical harmonic basis coefficients of the probe points in the target probe point set and the second index relationship set.
  • a virtual terrain lighting rendering device, including: a first acquisition unit, configured to acquire a candidate probe point set of the target virtual terrain sub-block, wherein the probe points in the candidate probe point set are used to perform lighting rendering on the target virtual terrain sub-block; a determination unit, configured to determine, in the candidate probe point set, the probe points corresponding to each vertex in the target virtual terrain sub-block to obtain a first index relationship set, wherein each index relationship in the first index relationship set represents a vertex and a probe point that have a corresponding relationship; a second acquisition unit, configured to obtain the spherical harmonic basis coefficient of each probe point in the candidate probe point set and determine, based on the spherical harmonic basis coefficients of the probe points, the degree of difference of every two probe points in the candidate probe point set; and a merging unit, configured to merge the probe points in the candidate probe point set according to the degree of difference to obtain a target probe point set.
  • a computer program product or computer program includes computer instructions stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the above lighting rendering method of virtual terrain.
  • a storage medium is also provided, the storage medium is used to store a computer program, and the computer program is used to execute the above-mentioned lighting rendering method of virtual terrain.
  • an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the above-mentioned lighting rendering method for virtual terrain through the computer program.
  • In the embodiments of the present application, a candidate probe point set of the target virtual terrain sub-block is obtained, wherein the probe points in the candidate probe point set are used for lighting rendering of the target virtual terrain sub-block; the probe points corresponding to each vertex in the target virtual terrain sub-block are determined in the candidate probe point set to obtain a first index relationship set, wherein each index relationship in the first index relationship set represents a vertex and a probe point that have a corresponding relationship; the spherical harmonic basis coefficient of each probe point in the candidate probe point set is obtained, the degree of difference of every two probe points in the candidate probe point set is determined based on the spherical harmonic basis coefficients, the probe points in the candidate probe point set are merged to obtain a target probe point set, and, according to the first index relationship set, a corresponding relationship is established between the vertices that had a corresponding relationship with the pre-merge probe points and the merged probe points to obtain a second index relationship set.
  • In this way, a large number of probe points are merged, which reduces the number of probe points used for lighting rendering and thus the amount of calculation of the spherical harmonic basis coefficients of the probe points, thereby improving the lighting rendering efficiency of virtual terrain and solving the technical problem of low lighting rendering efficiency of virtual terrain.
  • Figure 1 is a schematic diagram of the application environment of an optional virtual terrain lighting rendering method according to an embodiment of the present application;
  • Figure 2 is a schematic flowchart of an optional virtual terrain lighting rendering method according to an embodiment of the present application;
  • Figure 3 is a first schematic diagram of a lighting rendering method for virtual terrain according to an embodiment of the present application;
  • Figure 4 is a second schematic diagram of a lighting rendering method for virtual terrain according to an embodiment of the present application;
  • Figure 5 is a third schematic diagram of a lighting rendering method for virtual terrain according to an embodiment of the present application;
  • Figure 6 is a fourth schematic diagram of a lighting rendering method for virtual terrain according to an embodiment of the present application;
  • Figure 7 is a fifth schematic diagram of a lighting rendering method for virtual terrain according to an embodiment of the present application;
  • Figure 8 is a sixth schematic diagram of a lighting rendering method for virtual terrain according to an embodiment of the present application;
  • Figure 9 is a seventh schematic diagram of a lighting rendering method for virtual terrain according to an embodiment of the present application;
  • Figure 10 is an eighth schematic diagram of a lighting rendering method for virtual terrain according to an embodiment of the present application;
  • Figure 11 is a ninth schematic diagram of a lighting rendering method for virtual terrain according to an embodiment of the present application;
  • Figure 12 is a tenth schematic diagram of a lighting rendering method for virtual terrain according to an embodiment of the present application;
  • Figure 13 is a schematic diagram of an optional virtual terrain lighting rendering device according to an embodiment of the present application;
  • Figure 14 is a schematic structural diagram of an optional electronic device according to an embodiment of the present application.
  • a lighting rendering method for virtual terrain is provided.
  • the above lighting rendering method for virtual terrain can be executed by a computer device and can, for example, be applied to the environment shown in Figure 1.
  • In Figure 1, the user equipment 102 is used as an example of the computer device for description.
  • the user equipment 102 may include, but is not limited to, a display 108, a processor 106, and a memory 104.
  • Step S102 the user device 102 obtains a lighting rendering request triggered for the target virtual terrain sub-block 1024, where the target virtual terrain sub-block 1024 is a sub-block of the target virtual terrain 1022, and the target virtual terrain 1022 may, but is not limited to, include multiple sub-blocks;
  • Step S104 the user equipment 102 responds to the lighting rendering request and obtains the candidate probe point set of the target virtual terrain sub-block through the memory 104;
  • Steps S106-S112 the user equipment 102 uses the processor 106 to determine, in the candidate probe point set, the probe points corresponding to each vertex in the target virtual terrain sub-block to obtain the first index relationship set; obtains the spherical harmonic basis coefficient of each probe point in the candidate probe point set and determines, based on the spherical harmonic basis coefficients, the degree of difference of every two probe points in the candidate probe point set; merges the probe points in the candidate probe point set based on these degrees of difference to obtain the target probe point set; modifies, in the first index relationship set, the correspondences between vertices and pre-merge probe points into correspondences between those vertices and the merged probe points, thereby obtaining the second index relationship set; and performs lighting rendering on the target virtual terrain sub-block according to the spherical harmonic basis coefficients of the probe points in the target probe point set and the second index relationship set.
  • the processor 106 in the user device 102 then displays the picture corresponding to the lighting rendering result on the display 108 and stores the lighting rendering result in the memory 104.
  • the server can be an independent physical server, a server cluster or a distributed system composed of multiple physical servers, or a cloud server that provides cloud computing services.
  • Part or all of the method may also be executed on a server side; that is, the server performs steps such as obtaining the first index relationship set, obtaining the second index relationship set, and obtaining the lighting rendering result, thereby reducing the processing pressure on the user equipment 102.
  • the user equipment 102 includes but is not limited to handheld devices (such as mobile phones), notebook computers, desktop computers, intelligent voice interaction devices, smart home appliances, vehicle-mounted equipment, etc. This application does not limit the specific implementation of the user equipment 102.
  • the virtual terrain lighting rendering method includes:
  • S202 Obtain the candidate probe point set of the target virtual terrain sub-block, where the probe points in the candidate probe point set are used to perform lighting rendering on the target virtual terrain sub-block;
  • S204 Determine the probe points corresponding to each vertex in the target virtual terrain sub-block in the candidate probe point set, and obtain a first index relationship set, where each index relationship in the first index relationship set represents a vertex and a probe point that have a corresponding relationship;
  • S206 Obtain the spherical harmonic basis coefficient of each probe point in the candidate probe point set, and determine the degree of difference of each two probe points in the candidate probe point set based on the spherical harmonic basis coefficient of each probe point;
  • S208 Merge the probe points in the candidate probe point set according to the degree of difference to obtain the target probe point set, and combine the vertices corresponding to the probe points before merging with the merged probe points according to the first index relationship set. Click to establish a corresponding relationship and obtain the second index relationship set;
  • S210 Perform illumination rendering on the target virtual terrain sub-block according to the spherical harmonic basis coefficients of each probe point in the target probe point set and the second index relationship set.
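  • As an illustration of the index relationship sets referred to in S204 and S208, the following minimal sketch assumes vertices and probe points are identified by integer ids and that each index relationship records a (probe point, weight) pair; this layout and the weight handling are illustrative assumptions, not the patent's concrete data format.

```python
def remap_after_merge(first_index, merged_from, merged_into):
    """Build a second index relationship set in which every vertex that referenced the
    pre-merge probe `merged_from` now references the merged probe `merged_into`."""
    second_index = {}
    for vertex, links in first_index.items():
        combined = {}
        for probe, weight in links:
            probe = merged_into if probe == merged_from else probe
            combined[probe] = combined.get(probe, 0.0) + weight   # fold duplicate links together
        second_index[vertex] = sorted(combined.items())
    return second_index

# first index relationship set: vertex id -> [(probe point id, weight), ...]
first_index = {0: [(10, 0.75), (11, 0.25)], 1: [(11, 1.0)], 2: [(10, 1.0)]}
print(remap_after_merge(first_index, merged_from=10, merged_into=11))
# {0: [(11, 1.0)], 1: [(11, 1.0)], 2: [(11, 1.0)]}
```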
  • the above-mentioned virtual terrain lighting rendering method can be, but is not limited to, applied to terrain rendering scenes in three-dimensional (3Dimensions, 3D) games.
  • a preprocessing-based lighting rendering method is proposed for special objects such as terrain: the terrain is very broad and its lighting data is highly repetitive over a large range, so the good properties of spherical harmonic lighting are exploited and, through difference comparison, a smaller number of spherical harmonic basis coefficients is solved, thereby reducing the amount of calculation in the lighting rendering process; objects attached to the terrain surface can reuse this lighting rendering method, which further improves the efficiency of lighting rendering.
  • the target virtual terrain sub-block can be understood as any one of multiple sub-blocks of the target virtual terrain, wherein the target virtual terrain may be, but is not limited to, a single whole in appearance that is divided into several small blocks during logical processing and rendering.
  • these small blocks can be understood as the multiple sub-blocks of the above target virtual terrain, and the multiple sub-blocks can be lighting-rendered in parallel or sequentially through the above virtual terrain lighting rendering method, so as to achieve overall lighting rendering of the target virtual terrain; in addition, the shape of a target virtual terrain sub-block can be a triangle, rectangle, circle, trapezoid or other polygon.
  • a probe point can be understood as a three-dimensional space point in space used to collect lighting information, and the three-dimensional space point is also used to perform lighting rendering of the target virtual terrain sub-block.
  • the target virtual terrain sub-block may be, but is not limited to, divided into multiple triangles for processing, and each triangle may, but is not limited to, correspond to multiple vertices, wherein one vertex may be associated with multiple probe points and one probe point may also be associated with multiple vertices; the process of obtaining the candidate probe point set of the target virtual terrain sub-block can also be, but is not limited to, understood as obtaining the candidate probe points of each triangle into which the target virtual terrain sub-block is divided;
  • triangle 302 is one of the plurality of triangles into which the target virtual terrain sub-block is divided.
  • O is the centroid of triangle 302,
  • and a, b, and c are respectively the midpoints of line segments AO, BO, and CO of triangle 302; further, as shown in (b) in Figure 3, a, b, and c are offset along the normal direction of triangle 302 by a preset unit to obtain three candidate probe points a′, b′ and c′ (see the sketch after the next paragraph); similarly, the candidate probe points of each triangle into which the target virtual terrain sub-block is divided are obtained with reference to the above acquisition method for triangle 302;
  • the areas of the triangles into which the target virtual terrain sub-block is divided may differ, and the method of obtaining a triangle's candidate probe points may also differ with its area; for example, to improve performance, a first number of candidate probe points can be generated for one triangle (such as a candidate probe point obtained by offsetting the centroid of the triangle along the normal direction of the triangle), while a second number of candidate probe points may be generated for another triangle, where the second number is greater than the first number.
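  • Following the Figure 3 description (centroid O, midpoints a, b, c of AO, BO, CO, offset along the triangle normal), a minimal sketch of generating the candidate probe points of a single triangle is shown below; the offset value of 0.5 is an assumed "preset unit".

```python
def sub(u, v): return (u[0] - v[0], u[1] - v[1], u[2] - v[2])
def add(u, v): return (u[0] + v[0], u[1] + v[1], u[2] + v[2])
def scale(u, s): return (u[0] * s, u[1] * s, u[2] * s)
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1], u[2] * v[0] - u[0] * v[2], u[0] * v[1] - u[1] * v[0])
def normalize(u):
    n = (u[0] ** 2 + u[1] ** 2 + u[2] ** 2) ** 0.5
    return scale(u, 1.0 / n)

def candidate_probes_for_triangle(A, B, C, offset=0.5):
    O = scale(add(add(A, B), C), 1.0 / 3.0)                   # centroid of the triangle
    normal = normalize(cross(sub(B, A), sub(C, A)))            # triangle normal direction
    midpoints = [scale(add(P, O), 0.5) for P in (A, B, C)]     # a, b, c: midpoints of AO, BO, CO
    return [add(m, scale(normal, offset)) for m in midpoints]  # offset to get a', b', c'

print(candidate_probes_for_triangle((0, 0, 0), (1, 0, 0), (0, 1, 0)))
```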
  • the method of determining the probe points corresponding to each vertex in the target virtual terrain sub-block in the candidate probe point set may include obtaining, from the candidate probe point set, all probe points corresponding to each vertex in the target virtual terrain sub-block, or, after all such probe points are obtained, screening them to obtain some of the probe points, thereby providing better-quality probe points for subsequent lighting rendering operations and improving the overall efficiency of lighting rendering, where the screening method can include random screening, conditional screening, and the like;
  • for example, if all the probe points corresponding to a vertex amount to 10 probe points, target probe points whose closeness reaches a target threshold are screened out from these 10 probe points, where the closeness can be, but is not limited to, the closeness of the correspondence between a probe point and a vertex.
  • the spherical harmonic basis coefficient can be the coefficient of a basis function in spherical harmonic lighting; it can also be understood as follows: the lighting is first sampled into N coefficients, and during rendering the above spherical harmonic basis coefficients are used to restore the sampled lighting to complete the rendering.
  • the difference degree of each two probe points can be understood as the difference degree of the spherical harmonic basis coefficient of each two probe points.
  • for example, if the spherical harmonic basis coefficient of probe point A is A1 and the spherical harmonic basis coefficient of probe point B is B1, then the difference between probe point A and probe point B can be understood as the difference between A1 and B1;
  • alternatively, the degree of difference may be that of corresponding target parameters: for example, if the spherical harmonic basis coefficient of probe point A is A1 and the spatial position parameter of probe point A is A2, the target parameter of probe point A combines A1 and A2; if the spherical harmonic basis coefficient of probe point B is B1 and the spatial position parameter of probe point B (which can be understood as spatial position information on the target virtual terrain sub-block) is B2, the target parameter of probe point B combines B1 and B2; the difference between probe point A and probe point B can then be understood as the difference between those two target parameters;
  • the spatial position parameter here is only an illustrative description.
  • the method of merging probe points in the candidate probe point set may include merging at least two probe points into at least one probe point, wherein the at least two probe points may not include the at least one probe point, such as merging probe point A and probe point B (at least two probe points) into probe point C (at least one probe point), with the index relationships of probe point A and probe point B (the index relationships in the first index relationship set) modified to probe point C, or with probe point C taking over the index relationships of probe point A and probe point B; alternatively, the at least two probe points may include the at least one probe point, such as merging probe point A and probe point B (at least two probe points) into probe point A (at least one probe point), with the index relationship of probe point B (the index relationship in the first index relationship set) modified to probe point A, or with probe point A having both its original index relationship and the index relationship of probe point B.
  • the target virtual terrain sub-block is lighting-rendered according to the spherical harmonic basis coefficients of each probe point in the target probe point set and the second index relationship set, where the index relationships in the second index relationship set can be used to determine the index (correspondence) relationship between each probe point in the target probe point set and each vertex in the target virtual terrain sub-block, and, based on these index relationships, the corresponding vertices are rendered using the spherical harmonic basis coefficients of the respective probe points.
  • the probe points in the target probe point set are passed to a baker for baking processing; in the baker, the probe points in the target probe point set are converted into probe points in world space, the basic functions provided by the baker are used to determine the light reception conditions of the probe points in the target probe point set, and the spherical harmonic basis coefficient of each probe point in the target probe point set is then obtained; in addition, when the target virtual terrain sub-block is one of several terrain sub-blocks in the target scene, the probe points of all terrain sub-blocks in the target scene can be passed to the baker together in order to improve data processing efficiency;
  • the probe points in the target probe point set are probe points in a single terrain sub-block space
  • the probe points in a single terrain sub-block space do not involve probe points in other terrain sub-block spaces.
  • probe points in the target probe point set may, in an abnormal case, end up inside other terrain sub-blocks, and probe points in this abnormal case are invalid probe points;
  • the target data of the probe points in the target probe point set can therefore be recorded, where the target data includes at least one of the following: the shortest distance from the probe point to the target virtual terrain sub-block, and the other probe points that the probe point can be associated with; then, during the baking process, if a probe point is invalid due to the above abnormal case, another valid probe point within the nearest distance is looked for first; if none can be found, all related and actually valid probe points are traversed and weighted-averaged (with weights inversely proportional to the square of the distance).
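  • A hedged sketch of this fallback is shown below: for an invalid probe, prefer a valid related probe within a nearest distance; otherwise average all related, actually valid probes with weights inversely proportional to the squared distance. The data layout and the `max_dist` cut-off are assumptions.

```python
def squared_distance(a, b):
    return sum((a[i] - b[i]) ** 2 for i in range(3))

def resolve_invalid_probe(probe, positions, related, valid_sh, max_dist=1.0):
    """probe: id of an invalid probe; positions: {id: (x, y, z)};
    related: ids of other probes recorded as associable with `probe`;
    valid_sh: {id: [SH coefficients]} for probes that baked successfully."""
    candidates = [p for p in related if p in valid_sh]
    if not candidates:
        return None
    nearest = min(candidates, key=lambda p: squared_distance(positions[probe], positions[p]))
    if squared_distance(positions[probe], positions[nearest]) <= max_dist ** 2:
        return valid_sh[nearest]               # a valid probe lies close enough: reuse it
    # otherwise average all related valid probes, weighted by 1 / squared distance
    weights = {p: 1.0 / max(squared_distance(positions[probe], positions[p]), 1e-8)
               for p in candidates}
    total = sum(weights.values())
    count = len(valid_sh[nearest])
    return [sum(weights[p] * valid_sh[p][k] for p in candidates) / total for k in range(count)]
```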
  • virtual terrain usually does not need to display very detailed object models and is mostly displayed in the form of distant objects, and distant objects are often highly repetitive, such as gravel in desert terrain and grass and trees in grassland terrain. It can be seen that virtual terrain has at least the following characteristics: first, it does not require a very precise rendering method, and second, the objects to be rendered are highly repetitive. These characteristics of the virtual terrain are therefore used to merge a large number of probe points according to the degree of difference, which reduces the calculation amount of the spherical harmonic basis coefficients of the probe points and thereby improves the lighting rendering efficiency of the virtual terrain.
  • the target virtual terrain sub-block 404 is determined from the target virtual terrain 402, and a candidate probe point set of the target virtual terrain sub-block 404 is obtained, wherein the probe points in the candidate probe point set are shown in (a) in Figure 4 and are used to perform lighting rendering on the target virtual terrain sub-block 404; further, as shown in (b) in Figure 4, the probe points in the candidate probe point set are processed preliminarily.
  • each index relationship in the first index relationship set represents a vertex and a probe point with a corresponding relationship
  • the spherical harmonic basis coefficient of each probe point in the candidate probe point set is determined, and the difference degree of each two probe points in the candidate probe point set is determined based on the spherical harmonic basis coefficient of each probe point.
  • the probe points in the second index relationship set are shown in (c) in Figure 4.
  • a candidate probe point set of the target virtual terrain sub-block is obtained, where the probe points in the candidate probe point set are used to perform illumination rendering of the target virtual terrain sub-block; determined in the candidate probe point set The probe points corresponding to each vertex in the target virtual terrain sub-block are obtained to obtain the first index relationship set, where each index relationship in the first index relationship set represents a vertex and probe point with a corresponding relationship; the candidate probe point set is obtained The spherical harmonic basis coefficient of each probe point is determined, and based on the spherical harmonic basis coefficient of each probe point, the difference degree of each two probe points in the candidate probe point set is determined, and the probe points in the candidate probe point set are merged to obtain The target probe point set, and according to the first index relationship set, establish a corresponding relationship between the vertices that have a corresponding relationship with the probe points before merging and the merged probe points, and obtain a second index relationship set.
  • lighting rendering is then performed on the target virtual terrain sub-block according to the spherical harmonic basis coefficients of each probe point in the target probe point set and the second index relationship set; the highly repetitive characteristics of the virtual terrain are used to merge a large number of probe points according to the degree of difference, which reduces the calculation amount of the spherical harmonic basis coefficients of the probe points and thereby improves the lighting rendering efficiency of virtual terrain.
  • merging the probe points in the candidate probe point set to obtain the target probe point set and, based on the first index relationship set, establishing a corresponding relationship between the vertices that had a corresponding relationship with the pre-merge probe points and the merged probe points to obtain a second index relationship set includes:
  • the two probe points to be merged include the first current probe point and the second current probe point.
  • the first current probe point is the probe point to be merged into the second current probe point;
  • the number of probe points in the candidate probe point set may be, but is not limited to, limited to a small fixed value (the preset number threshold), or to a value below that fixed value.
  • if the number of probe points is still above the preset number threshold, the merging process continues. For example, suppose the first merging pass yields 10 probe points but the preset number threshold is 5; a second merging pass is then performed on the result of the first pass (10 probe points). Suppose the result of the second pass is 7 probe points, which still does not satisfy the condition of being less than or equal to the preset number threshold; a third merging pass is then performed on the result of the second pass (7 probe points). Suppose the result of the third pass is 5 probe points, which satisfies the condition of being less than or equal to the preset number threshold; the target probe point set is then obtained, and the correspondences in the first index relationship set between vertices and pre-merge probe points are modified into correspondences between those vertices and the merged probe points, to obtain the second index relationship set.
  • S506 Obtain the spherical harmonic basis coefficient of each probe point in the candidate probe point set, and determine the degree of difference of each two probe points in the candidate probe point set based on the spherical harmonic basis coefficient of each probe point;
  • S508 Determine two probe points to be merged based on the difference between each two probe points in the current probe point set, where the two probe points to be merged include the first current probe point and the second current probe point.
  • the first current probe point is the probe point to be merged into the second current probe point;
  • Step S512 Determine whether the number of probe points in the candidate probe point set is less than or equal to the preset number threshold; if yes, execute step S514, and if not, execute step S508, in which the current probe point set is initialized as the candidate probe point set;
  • S516 Perform illumination rendering on the target virtual terrain sub-block according to the spherical harmonic basis coefficients of each probe point in the target probe point set and the second index relationship set.
  • the following steps are repeatedly performed until the number of probe points in the candidate probe point set is less than or equal to the preset number threshold, wherein the current probe point set is initialized as the candidate probe point set: According to the current probe point The difference degree of each two probe points in the set determines the two probe points to be merged.
  • the two probe points to be merged include the first current probe point and the second current probe point.
  • the first current probe point is the probe point to be merged into the second current probe point; the first current probe point is deleted from the current probe point set, the vertices that have a corresponding relationship with the first current probe point are searched for in the first index relationship set, and the corresponding relationships of the found vertices in the first index relationship set are modified from correspondences with the first current probe point to correspondences with the second current probe point, thereby improving lighting rendering efficiency.
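  • The iterative merge can be sketched as below, under the simplifying assumptions that each vertex references a single probe point and that a plain sum of absolute per-coefficient differences stands in for the richer metric of formula (2); probe and vertex ids, and the example values, are illustrative.

```python
from itertools import combinations

def coefficient_difference(sh_a, sh_b):
    # simplified stand-in for the degree-of-difference metric
    return sum(abs(a - b) for a, b in zip(sh_a, sh_b))

def merge_probes(sh_coeffs, vertex_index, max_probes, diff_fn=coefficient_difference):
    """sh_coeffs: {probe_id: [SH coefficients]}; vertex_index: {vertex_id: probe_id}."""
    probes = dict(sh_coeffs)       # current probe point set
    index = dict(vertex_index)     # becomes the second index relationship set
    while len(probes) > max_probes and len(probes) > 1:
        # pick the pair with the smallest difference and merge the first into the second
        first, second = min(combinations(probes, 2),
                            key=lambda pair: diff_fn(probes[pair[0]], probes[pair[1]]))
        del probes[first]
        for vertex, pid in index.items():
            if pid == first:
                index[vertex] = second
    return probes, index

probes, second_index = merge_probes(
    {0: [1.0, 0.2], 1: [1.05, 0.22], 2: [0.1, 0.9]}, {10: 0, 11: 1, 12: 2}, max_probes=2)
print(sorted(probes), second_index)   # probe 0 is merged into probe 1
```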
  • the two probe points to be merged are determined based on the difference between each two probe points in the current probe point set, including:
  • S2 Find two probe points whose difference is less than or equal to the preset difference threshold in the current probe point set; when two such probe points are found, determine the two probe points whose difference is less than or equal to the preset difference threshold as the two probe points to be merged.
  • the method of selecting the two probe points to be merged may be, but is not limited to, determining the two probe points with the minimum difference in the current probe point set, or searching the current probe point set for two probe points whose difference is less than or equal to the preset difference threshold.
  • determine the degree of difference of each two probe points in the candidate probe point set based on the spherical harmonic basis coefficient of each probe point including:
  • each two probe points include the third current probe point and the fourth current probe point.
  • the third current probe point may be a probe point to be merged into the fourth current probe point.
  • S3 Determine the degree of difference between the third current probe point and the fourth current probe point based on the target difference value.
  • for example, assume that the third current probe point is probe point A and the fourth current probe point is probe point B; the target difference between the third current probe point and the fourth current probe point can then be calculated with reference to formula (1), where SH l,m (A) and SH l,m (B) are the spherical harmonic basis coefficients of probe point A and probe point B respectively, and the subscripts l and m are the general indices of the spherical harmonic basis.
  • the difference between the third current probe point and the fourth current probe point is determined based on the target difference, including:
  • S3 Determine the degree of difference between the third current probe point and the fourth current probe point based on the target difference, the normal vector of each triangle, and the preset weight corresponding to each triangle.
  • for example, assume that the third current probe point is probe point A and the fourth current probe point is probe point B; on the basis of the target difference between probe point A and probe point B, the degree of difference between the third current probe point and the fourth current probe point can be calculated with reference to formula (2), where SH l,m (A) and SH l,m (B) are the spherical harmonic basis coefficients of probe point A and probe point B respectively, the subscripts l and m are the general indices of the spherical harmonic basis, n is the number of triangles associated with probe point A, N i is the normal vector of the i-th triangle associated with probe point A (such as the direction vector of its normal), and W i is the corresponding preset weight;
  • third-order spherical harmonics may be, but are not limited to, used, so l ranges from 0 to 2; since the areas of the triangles of the terrain do not differ greatly, the influence of the area can be, but is not limited to, ignored.
  • the above formula (2) is the formula for a single color channel. In actual applications, there are 3 RGB channels, so the average of the differences of the three channels is also required. The formulas for other channels can, but are not limited to, refer to the above formula (2) in the same way.
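  • The bodies of formulas (1) and (2) are not reproduced in this text, so the following single-channel sketch is only one simplified reading of the surrounding description: the per-coefficient difference between the two probes' spherical harmonic basis coefficients is accumulated once per triangle associated with probe A, scaled by that triangle's preset weight W i, and the three RGB channels are averaged; how the normal vector N i enters the actual formula (2) is not recoverable here, so it is carried along but deliberately left unused. The result could be passed as `diff_fn` to the merge sketch above.

```python
def channel_difference(sh_a, sh_b, triangles):
    """sh_a, sh_b: SH coefficients of probe A and probe B for one colour channel;
    triangles: [(normal, weight), ...] for the triangles associated with probe A.
    The normal is kept in the data for completeness but not used in this stand-in."""
    per_pair = sum(abs(a - b) for a, b in zip(sh_a, sh_b))
    return sum(weight * per_pair for _normal, weight in triangles)

def probe_difference(sh_a_rgb, sh_b_rgb, triangles):
    # average the per-channel differences over the three RGB channels
    return sum(channel_difference(a, b, triangles)
               for a, b in zip(sh_a_rgb, sh_b_rgb)) / 3.0
```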
  • the difference between the third current probe point and the fourth current probe point is determined based on the target difference, the normal vector of each triangle, and the preset weight corresponding to each triangle, including:
  • a preset difference function is used to determine the degree of difference between the third current probe point and the fourth current probe point according to the target difference, the normal vector of each triangle and the preset weight corresponding to each triangle;
  • alternatively, the preset difference function is used to determine an initial degree of difference between the third current probe point and the fourth current probe point according to the target difference, the normal vector of each triangle and the preset weight corresponding to each triangle, and the degree of difference between the third current probe point and the fourth current probe point is determined to be equal to the product of the initial degree of difference and a preset constant, where the preset constant is greater than 1.
  • the above degree of difference can also be corrected in the manner shown in the following formula (3), but is not limited to:
  • C can be, but is not limited to, a constant greater than 1.
  • that is, the initial degree of difference is directly or indirectly corrected through the preset difference function.
  • when probe points are merged directly based on the initial degree of difference, the effect obtained is as shown in (a) of Figure 6, where the boundary seam between virtual terrain sub-block 602 and virtual terrain sub-block 604 is relatively obvious; when the initial degree of difference is first directly or indirectly corrected through the preset difference function and the probe points are then merged, the effect obtained is as shown in (b) of Figure 6, where the boundary between virtual terrain sub-block 602 and virtual terrain sub-block 604 is smoother.
  • when the third current probe point is not located at the boundary of the target virtual terrain sub-block, the degree of difference between the third current probe point and the fourth current probe point is determined through the preset difference function according to the target difference, the normal vector of each triangle and the preset weight corresponding to each triangle;
  • when the third current probe point is located at the boundary of the target virtual terrain sub-block, the initial degree of difference between the third current probe point and the fourth current probe point is determined through the preset difference function according to the target difference, the normal vector of each triangle and the preset weight corresponding to each triangle;
  • the degree of difference between the third current probe point and the fourth current probe point is then determined to be equal to the product of the initial degree of difference and the preset constant, where the preset constant is greater than 1, thereby improving the smoothness of the virtual terrain sub-block boundary.
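  • A minimal sketch of this correction (formula (3)); the constant value used is an assumption, since the text only requires it to be greater than 1.

```python
def corrected_difference(initial_difference, probe_on_boundary, C=2.0):
    """Scale the initial degree of difference by a preset constant C > 1 when the probe
    lies on the sub-block boundary, so boundary probes are merged less aggressively
    and the seams between neighbouring sub-blocks stay smoother."""
    return initial_difference * C if probe_on_boundary else initial_difference
```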
  • the target virtual terrain sub-block is illuminated and rendered based on the spherical harmonic basis coefficients of each probe point in the target probe point set and the second index relationship set, including:
  • S3 determine the spherical harmonic basis coefficient of each vertex in the target virtual terrain sub-block according to the spherical harmonic basis coefficient of the probe point corresponding to each vertex in the target virtual terrain sub-block;
  • S4 Perform illumination rendering on the target virtual terrain sub-block according to the spherical harmonic basis coefficients of each vertex in the target virtual terrain sub-block.
  • during baking, the probe points in the model space where the target virtual terrain sub-block is located can be, but are not limited to, converted into probe points in the target space, and the basic functions provided by the baker are used to calculate the light reception of each probe point, so that the spherical harmonic basis coefficient of the illumination is finally obtained; the spherical harmonic basis coefficients of all elements in the baked target virtual terrain sub-block are then saved as maps (the spherical harmonic basis coefficients of all elements may be saved as one or more maps, or the spherical harmonic basis coefficients of one element may be saved as one map, and so on); at runtime, the spherical harmonic map is sampled according to the pre-saved index and weight data, and the obtained coefficients are dot-producted with the basis function corresponding to the normal to obtain the lighting information, thereby completing the lighting rendering of the target virtual terrain sub-block.
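  • A hedged sketch of this runtime step is shown below: the coefficients sampled for a vertex are dot-producted with the spherical harmonic basis evaluated at the normal. The nine basis constants are the standard real third-order (l = 0 to 2) spherical harmonic basis; the 3-channel-by-9-coefficient layout and the example values are assumptions.

```python
def sh_basis(normal):
    """Real third-order (l = 0..2) spherical harmonic basis values for a unit direction."""
    x, y, z = normal
    return [
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ]

def shade_vertex(vertex_sh_rgb, normal):
    """vertex_sh_rgb: 3 channels x 9 SH coefficients sampled from the spherical harmonic map
    for this vertex (already blended using the pre-saved index and weight data)."""
    basis = sh_basis(normal)
    return [sum(c * b for c, b in zip(channel, basis)) for channel in vertex_sh_rgb]

flat_lighting = [[0.8] + [0.0] * 8] * 3              # direction-independent lighting, all channels
print(shade_vertex(flat_lighting, (0.0, 0.0, 1.0)))  # roughly 0.226 per channel
```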
  • the spherical harmonic basis coefficients may, but are not limited to, include three orders of spherical harmonic basis coefficients, wherein the second-order and third-order coefficients may be, but are not limited to, obtained by normalizing them with the first-order coefficients;
  • optional three-order spherical harmonic basis coefficients are shown in formula 702 in Figure 7, where SH l,m are the spherical harmonic basis coefficients, the subscripts l and m are the general indices of the spherical harmonic basis, N is the number of triangles associated with the probe point, w(i) is the weight, and L(i) is the incident illumination in a certain direction; as shown in formula 704 in Figure 7, the trailing part of the spherical harmonic basis function is less than 1, so the high-order (2nd and 3rd order) spherical harmonic basis coefficients can be normalized by the 1st-order spherical harmonic basis coefficients.
  • the data format of the spherical harmonic basis coefficients can be, but is not limited to, encoded in the following manner: first, the format of the texture can be, but is not limited to, Uint4, a format that is supported by hardware and convenient for encoding; each set of spherical harmonic basis coefficients usually needs to occupy 2 pixels, so the low-order spherical harmonic basis coefficients (first spherical harmonic basis sub-coefficients) and the high-order spherical harmonic basis coefficients (second spherical harmonic basis sub-coefficients) are split into two different pixels, which makes it easier to implement levels of detail (LOD);
  • nearby objects need to perform both low-order and high-order spherical harmonic calculations, while distant objects only need to perform low-order spherical harmonic calculations;
  • distant objects are also sampled only once; further, the high-order spherical harmonic basis coefficients can be, but are not limited to, split into another texture, so that distant objects only need to load half of the texture data;
  • the first spherical harmonic basis sub-coefficients include the 1st and 2nd-order coefficients;
  • the second spherical harmonic basis sub-coefficients include the 3rd-order coefficients;
  • each set of spherical harmonic basis coefficients is divided into two 16-byte pixels;
  • the first spherical harmonic basis sub-coefficients of the 3 RGB channels are packed into the first pixel 802;
  • the second spherical harmonic basis sub-coefficients of the 3 RGB channels are packed into the second pixel 804;
  • for the first pixel, the 16-byte storage space is divided into three parts:
  • the first part is 6 bytes, which is used to hold the first-order spherical harmonic basis coefficients of the 3 RGB channels;
  • the second part is 9 bytes, which is used to hold the second-order spherical harmonic basis coefficients of the 3 RGB channels;
  • the third part is 1 reserved byte, which can be used to save shadow data to achieve a relatively rough probe-based shadow effect;
  • for the second pixel, the 16-byte storage space is divided into two parts:
  • the first part is 15 bytes, which is used to hold the third-order spherical harmonic basis coefficients of the 3 RGB channels;
  • the second part is 1 reserved byte, which can be used to save shadow data to achieve a relatively rough probe-based shadow effect.
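  • A hedged sketch of this two-pixel, 16-bytes-per-pixel layout is shown below; only the byte budget (6 + 9 + 1 and 15 + 1) follows the description, while the quantisation itself (how coefficients are scaled into 16-bit and 8-bit integers) is not specified in this text and is therefore an assumption.

```python
import struct

def pack_sh_set(sh_rgb, shadow=0):
    """sh_rgb: 3 channels x 9 SH coefficients (l = 0..2), each assumed pre-scaled into [0, 1]."""
    def q16(v): return max(0, min(65535, int(round(v * 65535))))   # 2-byte quantisation
    def q8(v):  return max(0, min(255,   int(round(v * 255))))     # 1-byte quantisation

    pixel1 = bytearray()
    for ch in range(3):                        # 6 bytes: 1st-order (l = 0) coefficient, RGB
        pixel1 += struct.pack('<H', q16(sh_rgb[ch][0]))
    for ch in range(3):                        # 9 bytes: 2nd-order (l = 1) coefficients, RGB
        pixel1 += bytes(q8(sh_rgb[ch][k]) for k in range(1, 4))
    pixel1 += bytes([shadow & 0xFF])           # 1 reserved byte (e.g. rough shadow data)

    pixel2 = bytearray()
    for ch in range(3):                        # 15 bytes: 3rd-order (l = 2) coefficients, RGB
        pixel2 += bytes(q8(sh_rgb[ch][k]) for k in range(4, 9))
    pixel2 += bytes([shadow & 0xFF])           # 1 reserved byte

    assert len(pixel1) == 16 and len(pixel2) == 16
    return bytes(pixel1), bytes(pixel2)
```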
  • determining the spherical harmonic basis coefficients of each vertex in the target virtual terrain sub-block based on the spherical harmonic basis coefficients of the probe points corresponding to each vertex in the target virtual terrain sub-block includes the following:
  • a distinction can be, but is not limited to, made between the case where the current vertex has a corresponding relationship with one probe point in the target probe point set and the case where it has corresponding relationships with multiple probe points;
  • when the current vertex has a corresponding relationship with one probe point in the target probe point set, the spherical harmonic basis coefficient of the current vertex is determined to be equal to the spherical harmonic basis coefficient of that probe point; when the current vertex has corresponding relationships with multiple probe points in the target probe point set, the spherical harmonic basis coefficient of the current vertex is determined to be equal to the weighted sum of the spherical harmonic basis coefficients of those probe points.
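  • A minimal sketch of this rule, assuming the (probe point, weight) links recorded for a vertex in the second index relationship set:

```python
def vertex_sh(probe_sh, links):
    """probe_sh: {probe_id: [SH coefficients]}; links: [(probe_id, weight), ...] for one vertex."""
    if len(links) == 1:                      # single corresponding probe: reuse its coefficients
        return list(probe_sh[links[0][0]])
    length = len(probe_sh[links[0][0]])      # multiple probes: weighted sum of their coefficients
    return [sum(weight * probe_sh[pid][k] for pid, weight in links) for k in range(length)]

print(vertex_sh({0: [1.0, 0.5], 1: [0.0, 0.25]}, [(0, 0.75), (1, 0.25)]))   # [0.75, 0.4375]
```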
  • the aforementioned probe points corresponding to each vertex in the target virtual terrain sub-block are determined in the candidate probe point set to obtain the first index relationship set, which includes:
  • S2 Find the multiple probe points closest to each vertex in the candidate probe point set, set corresponding weights for the multiple probe points, and assign each vertex to the corresponding multiple probe points and the weights corresponding to the multiple probe points. It is recorded in the first index relationship set as an index relationship.
  • since the virtual terrain uses virtual terrain sub-blocks as structural units, the coverage area of a virtual terrain sub-block is usually still relatively large.
  • the attribute structure of a vertex is limited; for example, assuming that the attribute space of each vertex is 32 bits (not fixed), then within the 32-bit attribute space the attribute space of at least two probe points can be, but is not limited to, allocated, such as allocating 9 bits of attribute space for probe point A, 9 bits of attribute space for probe point B, and leaving the remaining 14 bits.
  • the weights corresponding to multiple probe points may be, but are not limited to, used to calculate the weighted sum of the spherical harmonic basis coefficients of multiple probe points.
  • the index relationship between vertices and probe points can be established by, but is not limited to, searching the candidate probe point set for the probe point closest to each vertex and recording each vertex together with the found probe point in the first index relationship set as an index relationship; or searching the candidate probe point set for the multiple probe points closest to each vertex, setting corresponding weights for the multiple probe points, and recording each vertex together with the found probe points and their corresponding weights in the first index relationship set as an index relationship.
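  • A small sketch of the second option (multiple nearest probe points with weights) is shown below; inverse-distance weighting is an assumption, since the text only says that corresponding weights are set for the selected probe points.

```python
def build_vertex_probe_index(vertices, probes, k=2):
    """vertices, probes: {id: (x, y, z)}. Returns {vertex_id: [(probe_id, weight), ...]}."""
    def distance(a, b):
        return sum((a[i] - b[i]) ** 2 for i in range(3)) ** 0.5

    index = {}
    for vid, vpos in vertices.items():
        nearest = sorted(probes, key=lambda pid: distance(vpos, probes[pid]))[:k]
        inverse = [1.0 / max(distance(vpos, probes[pid]), 1e-6) for pid in nearest]
        total = sum(inverse)
        index[vid] = [(pid, w / total) for pid, w in zip(nearest, inverse)]
    return index

index = build_vertex_probe_index({0: (0.0, 0.0, 0.0)}, {7: (1.0, 0.0, 0.0), 8: (0.0, 2.0, 0.0)})
print(index)   # vertex 0 -> probe 7 with the larger weight, probe 8 with the smaller one
```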
  • obtain a set of candidate probe points for the target virtual terrain sub-block including:
  • the method of filtering out invalid probe points may, but is not limited to, include filtering out probe points located in invalid areas of the target virtual terrain sub-block (such as the interior of the target virtual terrain sub-block, backlit areas, etc.), filtering out probe points whose degree of correlation with the target virtual terrain sub-block is lower than an effective threshold, and so on.
  • in the side view of the target virtual terrain sub-block 1002 shown in Figure 10, for all candidate probe points of the target virtual terrain sub-block 1002, such as probe point e and probe point d, it is checked whether the candidate probe point is located inside the target virtual terrain sub-block 1002; specifically, d is a probe point located outside the target virtual terrain sub-block 1002, and e is a probe point located inside the target virtual terrain sub-block 1002; the illumination information of probe points inside the target virtual terrain sub-block 1002 is invalid, so such probe points are deleted from the candidate probe point set.
  • the original probe point set of the target virtual terrain sub-block is obtained, where the target virtual terrain sub-block is divided into a set of triangles, and the original probe point set includes one or more probe points corresponding to each triangle in the set of triangles. probe points; filter out invalid probe points in the original probe point set to obtain a candidate probe point set, thereby achieving the effect of improving the execution efficiency of lighting rendering.
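  • A hedged sketch of removing probe points that lie inside the terrain is shown below; the inside test is approximated with a heightfield comparison (z-up), which is an assumption since the text does not specify how the check is performed.

```python
def filter_invalid_probes(probes, height_at):
    """probes: {probe_id: (x, y, z)}; height_at(x, y) returns the terrain surface height (z-up)."""
    return {pid: pos for pid, pos in probes.items() if pos[2] > height_at(pos[0], pos[1])}

# usage with a toy flat terrain at height 0: probe 1 lies below the surface and is removed
valid = filter_invalid_probes({0: (0.0, 0.0, 0.5), 1: (1.0, 1.0, -0.2)}, lambda x, y: 0.0)
print(valid)
```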
  • the above-mentioned virtual terrain lighting rendering method is applied to the lighting rendering scene of 3D games to improve the game image quality and realism.
  • models with curved surface shapes in space also achieve better results;
  • Step S1102 obtain terrain sub-blocks composed of several triangles
  • Step S1104 generate all candidate probe points
  • Step S1106 Remove invalid candidate probe points to obtain the remaining valid probe points
  • Step S1108-1 calculate the index and weight of the probe point associated with the terrain vertex
  • Step S1108-2 Perform illumination calculation on all probe points to obtain the spherical harmonic basis coefficients
  • Step S1110 merge probe point combinations whose difference is less than the threshold
  • Step S1112 determine whether the number of remaining probe points is greater than the preset value, if so, execute step S1114, if not, execute step S1116;
  • Step S1114 find the combination of probe points with the smallest difference and merge them;
  • Step S1116 obtain the final probe point list.
  • the color of each probe point in the model space can be, but is not limited to, different, and the color of a vertex is related to the probe point associated with it that has the largest weight, i.e. the vertex color is the same as that probe point's color; vertex line segments can, but are not limited to, be used to represent the normal direction; further, based on the calculated probe points, the several probe points and weights associated with each vertex on the model are calculated, and the calculated probe point indices and weights are saved in the model vertex data; the scene is then passed to the baker for baking.
  • the scene consists of several models, and the same model may have multiple instances. That is, convert the probe points in the model space to the world space, use the basic functions provided by the baker to calculate the light reception of the probe points, and finally obtain the spherical harmonic basis coefficient of the illumination;
  • the spherical harmonic basis coefficients of all baked virtual terrain sub-blocks are saved as maps (the spherical harmonic basis coefficients of all virtual terrain sub-blocks may be saved as one map, the spherical harmonic basis coefficients of one virtual terrain sub-block may be saved as one map, the spherical harmonic basis coefficients of multiple virtual terrain sub-blocks may be saved as one map, or the spherical harmonic basis coefficients of multiple virtual terrain sub-blocks may be saved as multiple maps); as shown in Figure 12,
  • the spherical harmonic basis coefficients of the virtual terrain sub-block 1202, the virtual terrain sub-block 1204 and the virtual terrain sub-block 1206 are saved as the target map 1208.
  • a certain compression algorithm is used to assemble the coefficients into several textures; during runtime, the spherical harmonic texture is sampled based on the index and weight data saved at the vertices, and the dot product of the obtained coefficients and the basis function corresponding to the normal is lighting information.
  • a virtual terrain lighting rendering device for implementing the above virtual terrain lighting rendering method is also provided. As shown in Figure 13, the device includes:
  • the first acquisition unit 1302 is used to obtain a candidate probe point set of the target virtual terrain sub-block, where the probe points in the candidate probe point set are used for lighting rendering of the target virtual terrain sub-block;
  • the determination unit 1304 is configured to determine the probe points corresponding to each vertex in the target virtual terrain sub-block in the candidate probe point set, and obtain a first index relationship set, wherein each index relationship in the first index relationship set represents a vertex and a probe point that have a corresponding relationship;
  • the second acquisition unit 1306 is used to obtain the spherical harmonic basis coefficient of each probe point in the candidate probe point set, and determine the difference degree of each two probe points in the candidate probe point set based on the spherical harmonic basis coefficient of each probe point. ;
  • the merging unit 1308 is used to merge the probe points in the candidate probe point set according to the degree of difference to obtain the target probe point set, and, according to the first index relationship set, establish corresponding relationships between the vertices that had a corresponding relationship with the pre-merge probe points and the merged probe points to obtain the second index relationship set;
  • the rendering unit 1310 is configured to perform illumination rendering on the target virtual terrain sub-block according to the spherical harmonic basis coefficients of each probe point in the target probe point set and the second index relationship set.
  • a candidate probe point set of the target virtual terrain sub-block is obtained, where the probe points in the candidate probe point set are used to perform illumination rendering of the target virtual terrain sub-block; determined in the candidate probe point set The probe points corresponding to each vertex in the target virtual terrain sub-block are obtained to obtain the first index relationship set, where each index relationship in the first index relationship set represents a vertex and probe point with a corresponding relationship; the candidate probe point set is obtained The spherical harmonic basis coefficient of each probe point is determined, and based on the spherical harmonic basis coefficient of each probe point, the difference degree of each two probe points in the candidate probe point set is determined, and the probe points in the candidate probe point set are merged to obtain The target probe point set, and according to the first index relationship set, establish a corresponding relationship between the vertices that have a corresponding relationship with the probe points before merging and the merged probe points, and obtain a second index relationship set.
  • the spherical harmonic basis coefficients of each probe point and the second index relationship set are used to perform illumination rendering on the target virtual terrain sub-block.
  • the highly repetitive characteristics of the virtual terrain are used to merge a large number of probe points according to the degree of difference, which reduces the calculation amount of the spherical harmonic basis coefficients of the probe points and thereby improves the lighting rendering efficiency of the virtual terrain.
  • the merging unit 1308 includes:
  • the repetition module is used to repeatedly perform the following steps until the number of probe points in the candidate probe point set is less than or equal to the preset number threshold, wherein the current probe point set is initialized as the candidate probe point set:
  • the first determination module is used to determine two probe points to be merged based on the degree of difference of each two probe points in the current probe point set, where the two probe points to be merged include the first current probe point and the second current probe point, and the first current probe point is the probe point to be merged into the second current probe point;
  • the search module is used to delete the first current probe point from the current probe point set, search the first index relationship set for the vertices that have a corresponding relationship with the first current probe point, and establish corresponding relationships between the found vertices and the second current probe point.
  • the first determination module includes:
  • the first determination sub-module is used to determine the two probe points with the smallest difference in the current set of probe points, and determine the two probe points with the smallest difference as the two probe points to be merged; or
  • the second determination sub-module is used to search the current probe point set for two probe points whose degree of difference is less than or equal to a preset difference threshold and, when such a pair is found, to determine those two probe points as the two probe points to be merged.
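To make the merging loop described above more concrete, the following Python sketch shows one way such a greedy merge could be organized. It is an illustrative sketch only: the `sh` and `vertex_to_probe` structures, the L2-based placeholder difference metric, and the fixed `max_probes` stopping rule are assumptions rather than the disclosed implementation (the patent's own difference function additionally uses triangle normals and preset weights, as described further below).

```python
import numpy as np

def merge_probes(sh, vertex_to_probe, max_probes):
    """Greedy sketch: repeatedly merge the two most similar probe points.

    sh              : dict probe_id -> np.ndarray of SH basis coefficients
    vertex_to_probe : dict vertex_id -> probe_id  (first index relationship set)
    Returns the surviving probe ids and the remapped vertex index
    (the second index relationship set).
    """
    probes = set(sh)
    index = dict(vertex_to_probe)

    def difference(a, b):
        # Placeholder metric: L2 distance between SH coefficient vectors.
        # A threshold-based variant would instead merge any pair whose
        # difference falls below a preset value.
        return float(np.linalg.norm(sh[b] - sh[a]))

    while len(probes) > max_probes:
        # First current probe = src (deleted), second current probe = dst (kept).
        _, src, dst = min(
            (difference(a, b), a, b) for a in probes for b in probes if a != b
        )
        probes.discard(src)
        for v, p in index.items():
            if p == src:          # vertices that pointed at src now point at dst
                index[v] = dst
    return probes, index

# Tiny demo with three probes and four vertices (synthetic data).
sh = {0: np.array([1.0, 0.20, 0.0]),
      1: np.array([1.0, 0.21, 0.0]),
      2: np.array([0.1, 0.90, 0.4])}
vertex_to_probe = {"v0": 0, "v1": 1, "v2": 1, "v3": 2}
print(merge_probes(sh, vertex_to_probe, max_probes=2))
```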
  • the determining unit 1304 includes:
  • An execution module configured to perform the following steps on every two probe points in the candidate probe point set, where, when performing the following steps, each two probe points include a third current probe point and a fourth current probe point;
  • the first acquisition module is used to obtain the target difference obtained by subtracting the spherical harmonic basis coefficient of the third current probe point from the spherical harmonic basis coefficient of the fourth current probe point;
  • the second determination module is used to determine the degree of difference between the third current probe point and the fourth current probe point according to the target difference value.
  • the target virtual terrain sub-block is divided into a group of triangles, and the second determination module includes:
  • the first acquisition sub-module is configured to acquire a triangle set associated with the third current probe point according to the set of triangles, where the triangle set includes triangles where vertices that have a corresponding relationship with the third current probe point are located;
  • the second acquisition submodule is used to obtain the normal vector of each triangle in the triangle set and the preset weight corresponding to each triangle;
  • the third determination sub-module is used to determine the degree of difference between the third current probe point and the fourth current probe point based on the target difference, the normal vector of each triangle, and the preset weight corresponding to each triangle.
  • the third determination sub-module includes:
  • the first determination subunit is used to, when the third current probe point is not located at the boundary of the target virtual terrain sub-block, determine the degree of difference between the third current probe point and the fourth current probe point through a preset difference function, based on the target difference, the normal vector of each triangle, and the preset weight corresponding to each triangle;
  • the second determination subunit is used to, when the third current probe point is located at the boundary of the target virtual terrain sub-block, determine the initial degree of difference between the third current probe point and the fourth current probe point through the preset difference function, based on the target difference, the normal vector of each triangle, and the preset weight corresponding to each triangle; the third determination subunit is used to determine the degree of difference between the third current probe point and the fourth current probe point to be equal to the product of the initial degree of difference and a preset constant, where the preset constant is greater than 1.
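The exact difference function is defined by formula (2) of the original description and is not reproduced in this text; the Python sketch below only mirrors its stated inputs, namely the spherical harmonic coefficient difference, the normal vectors and preset weights of the triangles associated with the probe point, and a constant greater than 1 applied when the probe point lies on the sub-block boundary. The two-band SH evaluation, the simple averaging, and the default `boundary_scale` value are assumptions made for illustration only.

```python
import numpy as np

SQRT1_4PI = 0.2820948   # Y_0,0 of the real spherical harmonic basis
SQRT3_4PI = 0.4886025   # scale of the three linear (band-1) basis functions

def sh_basis_2band(n):
    """First two real SH bands evaluated in unit direction n = (x, y, z)."""
    x, y, z = n
    return np.array([SQRT1_4PI, SQRT3_4PI * y, SQRT3_4PI * z, SQRT3_4PI * x])

def difference_degree(delta_sh, tri_normals, tri_weights,
                      on_boundary, boundary_scale=2.0):
    """Hypothetical difference metric for one colour channel.

    delta_sh    : (4,) SH coefficient difference between the two probe points
    tri_normals : (n, 3) unit normals of the triangles associated with the probe
    tri_weights : (n,) preset weights of those triangles
    An RGB version would average the result over the three colour channels.
    """
    per_triangle = [w * abs(sh_basis_2band(n) @ delta_sh)
                    for n, w in zip(tri_normals, tri_weights)]
    diff = float(np.mean(per_triangle))
    if on_boundary:
        diff *= boundary_scale   # boundary probes become harder to merge
    return diff

# Example: two associated triangles, mostly upward-facing.
delta = np.array([0.05, 0.0, 0.02, 0.01])
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.6, 0.8]])
weights = np.array([1.0, 0.5])
print(difference_degree(delta, normals, weights, on_boundary=True))
```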
  • the rendering unit 1310 includes:
  • the saving module is used to save the spherical harmonic basis coefficients of each probe point in the target probe point set as a target map
  • the third determination module is used to determine the spherical harmonic basis coefficients of the probe points corresponding to each vertex in the target virtual terrain sub-block from the target map according to the second index relationship set when the target virtual terrain sub-block needs to be rendered;
  • the fourth determination module is used to determine the spherical harmonic basis coefficient of each vertex in the target virtual terrain sub-block according to the spherical harmonic basis coefficient of the probe point corresponding to each vertex in the target virtual terrain sub-block;
  • the rendering module is used to perform illumination rendering on the target virtual terrain sub-block according to the spherical harmonic basis coefficients of each vertex in the target virtual terrain sub-block.
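As a rough illustration of the save-and-look-up path performed by these modules, the sketch below packs the surviving probe points' SH coefficients into a plain 2-D array standing in for the "target map", and fetches a vertex's coefficients through the second index relationship set. Real texture formats, the pixel packing and LOD splitting discussed in the description, and any compression are omitted; all names here are hypothetical.

```python
import numpy as np

def save_target_map(target_probes, sh):
    """Pack the SH coefficients of the surviving probe points into a flat 'map'.

    Returns the map (one row per probe) and a probe_id -> row lookup table.
    """
    order = sorted(target_probes)
    texel_of = {p: i for i, p in enumerate(order)}
    target_map = np.stack([sh[p] for p in order])   # shape: (num_probes, num_coeffs)
    return target_map, texel_of

def vertex_sh(vertex, second_index, target_map, texel_of):
    """Fetch the SH coefficients of the probe point a vertex is indexed to."""
    probe = second_index[vertex]
    return target_map[texel_of[probe]]

# At render time, the per-vertex SH vector would be combined with the SH basis
# functions evaluated along the vertex normal to recover the lighting.
sh = {3: np.array([0.8, 0.1, 0.0, 0.05]), 7: np.array([0.6, 0.0, 0.2, 0.0])}
target_map, texel_of = save_target_map({3, 7}, sh)
second_index = {"v0": 3, "v1": 7}
print(vertex_sh("v0", second_index, target_map, texel_of))
```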
  • the fourth determination module includes:
  • the fourth determination sub-module is used to determine the spherical harmonic basis coefficient of the current vertex to be equal to the spherical harmonic basis coefficient of a probe point when the current vertex has a corresponding relationship with a probe point in the target probe point set; or
  • the fifth determination submodule is used to determine the spherical harmonic basis coefficient of the current vertex to be equal to the weighted sum of the spherical harmonic basis coefficients of the multiple probe points when the current vertex has a corresponding relationship with multiple probe points in the target probe point set. and.
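A minimal sketch of this per-vertex resolution step is given below. It assumes an index entry is either a single probe id or a list of (probe id, weight) pairs; the entry format itself is an assumption, not part of the disclosure.

```python
import numpy as np

def resolve_vertex_sh(entry, sh):
    """Return a vertex's SH coefficients from its index entry.

    entry : a single probe id, or a list of (probe_id, weight) pairs
    sh    : dict probe_id -> SH coefficient vector
    """
    if isinstance(entry, (list, tuple)):
        # Multiple associated probes: weighted sum of their SH coefficients.
        return sum(w * sh[p] for p, w in entry)
    # Single associated probe: the vertex simply inherits its coefficients.
    return sh[entry]

sh = {0: np.array([1.0, 0.2]), 1: np.array([0.5, 0.4])}
print(resolve_vertex_sh(0, sh))                        # one associated probe
print(resolve_vertex_sh([(0, 0.75), (1, 0.25)], sh))   # weighted blend of two probes
```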
  • the determining unit 1304 includes:
  • the first recording module is used to find a probe point closest to each vertex in the candidate probe point set, and record each vertex and the corresponding found probe point as an index relationship in the first index relationship set;
  • the second recording module is used to find multiple probe points closest to each vertex in the candidate probe point set, set corresponding weights for the multiple probe points, and compare each vertex with the corresponding multiple found probe points, And the weights corresponding to multiple probe points are recorded in the first index relationship set as index relationships.
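The following sketch shows one plausible way of building such an index: nearest-probe lookup for the single-probe case, and k-nearest probes with normalized inverse-distance weights for the multi-probe case. The inverse-distance weighting is only an illustrative choice; the text states that corresponding weights are set, not how they are computed.

```python
import numpy as np

def build_vertex_probe_index(vertex_positions, probe_positions, k=1):
    """Sketch of building the first index relationship set.

    k = 1 : record only the nearest probe point per vertex.
    k > 1 : record the k nearest probe points plus (assumed) inverse-distance weights.
    """
    index = {}
    ids = list(probe_positions.keys())
    probes = np.asarray([probe_positions[i] for i in ids], dtype=float)
    for v, pos in vertex_positions.items():
        d = np.linalg.norm(probes - np.asarray(pos, dtype=float), axis=1)
        nearest = np.argsort(d)[:k]
        if k == 1:
            index[v] = ids[nearest[0]]
        else:
            w = 1.0 / np.maximum(d[nearest], 1e-6)
            w /= w.sum()
            index[v] = [(ids[i], float(wi)) for i, wi in zip(nearest, w)]
    return index

vertices = {"v0": (0.0, 0.0, 0.0), "v1": (2.0, 0.0, 0.0)}
probe_pos = {10: (0.1, 0.0, 0.2), 11: (1.9, 0.0, 0.1), 12: (5.0, 0.0, 0.0)}
print(build_vertex_probe_index(vertices, probe_pos, k=2))
```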
  • the first acquisition unit 1302 includes:
  • the second acquisition module is used to obtain the original probe point set of the target virtual terrain sub-block, where the target virtual terrain sub-block is divided into a set of triangles and the original probe point set includes one or more probe points corresponding to each triangle in the set of triangles;
  • the third acquisition module is used to filter out invalid probe points in the original probe point set and obtain a candidate probe point set.
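For illustration, the sketch below generates candidate probe points for a single triangle by offsetting the midpoints of the vertex-to-centroid segments along the triangle normal (as in the description of Figure 3), and then filters out probe points that would end up inside the terrain. The heightfield-based validity test, the offset distance and the z-up convention are stand-in assumptions; the description's actual check is whether a probe lies inside the sub-block geometry.

```python
import numpy as np

def triangle_probes(tri, offset=0.5):
    """Candidate probes for one triangle: midpoints of the vertex-to-centroid
    segments, pushed off the surface along the triangle normal.
    The offset distance (and a one-probe fallback for small triangles, not
    shown here) are configuration choices, not fixed by the text."""
    a, b, c = (np.asarray(v, dtype=float) for v in tri)
    centroid = (a + b + c) / 3.0
    normal = np.cross(b - a, c - a)
    normal /= np.linalg.norm(normal)
    return [(v + centroid) / 2.0 + offset * normal for v in (a, b, c)]

def filter_probes(probes, terrain_height):
    """Drop probes that end up inside the terrain (here: below a heightfield,
    with z treated as the up axis)."""
    return [p for p in probes if p[2] > terrain_height(p[0], p[1])]

tri = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.2), (0.0, 1.0, 0.1)]
candidates = triangle_probes(tri)
valid = filter_probes(candidates, terrain_height=lambda x, y: 0.0)
print(len(candidates), "candidates ->", len(valid), "valid")
```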
  • an electronic device for implementing the above-mentioned lighting rendering method of virtual terrain is also provided.
  • the electronic device includes a memory 1402 and a processor 1404.
  • a computer program is stored in the memory 1402, and the processor 1404 is configured to execute the steps in any of the above method embodiments through the computer program.
  • the above-mentioned electronic device may be located in at least one network device among multiple network devices of the computer network.
  • the above-mentioned processor may be configured to perform the following steps through a computer program:
  • S1 Obtain a candidate probe point set of the target virtual terrain sub-block, where the probe points in the candidate probe point set are used to perform illumination rendering on the target virtual terrain sub-block;
  • S2 Determine, in the candidate probe point set, the probe points corresponding to each vertex in the target virtual terrain sub-block to obtain a first index relationship set, where each index relationship in the first index relationship set represents a vertex and a probe point that correspond to each other;
  • S3 Obtain the spherical harmonic basis coefficient of each probe point in the candidate probe point set, and determine the degree of difference of every two probe points in the candidate probe point set based on the spherical harmonic basis coefficient of each probe point;
  • S4 According to the degree of difference, merge the probe points in the candidate probe point set to obtain a target probe point set, and, according to the first index relationship set, establish a corresponding relationship between the vertices that have a corresponding relationship with the probe points before merging and the merged probe points, so as to obtain a second index relationship set;
  • S5 Perform illumination rendering on the target virtual terrain sub-block according to the spherical harmonic basis coefficients of each probe point in the target probe point set and the second index relationship set.
  • the structure shown in Figure 14 is only illustrative, and the electronic device can also be a smart phone (such as an Android phone, an iOS phone, etc.), a tablet computer, a handheld computer, a mobile Internet device (Mobile Internet Devices, MID), a PAD, or another terminal device.
  • Figure 14 does not limit the structure of the above electronic device.
  • the electronic device may also include more or fewer components (such as network interfaces, etc.) than shown in FIG. 14 , or have a different configuration than shown in FIG. 14 .
  • the memory 1402 can be used to store software programs and modules, such as the program instructions/modules corresponding to the virtual terrain lighting rendering method and device in the embodiment of the present application.
  • the processor 1404 runs the software programs and modules stored in the memory 1402, thereby executing various functional applications and data processing, that is, implementing the above lighting rendering method for virtual terrain.
  • Memory 1402 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory 1402 may further include memory located remotely relative to the processor 1404, and these remote memories may be connected to the terminal through a network.
  • the memory 1402 may be, but is not limited to, used to store a set of candidate probe points, a first index relationship set, and a second index relationship set.
  • the memory 1402 may include, but is not limited to, the first acquisition unit 1302, the determination unit 1304, the second acquisition unit 1306, the merging unit 1308 and the rendering unit 1310 of the above lighting rendering device for virtual terrain.
  • it may also include but is not limited to other module units in the above-mentioned virtual terrain lighting rendering device, which will not be described again in this example.
  • the above-mentioned transmission device 1406 is used to receive or send data via a network.
  • Specific examples of the above-mentioned network may include wired networks and wireless networks.
  • the transmission device 1406 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices and routers through network cables to communicate with the Internet or a local area network.
  • the transmission device 1406 is a radio frequency (Radio Frequency, RF) module, which is used to communicate with the Internet wirelessly.
  • the above-mentioned electronic device also includes: a display 1408 for displaying the above-mentioned candidate probe point set, the first index relationship set and the second index relationship set; and a connection bus 1410 for connecting various module components in the above-mentioned electronic device.
  • the above-mentioned terminal device or server may be a node in a distributed system, where the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by multiple nodes connected through network communication.
  • nodes can form a peer-to-peer (Peer To Peer, referred to as P2P) network, and any form of computing equipment, such as servers, terminals and other electronic devices, can become a node in the blockchain system by joining the peer-to-peer network.
  • a computer program product includes a computer program/instructions containing program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network via the communications component, and/or installed from removable media.
  • when the computer program is executed by the central processing unit, the various functions provided by the embodiments of the present application are performed.
  • the computer system includes a central processing unit (Central Processing Unit, CPU), which can perform various appropriate actions and processes according to a program stored in the read-only memory (Read-Only Memory, ROM) or a program loaded from the storage part into the random access memory (Random Access Memory, RAM). The random access memory also stores the various programs and data required for system operation.
  • the central processing unit, the read-only memory and the random access memory are connected to each other through a bus.
  • an input/output interface (I/O interface) is also connected to the bus.
  • the following components are connected to the input/output interface: an input part including a keyboard, a mouse, etc.; an output part including a cathode ray tube (CRT) or liquid crystal display (LCD) and speakers; a storage part including a hard disk, etc.; and a communication part including a network interface card such as a LAN card or a modem.
  • the communication section performs communication processing via a network such as the Internet.
  • a drive is also connected to the input/output interface as required.
  • Removable media such as magnetic disks, optical disks, magneto-optical disks, semiconductor memories, etc., are installed on the drive as needed, so that the computer program read therefrom is installed into the storage section as needed.
  • the processes described in the respective method flow charts may be implemented as computer software programs.
  • embodiments of the present application include a computer program product including a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via the communications component, and/or installed from removable media.
  • when the computer program is executed by the central processing unit, the various functions defined in the system of the present application are performed.
  • a computer-readable storage medium is provided.
  • a processor of a computer device reads the computer program from the computer-readable storage medium, and the processor executes the computer program, causing the computer device to execute the methods provided in the above various optional implementations.
  • Embodiments of the present application also provide a computer program product including a computer program, which when run on a computer causes the computer to execute the method provided in the above embodiments.
  • the storage media can include: flash disk, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disk, etc.
  • if the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they can be stored in the above computer-readable storage medium.
  • based on this understanding, the part of the technical solution of the present application that in essence contributes to the existing technology, or all or part of that technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to cause one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the methods described in the various embodiments of this application.
  • the disclosed client can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the units or modules may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or they may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the above integrated units can be implemented in the form of hardware or software functional units.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Remote Sensing (AREA)
  • Image Generation (AREA)

Abstract

The present application discloses a lighting rendering method and apparatus for virtual terrain, a storage medium and an electronic device. The method includes: obtaining a candidate probe point set of a target virtual terrain sub-block; determining, in the candidate probe point set, the probe points corresponding to each vertex in the target virtual terrain sub-block to obtain a first index relationship set; obtaining the spherical harmonic basis coefficients of the probe points in the candidate probe point set, and merging the probe points in the candidate probe point set to obtain a target probe point set together with a second index relationship set; and performing lighting rendering on the target virtual terrain sub-block according to the spherical harmonic basis coefficients of the probe points in the target probe point set and the second index relationship set. The present application solves the technical problem of low lighting rendering efficiency of virtual terrain.

Description

虚拟地形的光照渲染方法、装置、介质、设备和程序产品
本申请要求于2022年04月02日提交中国专利局、申请号为202210344253.5、申请名称为“虚拟地形的光照渲染方法、装置和存储介质及电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机领域,具体而言,涉及虚拟地形的光照渲染。
背景技术
在虚拟地形的光照渲染场景中,通常会利用光照图(lightmap)的方式,对虚拟地形进行逐像素的光照渲染,但该方式通常会占用大量的内存和存储空间,也需较高的计算量支持,进而导致虚拟地形的光照渲染效率较低的问题出现。因此,存在虚拟地形的光照渲染效率较低的问题。
针对上述的问题,目前尚未提出有效的解决方案。
发明内容
有鉴于此,本申请实施例提供了一种虚拟地形的光照渲染方法、装置、介质、设备和程序产品,以至少解决虚拟地形的光照渲染效率较低的技术问题。
根据本申请实施例的一个方面,提供了一种虚拟地形的光照渲染方法,包括:获取目标虚拟地形子块的候选探点集合,其中,上述侯选探点集合中的探点用于对上述目标虚拟地形子块进行光照渲染;在上述候选探点集合中确定上述目标虚拟地形子块中的各个顶点对应的探点,得到第一索引关系集合,其中,上述第一索引关系集合中的每个索引关系表示具有对应关系的顶点和探点;获取上述候选探点集合中的各个探点的球谐基系数,并根据上述各个探点的球谐基系数,确定上述候选探点集合中的每两个探点的差异度;根据上述差异度,对上述候选探点集合中的探点进行合并,得到目标探点集合,并根据上述第一索引关系集合,将与合并前的探点具有对应关系的顶点,与合并后的探点建立对应关系,得到第二索引关系集合;根据上述目标探点集合中的各个探点的球谐基系数以及上述第二索引关系集合,对上述目标虚拟地形子块进行光照渲染。
根据本申请实施例的另一方面,还提供了一种虚拟地形的光照渲染装置,包括:第一获取单元,用于获取目标虚拟地形子块的候选探点集合,其中,上述侯选探点集合中的探点用于对上述目标虚拟地形子块进行光照渲染;确定单元,用于在上述候选探点集合中确定上述目标虚拟地形子块中的各个顶点对应的探点,得到第一索引关系集合,其中,上述第一索引关系集合中的每个索引关系表示具有对应关系的顶点和探点;第二获取单元,用于获取上述候选探点集合中的各个探点的球谐基系数,并根据上述各个探点的球谐基系数,确定上述候选探点集合中的每两个探点的差异度;合并单元,用于根据上述差异度,对上述候选探点集合中的探点进行合并,得到目标探点集合,并根据上述第一索引关系集合,将与合并前的探点具有对应关系的顶点,与合并后的探点建立对应关系,得到第二索引关系集合;渲染单元,用于根据上述目标探点集合中的各个探点的球谐基系数以及上述第二索引关系集合,对上述目标虚拟地形子块进行光照渲染。
根据本申请实施例的又一个方面,提供一种计算机程序产品或计算机程序,该计算机 程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行如以上虚拟地形的光照渲染方法。
根据本申请实施例的又一方面,还提供一种存储介质,所述存储介质用于存储计算机程序,所述计算机程序用于执行上述的虚拟地形的光照渲染方法。
根据本申请实施例的又一方面,还提供了一种电子设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,其中,上述处理器通过计算机程序执行上述的虚拟地形的光照渲染方法。
在本申请实施例中,获取目标虚拟地形子块的候选探点集合,其中,上述侯选探点集合中的探点用于对上述目标虚拟地形子块进行光照渲染;在上述候选探点集合中确定上述目标虚拟地形子块中的各个顶点对应的探点,得到第一索引关系集合,其中,上述第一索引关系集合中的每个索引关系表示具有对应关系的顶点和探点;获取上述候选探点集合中的各个探点的球谐基系数,并根据上述各个探点的球谐基系数,确定上述候选探点集合中的每两个探点的差异度,对上述候选探点集合中的探点进行合并,得到目标探点集合,并根据上述第一索引关系集合,将与合并前的探点具有对应关系的顶点,与合并后的探点建立对应关系,得到第二索引关系集合,根据目标探点集合中的各个探点的球谐基系数以及第二索引关系集合,对目标虚拟地形子块进行光照渲染,利用虚拟地形所具有的高重复度的特性,通过差异度对大量的探点进行合并处理,以减少用于光照渲染的探点的数量,进而达到了降低探点的球谐基系数的计算量的目的,从而实现了提高虚拟地形的光照渲染效率的技术效果,进而解决了虚拟地形的光照渲染效率较低的技术问题。
附图说明
图1是根据本申请实施例的一种可选的虚拟地形的光照渲染方法的应用环境的示意图;
图2是根据本申请实施例的一种可选的虚拟地形的光照渲染方法的流程的示意图;
图3是根据本申请实施例的虚拟地形的光照渲染方法的示意图之一;
图4是根据本申请实施例的虚拟地形的光照渲染方法的示意图之二;
图5是根据本申请实施例的虚拟地形的光照渲染方法的示意图之三;
图6是根据本申请实施例的虚拟地形的光照渲染方法的示意图之四;
图7是根据本申请实施例的虚拟地形的光照渲染方法的示意图之五;
图8是根据本申请实施例的虚拟地形的光照渲染方法的示意图之六;
图9是根据本申请实施例的虚拟地形的光照渲染方法的示意图之七;
图10是根据本申请实施例的虚拟地形的光照渲染方法的示意图之八;
图11是根据本申请实施例的虚拟地形的光照渲染方法的示意图之九;
图12是根据本申请实施例的虚拟地形的光照渲染方法的示意图之十;
图13是根据本申请实施例的一种可选的虚拟地形的光照渲染装置的示意图;
图14是根据本申请实施例的一种可选的电子设备的结构示意图。
具体实施方式
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图, 对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分的实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都应当属于本申请保护的范围。
需要说明的是,本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本申请的实施例能够以除了在这里图示或描述的那些以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
根据本申请实施例的一个方面,提供了一种虚拟地形的光照渲染方法,可选地,作为一种可选的实施方式,上述虚拟地形的光照渲染方法可以由计算机设备执行,例如可应用于如图1所示的环境中。其中,以用户设备102作为计算机设备的一个示例进行说明,该用户设备102上可以但不限于包括显示器108、处理器106及存储器104。
具体过程可如下步骤:
步骤S102,用户设备102获取对目标虚拟地形子块1024触发的光照渲染请求,其中,目标虚拟地形子块1024为目标虚拟地形1022的子块,目标虚拟地形1022可以但不限于包括多个子块;
步骤S104,用户设备102响应光照渲染请求,通过存储器104获取目标虚拟地形子块的候选探点集合;
步骤S106-S112,用户设备102通过处理器106在候选探点集合中确定目标虚拟地形子块中的各个顶点对应的探点,得到第一索引关系集合,并获取候选探点集合中的各个探点的球谐基系数,并根据各个探点的球谐基系数,确定候选探点集合中的每两个探点的差异度,以及根据候选探点集合中的每两个探点的差异度,对候选探点集合中的探点进行合并,得到目标探点集合,并将第一索引关系集合中具有对应关系的顶点和合并前的探点修改为具有对应关系的顶点和合并后的探点,得到第二索引关系集合;并进一步根据目标探点集合中的各个探点的球谐基系数以及第二索引关系集合,对目标虚拟地形子块进行光照渲染,得到光照渲染的结果;用户设备102中的处理器106将光照渲染的结果对应的画面显示在显示器108中,并将上述光照渲染的结果在存储器104中。
除图1示出的示例之外,上述步骤还可以由服务器辅助完成,服务器可以是独立的物理服务器,也可以是多个物理服务器构成的服务器集群或者分布式系统,还可以是提供云计算服务的云服务器。即由服务器执行第一索引关系集合的获取、第二索引关系集合的获取、光照渲染的结果的获取等步骤,从而减轻服务器的处理压力。该用户设备102包括但不限于手持设备(如手机)、笔记本电脑、台式电脑、智能语音交互设备、智能家电、车载设备等,本申请并不限制用户设备102的具体实现方式。
可选地,作为一种可选的实施方式,如图2所示,虚拟地形的光照渲染方法包括:
S202,获取目标虚拟地形子块的候选探点集合,其中,侯选探点集合中的探点用于对 目标虚拟地形子块进行光照渲染;
S204,在候选探点集合中确定目标虚拟地形子块中的各个顶点对应的探点,得到第一索引关系集合,其中,第一索引关系集合中的每个索引关系表示具有对应关系的顶点和探点;
S206,获取候选探点集合中的各个探点的球谐基系数,并根据各个探点的球谐基系数,确定候选探点集合中的每两个探点的差异度;
S208,根据差异度,对候选探点集合中的探点进行合并,得到目标探点集合,并根据第一索引关系集合,将与合并前的探点具有对应关系的顶点,与合并后的探点建立对应关系,得到第二索引关系集合;
S210,根据目标探点集合中的各个探点的球谐基系数以及第二索引关系集合,对目标虚拟地形子块进行光照渲染。
可选地,在本实施例中,上述虚拟地形的光照渲染方法可以但不限于应用在三维(3Dimensions,简称3D)游戏中的地形渲染场景中,针对地形这种特殊的物体提出一种预处理的光照渲染方式,其中,地形具有很广阔的特点,在较大的范围内其光照数据具有很高的重复度,进而利用球谐光照良好的性质,并通过差异度比较的方式求解出更少量的球谐基系数,以降低光照渲染过程中的计算量,且附着在物体表面的物体可复用上述光照渲染方式,可进一步提高光照渲染的效率。
可选地,在本实施例中,目标虚拟地形子块可以理解为目标虚拟地形的多个子块中的任一子块,其中,目标虚拟地形在表现上可以但不限于是一个整体,但在逻辑处理和渲染时会被分成若干小块的结构,该若干小块的结构可以理解为上述目标虚拟地形的多个子块,且多个子块都可以通过上述虚拟地形的光照渲染方法并行或线性实现光照渲染,以实现对目标虚拟地形的整体光照渲染;此外,目标虚拟地形子块的形状可以为三角形、矩形、圆形、梯形或其他多边形。
可选地,在本实施例中,探点可以理解为在空间中用于采集光照信息的三维空间点,且该三维空间点还用于对目标虚拟地形子块进行光照渲染。
可选地,在本实施例中,目标虚拟地形子块可以但不限于被划分为多个三角形进行处理,且每个三角形可以但不限于对应多个顶点,其中,一个顶点可以关联多个探点,一个探点也可以关联多个顶点;而获取目标虚拟地形子块的候选探点集合的过程,也可以但不限理解为获取目标虚拟地形子块被划分为的每个三角形的候选探点;
进一步举例说明,如图3所示,三角形302为目标虚拟地形子块被划分为的多个三角形中的一个三角形,以该三角形302为例,如图3中的(a)所示,O为三角形302的质心,a、b、c分别为三角形302的线段AO、BO、CO的中点;进一步如图3中的(b)所示,将a,b,c沿三角形302的法线方向偏移一个预设单位,进而得到了3个候选探点a·、b·、c·;同理,参考上述三角形302的候选探点的获取方式,获取目标虚拟地形子块被划分为的每个三角形的候选探点;
此外,在本实施例中,目标虚拟地形子块被划分为的三角形的面积可以存在区别,且针对不同的面积,对于三角形的候选探点的获取方式也可以存在不同,如出于提高性能的 考虑,对面积小于或等于目标阈值的三角形,可以生成第一数量的候选探点(如三角形的质心沿三角形的法线方向做偏移所得到的候选探点),而面积大于目标阈值的三角形,可以生成第二数量的候选探点,其中,第二数量大于第一数量。
可选地,在本实施例中,在候选探点集合中确定目标虚拟地形子块中的各个顶点对应的探点的方式可以包括从上述候选探点集合中获取目标虚拟地形子块中的各个顶点对应的全部探点,或在获取到上述全部探点的情况下,对该全部探点进行筛选,以得到部分探点,进而为后续的光照渲染操作提供更为优质的探点,从而提高光照渲染的整体效率,其中,筛选方式可以包括随机筛选、条件筛选等;
进一步举例说明,可选地例如已得到10个探点(全部探点),从该10个探点筛选出紧密度达到目标阈值的目标探点,其中,该紧密度可以但不限为探点与顶点之间的对应关系的紧密度。
可选地,在本实施例中,球谐基系数可以为球谐光照中基函数的系数,或可以理解为先将光照采样成N个系数,然后在渲染的时候用上述球谐基系数对上述采样到的光照进行还原,以完成渲染。
可选地,在本实施例中,每两个探点的差异度可以理解为每两个探点的球谐基系数的差异度,如探点A的球谐基系数为A1,探点B的球谐基系数为B1,则探点A与探点B的差异度可以理解为|A1-B1|;每两个探点的差异度还可以理解为每两个探点的球谐基系数对应的目标参数的差异度,如探点A的球谐基系数为A1、探点A的空间位置参数为A2、探点A的目标参数为A1×A2,探点B的球谐基系数为B1、探点B的空间位置参数(可以理解为在目标虚拟地形子块上的空间位置信息)为B2、探点B的目标参数为B1×B2,则探点A与探点B的差异度可以理解为|A1×A2-B1×B2|,其中,目标参数可以立即为任一与球谐基系数相关、或通过球谐基系数计算得到的参数,此处的空间位置参数仅为距离说明,并不做限定。
可选地,在本实施例中,对候选探点集合中的探点进行合并的方式可以包括将至少两个探点合并成至少一个探点,其中,上述至少两个探点可以不包括上述至少一个探点,如将探点A以及探点B(至少两个探点)合并成探点C(至少一个探点),且将探点A以及探点B的索引关系(第一索引关系集合中的索引关系)修改至探点C,或者说探点C兼备探点A以及探点B的索引关系;或,上述至少两个探点可以包括上述至少一个探点,如探点A以及探点B(至少两个探点)合并成探点A(至少一个探点),且将探点B的索引关系(第一索引关系集合中的索引关系)修改至探点A,或者说探点A兼备探点A原有的索引关系以及探点B的索引关系。
可选地,在本实施例中,根据目标探点集合中的各个探点的球谐基系数以及第二索引关系集合,对目标虚拟地形子块进行光照渲染,其中,第二索引关系集合中的索引关系可以用于确定目标探点集合中的各个探点与目标虚拟地形子块中的各个顶点之间的索引(对应)关系,并基于该索引关系利用各个探点的球谐基系数对各自对应的顶点进行渲染。
可选地,在本实施例中,在得到目标探点集合之后,将目标探点集合中的探点传递至烘培器中进行烘培处理,在该烘培器中将目标探点集合中的探点转化为世界空间下的探点,并利用烘培器所提供的基础功能以确定出目标探点集合中的探点的受光情况,进而得到目 标探点集合中的探点的球谐基系数,获取目标探点集合中的各个探点的球谐基系数;此外,在目标虚拟地形子块为目标场景中的若干地形子块之一的情况下,为提高数据处理效率,可以将目标场景中所有地形子块的探点传递至烘培器;
可选地,在本实施例中,由于目标探点集合中的探点为单一地形子块空间下的探点,那么在目标虚拟地形子块为目标场景中的若干地形子块之一的情况下,单一地形子块空间下的探点并不涉及其他地形子块空间下的探点,进而当对目标场景中所有地形子块进行烘培时,就可能出现目标探点集合中的探点处于其他地形子块的内部的异常情况,且该异常情况下的探点属于无效探点;
进一步针对该异常情况,可以对目标探点集合中的探点的目标数据进行记录,其中,目标数据包括以下至少之一:探点到目标虚拟地形子块的最近距离、探点所能关联的其他探点;进而在烘焙处理时,如果某个探点因该异常情况导致无效,首先在最近距离范围内找有效的另一探点;如果找不到,则遍历所有能关联的且实际有效的探点,(与距离的平方成反比)进行加权平均。
需要说明的是,虚拟地形通常不需要显示较为精细的物体模型,而是多以远景物体的方式进行展现,且该远景物体往往拥有较高的重复度,如沙漠地形中的沙砾、草地地形中的草木等,由此可见虚拟地形至少存在以下特性:一不需要较为精细的渲染方式,二所需渲染的物体重复较高。进而利用虚拟地形所具有的上述特性,通过差异度对大量的探点进行合并处理,以降低探点的球谐基系数的计算量,进而实现了提高虚拟地形的光照渲染效率的技术效果。
进一步举例说明,可选的例如图4所示,从目标虚拟地形402中确定出目标虚拟地形子块404,并获取目标虚拟地形子块404的候选探点集合,其中,侯选探点集合中的探点如图4中的(a)所示,用于对目标虚拟地形子块404进行光照渲染;进一步如图4中的(b)所示,对候选探点集合中的探点进行初步筛选,进而确定目标虚拟地形子块404中的各个顶点对应的探点,得到第一索引关系集合,其中,第一索引关系集合中的每个索引关系表示具有对应关系的顶点和探点;获取候选探点集合中的各个探点的球谐基系数,并根据各个探点的球谐基系数,确定候选探点集合中的每两个探点的差异度,对候选探点集合中的探点进行合并,得到目标探点集合,并将第一索引关系集合中具有对应关系的顶点和合并前的探点修改为具有对应关系的顶点和合并后的探点,得到第二索引关系集合,其中,第二索引关系集合中的探点如图4中的(c)所示。
通过本申请提供的实施例,获取目标虚拟地形子块的候选探点集合,其中,侯选探点集合中的探点用于对目标虚拟地形子块进行光照渲染;在候选探点集合中确定目标虚拟地形子块中的各个顶点对应的探点,得到第一索引关系集合,其中,第一索引关系集合中的每个索引关系表示具有对应关系的顶点和探点;获取候选探点集合中的各个探点的球谐基系数,并根据各个探点的球谐基系数,确定候选探点集合中的每两个探点的差异度,对候选探点集合中的探点进行合并,得到目标探点集合,并根据第一索引关系集合,将与合并前的探点具有对应关系的顶点,与合并后的探点建立对应关系,得到第二索引关系集合,根据目标探点集合中的各个探点的球谐基系数以及第二索引关系集合,对目标虚拟地形子 块进行光照渲染,利用虚拟地形所具有的高重复度的特性,通过差异度对大量的探点进行合并处理,进而达到了降低探点的球谐基系数的计算量的目的,从而实现了提高虚拟地形的光照渲染效率的技术效果。
作为一种可选的方案,前述S208,根据候选探点集合中的每两个探点的差异度,对候选探点集合中的探点进行合并,得到目标探点集合,并根据第一索引关系集合,将与合并前的探点具有对应关系的顶点,与合并后的探点建立对应关系,得到第二索引关系集合,包括:
重复执行以下步骤,直到候选探点集合中的探点的数量小于或等于预设数量阈值,其中,当前探点集合被初始化为候选探点集合:
S1,根据当前探点集合中的每两个探点的差异度,确定待合并的两个探点,其中,待合并的两个探点包括第一当前探点和第二当前探点,第一当前探点是待合并到第二当前探点的探点;
S2,在当前探点集合中删除第一当前探点,在第一索引关系集合中查找与第一当前探点具有对应关系的顶点,并将查找到的顶点与第二当前探点建立对应关系。
可选地,在本实施例中,为提高光照渲染的效率,可以但不限于对候选探点集合中的探点的数量进行限定,或者说将候选探点集合中的探点的数量限定为一个较小的固定值(预设数量阈值),或固定值以下的值。
需要说明的是,在候选探点集合中的探点的数量大于预设数量阈值的情况下,将持续进行合并处理,如第一次合并处理的结果是得到10个探点,但预设数量阈值为5,则基于该第一次合并处理的结果(10个探点)进行第二次合并处理;假设第二次合并处理的结果为7个探点,仍不满足小于或等于预设数量阈值的条件,则基于该第二次合并处理的结果(7个探点)进行第三次合并处理;假设第三次合并处理的结果为5个探点,满足了小于或等于预设数量阈值的条件,则得到目标探点集合,并将第一索引关系集合中具有对应关系的顶点和合并前的探点修改为具有对应关系的顶点和合并后的探点,得到第二索引关系集合。
进一步举例说明,可选的例如图5所示,具体步骤如下:
S502,获取目标虚拟地形子块的候选探点集合;
S504,在候选探点集合中确定目标虚拟地形子块中的各个顶点对应的探点,得到第一索引关系集合;
S506,获取候选探点集合中的各个探点的球谐基系数,并根据各个探点的球谐基系数,确定候选探点集合中的每两个探点的差异度;
S508,根据当前探点集合中的每两个探点的差异度,确定待合并的两个探点,其中,待合并的两个探点包括第一当前探点和第二当前探点,第一当前探点是待合并到第二当前探点的探点;
S510,在当前探点集合中删除第一当前探点,在第一索引关系集合中查找与第一当前探点具有对应关系的顶点,并将查找到的顶点与第二当前探点建立对应关系;
S512,判断候选探点集合中的探点的数量是否小于或等于预设数量阈值,若是,则执行步骤S514,若否,则执行步骤S508,其中,当前探点集合被初始化为候选探点集合;
S514,得到目标探点集合,并将第一索引关系集合中具有对应关系的顶点和合并前的探点修改为具有对应关系的顶点和合并后的探点,得到第二索引关系集合;
S516,根据目标探点集合中的各个探点的球谐基系数以及第二索引关系集合,对目标虚拟地形子块进行光照渲染。
通过本申请提供的实施例,重复执行以下步骤,直到候选探点集合中的探点的数量小于或等于预设数量阈值,其中,当前探点集合被初始化为候选探点集合:根据当前探点集合中的每两个探点的差异度,确定待合并的两个探点,其中,待合并的两个探点包括第一当前探点和第二当前探点,第一当前探点是待合并到第二当前探点的探点;在当前探点集合中删除第一当前探点,在第一索引关系集合中查找与第一当前探点具有对应关系的顶点,并将查找到的顶点在第一索引关系集合中的对应关系从与第一当前探点具有的对应关系修改为与第二当前探点具有的对应关系,实现了提高光照渲染效率的效果。
作为一种可选的方案,前述根据当前探点集合中的每两个探点的差异度,确定待合并的两个探点,包括:
S1,在当前探点集合中确定差异度最小的两个探点,将差异度最小的两个探点确定为待合并的两个探点;或者
S2,在当前探点集合中查找差异度小于或等于预设差异度阈值的两个探点,在查找到差异度小于或等于预设差异度阈值的两个探点的情况下,将差异度小于或等于预设差异度阈值的两个探点确定为待合并的两个探点。
可选地,在本实施例中,选取待合并的两个探点的方式,可以但不限于采用在当前探点集合中确定差异度最小的方式,或在当前探点集合中查找差异度小于或等于预设差异度阈值的方式。
作为一种可选的方案,根据各个探点的球谐基系数,确定候选探点集合中的每两个探点的差异度,包括:
S1,对候选探点集合中的每两个探点执行以下步骤,其中,在执行以下步骤时,每两个探点包括第三当前探点和第四当前探点。
第三当前探点可以是待合并到第四当前探点的探点。
S2,获取第四当前探点的球谐基系数减去第三当前探点的球谐基系数所得到的目标差值;
S3,根据目标差值,确定第三当前探点与第四当前探点的差异度。
可选地,在本实施例中,假设第三当前探点为探点A,第四当前探点为探点B,进一步可参考下述公式(1)计算第三当前探点与第四当前探点的目标差值:
ΔSHl,m=SHl,m(B)-SHl,m(A)   (1)
其中,SHl,m(A)和SHl,m(B)分别为探点A和探点B的球谐基系数,下标l、m皆为球谐基的通用表示。
作为一种可选的方案,目标虚拟地形子块被划分成一组三角形,则前述根据目标差值,确定第三当前探点与第四当前探点的差异度,包括:
S1,根据所述一组三角形,获取第三当前探点关联的三角形集合,其中,三角形集合 包括与第三当前探点具有对应关系的顶点所在的三角形;
S2,获取三角形集合中的每个三角形的法线向量以及每个三角形对应的预设权重;
S3,根据目标差值、每个三角形的法线向量以及每个三角形对应的预设权重,确定第三当前探点与第四当前探点的差异度。
可选地,在本实施例中,假设第三当前探点为探点A,第四当前探点为探点B,进一步在获取到探点A和探点B的目标差值的基础上,参考下述公式(2)以计算第三当前探点与第四当前探点的差异度:
其中,SHl,m(A)和SHl,m(B)分别为探点A和探点B的球谐基系数,下标l、m皆为球谐基的通用表示,n为探点A所关联的三角形的数量,Ni为探点A所关联的三角形的法线向量(如法线的方向向量),Wi为权重;
可选地,在上述本实施例中,采用的可以但不限于为三阶球谐,所以l是从0到2。由于地形的每块三角形面积相差不大,所以可以但不限于忽略了面积的影响。上面的公式(2)是单个颜色通道的公式,实际应用中有RGB 3个通道,所以还需求三个通道的差异度的平均值。其他通道的公式可以但不限于同理参考上述公式(2)。
作为一种可选的方案,前述根据目标差值、每个三角形的法线向量以及每个三角形对应的预设权重,确定第三当前探点与第四当前探点的差异度,包括:
S1,在第三当前探点不位于目标虚拟地形子块的边界时,通过预设的差异度函数,根据目标差值、每个三角形的法线向量以及每个三角形对应的预设权重,确定第三当前探点与第四当前探点的差异度;
S2,在第三当前探点位于目标虚拟地形子块的边界时,通过预设的差异度函数,根据目标差值、每个三角形的法线向量以及每个三角形对应的预设权重,确定第三当前探点与第四当前探点的初始差异度;将第三当前探点与第四当前探点的差异度确定为等于初始差异度与预设常数的乘积,其中,预设常数大于1。
可选地,在本实施例中,如果直接进行探点合并,很有可能会导致目标虚拟地形子块与其他子块的边界处出现接缝等异常现象,进而为了让虚拟地形子块的边界变得平滑,在通过上述公式(2)计算得到差异度的基础上,还可以但不限于采用下述公式(3)所示的方式来修正上述差异度:
ΔAB=ΔAB*C  (3)
其中,C可以但不限于是一个大于1的常数。
需要说明的是,为了让虚拟地形子块的边界融合的条件更加苛刻,通过预设的差异度函数,直接或间接修正初始差异度。
进一步举例说明,可选的例如直接依据初始差异度以合并处理探点的方式所得到的效 果如图6中的(a)所示,虚拟地形子块602与虚拟地形子块604之间的边界接缝较为明显;通过预设的差异度函数,直接或间接修正初始差异度后,再合并处理探点所得到的效果如图6中的(b)所示,虚拟地形子块602与虚拟地形子块604之间的边界更为平滑。
通过本申请提供的实施例,在第三当前探点不位于目标虚拟地形子块的边界的情况下,通过预设的差异度函数,根据目标差值、每个三角形的法线向量以及每个三角形对应的预设权重,确定第三当前探点与第四当前探点的差异度;在第三当前探点位于目标虚拟地形子块的边界的情况下,通过预设的差异度函数,根据目标差值、每个三角形的法线向量以及每个三角形对应的预设权重,确定第三当前探点与第四当前探点的初始差异度;将第三当前探点与第四当前探点的差异度确定为等于初始差异度与预设常数的乘积,其中,预设常数大于1,实现了提高虚拟地形子块边界的平滑度的效果。
作为一种可选的方案,根据目标探点集合中的各个探点的球谐基系数以及第二索引关系集合,对目标虚拟地形子块进行光照渲染,包括:
S1,将目标探点集合中的各个探点的球谐基系数保存为目标贴图;
S2,在需要对目标虚拟地形子块进行渲染时,根据第二索引关系集合,从目标贴图中确定目标虚拟地形子块中的各个顶点对应的探点的球谐基系数;
S3,根据目标虚拟地形子块中的各个顶点对应的探点的球谐基系数,确定目标虚拟地形子块中的各个顶点的球谐基系数;
S4,根据目标虚拟地形子块中的各个顶点的球谐基系数,对目标虚拟地形子块进行光照渲染。
可选地,在本实施例中,可以但不限于将目标虚拟地形子块所在的模型空间下的探点转化到目标空间下的探点,并利用烘焙器所提供的基础功能计算出各个探点的受光情况,最终得到光照的球谐基系数;再将烘焙出来的目标虚拟地形子块中的所有元素的球谐基系数保存为贴图(所有元素的球谐基系数保存为一个或多个贴图、或一个元素的球谐基系数保存为一个贴图等);运行时根据预先保存的索引和权重数据,采样球谐贴图,得到的系数与法线所对应的基函数做点积,以得到光照信息,完成对目标虚拟地形子块进行光照渲染。
可选地,在本实施例中,球谐基系数可以但不限于包括3阶段的球谐基系数,其中,球谐基系数的二阶和三阶系数可以但不限于通过一阶系数做归一化处理得到;
进一步举例说明,可选地3阶段的球谐基系数如图7中的公式702所示,其中,SHl,m为球谐基系数,下标l、m皆为球谐基的通用表示,N为探点所关联的三角形数量,w(i)为权重,L(i)为某方向入射光照;如图7中的公式704所示,球谐基函数的后缀部分小于1,所以高阶(2阶和3阶)的球谐基系数可由1阶的球谐基系数做归一化处理。
可选地,在本实施例中,考虑到采样效率、LOD、以及整合其他功能,球谐基系数的数据格式可以但不限于采用如下方式进行编码:首先贴图的格式可以但不限于是Uint4,这种格式是硬件所能支持的,比较方便编码的进行;进一步对每组球谐基系数,通常需要占用2个像素,进而将低阶的球谐基系数(第一球谐基子系数)与高阶的球谐基系数(第二球谐基子系数)分到两个不同的像素,进而更方便的做lod,如近处的物体(近景虚拟地形子块)需要进行完整的、高阶的球谐计算,而远处的物体只需进行低阶的球谐计算即可,这 样远处物体的采样只有1次;更进一步,还可以但不限于将高阶的球谐基系数拆分到另一张贴图,于是远处的物体就只用加载一半的贴图量;
进一步举例说明,可选地如图8所示,第一球谐基子系数包括第1和2阶系数,第二球谐基子系数包括第3阶系数,进而将RGB 3个通道的各阶球谐基系数,分至16byte的两个像素,如将RGB 3个通道的第一球谐基子系数分至第一像素802,将RGB 3个通道的第二球谐基子系数分至第二像素804;
具体的,对于第一像素802,16byte的存储空间被分为三部分,第一部分6byte,用于分配RGB 3个通道的1阶球谐基系数;第二部分9byte,用于分配RGB 3个通道的2阶球谐基系数;第三部分1byte,为预留的字节,可用于保存阴影数据,以实现一个基于探点的相对粗糙的阴影效果;
再者,对于第二像素804,16byte的存储空间被分为两部分,第一部分15byte,用于分配RGB 3个通道的3阶球谐基系数;第二部分1byte,用为预留的字节,可用于保存阴影数据,以实现一个基于探点的相对粗糙的阴影效果。
作为一种可选的方案,针对所述目标虚拟地形子块中的各个顶点中的当前顶点,前述根据目标虚拟地形子块中的各个顶点对应的探点的球谐基系数,确定目标虚拟地形子块中的各个顶点的球谐基系数,包括:
S1,在当前顶点与目标探点集合中的一个探点具有对应关系时,将当前顶点的球谐基系数确定为等于一个探点的球谐基系数;或者
S2,在当前顶点与目标探点集合中的多个探点具有对应关系时,将当前顶点的球谐基系数确定为等于多个探点的球谐基系数的加权之和。
可选地,在本实施例中,可以但不限于区分当前顶点与目标探点集合中的一个或多个探点具有对应关系的情况,并具体分为在当前顶点与目标探点集合中的一个探点具有对应关系的情况下,将当前顶点的球谐基系数确定为等于一个探点的球谐基系数;在当前顶点与目标探点集合中的多个探点具有对应关系的情况下,将当前顶点的球谐基系数确定为等于多个探点的球谐基系数的加权之和。
作为一种可选的方案,前述在候选探点集合中确定目标虚拟地形子块中的各个顶点对应的探点,得到第一索引关系集合,包括:
S1,在候选探点集合中查找与各个顶点距离最近的一个探点,并将各个顶点与对应的查找到的探点作为索引关系记录在第一索引关系集合中;或者
S2,在候选探点集合中查找与各个顶点距离最近的多个探点,为多个探点设置对应的权重,并将各个顶点与对应的多个探点、以及多个探点对应的权重作为索引关系记录在第一索引关系集合中。
可选地,在本实施例中,可以但不限于在顶点的属性里面增加相关探点的索引和权重,如虽然虚拟地形以虚拟地形子块为结构单位,但虚拟地形子块的覆盖面积通常依然会比较大,进而为了光照渲染兼顾效果与效率,对顶点的属性结构进行了限定,如假设每个顶点的属性空间为32bit(并不固定),则在该32bit的属性空间中,可以但不限于分配至少两个探点的属性空间,如分配探点A9bit的属性空间、分配探点B9bit的属性空间,余下的14bit,其 中,7bit用于分配权重,再预留7bit的属性空间,用于保存顶点的阴影;进一步保留顶点的阴影所考虑的是虚拟地形通常是光照渲染远景的物体地形子块,如此阴影所能得到的渲染效果通常也满足用户的视觉要求,还节省了存储空间,并降低光照渲染的计算量,进而提高光照渲染的效率,阴影效果如图9所示,在目标虚拟地形902的渲染结果中,物体904被渲染处阴影906的效果。
可选地,在本实施例中,多个探点对应的权重可以但不限用于计算多个探点的球谐基系数的加权之和。
需要说明的是,对于如何建立顶点与探点之间的索引关系,可以但不限于在候选探点集合中查找与各个顶点距离最近的一个探点,并将各个顶点与对应的查找到的探点作为索引关系记录在第一索引关系集合中;或者在候选探点集合中查找与各个顶点距离最近的多个探点,为多个探点设置对应的权重,并将各个顶点与对应的查找到的多个探点、以及多个探点对应的权重作为索引关系记录在第一索引关系集合中。
作为一种可选的方案,获取目标虚拟地形子块的候选探点集合,包括:
S1,获取目标虚拟地形子块的原始探点集合,其中,目标虚拟地形子块被划分成一组三角形,原始探点集合包括一组三角形中的各三角形对应的一个或多个探点;
S2,在原始探点集合中过滤掉无效的探点,得到候选探点集合。
可选地,在本实施例中,过滤掉无效的探点的方式可以但不限于包括过滤掉位于目标虚拟地形子块的无效区域(如目标虚拟地形子块的内部、背光部等)的探点、滤掉与目标虚拟地形子块之间的关联程度低于有效阈值的探点等。
需要说明的是,对原始探点集合中的探点进行过滤,可得到相对优质的探点,更利于后续光照渲染的执行。
进一步举例说明,可选的例如图10所示目标虚拟地形子块1002的侧切图,对目标虚拟地形子块1002的所有候选探点,如探点e、探点d,检查上述候选探点是否位于目标虚拟地形子块1002的内部,具体的d为一个位于目标虚拟地形子块1002外部的探点,e则为一个位于目标虚拟地形子块1002内部的探点;此外,对于目标虚拟地形子块1002内部的探点的光照信息无效,也从候选探点集合中删除。
通过本申请提供的实施例,获取目标虚拟地形子块的原始探点集合,其中,目标虚拟地形子块被划分成一组三角形,原始探点集合包括一组三角形中的各三角形对应的一个或多个探点;在原始探点集合中过滤掉无效的探点,得到候选探点集合,实现了提高光照渲染的执行效率的效果。
作为一种可选的方案,为方便理解,将上述虚拟地形的光照渲染方法应用在3D游戏的光照渲染场景中,以提升游戏画质表现与真实感,同时对于在空间上具有曲面形态的模型有更好的效果;
进一步举例说明,可选地如图11所示,上述虚拟地形的光照渲染方法应用在3D游戏的光照渲染场景中的步骤如下述内容所示:
步骤S1102,获取由若干三角形组成的地形子块;
步骤S1104,生成所有的候选探点;
步骤S1106,去掉无效的候选探点,得到余下有效探点;
步骤S1108-1,计算地形顶点所关联的探点的索引和权重;
步骤S1108-2,对所有探点进行光照计算,得到球谐基系数;
步骤S1110,合并差异小于阈值的探点组合
步骤S1112,判断余下的探点数量是否大于预设值,若是,则执行步骤S1114,若否,则执行步骤S1116;
步骤S1114,找到差异最小的探点组合,并合并;
步骤S1116,得到最终的探点列表。
可选地,在本实施例中,在模型空间,自动计算出若干个探点,其中,模型空间中每个探点的颜色可以但不限于都不一样,且顶点颜色与所关联的最大权重探点相同,顶点线段可以但不限用于表示法线方向;进一步根据计算出来的探点,计算出模型上每个顶点所关联的若干探点以及权重,并将计算出来的探点索引和权重保存在模型顶点数据中;再者,将场景传递到烘焙器烘焙,其中,场景由若干模型组成,同样的模型可能存在多个实例。即,将模型空间下的探点转化到世界空间,利用烘焙器所提供的基础功能计算出探点的受光情况,最终得到光照的球谐基系数;
可选地,在本实施例中,在烘焙出来的所有虚拟地形子块的球谐基系数保存为贴图(所有虚拟地形子块的球谐基系数都保存为一个贴图、或一个虚拟地形子块的球谐基系数保存为一个贴图、或多个虚拟地形子块的球谐基系数保存为一个贴图、或多个虚拟地形子块的球谐基系数保存为多个贴图),进一步如图12所示,将虚拟地形子块1202、虚拟地形子块1204以及虚拟地形子块1206的球谐基系数保存为目标贴图1208。具体的,采用一定的压缩算法,将系数组装到若干张贴图里面;运行时根据顶点保存的索引和权重数据,采样球谐贴图,得到的系数与法线所对应的基函数做点积便是光照信息。
可以理解的是,在本申请的具体实施方式中,涉及到用户信息等相关的数据,当本申请以上实施例运用到具体产品或技术中时,需要获得用户许可或者同意,且相关数据的收集、使用和处理需要遵守相关国家和地区的相关法律法规和标准。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本申请所必须的。
根据本申请实施例的另一个方面,还提供了一种用于实施上述虚拟地形的光照渲染方法的虚拟地形的光照渲染装置。如图13所示,该装置包括:
第一获取单元1302,用于获取目标虚拟地形子块的候选探点集合,其中,侯选探点集合中的探点用于对目标虚拟地形子块进行光照渲染;
确定单元1304,用于在候选探点集合中确定目标虚拟地形子块中的各个顶点对应的探点,得到第一索引关系集合,其中,第一索引关系集合中的每个索引关系表示具有对应关系的顶点和探点;
第二获取单元1306,用于获取候选探点集合中的各个探点的球谐基系数,并根据各个探点的球谐基系数,确定候选探点集合中的每两个探点的差异度;
合并单元1308,用于根据差异度,对候选探点集合中的探点进行合并,得到目标探点集合,并根据第一索引关系集合,将与合并前的探点具有对应关系的顶点,与合并后的探点建立对应关系,得到第二索引关系集合;
渲染单元1310,用于根据目标探点集合中的各个探点的球谐基系数以及第二索引关系集合,对目标虚拟地形子块进行光照渲染。
通过本申请提供的实施例,获取目标虚拟地形子块的候选探点集合,其中,侯选探点集合中的探点用于对目标虚拟地形子块进行光照渲染;在候选探点集合中确定目标虚拟地形子块中的各个顶点对应的探点,得到第一索引关系集合,其中,第一索引关系集合中的每个索引关系表示具有对应关系的顶点和探点;获取候选探点集合中的各个探点的球谐基系数,并根据各个探点的球谐基系数,确定候选探点集合中的每两个探点的差异度,对候选探点集合中的探点进行合并,得到目标探点集合,并根据第一索引关系集合,将与合并前的探点具有对应关系的顶点,与合并后的探点建立对应关系,得到第二索引关系集合,根据目标探点集合中的各个探点的球谐基系数以及第二索引关系集合,对目标虚拟地形子块进行光照渲染,利用虚拟地形所具有的高重复度的特性,通过差异度对大量的探点进行合并处理,进而达到了降低探点的球谐基系数的计算量的目的,从而实现了提高虚拟地形的光照渲染效率的技术效果。
作为一种可选的方案,合并单元1308,包括:
重复模块,用于重复执行以下步骤,直到候选探点集合中的探点的数量小于或等于预设数量阈值,其中,当前探点集合被初始化为候选探点集合:
第一确定模块,用于根据当前探点集合中的每两个探点的差异度,确定待合并的两个探点,其中,待合并的两个探点包括第一当前探点和第二当前探点,第一当前探点是待合并到第二当前探点的探点;
查找模块,用于在当前探点集合中删除第一当前探点,在第一索引关系集合中查找与第一当前探点具有对应关系的顶点,并将查找到的顶点与第二当前探点建立对应关系。
具体实施例可以参考上述虚拟地形的光照渲染方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,第一确定模块,包括:
第一确定子模块,用于在当前探点集合中确定差异度最小的两个探点,将差异度最小的两个探点确定为待合并的两个探点;或者
第二确定子模块,用于在当前探点集合中查找差异度小于或等于预设差异度阈值的两个探点,在查找到差异度小于或等于预设差异度阈值的两个探点的情况下,将差异度小于或等于预设差异度阈值的两个探点确定为待合并的两个探点。
具体实施例可以参考上述虚拟地形的光照渲染方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,确定单元1304,包括:
执行模块,用于对候选探点集合中的每两个探点执行以下步骤,其中,在执行以下步骤时,每两个探点包括第三当前探点和第四当前探点;
第一获取模块,用于获取第四当前探点的球谐基系数减去第三当前探点的球谐基系数所得到的目标差值;
第二确定模块,用于根据目标差值,确定第三当前探点与第四当前探点的差异度。
具体实施例可以参考上述虚拟地形的光照渲染方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,所述目标虚拟地形子块被划分成一组三角形,第二确定模块,包括:
第一获取子模块,用于根据所述一组三角形,获取第三当前探点关联的三角形集合,其中,三角形集合包括与第三当前探点具有对应关系的顶点所在的三角形;
第二获取子模块,用于获取三角形集合中的每个三角形的法线向量以及每个三角形对应的预设权重;
第三确定子模块,用于根据目标差值、每个三角形的法线向量以及每个三角形对应的预设权重,确定第三当前探点与第四当前探点的差异度。
具体实施例可以参考上述虚拟地形的光照渲染方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,第三确定子模块,包括:
第一确定子单元,用于在第三当前探点不位于目标虚拟地形子块的边界时,通过预设的差异度函数,根据目标差值、每个三角形的法线向量以及每个三角形对应的预设权重,确定第三当前探点与第四当前探点的差异度;
第二确定子单元,用于在第三当前探点位于目标虚拟地形子块的边界时,通过预设的差异度函数,根据目标差值、每个三角形的法线向量以及每个三角形对应的预设权重,确定第三当前探点与第四当前探点的初始差异度;第三确定子单元,用于将第三当前探点与第四当前探点的差异度确定为等于初始差异度与预设常数的乘积,其中,预设常数大于1。
具体实施例可以参考上述虚拟地形的光照渲染方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,渲染单元1310,包括:
保存模块,用于将目标探点集合中的各个探点的球谐基系数保存为目标贴图;
第三确定模块,用于在需要对目标虚拟地形子块进行渲染时,根据第二索引关系集合,从目标贴图中确定目标虚拟地形子块中的各个顶点对应的探点的球谐基系数;
第四确定模块,用于根据目标虚拟地形子块中的各个顶点对应的探点的球谐基系数,确定目标虚拟地形子块中的各个顶点的球谐基系数;
渲染模块,用于根据目标虚拟地形子块中的各个顶点的球谐基系数,对目标虚拟地形子块进行光照渲染。
具体实施例可以参考上述虚拟地形的光照渲染方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,针对所述目标虚拟地形子块中的各个顶点中的当前顶点,第四确定模块,包括:
第四确定子模块,用于在当前顶点与目标探点集合中的一个探点具有对应关系时,将当前顶点的球谐基系数确定为等于一个探点的球谐基系数;或者
第五确定子模块,用于在当前顶点与目标探点集合中的多个探点具有对应关系时,将当前顶点的球谐基系数确定为等于多个探点的球谐基系数的加权之和。
具体实施例可以参考上述虚拟地形的光照渲染方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,确定单元1304,包括:
第一记录模块,用于在候选探点集合中查找与各个顶点距离最近的一个探点,并将各个顶点与对应的查找到的探点作为索引关系记录在第一索引关系集合中;或者
第二记录模块,用于在候选探点集合中查找与各个顶点距离最近的多个探点,为多个探点设置对应的权重,并将各个顶点与对应的查找到的多个探点、以及多个探点对应的权重作为索引关系记录在第一索引关系集合中。
具体实施例可以参考上述虚拟地形的光照渲染方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,第一获取单元1302,包括:
第二获取模块,用于获取目标虚拟地形子块的原始探点集合,其中,目标虚拟地形子块被划分成一组三角形,原始探点集合包括一组三角形中的各三角形对应的一个或多个探点;
第三获取模块,用于在原始探点集合中过滤掉无效的探点,得到候选探点集合。
具体实施例可以参考上述虚拟地形的光照渲染方法中所示示例,本示例中在此不再赘述。
根据本申请实施例的又一个方面,还提供了一种用于实施上述虚拟地形的光照渲染方法的电子设备,如图14所示,该电子设备包括存储器1402和处理器1404,该存储器1402中存储有计算机程序,该处理器1404被设置为通过计算机程序执行上述任一项方法实施例中的步骤。
可选地,在本实施例中,上述电子设备可以位于计算机网络的多个网络设备中的至少一个网络设备。
可选地,在本实施例中,上述处理器可以被设置为通过计算机程序执行以下步骤:
S1,获取目标虚拟地形子块的候选探点集合,其中,侯选探点集合中的探点用于对目标虚拟地形子块进行光照渲染;
S2,在候选探点集合中确定目标虚拟地形子块中的各个顶点对应的探点,得到第一索引关系集合,其中,第一索引关系集合中的每个索引关系表示具有对应关系的顶点和探点;
S3,获取候选探点集合中的各个探点的球谐基系数,并根据各个探点的球谐基系数,确定候选探点集合中的每两个探点的差异度;
S4,根据差异度,对候选探点集合中的探点进行合并,得到目标探点集合,并根据第 一索引关系集合,将与合并前的探点具有对应关系的顶点,与合并后的探点建立对应关系,得到第二索引关系集合;
S5,根据目标探点集合中的各个探点的球谐基系数以及第二索引关系集合,对目标虚拟地形子块进行光照渲染。
可选地,本领域普通技术人员可以理解,图14所示的结构仅为示意,电子设备也可以是智能手机(如Android手机、iOS手机等)、平板电脑、掌上电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等终端设备。图14其并不对上述电子设备的结构造成限定。例如,电子设备还可包括比图14中所示更多或者更少的组件(如网络接口等),或者具有与图14所示不同的配置。
其中,存储器1402可用于存储软件程序以及模块,如本申请实施例中的虚拟地形的光照渲染方法和装置对应的程序指令/模块,处理器1404通过运行存储在存储器1402内的软件程序以及模块,从而执行各种功能应用以及数据处理,即实现上述的虚拟地形的光照渲染方法。存储器1402可包括高速随机存储器,还可以包括非易失性存储器,如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中,存储器1402可进一步包括相对于处理器1404远程设置的存储器,这些远程存储器可以通过网络连接至终端。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。其中,存储器1402具体可以但不限于用于存储候选探点集合、第一索引关系集合以及第二索引关系集合。作为一种示例,如图14所示,上述存储器1402中可以但不限于包括上述虚拟地形的光照渲染装置中的第一获取单元1302、确定单元1304、第二获取单元1306、合并单元1308及渲染单元1314。此外,还可以包括但不限于上述虚拟地形的光照渲染装置中的其他模块单元,本示例中不再赘述。
可选地,上述的传输装置1406用于经由一个网络接收或者发送数据。上述的网络具体实例可包括有线网络及无线网络。在一个实例中,传输装置1406包括一个网络适配器(Network Interface Controller,NIC),其可通过网线与其他网络设备与路由器相连从而可与互联网或局域网进行通讯。在一个实例中,传输装置1406为射频(Radio Frequency,RF)模块,其用于通过无线方式与互联网进行通讯。
此外,上述电子设备还包括:显示器1408,用于显示上述候选探点集合、第一索引关系集合以及第二索引关系集合;和连接总线1410,用于连接上述电子设备中的各个模块部件。
在其他实施例中,上述终端设备或者服务器可以是一个分布式系统中的一个节点,其中,该分布式系统可以为区块链系统,该区块链系统可以是由该多个节点通过网络通信的形式连接形成的分布式系统。其中,节点之间可以组成点对点(Peer To Peer,简称P2P)网络,任意形式的计算设备,比如服务器、终端等电子设备都可以通过加入该点对点网络而成为该区块链系统中的一个节点。
根据本申请的一个方面,提供了一种计算机程序产品,该计算机程序产品包括计算机程序/指令,该计算机程序/指令包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信部分从网络上被下载和安装,和/或从可拆卸介质被安装。 在该计算机程序被中央处理器执行时,执行本申请实施例提供的各种功能。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
需要说明的是,电子设备的计算机系统仅是一个示例,不应对本申请实施例的功能和使用范围带来任何限制。
计算机系统包括中央处理器(Central Processing Unit,CPU),其可以根据存储在只读存储器(Read-Only Memory,ROM)中的程序或者从存储部分加载到随机访问存储器(Random Access Memory,RAM)中的程序而执行各种适当的动作和处理。在随机访问存储器中,还存储有系统操作所需的各种程序和数据。中央处理器、在只读存储器以及随机访问存储器通过总线彼此相连。输入/输出接口(Input/Output接口,即I/O接口)也连接至总线。
以下部件连接至输入/输出接口:包括键盘、鼠标等的输入部分;包括诸如阴极射线管(Cathode Ray Tube,CRT)、液晶显示器(Liquid Crystal Display,LCD)等以及扬声器等的输出部分;包括硬盘等的存储部分;以及包括诸如局域网卡、调制解调器等的网络接口卡的通信部分。通信部分经由诸如因特网的网络执行通信处理。驱动器也根据需要连接至输入/输出接口。可拆卸介质,诸如磁盘、光盘、磁光盘、半导体存储器等等,根据需要安装在驱动器上,以便于从其上读出的计算机程序根据需要被安装入存储部分。
特别地,根据本申请的实施例,各个方法流程图中所描述的过程可以被实现为计算机软件程序。例如,本申请的实施例包括一种计算机程序产品,其包括承载在计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信部分从网络上被下载和安装,和/或从可拆卸介质被安装。在该计算机程序被中央处理器执行时,执行本申请的系统中限定的各种功能。
根据本申请的一个方面,提供了一种计算机可读存储介质,计算机设备的处理器从计算机可读存储介质读取该计算机程序,处理器执行该计算机程序,使得该计算机设备执行上述各种可选实现方式中提供的方法。
本申请实施例还提供了一种包括计算机程序的计算机程序产品,当其在计算机上运行时,使得计算机执行上述实施例提供的方法。
可选地,在本实施例中,本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令终端设备相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,存储介质可以包括:闪存盘、只读存储器(Read-Only Memory,ROM)、随机存取器(Random Access Memory,RAM)、磁盘或光盘等。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
上述实施例中的集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在上述计算机可读取的存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在存储介质中,包括若干指令用以使得一台或多台计算机设备(可为个人计算机、服务器或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。
在本申请的上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的客户端,可通过其它的方式实现。其中,以上所描述的装置实施例仅仅是示意性的,例如所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,单元或模块的间接耦合或通信连接,可以是电性或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
以上所述仅是本申请的优选实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本申请原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本申请的保护范围。

Claims (14)

  1. 一种虚拟地形的光照渲染方法,所述方法由计算机设备执行,所述方法包括:
    获取目标虚拟地形子块的候选探点集合,其中,所述侯选探点集合中的探点用于对所述目标虚拟地形子块进行光照渲染;
    在所述候选探点集合中确定所述目标虚拟地形子块中的各个顶点对应的探点,得到第一索引关系集合,其中,所述第一索引关系集合中的每个索引关系表示具有对应关系的顶点和探点;
    获取所述候选探点集合中的各个探点的球谐基系数,并根据所述各个探点的球谐基系数,确定所述候选探点集合中的每两个探点的差异度;
    根据所述差异度,对所述候选探点集合中的探点进行合并,得到目标探点集合,并根据所述第一索引关系集合,将与合并前的探点具有对应关系的顶点,与合并后的探点建立对应关系,得到第二索引关系集合;
    根据所述目标探点集合中的各个探点的球谐基系数以及所述第二索引关系集合,对所述目标虚拟地形子块进行光照渲染。
  2. 根据权利要求1所述的方法,所述根据所述差异度,对所述候选探点集合中的探点进行合并,得到目标探点集合,并根据所述第一索引关系集合,将与合并前的探点具有对应关系的顶点,与合并后的探点建立对应关系,得到第二索引关系集合,包括:
    重复执行以下步骤,直到所述候选探点集合中的探点的数量小于或等于预设数量阈值,其中,当前探点集合被初始化为所述候选探点集合:
    根据所述当前探点集合中的每两个探点的差异度,确定待合并的两个探点,其中,所述待合并的两个探点包括第一当前探点和第二当前探点,所述第一当前探点是待合并到所述第二当前探点的探点;
    在所述当前探点集合中删除所述第一当前探点,在所述第一索引关系集合中查找与所述第一当前探点具有对应关系的顶点,并将查找到的顶点与所述第二当前探点建立对应关系。
  3. 根据权利要求2所述的方法,所述根据所述当前探点集合中的每两个探点的差异度,确定待合并的两个探点,包括:
    在所述当前探点集合中确定所述差异度最小的两个探点,将所述差异度最小的两个探点确定为所述待合并的两个探点;或者
    在所述当前探点集合中查找所述差异度小于或等于预设差异度阈值的两个探点,在查找到所述差异度小于或等于预设差异度阈值的两个探点的情况下,将所述差异度小于或等于所述预设差异度阈值的两个探点确定为所述待合并的两个探点。
  4. 根据权利要求1所述的方法,所述根据所述各个探点的球谐基系数,确定所述候选探点集合中的每两个探点的差异度,包括:
    对所述候选探点集合中的每两个探点执行以下步骤,其中,在执行以下步骤时,所述每两个探点包括第三当前探点和第四当前探点;
    获取所述第四当前探点的球谐基系数减去所述第三当前探点的球谐基系数所得到的目 标差值;
    根据所述目标差值,确定所述第三当前探点与所述第四当前探点的差异度。
  5. 根据权利要求4所述的方法,所述目标虚拟地形子块被划分成一组三角形,所述根据所述目标差值,确定所述第三当前探点与所述第四当前探点的差异度,包括:
    根据所述一组三角形,获取所述第三当前探点关联的三角形集合,其中,所述三角形集合包括与所述第三当前探点具有对应关系的顶点所在的三角形;
    获取所述三角形集合中的每个三角形的法线向量以及所述每个三角形对应的预设权重;
    根据所述目标差值、所述每个三角形的法线向量以及所述每个三角形对应的预设权重,确定所述第三当前探点与所述第四当前探点的差异度。
  6. 根据权利要求5所述的方法,所述根据所述目标差值、所述每个三角形的法线向量以及所述每个三角形对应的预设权重,确定所述第三当前探点与所述第四当前探点的差异度,包括:
    在所述第三当前探点不位于所述目标虚拟地形子块的边界时,通过预设的差异度函数,根据所述目标差值、所述每个三角形的法线向量以及所述每个三角形对应的预设权重,确定所述第三当前探点与所述第四当前探点的差异度;
    在所述第三当前探点位于所述目标虚拟地形子块的边界时,通过预设的差异度函数,根据所述目标差值、所述每个三角形的法线向量以及所述每个三角形对应的预设权重,确定所述第三当前探点与所述第四当前探点的初始差异度;将所述第三当前探点与所述第四当前探点的差异度确定为等于所述初始差异度与预设常数的乘积,其中,所述预设常数大于1。
  7. 根据权利要求1所述的方法,所述根据所述目标探点集合中的各个探点的球谐基系数以及所述第二索引关系集合,对所述目标虚拟地形子块进行光照渲染,包括:
    将所述目标探点集合中的各个探点的球谐基系数保存为目标贴图;
    在需要对所述目标虚拟地形子块进行渲染时,根据所述第二索引关系集合,从所述目标贴图中确定所述目标虚拟地形子块中的各个顶点对应的探点的球谐基系数;
    根据所述目标虚拟地形子块中的各个顶点对应的探点的球谐基系数,确定所述目标虚拟地形子块中的各个顶点的球谐基系数;
    根据所述目标虚拟地形子块中的各个顶点的球谐基系数,对所述目标虚拟地形子块进行光照渲染。
  8. 根据权利要求7所述的方法,针对所述目标虚拟地形子块中的各个顶点中的当前顶点,所述根据所述目标虚拟地形子块中的各个顶点对应的探点的球谐基系数,确定所述目标虚拟地形子块中的各个顶点的球谐基系数,包括:
    在所述当前顶点与所述目标探点集合中的一个探点具有对应关系时,将所述当前顶点的球谐基系数确定为等于所述一个探点的球谐基系数;或者
    在所述当前顶点与所述目标探点集合中的多个探点具有对应关系时,将所述当前顶点的球谐基系数确定为等于所述多个探点的球谐基系数的加权之和。
  9. 根据权利要求1至8中任一项所述的方法,在所述候选探点集合中确定所述目标虚拟 地形子块中的各个顶点对应的探点,得到第一索引关系集合,包括:
    在所述候选探点集合中查找与所述各个顶点距离最近的一个探点,并将所述各个顶点与对应的查找到的探点作为索引关系记录在所述第一索引关系集合中;或者
    在所述候选探点集合中查找与所述各个顶点距离最近的多个探点,为所述多个探点设置对应的权重,并将所述各个顶点与对应的所述多个探点、以及所述多个探点对应的权重作为索引关系记录在所述第一索引关系集合中。
  10. 根据权利要求1至8中任一项所述的方法,所述获取目标虚拟地形子块的候选探点集合,包括:
    获取所述目标虚拟地形子块的原始探点集合,其中,所述目标虚拟地形子块被划分成一组三角形,所述原始探点集合包括所述一组三角形中的各三角形对应的一个或多个探点;
    在所述原始探点集合中过滤掉无效的探点,得到所述候选探点集合。
  11. 一种虚拟地形的光照渲染装置,包括:
    第一获取单元,用于获取目标虚拟地形子块的候选探点集合,其中,所述侯选探点集合中的探点用于对所述目标虚拟地形子块进行光照渲染;
    确定单元,用于在所述候选探点集合中确定所述目标虚拟地形子块中的各个顶点对应的探点,得到第一索引关系集合,其中,所述第一索引关系集合中的每个索引关系表示具有对应关系的顶点和探点;
    第二获取单元,用于获取所述候选探点集合中的各个探点的球谐基系数,并根据所述各个探点的球谐基系数,确定所述候选探点集合中的每两个探点的差异度;
    合并单元,用于根据所述差异度,对所述候选探点集合中的探点进行合并,得到目标探点集合,并根据所述第一索引关系集合,将与合并前的探点具有对应关系的顶点,与合并后的探点建立对应关系,得到第二索引关系集合;
    渲染单元,用于根据所述目标探点集合中的各个探点的球谐基系数以及所述第二索引关系集合,对所述目标虚拟地形子块进行光照渲染。
  12. 一种计算机可读的存储介质,所述计算机可读的存储介质包括存储的计算机程序,其中,所述计算机程序运行时执行所述权利要求1至10任一项中所述的方法。
  13. 一种计算机程序产品,包括计算机程序,该计算机程序被处理器执行时实现权利要求1至10任一项中所述方法的步骤。
  14. 一种电子设备,包括存储器和处理器,所述存储器中存储有计算机程序,所述处理器被设置为通过所述计算机程序执行所述权利要求1至10任一项中所述的方法。
PCT/CN2023/077124 2022-04-02 2023-02-20 虚拟地形的光照渲染方法、装置、介质、设备和程序产品 WO2023185317A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210344253.5 2022-04-02
CN202210344253.5A CN116934946A (zh) 2022-04-02 2022-04-02 虚拟地形的光照渲染方法、装置和存储介质及电子设备

Publications (1)

Publication Number Publication Date
WO2023185317A1 true WO2023185317A1 (zh) 2023-10-05

Family

ID=88199152

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/077124 WO2023185317A1 (zh) 2022-04-02 2023-02-20 虚拟地形的光照渲染方法、装置、介质、设备和程序产品

Country Status (2)

Country Link
CN (1) CN116934946A (zh)
WO (1) WO2023185317A1 (zh)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105989624A (zh) * 2015-02-11 2016-10-05 华为技术有限公司 用于绘制全局光照场景的方法和装置
US20180093183A1 (en) * 2016-10-04 2018-04-05 Square Enix, Ltd. Methods, systems and computer-readable media for diffuse global illumination using probes
CN111744183A (zh) * 2020-07-02 2020-10-09 网易(杭州)网络有限公司 游戏中的光照采样方法、装置以及计算机设备
WO2022167537A1 (en) * 2021-02-08 2022-08-11 Reactive Reality Ag Method and computer program product for producing a 3d representation of an object
CN113034657A (zh) * 2021-03-30 2021-06-25 完美世界(北京)软件科技发展有限公司 游戏场景中光照信息的渲染方法、装置及设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GUO JIE, PAN JIN-GUI: "Research on Real-time Rendering under Complex Area Lighting", JOURNAL OF SYSTEM SIMULATION, GAI-KAN BIANJIBU , BEIJING, CN, vol. 24, no. 1, 31 January 2012 (2012-01-31), CN , pages 6 - 11, XP009549019, ISSN: 1004-731X, DOI: 10.16182/j.cnki.joss.2012.01.013 *

Also Published As

Publication number Publication date
CN116934946A (zh) 2023-10-24

Similar Documents

Publication Publication Date Title
CN111681167B (zh) 画质调整方法和装置、存储介质及电子设备
TWI674790B (zh) 一種影像資料的編碼、解碼方法及裝置
CN109461199B (zh) 画面渲染方法和装置、存储介质及电子装置
CN111145090A (zh) 一种点云属性编码方法、解码方法、编码设备及解码设备
CN110189246B (zh) 图像风格化生成方法、装置及电子设备
CN110944160B (zh) 一种图像处理方法及电子设备
CN112675545B (zh) 地表仿真画面的显示方法和装置、存储介质及电子设备
WO2023029893A1 (zh) 纹理映射方法、装置、设备及存储介质
US20230125255A1 (en) Image-based lighting effect processing method and apparatus, and device, and storage medium
WO2023169095A1 (zh) 数据处理方法、装置、设备以及介质
WO2019001015A1 (zh) 一种图像数据的编码、解码方法及装置
CN110390712B (zh) 图像渲染方法及装置、三维图像构建方法及装置
CN114286172B (zh) 数据处理方法及装置
WO2023185317A1 (zh) 虚拟地形的光照渲染方法、装置、介质、设备和程序产品
WO2021098306A1 (zh) 一种物品比对方法和装置
CN113205601A (zh) 漫游路径生成方法、装置、存储介质及电子设备
CN113064689A (zh) 场景识别方法和装置、存储介质及电子设备
WO2023185287A1 (zh) 虚拟模型的光照渲染方法、装置和存储介质及电子设备
CN115908687A (zh) 渲染网络的训练、渲染方法、装置及电子设备
CN114782249A (zh) 一种图像的超分辨率重建方法、装置、设备以及存储介质
CN112164066B (zh) 一种遥感图像分层分割方法、装置、终端及存储介质
CN110807114B (zh) 用于图片展示的方法、装置、终端及存储介质
CN113613011A (zh) 一种光场图像压缩方法、装置、电子设备及存储介质
CN113034416A (zh) 图像处理方法及装置、电子设备及存储介质
CN114554089B (zh) 视频处理方法、装置、设备、存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23777696

Country of ref document: EP

Kind code of ref document: A1