WO2023185287A1 - Lighting rendering method and apparatus for a virtual model, storage medium, and electronic device - Google Patents

Lighting rendering method and apparatus for a virtual model, storage medium, and electronic device Download PDF

Info

Publication number
WO2023185287A1
WO2023185287A1 PCT/CN2023/075919 CN2023075919W
Authority
WO
WIPO (PCT)
Prior art keywords
probe point
probe
triangle
current
virtual model
Prior art date
Application number
PCT/CN2023/075919
Other languages
English (en)
French (fr)
Inventor
廖诚
文聪
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Publication of WO2023185287A1 publication Critical patent/WO2023185287A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • G06T15/506Illumination models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • G06T15/205Image-based rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation

Definitions

  • the present application relates to the field of computers, and specifically to a lighting rendering method, device, storage medium and electronic equipment for a virtual model.
  • in the related art, the light map method is usually used to perform pixel-by-pixel lighting rendering of the virtual model.
  • however, this method usually occupies a large amount of memory and storage space and requires substantial computational support, which results in low lighting rendering efficiency for the virtual model.
  • Embodiments of the present application provide a method, device, storage medium and electronic device for lighting rendering of a virtual model, so as to at least solve the technical problem of low lighting rendering efficiency of the virtual model.
  • a lighting rendering method for a virtual model is provided.
  • the method is executed by an electronic device and includes: obtaining a set of candidate probe points for the virtual model to be rendered, where the probe points in the set are used to perform lighting rendering on the virtual model to be rendered; obtaining the occlusion degree between the virtual model to be rendered and each probe point in the candidate probe point set; screening the probe points in the candidate probe point set according to that occlusion degree to obtain a target probe point set; and obtaining the spherical harmonic basis coefficient of each probe point in the target probe point set and performing lighting rendering on the virtual model to be rendered based on those coefficients.
  • a lighting rendering device for a virtual model is also provided.
  • the device is deployed on an electronic device and includes: a first acquisition unit for acquiring the set of candidate probe points of the virtual model to be rendered, where the probe points in the set are used to perform lighting rendering on the virtual model to be rendered; a second acquisition unit for obtaining the occlusion degree between the virtual model to be rendered and each probe point in the candidate probe point set;
  • a screening unit for screening the probe points in the candidate probe point set according to that occlusion degree to obtain the target probe point set;
  • and a third acquisition unit for obtaining the spherical harmonic basis coefficient of each probe point in the target probe point set and performing lighting rendering on the virtual model to be rendered according to those coefficients.
  • a computer-readable storage medium is also provided, which includes a stored computer program, where the computer program, when run by an electronic device, executes the lighting rendering method of the virtual model described above.
  • a computer program product includes a computer program, and the computer program is stored in a computer-readable storage medium.
  • the processor of an electronic device reads the computer program from the storage medium and executes it, so that the electronic device performs the lighting rendering method of the virtual model described above.
  • an electronic device is also provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the lighting rendering method of the virtual model described above through the computer program.
  • in the embodiments of the present application, a set of candidate probe points for the virtual model to be rendered is obtained, where the probe points in the set are used for lighting rendering of the virtual model to be rendered; the occlusion degree between the virtual model to be rendered and each probe point in the set is then obtained and used to screen the probe points, yielding a smaller number of important probe points.
  • this reduces the amount of calculation required when performing lighting rendering on the virtual model through the spherical harmonic basis coefficients of the probe points, achieves the technical effect of improving the lighting rendering efficiency of the virtual model, and thereby solves the technical problem of low lighting rendering efficiency of the virtual model.
  • Figure 1 is a schematic diagram of the application environment of an optional virtual model lighting rendering method according to an embodiment of the present application
  • Figure 2 is a schematic diagram of the process of an optional virtual model lighting rendering method according to an embodiment of the present application
  • Figure 3 is a schematic diagram of an optional lighting rendering method for a virtual model according to an embodiment of the present application
  • Figure 4 is a schematic diagram of another optional virtual model lighting rendering method according to an embodiment of the present application.
  • Figure 5 is a schematic diagram of another optional virtual model lighting rendering method according to an embodiment of the present application.
  • Figure 6 is a schematic diagram of another optional virtual model lighting rendering method according to an embodiment of the present application.
  • Figure 7 is a schematic diagram of another optional virtual model lighting rendering method according to an embodiment of the present application.
  • Figure 8 is a schematic diagram of another optional virtual model lighting rendering method according to an embodiment of the present application.
  • Figure 9 is a schematic diagram of another optional virtual model lighting rendering method according to an embodiment of the present application.
  • Figure 10 is a schematic diagram of another optional virtual model lighting rendering method according to an embodiment of the present application.
  • Figure 11 is a schematic diagram of another optional lighting rendering method for a virtual model according to an embodiment of the present application.
  • Figure 12 is a schematic diagram of another optional virtual model lighting rendering method according to an embodiment of the present application.
  • Figure 13 is a schematic diagram of another optional virtual model lighting rendering method according to an embodiment of the present application.
  • Figure 14 is a schematic diagram of another optional virtual model lighting rendering method according to an embodiment of the present application.
  • Figure 15 is a schematic diagram of another optional virtual model lighting rendering method according to an embodiment of the present application.
  • Figure 16 is a schematic diagram of another optional virtual model lighting rendering method according to an embodiment of the present application.
  • Figure 17 is a schematic diagram of another optional virtual model lighting rendering method according to an embodiment of the present application.
  • Figure 18 is a schematic diagram of an optional virtual model lighting rendering device according to an embodiment of the present application.
  • Figure 19 is a schematic structural diagram of an optional electronic device according to an embodiment of the present application.
  • a lighting rendering method of a virtual model is provided.
  • the lighting rendering method of the virtual model may be, but is not limited to, applied in the environment shown in Figure 1.
  • the method provided by the embodiment of the present application can be executed by an electronic device, where the electronic device may include, but is not limited to, a user device 102 , and the user device 102 may include, but is not limited to, a display 108 , a processor 106 and a memory 104 .
  • the user device 102 obtains a lighting rendering request for the virtual model to be rendered
  • the user device 102 responds to the lighting rendering request, obtains the candidate probe point set of the virtual model to be rendered, and further obtains the occlusion degree between the virtual model to be rendered and each probe point in the candidate probe point set; through the processor 106,
  • the probe points in the candidate probe point set are screened to obtain the target probe point set; the spherical harmonic basis coefficients of each probe point in the target probe point set are then obtained and used to perform lighting rendering on the virtual model to be rendered; the result of the lighting rendering is displayed on the display 108 and stored in the memory 104.
  • the above steps can also be completed with the assistance of the server, that is, the server performs steps such as obtaining the candidate probe point set of the virtual model to be rendered, obtaining the occlusion degree, filtering the probe points, and lighting rendering.
  • the user equipment 102 includes but is not limited to handheld devices (such as mobile phones), notebook computers, desktop computers, vehicle-mounted equipment, etc. This application does not limit the specific implementation of the user equipment 102.
  • the lighting rendering method of the virtual model includes:
  • S202 Obtain a set of candidate probe points for the virtual model to be rendered, where the probe points in the set of candidate probe points are used for lighting rendering of the virtual model to be rendered;
  • S204 Obtain the occlusion degree between the virtual model to be rendered and each probe point in the candidate probe point set;
  • S206 Screen the probe points in the candidate probe point set according to the occlusion degree between the virtual model to be rendered and each probe point in the set, and obtain the target probe point set;
  • S208 Obtain the spherical harmonic basis coefficient of each probe point in the target probe point set, and perform lighting rendering on the virtual model to be rendered based on the spherical harmonic basis coefficient of each probe point in the target probe point set.
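As a rough orientation, the four steps S202–S208 above can be sketched as a small pipeline. Everything below is an illustrative assumption: the callables `occlusion_of`, `bake_sh`, and `shade` stand in for engine-side implementations the patent does not specify, and the 0.5 threshold is made up; the occlusion convention used is the "1 = unobstructed" variant mentioned later in this document.

```python
def select_and_shade(candidates, occlusion_of, bake_sh, shade, threshold=0.5):
    # S202: `candidates` is the candidate probe point set of the model.
    # S204: occlusion degree per probe (1 = unobstructed, per one convention
    #       described in this document; the threshold value is an assumption).
    occ = {p: occlusion_of(p) for p in candidates}
    # S206: screen the candidates against the threshold to get target probes.
    targets = [p for p in candidates if occ[p] >= threshold]
    # S208: bake spherical-harmonic coefficients for the survivors and shade.
    coeffs = {p: bake_sh(p) for p in targets}
    return shade(coeffs)
```

The later sections fill in concrete (equally hedged) readings of each stage.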
  • the lighting rendering method of the virtual model can be, but is not limited to, applied in a model rendering scene in a three-dimensional (3D) game, to select a small number of important probe points in the model space.
  • as shown in Figure 3, the lighting rendering method described above filters the set of candidate probe points of the virtual model to be rendered 302 to obtain a set of target probe points, where different shades can be, but are not limited to, used to represent probe points with different weights.
  • each probe point in the target probe point set can be, but is not limited to, associated with vertices of the virtual model to be rendered 302 (the same shade can be used to represent the association between a vertex and a probe point; when a vertex is associated with multiple probe points, the vertex takes the shade of the probe point with the largest weight among them).
  • the spherical harmonic basis coefficient of each probe point in the target probe point set is then obtained, and based on these coefficients the virtual model to be rendered is illuminated and rendered.
  • the probe point can be understood as, but is not limited to, a three-dimensional space point used to collect lighting information in space, and the three-dimensional space point is also used for lighting rendering of the virtual model to be rendered.
  • the virtual model to be rendered may be, but is not limited to, divided into multiple triangles for processing, and each triangle may correspond to multiple vertices, where one vertex may be associated with multiple probe points,
  • and a probe point may also be associated with multiple vertices; the process of obtaining the set of candidate probe points of the virtual model to be rendered can thus be understood as obtaining the candidate probe points of each triangle into which the virtual model to be rendered is divided.
  • as shown in (a) of Figure 4, the triangle 402 is one of the plurality of triangles into which the virtual model to be rendered is divided.
  • O is the centroid of the triangle 402, and a, b, and c are respectively the midpoints of the line segments AO, BO, and CO; further, as shown in (b) of Figure 4, a, b, and c are offset along the normal direction of the triangle 402 by a preset unit, yielding three candidate probe points a', b', and c'.
  • similarly, with reference to the method of obtaining the candidate probe points of the triangle 402, the candidate probe points of each triangle into which the virtual model to be rendered is divided can be obtained.
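The Figure 4 construction can be written down directly. This is a sketch under stated assumptions: the offset distance (the "preset unit") is a made-up value, and NumPy is used for the vector math.

```python
import numpy as np

def candidate_probes(A, B, C, offset=0.1):
    """Candidate probe points of one triangle, following the Figure 4
    description: take the midpoints of the segments from each vertex to the
    centroid O, then push them a preset distance along the triangle normal.
    `offset` (the preset unit) is an assumed value."""
    A, B, C = (np.asarray(v, float) for v in (A, B, C))
    O = (A + B + C) / 3.0                        # centroid of the triangle
    n = np.cross(B - A, C - A)
    n = n / np.linalg.norm(n)                    # unit normal
    mids = [(V + O) / 2.0 for V in (A, B, C)]    # midpoints of AO, BO, CO
    return [m + offset * n for m in mids]        # a', b', c'
```

Applying this per triangle yields the candidate probe point set of the whole model.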
  • the virtual model to be rendered may be, but is not limited to, divided into triangles of different areas, and for different areas the method of obtaining the candidate probe points of a triangle may also differ.
  • for example, a triangle whose area does not exceed a target threshold may generate a first number of candidate probe points, such as those obtained by shifting the centroid of the triangle along its normal direction,
  • while a triangle whose area is greater than the target threshold may, but is not limited to, generate a second number of candidate probe points, where the second number is greater than the first number.
  • the occlusion degree between the virtual model to be rendered and each probe point in the candidate probe point set can be, but is not limited to, understood as the degree to which the virtual model to be rendered occludes each probe point in the candidate probe point set.
  • for example, if there is no occlusion, the occlusion degree is 0, and if there is occlusion, the occlusion degree is greater than 0, with the value positively correlated with the degree of occlusion; or, conversely, if there is no occlusion, the occlusion degree is 1, and if there is occlusion, the occlusion degree is less than 1 and greater than 0, with the value inversely correlated with the degree of occlusion.
  • the occlusion degree between the virtual model to be rendered and each probe point in the set of candidate probe points can also be, but is not limited to, understood as the occlusion degree between each triangle in the plurality of triangles and each probe point in the candidate probe point set.
  • for example, if the virtual model to be rendered is divided into N triangles and the candidate probe point set includes M probe points, then the occlusion degrees between the triangles and the probe points
  • may include, but are not limited to, (N × M) values.
  • the method of screening the probe points in the candidate probe point set may be, but is not limited to, determining the probe points in the candidate probe point set whose occlusion degree is greater than or equal to the occlusion threshold as the probe points in the target probe point set.
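With the N×M reading above, the screening step is a per-probe reduction over the occlusion matrix. A minimal sketch, assuming the "1 = fully unobstructed" convention, a worst-case (minimum over triangles) aggregation, and an arbitrary threshold, none of which are fixed by the text.

```python
import numpy as np

def screen_probes(occlusion, threshold=0.5):
    """`occlusion` is an (N_triangles, M_probes) matrix of occlusion degrees
    in the 0..1 convention (1 = unobstructed). A probe survives if its degree
    against every triangle meets the threshold; both the min-aggregation and
    the threshold value are assumptions."""
    occlusion = np.asarray(occlusion, float)
    keep = occlusion.min(axis=0) >= threshold    # worst case over N triangles
    return np.flatnonzero(keep)                  # indices of target probes
```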
  • the spherical harmonic basis coefficients may be, but are not limited to, the coefficients of the basis functions in spherical harmonic lighting; this can be understood as first sampling the illumination into N coefficients, and then, during rendering, using those spherical harmonic basis coefficients to restore the sampled illumination and complete the rendering.
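To make the "sample into N coefficients, restore at render time" idea concrete, here is a minimal real-spherical-harmonics sketch for the first two bands (4 coefficients). It illustrates SH lighting in general, not the patent's specific baker.

```python
import numpy as np

def sh_basis(d):
    """Real SH basis, bands 0-1, evaluated at unit direction d."""
    x, y, z = d
    return np.array([0.28209479,          # Y_0^0
                     0.48860251 * y,      # Y_1^-1
                     0.48860251 * z,      # Y_1^0
                     0.48860251 * x])     # Y_1^1

def project(dirs, radiance):
    """Monte-Carlo-project sampled radiance onto the 4 basis functions
    (uniform sphere samples, hence the 4*pi weight)."""
    B = np.array([sh_basis(d) for d in dirs])
    return (4.0 * np.pi / len(dirs)) * B.T @ np.asarray(radiance, float)

def reconstruct(coeffs, d):
    """Restore the sampled illumination in direction d from the coefficients."""
    return float(sh_basis(d) @ coeffs)
```

A constant unit radiance projects to coefficients that reconstruct back to roughly 1 in any direction, which is the round trip the text describes.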
  • by screening the probe points of the candidate probe point set, the amount of calculation required to obtain the spherical harmonic basis coefficients of each probe point in the target probe point set is reduced to a certain extent, thereby improving the efficiency of lighting rendering of the virtual model to be rendered.
  • in an optional implementation, the probe points in the target probe point set are passed to a baker for baking. In the baker,
  • the probe points in the target probe point set are converted into probe points in world space, and the basic functions provided by the baker are used to determine the light-receiving conditions of the probe points, from which
  • the spherical harmonic basis coefficient of each probe point in the target probe point set is obtained. In addition, when the virtual model to be rendered is one of several models in a target scene, in order to improve data processing efficiency, the probe points of all models in the target scene may be, but are not limited to, passed to the baker together.
  • considering that the probe points in the target probe point set are probe points in a single model space,
  • and that a single model space does not involve the probe points of other model spaces,
  • an abnormal situation may occur in which a probe point in the target probe point set lies inside another model; a probe point in this abnormal situation is an invalid probe point. As shown in Figure 5, the probe point A of the virtual model 504 is occluded by the virtual model 506 in the same target scene 502, causing the probe point A to become an invalid probe point.
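The invalid-probe situation from Figure 5 amounts to a containment test. A hedged sketch: `is_inside` is a hypothetical engine-supplied point-in-model predicate; the patent does not say how containment is detected.

```python
def discard_invalid_probes(probes, other_models, is_inside):
    """Drop probes of the current model that fall inside any other model in
    the same target scene (the abnormal case of Figure 5). `is_inside(model,
    point)` is an assumed predicate, not an API from the patent."""
    return [p for p in probes
            if not any(is_inside(m, p) for m in other_models)]
```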
  • the relevant data of the probe points in the target probe point set can be, but is not limited to, recorded, where the relevant data includes at least one of the following: the closest distance from the probe point to the virtual model to be rendered, and the other probe points associated with the probe point.
  • in the embodiments of the present application, the candidate probe point set of the virtual model to be rendered is obtained, where the probe points in the set are used for lighting rendering of the virtual model to be rendered; the occlusion degree between the virtual model to be rendered and each probe point in the set is obtained;
  • according to that occlusion degree, the probe points in the candidate probe point set are screened to obtain the target probe point set; the spherical harmonic basis coefficient of each probe point in the target probe point set is obtained, and lighting rendering of the virtual model to be rendered is performed based on those coefficients. Screening the candidate probe points by occlusion degree yields a smaller number of important probe points, which reduces
  • the amount of calculation when performing lighting rendering on the virtual model to be rendered through the spherical harmonic basis coefficients of the probe points, thereby achieving the technical effect of improving the lighting rendering efficiency of the virtual model.
  • as shown in (a) of Figure 6, the candidate probe points of the virtual model to be rendered 602 are obtained to form a candidate probe point set, where the candidate probe points in the set are used for lighting rendering of the virtual model to be rendered 602. The occlusion degree between the virtual model to be rendered 602 and each candidate probe point in the set is further obtained, and based on it
  • the candidate probe points are screened to obtain a smaller number of target probe points, which form the target probe point set, as shown in (b) of Figure 6, where each target probe point is associated with a triangle into which the virtual model to be rendered 602 is divided (dashed lines represent the associations). The spherical harmonic basis coefficients of each target probe point in the target probe point set are then obtained, and
  • based on these coefficients the virtual model to be rendered 602 is illuminated and rendered to obtain the rendering result.
  • in this way, screening the probe points in the candidate probe point set by the occlusion degree between the virtual model to be rendered and each probe point yields a smaller number of important probe points, reducing the amount of calculation when performing lighting rendering on the virtual model through the spherical harmonic basis coefficients of the probe points, and thus achieving the technical effect of improving the lighting rendering efficiency of the virtual model.
  • the probe points in the candidate probe point set are screened to obtain the target probe point set, including:
  • S1 Take each triangle in a set of triangles as the current triangle, and based on the occlusion degree between the current triangle and each probe point in the candidate probe point set, determine the correlation degree between the current triangle and each probe point in the candidate probe point set, where the virtual model to be rendered is divided into the set of triangles;
  • S2 Screen the probe points in the candidate probe point set according to the correlation between each current triangle and each probe point in the candidate probe point set, and obtain the target probe point set.
  • the occlusion degree between the current triangle and each probe point in the candidate probe point set may be, but is not limited to, positively correlated with the correlation degree between the current triangle and that probe point.
  • the implementation of S1 may be: obtaining the distance value between the current triangle and each probe point in the candidate probe point set, and determining the correlation degree between the current triangle and each probe point based on that distance value and the occlusion degree between the current triangle and the probe point. For example, the product of the distance value between the current triangle and a probe point and the occlusion degree between the current triangle and that probe point is obtained, and the product result is determined as the correlation degree between the current triangle and that probe point.
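The product rule just described is straightforward to write down. A sketch, with the caveat that the patent does not fix what the "distance value" is; the plain Euclidean distance from the triangle centroid is an assumed concrete choice.

```python
import numpy as np

def correlation_degrees(tri_centroid, probes, occlusion_row):
    """Correlation of the current triangle with each candidate probe: the
    product of the triangle-probe distance value and the triangle-probe
    occlusion degree, as described above. The centroid-based Euclidean
    distance is an assumption, not the patent's definition."""
    tri_centroid = np.asarray(tri_centroid, float)
    probes = np.asarray(probes, float)
    dist = np.linalg.norm(probes - tri_centroid, axis=1)
    return dist * np.asarray(occlusion_row, float)
```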
  • the method of screening the probe points in the candidate probe point set may be, but is not limited to, determining the probe points in the candidate probe point set whose correlation degree is greater than or equal to the correlation threshold as the probe points in the target probe point set.
  • in this way, the correlation degree between each triangle and each probe point in the candidate probe point set is determined and used to screen the probe points in the candidate probe point set to obtain the target probe point set, thereby improving the lighting rendering efficiency of the virtual model.
  • S1 Determine the projection area of the current triangle projected onto each detection area in a group of detection areas, where the group of detection areas includes the detection area corresponding to each probe point in the candidate probe point set;
  • S2 Determine the correlation degree between the current triangle and each probe point in the candidate probe point set based on the projection area of the current triangle onto each detection area in the group of detection areas and the occlusion degree between the current triangle and each probe point in the candidate probe point set.
  • the detection area can be, but is not limited to, a planar area centered on the probe point, such as a circle, a rectangle, or a polygon; or the detection area can be, but is not limited to, a three-dimensional unit body centered on the probe point, such as a sphere, a cuboid, or a polyhedron.
  • the correlation degree can be, but is not limited to, an integration result of the projection area and the occlusion degree, such as a product result or a summation result; the correlation degree can also be, but is not limited to, an integration result of the projection area,
  • the occlusion degree, and other parameters, where the other parameters may include, but are not limited to, at least one of the following: the distance between the current triangle and each probe point in the candidate probe point set, the angle between the current triangle and each probe point in the candidate probe point set, and so on.
  • as shown in Figure 7, the virtual model to be rendered 702 is divided into multiple triangles (a group of triangles); the projection area of the triangle 704 (the current triangle in the group of triangles) onto the detection area of the probe point 706 (a probe point in the candidate probe point set) is determined, and together with
  • the occlusion degree between the triangle 704 and the probe point 706,
  • it determines the correlation degree between the triangle 704 and the probe point 706.
  • the correlation degree between the current triangle and each probe point in the candidate probe point set is determined, thereby achieving the effect of improving the accuracy of obtaining the correlation degree.
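When the detection area is the spherical variant, the "projection area" of the triangle is naturally its solid angle as seen from the probe. One concrete reading, using the Van Oosterom–Strackee formula; this specific formula is our choice, not the patent's.

```python
import numpy as np

def projected_solid_angle(probe, A, B, C):
    """Solid angle subtended by triangle ABC at the probe point (Van Oosterom
    & Strackee) - one possible realization of "the projection area of the
    current triangle onto a spherical detection area centered on the probe"."""
    probe = np.asarray(probe, float)
    a, b, c = (np.asarray(V, float) - probe for V in (A, B, C))
    la, lb, lc = (np.linalg.norm(v) for v in (a, b, c))
    num = np.dot(a, np.cross(b, c))
    den = la * lb * lc + np.dot(a, b) * lc + np.dot(a, c) * lb + np.dot(b, c) * la
    return abs(2.0 * np.arctan2(num, den))
```

Multiplying this projection area by the triangle-probe occlusion degree then gives one of the "integration results" for the correlation degree mentioned above.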
  • the probe points in the candidate probe point set are screened to obtain the target probe point set, including:
  • S1 Take each probe point in the current probe point set as the current probe point to be processed, and obtain the coverage between the current probe point to be processed and a group of triangles, where the coverage between the current probe point to be processed and the group of triangles is determined
  • based on the correlation degree between each current triangle in the group and the probe point to be processed;
  • the coverage between the current probe point to be processed and a group of triangles may, but is not limited to, include direct coverage or indirect coverage. Direct coverage may be understood as
  • the coverage obtained by calculating directly from the correlation degree between the current probe point to be processed and the group of triangles; that is, the correlation degree itself can be taken as the coverage between the current probe point to be processed and the group of triangles. Indirect coverage
  • can be understood as the coverage calculated based on the correlation degree between the current probe point to be processed and the group of triangles together with other parameters, where the other parameters may include, but are not limited to, at least one of the following: the occlusion degree between the current probe point to be processed and the group of triangles, the correlation degree between each two probe points in the current probe point set, the structural parameters of the structure composed of each two probe points in the current probe point set and the group of triangles, and so on.
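A minimal sketch of the two coverage variants, under the assumption that the aggregation over the group of triangles is a simple sum (the text leaves the exact combination open).

```python
def coverage(corr_by_triangle, occlusion_by_triangle=None):
    """Coverage of one probe against a group of triangles. Direct coverage
    takes the probe-triangle correlation degrees themselves (summed here);
    the indirect variant additionally weights each term by another parameter,
    illustrated with the occlusion degree. Both reductions are assumptions."""
    if occlusion_by_triangle is None:                     # direct coverage
        return sum(corr_by_triangle)
    return sum(c * o for c, o in zip(corr_by_triangle, occlusion_by_triangle))
```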
  • the condition that each triangle in the group of triangles is associated with a predetermined number of probe points can be, but is not limited to, understood as the stopping condition for screening the probe points in the candidate probe point set:
  • the probe points in the target probe point set are constantly updated, for example by adding, deleting, and replacing probe points, until each triangle in the group of triangles is associated with the predetermined number of probe points;
  • for example, take triangle A in the group of triangles and suppose the predetermined number is 2. Based on the coverage between the probe point with the largest coverage and each triangle in the group, it is determined that the probe point with the largest coverage is associated with triangle A. First, the number of probe points already associated with triangle A is determined. If it has not reached 2, the probe point with the largest coverage is retained in the target probe point set, and the association between that probe point and triangle A is established;
  • if it has reached 2, the probe point with the largest coverage replaces the less important probe point associated with triangle A (that is, the number of probe points associated with triangle A remains 2, but the probe point with lower importance is replaced by the probe point with the largest coverage);
  • and if the probe point with the largest coverage is not associated with any triangle in the group of triangles, it can be, but is not limited to, deleted from the target probe point set.
• the probe points in the target probe point set are associated with the triangles in the group of triangles according to the coverage between each probe point and each triangle, and are also updated according to that coverage; alternatively, the probe points in the target probe point set can be, but are not limited to being, understood as the probe points that are associated with the triangles in the group of triangles.
• the relevant steps of obtaining the target probe point set may be, but are not limited to being, implemented in the following manner:
  • the candidate probe point set is screened, thereby achieving the effect of improving the screening efficiency of the probe points in the candidate probe point set.
• if the group of triangles includes first triangles that are not associated with any probe point, obtain the first coverage between the current probe point to be processed and each first triangle, where the first coverage between the current probe point to be processed and a first triangle is obtained from the correlation between that first triangle and the current probe point to be processed, and the coverage between the current probe point to be processed and the group of triangles is the sum of the first coverages with each first triangle;
• if the group of triangles includes second triangles that are already associated with probe points, obtain the second coverage between the current probe point to be processed and each second triangle, where the second coverage between the current probe point to be processed and a second triangle is determined from the correlation between that second triangle and the current probe point to be processed, the correlation between that second triangle and another probe point, and the current included angle; the coverage between the current probe point to be processed and the group of triangles is the sum of the second coverages with each second triangle, and the current included angle is the angle formed by the current probe point to be processed, the centroid of the second triangle, and the other probe point, where the probe points associated with the second triangle include the other probe point;
• if the group of triangles includes both first triangles and second triangles, integrate the first coverages between the current probe point to be processed and each first triangle with the second coverages between the current probe point to be processed and each second triangle to obtain the coverage between the current probe point to be processed and the group of triangles.
• the triangles in the group of triangles may be, but are not limited to being, divided into first triangles and second triangles, where a first triangle is a triangle in the group that is not associated with any probe point, and a second triangle is a triangle in the group that has been associated with at least one probe point. For example, if triangle A has been associated with probe points 1 and 2, triangle A is a second triangle; likewise, if triangle B has been associated with probe point 1, triangle B is also a second triangle; and if triangle C has not been associated with any probe point, triangle C is a first triangle.
• the current included angle is the angle formed by one probe point, the centroid of a second triangle, and another probe point. It can be, but is not limited to being, understood as first determining the target structure formed by the probe point, the centroid of the second triangle, and the other probe point, and then obtaining the included angle of that structure. As shown in Figure 8, the current included angle 804 is the angle formed by probe point P1, the centroid O of the second triangle 802, and probe point P2.
• the first coverage between the current probe point to be processed and a first triangle is obtained from the correlation between that first triangle and the current probe point to be processed, while the second coverage between the current probe point to be processed and a second triangle is determined from the correlation between that second triangle and the current probe point to be processed, the correlation between that second triangle and another probe point, and the current included angle. The first coverage can be, but is not limited to being, understood as the coverage between a single probe point and a triangle (a single dimension), and the second coverage as the coverage of a triangle by one probe point together with another probe point (multiple dimensions); the computational complexity of the first coverage may be, but is not limited to being, lower than that of the second coverage;
• for example, assume the correlation degree of probe point P1 is k1; the first coverage of probe point P1 can then be, but is not limited to being, k1. For two probe points, such as probe point P1 and probe point P2, still assuming that the correlation degree of P1 is k1 and that of P2 is k2, calculating the second coverage of probe point P2 first requires the included angle α formed by P1, the centroid of the triangle, and P2, giving a coverage of k1 + (1 - cosα) * k2. The second coverage can also be calculated in more dimensions from the solid angles formed by multiple probe points: assuming the correlation degrees of probe points P1, P2, and P3 are k1, k2, and k3, and that the angle β formed by P3, the centroid of the triangle, and P1 is used to calculate the second coverage of P3, the coverage of the triangle by probe points P1, P2, and P3 is defined as k1 + (1 - cosα) * k2 + (1 - cosβ) * k3.
• in other words, the coverage is obtained in different ways for different triangles: for a first triangle, its coverage (the first coverage) is obtained from the correlation between that first triangle and the current probe point to be processed; for a second triangle, its coverage (the second coverage) is determined from the correlation between that second triangle and the current probe point to be processed, the correlation between that second triangle and another probe point, and the current included angle.
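The angle-weighted coverage described above can be sketched in a few lines. This is a minimal illustration of the formula k1 + (1 - cosα) * k2, assuming Euclidean probe positions; the function and parameter names are illustrative, not from the source:

```python
import math

def angle_at_centroid(p_new, centroid, p_existing):
    """Included angle at the triangle centroid between the new probe point and
    the probe point already associated with the triangle."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    u, v = sub(p_new, centroid), sub(p_existing, centroid)
    cos_a = dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))
    return math.acos(max(-1.0, min(1.0, cos_a)))  # clamp against rounding error

def second_coverage(k_new, k_existing, angle):
    """Joint coverage of a triangle by an existing probe (correlation k_existing)
    and a new probe (correlation k_new): the new probe contributes little when
    it views the triangle from nearly the same direction (angle near 0)."""
    return k_existing + (1.0 - math.cos(angle)) * k_new
```

A probe directly opposite the existing one (angle π) contributes its full correlation twice over, while a probe at angle 0 contributes nothing beyond the existing probe.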
• as shown in Figure 9, a group of triangles 902 includes multiple triangles, such as triangle A, triangle B, and triangle C, and the candidate probe point set 904 includes multiple probe points, such as probe point P1, probe point P2, probe point P3 ... probe point Pn. The candidate probe point set 904 is first determined as the current probe point set, and the coverage between each probe point in the current probe point set and each triangle in the group of triangles 902 is obtained. When a triangle is a first triangle, the coverage between it and a probe point (for example, the probe point currently to be processed) is obtained from their correlation: for example, (P1→A) denotes the first coverage between triangle A and probe point P1, and the first coverages between probe point P1 and each triangle are added to obtain the total coverage T1 (in step S902 none of the triangles in the group 902 has any associated probe point, so there is no second coverage);
• next, the candidate probe point set 908 is determined as the current probe point set, and the coverage between each probe point in the current probe point set and each triangle in the group of triangles 902 is obtained again. When a triangle is a second triangle, the coverage between it and a probe point is obtained from the correlation between that triangle and the probe point: for example, (P2→A←P1) denotes the second coverage between triangle A and probe point P2 given the already-associated probe point P1, and the first and second coverages between probe point P2 and each triangle are added to obtain the total coverage T21 (since triangle A was associated with probe point P1 in step S904, triangle A is a second triangle, so in step S904 the first and second coverages must be added). The same processing is performed for each probe point in the candidate probe point set 908, and the coverage between each probe point and each triangle is obtained respectively;
• the probe point P3 with the largest coverage is then determined as a probe point 910 of the target probe point set and is deleted from the candidate probe point set 908 to obtain a new candidate probe point set (the subsequent steps proceed in the same way and are not illustrated again here), where probe point P3 is associated with triangle C;
• once each triangle in the group of triangles 902 is associated with two probe points, the target probe point set 912 is determined.
• for example, triangle A is associated with probe points P1 and P3, triangle B is associated with probe points P1 and P3, and triangle C is associated with probe points P2 and P3.
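The greedy screening procedure walked through above can be sketched as follows. The helper names (`screen_probes`, `coverage_fn`) are illustrative, not from the source; `coverage_fn` is assumed to return a first or second coverage depending on the probes already associated with the triangle:

```python
def screen_probes(triangles, candidates, coverage_fn, quota=2):
    """Greedy screening sketch: repeatedly pick the candidate probe with the
    largest total coverage over the triangle set, move it into the target set,
    and associate it with triangles still below their quota, until every
    triangle has `quota` probes or no candidates remain."""
    associations = {t: [] for t in triangles}   # triangle -> associated probes
    target = []
    remaining = list(candidates)
    while remaining and any(len(ps) < quota for ps in associations.values()):
        # total coverage of each candidate, given current associations
        best = max(remaining, key=lambda p: sum(
            coverage_fn(p, t, associations[t]) for t in triangles))
        remaining.remove(best)
        target.append(best)
        for t in triangles:
            if len(associations[t]) < quota and coverage_fn(best, t, associations[t]) > 0:
                associations[t].append(best)
    return target, associations
```

With a toy `coverage_fn` this reproduces the shape of the Figure 9 walkthrough: the highest-coverage probe is selected first, removed from the candidate set, and the loop repeats until every triangle holds two probes.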
  • the coverage of the current probe point to be processed and the set of triangles is determined, thereby achieving the effect of improving the calculation accuracy of the coverage.
• S3: sum the first product value and the second product value to obtain the coverage between the current probe point to be processed and the group of triangles.
• the second triangles may be, but are not limited to being, divided into triangles below the predetermined number and triangles that have reached the predetermined number, where a triangle below the predetermined number can be, but is not limited to being, a triangle that has associated probe points but whose number of associated probe points has not reached the predetermined number, and a triangle that has reached the predetermined number can be, but is not limited to being, a triangle whose number of associated probe points has reached the predetermined number;
• for example, when the predetermined number is 2, triangles below the predetermined number are triangles associated with one probe point, and triangles that have reached the predetermined number are triangles associated with two probe points.
• the first triangles correspond to a first coefficient, triangles below the predetermined number correspond to a third coefficient, and triangles that have reached the predetermined number correspond to a fourth coefficient, where the first coefficient is greater than the third coefficient and the third coefficient is greater than the fourth coefficient;
• for example, if triangle A (a first triangle) has no associated probe point, a probe point 1 newly associated with triangle A has a higher value, and when calculating the coverage value of probe point 1 and triangle A, it is multiplied by a higher calculation coefficient (the first coefficient, such as 3); if triangle B (a triangle below the predetermined number) has been associated with one probe point, the coverage value of probe point 2 and triangle B is calculated jointly with the other probe point and multiplied by a general calculation coefficient (the third coefficient, such as 2); and if triangle C (a triangle that has reached the predetermined number) has been associated with 2 probe points, a probe point 3 newly associated with triangle C has a lower value, and when calculating the coverage value of probe point 3 and triangle C, it is multiplied by a lower calculation coefficient (the fourth coefficient, such as 1).
  • the coverage of the current probe point to be processed and a group of triangles is obtained, thereby achieving the effect of improving the efficiency of screening probe points in the set of candidate probe points.
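The coefficient weighting described above can be sketched directly, using the example coefficient values from the text (3, 2, and 1); the function names are illustrative, not from the source:

```python
def coverage_coefficient(num_associated, quota=2):
    """Calculation coefficient applied to a probe-triangle coverage value:
    triangles with no associated probe weight a new probe highest, partially
    filled triangles lower, and triangles already at their quota lowest."""
    if num_associated == 0:
        return 3.0   # first coefficient: first triangle, no associated probe
    if num_associated < quota:
        return 2.0   # third coefficient: below the predetermined number
    return 1.0       # fourth coefficient: predetermined number reached

def weighted_coverage(correlation, num_associated, quota=2):
    # coverage value scaled by the triangle's calculation coefficient
    return coverage_coefficient(num_associated, quota) * correlation
```

This biases the greedy selection toward probe points that cover triangles which do not yet have any probe, while still allowing improvement of partially or fully covered triangles.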
• obtaining the second coverage between the current probe point to be processed and each second triangle includes:
• when the second triangles include first associated triangles whose number of associated probe points equals 1, determining the third coverage between the current probe point to be processed and each first associated triangle as the second coverage between the current probe point to be processed and each second triangle, where the third coverage between the current probe point to be processed and a first associated triangle is determined from the correlation between that triangle and the current probe point to be processed, the correlation between that triangle and its associated probe point, and the first included angle, the first included angle being the angle formed by the current probe point to be processed, the centroid of the first associated triangle, and the probe point associated with that triangle;
• when the second triangles include second associated triangles whose number of associated probe points is greater than 1, obtaining the fourth coverage between a second associated triangle and its associated probe points, and the fifth coverage between the current probe point to be processed and the second associated triangle, and determining the fifth coverage as the second coverage between the current probe point to be processed and that second triangle when the fifth coverage is greater than the fourth coverage, where the fifth coverage between the current probe point to be processed and a second associated triangle is determined from the correlation between that triangle and the current probe point to be processed, the correlation between that triangle and its associated probe points, and the second included angle, the second included angle being the angle formed by the current probe point to be processed, the centroid of the second associated triangle, and the probe points associated with that triangle.
• as shown in Figure 10, assume that probe point P1 is the probe point with the greatest correlation and that probe point P1 and probe point P2 are probe points of equal value. When probe point P3 is selected from the candidate probe point set 1002 to be associated with triangle A, the coverage T1 of probe points P1 and P2 with triangle A, the coverage T3 of probe points P2 and P3 with triangle A, and the coverage T2 of probe points P1 and P3 with triangle A are obtained; the maximum among coverage T1, coverage T2, and coverage T3 is then determined, for example coverage T3, and the probe points currently associated with triangle A are updated from probe points P1 and P2 to probe points P2 and P3.
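The replacement step from the Figure 10 example can be sketched as follows. `pair_coverage` is a hypothetical callable returning the joint coverage of two probes with the triangle; all names are illustrative:

```python
def update_pair(existing, candidate, pair_coverage):
    """For a triangle already holding two probes (e.g. P1, P2), compare the
    joint coverages of the pairs (P1, P2), (P1, P3) and (P2, P3) and keep the
    pair with the maximum joint coverage."""
    p1, p2 = existing
    pairs = [(p1, p2), (p1, candidate), (p2, candidate)]
    return max(pairs, key=lambda pair: pair_coverage(*pair))
```

If the candidate does not improve on the existing pair, the existing association is simply kept, since `(p1, p2)` is among the compared pairs.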
• screening the probe points in the candidate probe point set to obtain the target probe point set includes:
• S3: determining the second current probe point from the probe points other than the first current probe point, based on a set of included angles and the correlation between the current triangle and each probe point other than the first current probe point;
• S4: determining the first current probe point and the second current probe point as probe points in the target probe point set.
• each triangle in the group of triangles can be, but is not limited to being, associated sequentially or in parallel with any N probe points in the candidate probe point set, where N is the predetermined number;
• as shown in Figure 11, triangle A in the group of triangles 1102 is taken as the current triangle; the probe point P1 with the greatest correlation to triangle A is determined in the candidate probe point set 1104, and probe point P1 is determined as a probe point in the target probe point set 1106; the angles formed by each probe point in the candidate probe point set other than P1 with the centroid of triangle A and probe point P1 are then obtained, yielding a set of included angles; based on this set of included angles and the correlations, probe point P3 is determined from the probe points other than P1 and is also determined as a probe point in the target probe point set 1106; similarly, all the triangles in the group of triangles 1102 are taken in turn as the current triangle and the above steps are performed to obtain the target probe point set 1108, where triangle A is associated with probe points P1 and P3, triangle B is associated with probe points P1 and P3, and triangle C is associated with probe points P2 and P3.
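The two-step per-triangle selection above can be sketched as follows. The first probe is the candidate with the greatest correlation; the second maximises an angle-weighted score so that the two probes view the triangle from different directions. The (1 - cos angle) weighting is an assumption carried over from the coverage formula, and all names are illustrative:

```python
import math

def select_two_probes(centroid, candidates, correlation):
    """Select two probes for one triangle: max correlation first, then the
    probe whose direction differs most from the first, weighted by correlation."""
    first = max(candidates, key=correlation)

    def angle(p):
        # included angle at the centroid between `first` and candidate `p`
        u = tuple(a - c for a, c in zip(first, centroid))
        w = tuple(a - c for a, c in zip(p, centroid))
        d = sum(x * y for x, y in zip(u, w))
        n = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(x * x for x in w))
        return math.acos(max(-1.0, min(1.0, d / n)))

    rest = [p for p in candidates if p != first]
    second = max(rest, key=lambda p: (1.0 - math.cos(angle(p))) * correlation(p))
    return first, second
```

A candidate almost collinear with the first probe scores near zero even with high correlation, so the second slot goes to a probe on the opposite side of the triangle.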
  • the spherical harmonic basis coefficient of the probe point corresponding to each vertex in the virtual model to be rendered is determined from the map to be processed;
  • S3 Determine the spherical harmonic basis coefficient of each vertex in the virtual model to be rendered according to the spherical harmonic basis coefficient of the probe point corresponding to each vertex in the virtual model to be rendered;
  • S4 Perform illumination rendering on the virtual model to be rendered according to the spherical harmonic basis coefficients of each vertex in the virtual model to be rendered.
• the probe points in the model space where the virtual model to be rendered is located can be, but are not limited to being, converted to probe points in the target space; the basic functions provided by the baker are used to calculate the light received by each probe point, finally obtaining the spherical harmonic coefficients of the illumination; the spherical harmonic coefficients of all elements in the baked virtual model to be rendered are then saved as maps (the spherical harmonic coefficients of all elements may be saved as one or more maps, the spherical harmonic coefficients of one element may be saved as one map, and so on); at runtime, the spherical harmonic map is sampled according to the pre-saved index and weight data, and the dot product of the resulting coefficients with the basis functions corresponding to the normal yields the lighting information, completing the lighting rendering of the virtual model to be rendered.
• the correspondence between each vertex in the virtual model to be rendered and the probe points can be, but is not limited to being, divided into cases: when a vertex corresponds to one probe point, the spherical harmonic basis coefficient of the vertex is determined to be equal to the spherical harmonic basis coefficient of that probe point; when a vertex corresponds to multiple probe points, the spherical harmonic basis coefficient of the vertex is determined to be equal to the weighted sum of the spherical harmonic basis coefficients of those probe points.
• the indices and weights of the relevant probe points can be, but are not limited to being, added to the attributes of the vertices. Since the attribute structure of the vertices is limited, assume for example that each vertex is associated with two probe points and that the attribute space of each vertex is 32 bits (not fixed; 32 bits is just an example). Within the 32-bit attribute space, at least the attribute spaces of two probe points can be, but are not limited to being, allocated: for example, 8 bits for the attribute space of probe point A and 8 bits for probe point B, supporting up to 256 indexes each. Of the remaining 16 bits, 8 bits are used for weight distribution (the weights between probe points can be calculated in association; for example, the weight of probe point B can be calculated from the weight of probe point A, so one channel can be saved to implement other functions), and an 8-bit attribute space is reserved to save the vertex AO (ambient occlusion).
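The 32-bit vertex attribute layout described above can be sketched as follows. Treating the second weight as the complement of the first is an assumption about how the weights are "calculated in association"; the field order and function names are illustrative:

```python
def pack_vertex_attr(index_a, index_b, weight_a, ao):
    """Pack two 8-bit probe indices, an 8-bit weight and an 8-bit vertex AO
    into one 32-bit vertex attribute."""
    for v in (index_a, index_b, weight_a, ao):
        assert 0 <= v <= 255
    return (index_a << 24) | (index_b << 16) | (weight_a << 8) | ao

def unpack_vertex_attr(packed):
    index_a = (packed >> 24) & 0xFF
    index_b = (packed >> 16) & 0xFF
    weight_a = (packed >> 8) & 0xFF
    ao = packed & 0xFF
    weight_b = 255 - weight_a  # second weight recovered from the first channel
    return index_a, index_b, weight_a, weight_b, ao
```

Because `weight_b` is derived rather than stored, one 8-bit channel stays free, which matches the text's note that a channel can be saved to implement other functions (here, vertex AO).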
• performing illumination rendering on the virtual model to be rendered based on the spherical harmonic basis coefficients of each vertex in the virtual model to be rendered includes:
• when the virtual model to be rendered is a close-up virtual model, obtaining the second spherical harmonic basis coefficients of each vertex in the virtual model to be rendered and performing illumination rendering on the virtual model according to the second spherical harmonic basis coefficients, where the second spherical harmonic basis coefficient is a coefficient calculated from the first spherical harmonic basis coefficient.
• the spherical harmonic basis coefficients may, but are not limited to, include 3 orders of spherical harmonic basis coefficients, where the 2nd-order and 3rd-order coefficients may be, but are not limited to being, obtained by normalization through the 1st-order coefficients;
• the spherical harmonic basis coefficients of the three orders are shown in formula 1202 in Figure 12, where SH l,m is the spherical harmonic basis coefficient of the probe point and the subscripts l and m are the degree and order of the basis function;
• N is the number of triangles associated with the probe point, w(i) is the weight, and L(i) is the incident light from a certain direction;
• the suffix part of the spherical harmonic basis function is less than 1, so the high-order (2nd- and 3rd-order) spherical harmonic basis coefficients can be calculated from the 1st-order spherical harmonic basis coefficients by normalization.
• the data format of the spherical harmonic basis coefficients can be, but is not limited to being, encoded in the following manner: the format of the texture can be, but is not limited to, Uint4, a format that is supported by hardware and more convenient for encoding;
• each set of spherical harmonic basis coefficients usually needs to occupy 2 pixels; the low-order spherical harmonic basis coefficients (the first spherical harmonic basis coefficients) and the high-order spherical harmonic basis coefficients (the second spherical harmonic basis coefficients) are therefore divided into two different pixels, which makes it more convenient to implement LOD: nearby objects need full high-order spherical harmonic calculation, while distant objects only need low-order spherical harmonic calculation, so a distant object is sampled only once; further, it is also possible, but not limited, to split the high-order spherical harmonic basis coefficients into another map, so that a distant object only needs to load half the texture amount;
• the first spherical harmonic basis coefficients 1302 include the 1st- and 2nd-order coefficients, and the second spherical harmonic basis coefficients 1304 include the 3rd-order coefficients; the spherical harmonic basis coefficients of each order of the 3 RGB channels are divided into two 16-byte pixels: the first spherical harmonic basis sub-coefficients of the 3 RGB channels are placed in the first pixel 1302, and the second spherical harmonic basis sub-coefficients of the 3 RGB channels are placed in the second pixel 1304;
• for the first pixel, the 16-byte storage space is divided into three parts: the first part, 6 bytes, is used to allocate the 1st-order spherical harmonic basis coefficients of the 3 RGB channels; the second part, 9 bytes, is used to allocate the 2nd-order spherical harmonic basis coefficients of the 3 RGB channels; the third part, 1 byte, is a reserved byte that can be used to save shadow data to achieve a relatively rough probe-point-based shadow effect;
• for the second pixel, the 16-byte storage space is divided into two parts: the first part, 15 bytes, is used to allocate the 3rd-order spherical harmonic basis coefficients of the 3 RGB channels; the second part, 1 byte, is a reserved byte that can be used to save shadow data to achieve a relatively rough probe-point-based shadow effect.
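The two-pixel layout described above can be sketched as a simple encoder, assuming the coefficients have already been quantised to integers (the quantisation scheme itself is not specified in the text, so the function and parameter names are illustrative):

```python
def encode_sh_pixels(order1_rgb, order2_rgb, order3_rgb, shadow=0):
    """Pack SH coefficients into two 16-byte pixels: pixel 1 holds the three
    1st-order coefficients at 2 bytes each (6 bytes), the nine 2nd-order
    coefficients at 1 byte each (9 bytes) and 1 reserved byte; pixel 2 holds
    the fifteen 3rd-order coefficients at 1 byte each plus 1 reserved byte."""
    assert len(order1_rgb) == 3 and len(order2_rgb) == 9 and len(order3_rgb) == 15
    pixel1 = bytearray()
    for c in order1_rgb:                 # 3 coefficients x 2 bytes = 6 bytes
        pixel1 += int(c).to_bytes(2, "little")
    pixel1 += bytes(order2_rgb)          # 9 coefficients x 1 byte = 9 bytes
    pixel1.append(shadow)                # 1 reserved byte (e.g. shadow data)
    pixel2 = bytearray(order3_rgb)       # 15 coefficients x 1 byte = 15 bytes
    pixel2.append(shadow)                # 1 reserved byte
    return bytes(pixel1), bytes(pixel2)
```

Keeping the low-order coefficients in their own 16-byte pixel means a distant object can be shaded by reading pixel 1 alone, which is the LOD benefit noted above.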
• the third spherical harmonic basis coefficient and the fourth spherical harmonic basis coefficient of each probe point in the target probe point set are obtained; the third spherical harmonic basis coefficient is saved in the first data format as a map to be processed, and the fourth spherical harmonic basis coefficient is saved in the second data format as a map to be processed, where the fourth spherical harmonic basis coefficient is a coefficient calculated from the third spherical harmonic basis coefficient, and the number of bytes occupied by the first data format is larger than the number of bytes occupied by the second data format.
• for example, a 1st-order spherical harmonic basis coefficient (a third spherical harmonic basis sub-coefficient) occupies 2 bytes, so the three 1st-order spherical harmonic basis coefficients of the RGB channels occupy 6 bytes in total; a 2nd-order spherical harmonic basis coefficient (a fourth spherical harmonic basis sub-coefficient) occupies 1 byte, so the nine 2nd-order spherical harmonic basis coefficients of the RGB channels occupy 9 bytes in total; in addition, a 3rd-order spherical harmonic basis coefficient (also a fourth spherical harmonic basis sub-coefficient) occupies 1 byte, so the fifteen 3rd-order spherical harmonic basis coefficients of the RGB channels occupy 15 bytes in total.
• obtaining the obstruction degree between the virtual model to be rendered and each probe point in the candidate probe point set includes:
• S4: determining the obstruction degree between the current triangle and each probe point in the candidate probe point set according to the number of detection rays in a set of detection rays that reach each probe point in the candidate probe point set and the total number of detection rays in the set.
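The ray-counting step above reduces to a simple ratio. Whether the method uses the blocked fraction or its complement is an assumption; the sketch below takes obstruction as the fraction of detection rays that do NOT reach the probe point:

```python
def obstruction_degree(hits, total_rays):
    """Obstruction degree between a triangle and a probe point, from the number
    of detection rays that reach the probe (`hits`) out of `total_rays` cast."""
    assert total_rays > 0 and 0 <= hits <= total_rays
    return 1.0 - hits / total_rays
```

A probe reached by every ray has obstruction 0, and a fully occluded probe has obstruction 1; in practice the hit count would come from ray casts against the scene geometry.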
• obtaining the obstruction degree between the virtual model to be rendered and each probe point in the candidate probe point set includes: obtaining the obstruction degree between each triangle into which the virtual model to be rendered is divided and each probe point in the candidate probe point set;
• screening the probe points in the candidate probe point set to obtain the target probe point set includes: screening the probe points according to the obstruction degree between each triangle into which the virtual model to be rendered is divided and each probe point in the candidate probe point set, to obtain the target probe point set in which an index relationship exists between the probe points and the virtual model to be rendered;
• obtaining the spherical harmonic basis coefficient of each probe point in the target probe point set and performing lighting rendering on the virtual model to be rendered based on those coefficients includes: obtaining the spherical harmonic basis coefficient of each probe point in the target probe point set, and performing lighting rendering on the virtual model to be rendered based on the index relationship and the spherical harmonic basis coefficients of the probe points in the target probe point set.
  • obtain a set of candidate probe points for the virtual model to be rendered including:
• the method of filtering out invalid probe points may, but is not limited to, include filtering out probe points located in invalid areas of the virtual model to be rendered (such as the interior of the virtual model to be rendered, backlit areas, etc.) and filtering out probe points whose correlation with the virtual model to be rendered is lower than an effective threshold.
• as shown in Figure 15, the side view of the virtual model 1502 to be rendered contains candidate probe points such as probe point e and probe point d, where d is a probe point located outside the virtual model 1502 to be rendered and e is a probe point located inside it; because the lighting information of a probe point inside the virtual model 1502 to be rendered is invalid, probe point e is also deleted from the set of candidate probe points.
  • the original probe point set of the virtual model to be rendered is obtained, where the virtual model to be rendered is divided into a set of triangles, and the original probe point set includes one or more probe points corresponding to each triangle in the set of triangles. points; filter out invalid probe points in the original probe point set to obtain a candidate probe point set, thereby achieving the effect of improving the execution efficiency of lighting rendering.
  • the lighting rendering method of the above virtual model is applied to the lighting rendering scene of 3D games to improve the game image quality and realism.
  • S1608 Determine whether the candidate probe points meet the filtering conditions. If so, execute S1610; if not, execute S1606;
  • S1612 determine whether the number of selected probe points meets the conditions, or all triangles have been associated with at least two probe points. If so, execute S1618; if not, execute S1614;
• probe points are automatically calculated in the model space, where the color of each probe point in the model space can be, but is not limited to being, different; the vertex color is the same as that of the associated probe point with the largest weight, and vertex line segments can be, but are not limited to being, used to represent the normal direction. Based on the calculated probe points, the probe points and weights associated with each vertex on the model are calculated, and the calculated probe point indices and weights are stored in the model vertex data. The scene is then passed to the baker for baking, where the scene is composed of several models and the same model may have multiple instances: the probe points are converted from the model space to the world space, the basic functions provided by the baker are used to calculate the light received by the probe points, and the spherical harmonic coefficients of the illumination are finally obtained.
• the spherical harmonic coefficients of all baked virtual models are saved as maps (the spherical harmonic coefficients of all virtual models may be saved as one map, the spherical harmonic coefficients of one virtual model may be saved as one map, or the spherical harmonic coefficients of multiple virtual models may be saved as one or more maps).
  • the spherical harmonic coefficients of virtual model 1702, virtual model 1704 and virtual model 1706 are saved as maps 1708 to be processed.
  • a certain compression algorithm is used to assemble the coefficients into several textures; at runtime, the spherical harmonic texture is sampled based on the index and weight data saved at the vertices, and the dot product of the retrieved coefficients with the basis functions evaluated at the normal yields the lighting information.
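The runtime step just described can be sketched as follows. This is a minimal illustration, not the patent's actual shader code: it assumes a first-order (4-term) real spherical harmonic basis and that each vertex stores the coefficients and weights of its associated probe points; all function and parameter names are invented for the example.

```python
def sh_basis_l1(normal):
    """First-order (4-term) real spherical harmonic basis at a unit normal.
    Constants are the standard real SH normalization factors."""
    x, y, z = normal
    return [
        0.282095,       # Y_0^0
        0.488603 * y,   # Y_1^-1
        0.488603 * z,   # Y_1^0
        0.488603 * x,   # Y_1^1
    ]

def shade_vertex(probe_coeffs, probe_weights, normal):
    """Blend the SH coefficients of the probe points associated with a
    vertex by their stored weights, then dot the blended coefficients
    with the basis evaluated at the vertex normal."""
    blended = [0.0] * 4
    total = sum(probe_weights)
    for coeffs, w in zip(probe_coeffs, probe_weights):
        for i in range(4):
            blended[i] += coeffs[i] * (w / total)
    basis = sh_basis_l1(normal)
    return sum(c * b for c, b in zip(blended, basis))
```

In a real renderer the blend runs in the vertex or pixel shader and the coefficients come from the compressed textures mentioned above; here plain lists stand in for both.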
  • a virtual model lighting rendering device for implementing the above virtual model lighting rendering method is also provided.
  • the device includes:
  • the first acquisition unit 1802 is used to obtain a candidate probe point set of the virtual model to be rendered, where the probe points in the candidate probe point set are used for lighting rendering of the virtual model to be rendered;
  • the second acquisition unit 1804 is used to obtain the obstruction degree between the virtual model to be rendered and each probe point in the candidate probe point set;
  • the screening unit 1806 is used to screen the probe points in the candidate probe point set according to the obstruction degree between the virtual model to be rendered and each probe point in the candidate probe point set, to obtain the target probe point set;
  • the third acquisition unit 1808 is used to obtain the spherical harmonic basis coefficient of each probe point in the target probe point set, and perform illumination rendering of the virtual model to be rendered according to the spherical harmonic basis coefficient of each probe point in the target probe point set.
  • the screening unit 1806 includes:
  • the first determination module is used to take each triangle in a group of triangles in turn as the current triangle and determine the degree of association between the current triangle and each probe point in the candidate probe point set based on the blocking degree between the current triangle and each probe point in the candidate probe point set, where the virtual model to be rendered is divided into the group of triangles;
  • the first screening module is used to filter the probe points in the candidate probe point set according to the correlation between each current triangle and each probe point in the candidate probe point set, and obtain the target probe point set.
  • the first determination module includes:
  • the first execution sub-module is used to perform the following steps for each triangle in a set of triangles, where each triangle is the current triangle when performing the following steps:
  • the first determination sub-module is used to determine the projected area of the current triangle onto each detection area in a group of detection areas, where the group of detection areas includes a detection area corresponding to each probe point in the candidate probe point set;
  • the second determination submodule is used to determine the degree of association between the current triangle and each probe point in the candidate probe point set based on the projected area of the current triangle onto each detection area in the group of detection areas and the obstruction degree between the current triangle and each probe point in the candidate probe point set.
  • the first screening module includes:
  • the second execution submodule is used to repeatedly perform the following steps until each current triangle in the group of triangles is associated with a predetermined number of probe points, where the target probe point set includes the probe points associated with each current triangle in the group of triangles and the current probe point set is initialized as the candidate probe point set:
  • the first acquisition sub-module is used to take each probe point in the current probe point set in turn as the current probe point to be processed and obtain the coverage between the current probe point to be processed and the group of triangles, where that coverage is the sum of the coverages between the current probe point to be processed and each current triangle in the group of triangles, and the coverage between the current probe point to be processed and a current triangle is determined based on the degree of association between that triangle and the current probe point to be processed;
  • the second acquisition submodule is used to select the probe point with the largest coverage in the current probe point set as a probe point in the target probe point set and delete that probe point from the current probe point set, where the probe points in the target probe point set are associated with the current triangles in the group of triangles according to the coverage corresponding to each probe point, and the coverage corresponding to each probe point is the coverage between that probe point and each triangle in the group of triangles.
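The greedy coverage-based selection loop described above can be sketched as follows. This is a simplified illustration rather than the claimed implementation: it assumes the relevance values are precomputed from the blocking degrees, and it omits the angle-dependent second-coverage terms that the later sub-units add for triangles that already have associated probe points.

```python
def select_probes(relevance, probes_per_triangle=1):
    """Greedy probe selection sketch.

    relevance[t][p] is the precomputed degree of association between
    triangle t and candidate probe point p. Each round picks the probe
    whose summed relevance over not-yet-covered triangles is largest,
    removes it from the candidate pool, and associates it with the
    triangles it covers, until every triangle has enough probes."""
    n_tri = len(relevance)
    n_probe = len(relevance[0])
    remaining = set(range(n_probe))          # current probe point set
    assigned = {t: [] for t in range(n_tri)}  # triangle -> chosen probes
    selected = []                            # target probe point set
    while any(len(assigned[t]) < probes_per_triangle for t in assigned) and remaining:
        def coverage(p):
            # sum of relevance over triangles still needing probes
            return sum(relevance[t][p]
                       for t in range(n_tri)
                       if len(assigned[t]) < probes_per_triangle)
        best = max(remaining, key=coverage)
        selected.append(best)
        remaining.discard(best)
        for t in range(n_tri):
            if len(assigned[t]) < probes_per_triangle and relevance[t][best] > 0:
                assigned[t].append(best)
    return selected, assigned
```

The loop structure (pick maximum coverage, delete from the candidate pool, repeat) mirrors the second execution submodule; how coverage is refined for already-covered triangles is detailed in the sub-units that follow.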
  • the first acquisition sub-module includes:
  • the first acquisition subunit is used, if the group of triangles includes first triangles not yet associated with any probe point, to obtain the first coverage between the current probe point to be processed and each first triangle, where the first coverage between the current probe point to be processed and a first triangle is obtained based on the degree of association between that first triangle and the current probe point to be processed, and the coverage between the current probe point to be processed and the group of triangles includes the first coverages between the current probe point to be processed and each first triangle;
  • the second acquisition subunit is used, if the group of triangles includes second triangles that already have associated probe points, to obtain the second coverage between the current probe point to be processed and each second triangle, where the second coverage between the current probe point to be processed and a second triangle is determined based on the degree of association between that second triangle and the current probe point to be processed, the degree of association between that second triangle and another probe point, and the current included angle; the coverage between the current probe point to be processed and the group of triangles includes the second coverages between the current probe point to be processed and each second triangle; the current included angle is the angle formed by the current probe point to be processed, the centroid of the second triangle and the other probe point; and the probe points associated with the second triangle include the other probe point;
  • the integration subunit is used, if the group of triangles includes both first triangles and second triangles, to integrate the first coverages between the current probe point to be processed and each first triangle and the second coverages between the current probe point to be processed and each second triangle, to obtain the coverage between the current probe point to be processed and the group of triangles.
  • the integration subunit includes:
  • the first sub-acquisition unit is used to obtain the first coefficient corresponding to the first triangle and the second coefficient corresponding to the second triangle, wherein the first coefficient is greater than the second coefficient;
  • the second sub-acquisition unit is used to obtain the first product value of the first coverage and the first coefficient, and the second product value of the second coverage and the second coefficient;
  • the sub-summation unit is used to sum the first product value and the second product value to obtain the coverage of the current probe point to be processed and a group of triangles.
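The weighted combination performed by the sub-acquisition and sub-summation units can be illustrated as follows. The coefficient values k1 and k2 are placeholders: the source only requires the first coefficient (for not-yet-associated triangles) to exceed the second.

```python
def combined_coverage(first_covs, second_covs, k1=1.0, k2=0.5):
    """Combine coverage against first triangles (no associated probes yet,
    weighted by k1) and second triangles (already associated, weighted by
    k2), with k1 > k2 so uncovered triangles dominate the selection."""
    assert k1 > k2, "the first coefficient must exceed the second"
    first_product = k1 * sum(first_covs)    # first product value
    second_product = k2 * sum(second_covs)  # second product value
    return first_product + second_product
```

Weighting uncovered triangles more heavily biases the greedy loop toward probe points that help triangles still lacking lighting data, which is the apparent intent of the k1 > k2 constraint.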
  • the second acquisition subunit includes:
  • the first sub-determination unit is used, if the second triangles include a first associated triangle whose number of associated probe points is equal to 1, to determine the third coverage between the current probe point to be processed and each first associated triangle as the second coverage between the current probe point to be processed and each second triangle, where the third coverage between the current probe point to be processed and a first associated triangle is obtained based on the degree of association between that first associated triangle and the current probe point to be processed, the degree of association between that first associated triangle and its associated probe point, and the first included angle, the first included angle being the angle formed by the current probe point to be processed, the centroid of the first associated triangle and the probe point associated with the first associated triangle;
  • the second sub-determination unit is used, if the second triangles include a second associated triangle whose number of associated probe points is greater than 1, to obtain the fourth coverage between the second associated triangle and its associated probe points and the fifth coverage between the current probe point to be processed and the second associated triangle, and, when the fifth coverage is greater than the fourth coverage, to determine the fifth coverage between the current probe point to be processed and the second associated triangle as the second coverage between the current probe point to be processed and each second triangle, where the fifth coverage between the current probe point to be processed and a second associated triangle is obtained based on that second associated triangle, the current probe point to be processed and the second included angle, the second included angle being the angle formed by the current probe point to be processed, the centroid of the second associated triangle and the associated probe point of the second associated triangle.
  • the first screening module includes:
  • the third execution sub-module is used to perform the following steps for each triangle in a set of triangles, wherein each triangle is the current triangle when performing the following steps:
  • the third determination sub-module is used to determine the first current probe point that has the greatest correlation with the current triangle in the set of candidate probe points;
  • the third acquisition submodule is used to obtain the angle formed by each probe point in the candidate probe point set except the first current probe point, the centroid of the current triangle and the first current probe point, and obtain a set of included angles;
  • the fourth determination sub-module is used to determine the second current probe point from the probe points other than the first current probe point, based on the set of included angles and the degree of association between the current triangle and each probe point other than the first current probe point;
  • the fifth determination sub-module is used to determine the first current probe point and the second current probe point as probe points in the target probe point set.
  • the third acquisition unit 1808 includes:
  • the saving module is used to save the spherical harmonic basis coefficients of each probe point in the target probe point set as a map to be processed;
  • the second determination module is used to determine the spherical harmonic basis coefficient of the probe point corresponding to each vertex in the virtual model to be rendered from the map to be processed when the virtual model to be rendered needs to be rendered;
  • the third determination module is used to determine the spherical harmonic basis coefficient of each vertex in the virtual model to be rendered based on the spherical harmonic basis coefficient of the probe point corresponding to each vertex in the virtual model to be rendered;
  • the rendering module is used to perform illumination rendering on the virtual model to be rendered based on the spherical harmonic basis coefficients of each vertex in the virtual model to be rendered.
  • the rendering module includes:
  • the fourth acquisition sub-module is used to obtain the first spherical harmonic basis coefficients of each vertex in the virtual model to be rendered when the virtual model to be rendered is a distant-view virtual model, and to perform lighting rendering on the virtual model to be rendered based on the first spherical harmonic basis coefficients;
  • the fifth acquisition submodule is used to obtain the second spherical harmonic basis coefficients of each vertex in the virtual model to be rendered when the virtual model to be rendered is a close-up virtual model, and to perform lighting rendering on the virtual model to be rendered based on the second spherical harmonic basis coefficients, where the second spherical harmonic basis coefficients are calculated based on the first spherical harmonic basis coefficients.
  • the saving module includes:
  • the saving submodule is used to save the third spherical harmonic basis coefficient in the first data format in the case where the third spherical harmonic basis coefficient and the fourth spherical harmonic basis coefficient of each probe point in the target probe point set are obtained.
  • the second acquisition unit 1804 includes:
  • the fourth execution sub-module is used to perform the following steps for each triangle in a set of triangles, where, when performing the following steps, each triangle is the current triangle, and the virtual model to be rendered is divided into a set of triangles:
  • the first detection submodule is used to emit a set of detection rays from the current triangle;
  • the second detection sub-module is used to determine the number of detection rays in a set of detection rays that contact each probe point in the candidate probe point set;
  • the blocking submodule is used to determine the blocking degree between the current triangle and each probe point in the candidate probe point set based on the number of detection rays in the set of detection rays that contact each probe point in the candidate probe point set and the total number of detection rays in the set of detection rays.
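The ray-count scheme above can be sketched as follows. The `reaches_probe` callback stands in for an engine-provided visibility query (for example, a BVH ray cast) and is an assumption of this sketch, as is deriving the blocking degree as the blocked fraction of rays.

```python
def blocking_degree(ray_origins, probe, reaches_probe):
    """Blocking degree between a triangle and a probe point: emit rays
    from sample points on the triangle toward the probe, count how many
    contact it, and take the blocked fraction of the total ray count.

    ray_origins   -- sample points on the current triangle
    probe         -- probe point position
    reaches_probe -- assumed engine-side visibility test: returns True
                     when nothing intervenes between origin and probe
    """
    contacted = sum(1 for o in ray_origins if reaches_probe(o, probe))
    return 1.0 - contacted / len(ray_origins)
```

A blocking degree of 0 then means the probe is fully visible from the triangle, matching the "no obstruction means 0" convention used elsewhere in this document.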
  • the second acquisition unit 1804 includes: a first acquisition module, used to acquire the obstruction degree of each triangle into which the virtual model to be rendered is divided and each probe point in the candidate probe point set;
  • the screening unit 1806 includes: a second screening module, used to screen the probe points in the candidate probe point set according to the obstruction degrees between each triangle into which the virtual model to be rendered is divided and each probe point in the candidate probe point set, to obtain a target probe point set, in which there is an index relationship between each probe point in the target probe point set and each triangle into which the virtual model to be rendered is divided;
  • the third acquisition unit 1808 includes: a second acquisition module, used to acquire the spherical harmonic basis coefficient of each probe point in the target probe point set, and based on the index relationship and the spherical harmonic basis coefficient of each probe point in the target probe point set, Perform lighting rendering on the virtual model to be rendered.
  • the first acquisition unit 1802 includes:
  • the third acquisition module is used to obtain the original probe point set of the virtual model to be rendered, where the virtual model to be rendered is divided into a set of triangles, and the original probe point set includes one or more probe points corresponding to each triangle in the set of triangles;
  • the fourth acquisition module is used to filter out invalid probe points from the original probe point set and obtain a candidate probe point set.
  • an electronic device for implementing the above lighting rendering method of a virtual model is also provided.
  • the electronic device includes a memory 1902 and a processor 1904.
  • a computer program is stored in the memory 1902, and the processor 1904 is configured to execute the steps in any of the above method embodiments through the computer program.
  • the above-mentioned electronic device may be located in at least one network device among multiple network devices of the computer network.
  • the above-mentioned processor can be configured to perform the following steps through a computer program:
  • S1 Obtain a set of candidate probe points for the virtual model to be rendered, where the probe points in the set of candidate probe points are used for lighting rendering of the virtual model to be rendered;
  • S2 Obtain the obstruction degree between the virtual model to be rendered and each probe point in the candidate probe point set;
  • S3 Screen the probe points in the candidate probe point set according to the obstruction degree between the virtual model to be rendered and each probe point in the candidate probe point set, to obtain the target probe point set;
  • S4 Obtain the spherical harmonic basis coefficient of each probe point in the target probe point set, and perform lighting rendering of the virtual model to be rendered based on the spherical harmonic basis coefficient of each probe point in the target probe point set.
  • the structure shown in Figure 19 is only illustrative; the electronic device may also be a terminal device such as a smart phone (e.g. an Android phone or an iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID) or a PAD.
  • Figure 19 does not limit the structure of the above electronic device.
  • the electronic device may also include more or fewer components (such as network interfaces, etc.) than shown in FIG. 19 , or have a different configuration than shown in FIG. 19 .
  • the memory 1902 can be used to store software programs and modules, such as the program instructions/modules corresponding to the lighting rendering method and device of the virtual model in the embodiment of the present application.
  • the processor 1904 runs the software programs and modules stored in the memory 1902, thereby executing various functional applications and data processing, that is, implementing the above lighting rendering method for a virtual model.
  • Memory 1902 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory 1902 may further include memory located remotely relative to the processor 1904, and these remote memories may be connected to the terminal through a network.
  • the above-mentioned networks include but are not limited to the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
  • the memory 1902 may be specifically, but not limited to, used to store information such as a set of candidate probe points, an obstruction degree, and a set of target probe points.
  • the above-mentioned memory 1902 may include, but is not limited to, modules of the above-mentioned lighting rendering apparatus for a virtual model.
  • the above-mentioned transmission device 1906 is used to receive or send data via a network.
  • Specific examples of the above-mentioned network may include wired networks and wireless networks.
  • the transmission device 1906 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices and routers through network cables to communicate with the Internet or a local area network.
  • the transmission device 1906 is a radio frequency (Radio Frequency, RF) module, which is used to communicate with the Internet wirelessly.
  • the above-mentioned electronic device also includes: a display 1908 for displaying information such as the above-mentioned candidate probe point set, obstruction degree, and target probe point set; and a connection bus 1910 for connecting various module components in the above-mentioned electronic device.
  • the above-mentioned terminal device or server may be a node in a distributed system, wherein the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by multiple nodes connected through network communication.
  • nodes can form a peer-to-peer (Peer To Peer, referred to as P2P) network, and any form of computing equipment, such as servers, terminals and other electronic devices, can become a node in the blockchain system by joining the peer-to-peer network.
  • a computer program product includes a computer program containing program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network via the communication component, and/or installed from removable media; when the computer program is executed, the various functions provided by the embodiments of the present application are executed.
  • the computer system includes a central processing unit (Central Processing Unit, CPU), which can perform various appropriate actions and processes according to a program stored in read-only memory (Read-Only Memory, ROM) or a program loaded from the storage part into random access memory (Random Access Memory, RAM); the random access memory also stores the various programs and data required for system operation.
  • the central processing unit, the read-only memory and the random access memory are connected to each other through a bus.
  • an input/output (I/O) interface is also connected to the bus.
  • the following components are connected to the input/output interface: an input part including a keyboard, a mouse, etc.; an output part including a cathode ray tube (CRT), a liquid crystal display (LCD), etc., and a speaker; a storage part including a hard disk, etc.; and a communication part including a network interface card such as a LAN card, a modem, etc.
  • the communication section performs communication processing via a network such as the Internet.
  • a drive is also connected to the input/output interface as required.
  • Removable media such as magnetic disks, optical disks, magneto-optical disks, semiconductor memories, etc., are installed on the drive as needed, so that the computer program read therefrom is installed into the storage section as needed.
  • the processes described in the respective method flow charts may be implemented as computer software programs.
  • embodiments of the present application include a computer program product including a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart; in such embodiments, the computer program may be downloaded and installed from the network via the communication component, and/or installed from removable media; when the computer program is executed by the central processing unit, the various functions defined in the system of the present application are executed.
  • a computer-readable storage medium is provided.
  • a processor of a computer device reads the computer program from the computer-readable storage medium, and the processor executes the computer program, causing the electronic device to perform the methods provided in the various optional implementations described above.
  • all or part of the steps of the methods in the above embodiments can be completed by instructing hardware related to the terminal device through a program; the program can be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, etc.
  • if the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in the above computer-readable storage medium.
  • based on this understanding, the part of the technical solution of this application that in essence contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to cause one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the methods described in the various embodiments of this application.
  • the disclosed client can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units or modules, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or they may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the above integrated units can be implemented in the form of hardware or software functional units.


Abstract

This application discloses a lighting rendering method and apparatus for a virtual model, a storage medium, and an electronic device. The method includes: obtaining a candidate probe point set of a virtual model to be rendered, where the probe points in the candidate probe point set are used for lighting rendering of the virtual model to be rendered; obtaining the obstruction degree between the virtual model to be rendered and each probe point in the candidate probe point set; screening the probe points in the candidate probe point set according to the obstruction degree between the virtual model to be rendered and each probe point in the candidate probe point set, to obtain a target probe point set; and obtaining the spherical harmonic basis coefficient of each probe point in the target probe point set, and performing lighting rendering on the virtual model to be rendered according to the spherical harmonic basis coefficients of the probe points in the target probe point set. This application solves the technical problem of low lighting rendering efficiency of virtual models.

Description

Lighting rendering method and apparatus for a virtual model, storage medium, and electronic device
This application claims priority to the Chinese patent application No. 202210344256.9, filed with the China National Intellectual Property Administration on April 2, 2022, and entitled "Lighting rendering method and apparatus for virtual model, storage medium, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computers, and specifically to a lighting rendering method and apparatus, a storage medium, and an electronic device for a virtual model.
Background
In lighting rendering scenarios for virtual models, a lightmap is typically used to perform per-pixel lighting rendering of the virtual model, but this approach usually occupies a large amount of memory and storage space and requires substantial computation, which leads to low lighting rendering efficiency. Therefore, there is a problem of low lighting rendering efficiency for virtual models.
Summary
Embodiments of this application provide a lighting rendering method and apparatus, a storage medium, and an electronic device for a virtual model, to at least solve the technical problem of low lighting rendering efficiency of virtual models.
According to one aspect of the embodiments of this application, a lighting rendering method for a virtual model is provided. The method is executed by an electronic device and includes: obtaining a candidate probe point set of a virtual model to be rendered, where the probe points in the candidate probe point set are used for lighting rendering of the virtual model to be rendered; obtaining the obstruction degree between the virtual model to be rendered and each probe point in the candidate probe point set; screening the probe points in the candidate probe point set according to the obstruction degree between the virtual model to be rendered and each probe point in the candidate probe point set, to obtain a target probe point set; and obtaining the spherical harmonic basis coefficient of each probe point in the target probe point set, and performing lighting rendering on the virtual model to be rendered according to the spherical harmonic basis coefficients of the probe points in the target probe point set.
According to another aspect of the embodiments of this application, a lighting rendering apparatus for a virtual model is also provided. The apparatus is deployed on an electronic device and includes: a first acquisition unit, configured to obtain a candidate probe point set of a virtual model to be rendered, where the probe points in the candidate probe point set are used for lighting rendering of the virtual model to be rendered; a second acquisition unit, configured to obtain the obstruction degree between the virtual model to be rendered and each probe point in the candidate probe point set; a screening unit, configured to screen the probe points in the candidate probe point set according to the obstruction degree between the virtual model to be rendered and each probe point in the candidate probe point set, to obtain a target probe point set; and a third acquisition unit, configured to obtain the spherical harmonic basis coefficient of each probe point in the target probe point set, and perform lighting rendering on the virtual model to be rendered according to the spherical harmonic basis coefficients of the probe points in the target probe point set.
According to yet another aspect of the embodiments of this application, a computer-readable storage medium is provided. The computer-readable storage medium includes a stored computer program, where the computer program, when run by an electronic device, performs the above lighting rendering method for a virtual model.
According to yet another aspect of the embodiments of this application, a computer program product is provided. The computer program product includes a computer program stored in a computer-readable storage medium. A processor of an electronic device reads the computer program from the computer-readable storage medium and executes it, causing the electronic device to perform the above lighting rendering method for a virtual model.
According to yet another aspect of the embodiments of this application, an electronic device is also provided, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor executes the above lighting rendering method for a virtual model through the computer program.
In the embodiments of this application, a candidate probe point set of a virtual model to be rendered is obtained, where the probe points in the candidate probe point set are used for lighting rendering of the virtual model to be rendered; the obstruction degree between the virtual model to be rendered and each probe point in the candidate probe point set is obtained; the probe points in the candidate probe point set are screened according to those obstruction degrees, to obtain a target probe point set; the spherical harmonic basis coefficient of each probe point in the target probe point set is obtained, and lighting rendering is performed on the virtual model to be rendered according to the spherical harmonic basis coefficients of the probe points in the target probe point set. By screening the probe points in the candidate probe point set using the obstruction degrees between the virtual model to be rendered and the probe points, a smaller number of important probe points is obtained, which reduces the computation required when performing lighting rendering through the spherical harmonic basis coefficients of the probe points, thereby achieving the technical effect of improving the lighting rendering efficiency of the virtual model and solving the technical problem of low lighting rendering efficiency of virtual models.
Brief Description of the Drawings
The drawings described here are used to provide a further understanding of this application and form a part of this application; the illustrative embodiments of this application and their descriptions are used to explain this application and do not constitute an improper limitation of this application. In the drawings:
Figure 1 is a schematic diagram of the application environment of an optional lighting rendering method for a virtual model according to an embodiment of this application;
Figure 2 is a schematic flowchart of an optional lighting rendering method for a virtual model according to an embodiment of this application;
Figure 3 is a schematic diagram of an optional lighting rendering method for a virtual model according to an embodiment of this application;
Figure 4 is a schematic diagram of another optional lighting rendering method for a virtual model according to an embodiment of this application;
Figure 5 is a schematic diagram of another optional lighting rendering method for a virtual model according to an embodiment of this application;
Figure 6 is a schematic diagram of another optional lighting rendering method for a virtual model according to an embodiment of this application;
Figure 7 is a schematic diagram of another optional lighting rendering method for a virtual model according to an embodiment of this application;
Figure 8 is a schematic diagram of another optional lighting rendering method for a virtual model according to an embodiment of this application;
Figure 9 is a schematic diagram of another optional lighting rendering method for a virtual model according to an embodiment of this application;
Figure 10 is a schematic diagram of another optional lighting rendering method for a virtual model according to an embodiment of this application;
Figure 11 is a schematic diagram of another optional lighting rendering method for a virtual model according to an embodiment of this application;
Figure 12 is a schematic diagram of another optional lighting rendering method for a virtual model according to an embodiment of this application;
Figure 13 is a schematic diagram of another optional lighting rendering method for a virtual model according to an embodiment of this application;
Figure 14 is a schematic diagram of another optional lighting rendering method for a virtual model according to an embodiment of this application;
Figure 15 is a schematic diagram of another optional lighting rendering method for a virtual model according to an embodiment of this application;
Figure 16 is a schematic diagram of another optional lighting rendering method for a virtual model according to an embodiment of this application;
Figure 17 is a schematic diagram of another optional lighting rendering method for a virtual model according to an embodiment of this application;
Figure 18 is a schematic diagram of an optional lighting rendering apparatus for a virtual model according to an embodiment of this application;
Figure 19 is a schematic structural diagram of an optional electronic device according to an embodiment of this application.
Detailed Description
In order to enable those skilled in the art to better understand the solutions of this application, the technical solutions in the embodiments of this application will be described clearly and completely below with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are only some, rather than all, of the embodiments of this application. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the scope of protection of this application.
It should be noted that the terms "first", "second", etc. in the specification, claims and above drawings of this application are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of this application described here can be implemented in an order other than those illustrated or described here. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to those steps or units clearly listed, but may include other steps or units not clearly listed or inherent to the process, method, product or device.
According to one aspect of the embodiments of this application, a lighting rendering method for a virtual model is provided. In a possible implementation, as an optional embodiment, the above lighting rendering method for a virtual model may be applied, but is not limited to, to the environment shown in Figure 1. The method provided by the embodiments of this application may be executed by an electronic device, where the electronic device may include, but is not limited to, a user device 102, and the user device 102 may include, but is not limited to, a display 108, a processor 106 and a memory 104.
The specific process may be as follows:
S102: the user device 102 obtains a lighting rendering request for the virtual model to be rendered;
S104-S112: the user device 102 responds to the lighting rendering request by obtaining the candidate probe point set of the virtual model to be rendered, and further obtains the obstruction degree between the virtual model to be rendered and each probe point in the candidate probe point set; the probe points in the candidate probe point set are screened by the processor 106 to obtain the target probe point set; the spherical harmonic basis coefficient of each probe point in the target probe point set is then obtained, and lighting rendering is performed on the virtual model to be rendered according to the spherical harmonic basis coefficients of the probe points in the target probe point set; the lighting rendering result is displayed on the display 108 and stored in the memory 104.
In addition to the example shown in Figure 1, the above steps may also be completed with the assistance of a server, that is, the server performs steps such as obtaining the candidate probe point set of the virtual model to be rendered, obtaining the obstruction degrees, screening the probe points and performing lighting rendering, thereby reducing the processing load on the user device. The user device 102 includes, but is not limited to, a handheld device (such as a mobile phone), a notebook computer, a desktop computer, an in-vehicle device, etc.; this application does not limit the specific implementation of the user device 102.
In a possible implementation, as an optional embodiment, as shown in Figure 2, the lighting rendering method for a virtual model includes:
S202: obtain the candidate probe point set of the virtual model to be rendered, where the probe points in the candidate probe point set are used for lighting rendering of the virtual model to be rendered;
S204: obtain the obstruction degree between the virtual model to be rendered and each probe point in the candidate probe point set;
S206: screen the probe points in the candidate probe point set according to the obstruction degree between the virtual model to be rendered and each probe point in the candidate probe point set, to obtain the target probe point set;
S208: obtain the spherical harmonic basis coefficient of each probe point in the target probe point set, and perform lighting rendering on the virtual model to be rendered according to the spherical harmonic basis coefficients of the probe points in the target probe point set.
In a possible implementation, in this embodiment, the above lighting rendering method for a virtual model may be applied, but is not limited to, to model rendering scenarios in three-dimensional (3 Dimensions, 3D) games, finding a small number of important points in model space so as to save memory. Specifically, all candidate probe points are first generated along the model surface, and then several important ones are selected from them until the requirements are met, so that probe points of higher importance are used for lighting rendering, balancing the efficiency of lighting rendering with improved game image quality and realism.
For example, as shown in Figure 3, the candidate probe point set of the virtual model 302 to be rendered is screened using the above lighting rendering method to obtain a target probe point set. As shown in (a) of Figure 3, multiple probe points are obtained (different shading may, but need not, be used to represent probe points with different weights). Furthermore, as shown in (b) of Figure 3, each probe point in the target probe point set may, but need not, be associated with the vertices of the virtual model 302 to be rendered (the same shading may, but need not, be used to represent the association between a vertex and a probe point, and when a vertex is associated with multiple probe points, the shading of the vertex is the same as that of the probe point with the greatest weight among them). The spherical harmonic basis coefficient of each probe point in the target probe point set is then obtained, and lighting rendering is performed on the virtual model to be rendered according to the spherical harmonic basis coefficients of the probe points in the target probe point set and the associations between those probe points and the vertices of the virtual model 302 to be rendered.
In this embodiment, a probe point may be understood, but not limited to, as a three-dimensional point in space used to collect lighting information; this three-dimensional point is also used for lighting rendering of the virtual model to be rendered.
In this embodiment, the virtual model to be rendered may, but need not, be divided into multiple triangles for processing, and each triangle may, but need not, correspond to multiple vertices, where one vertex may be associated with multiple probe points and one probe point may also be associated with multiple vertices. The process of obtaining the candidate probe point set of the virtual model to be rendered may also be understood as obtaining the candidate probe points of each triangle into which the virtual model to be rendered is divided.
For example, as shown in Figure 4, triangle 402 is one of the multiple triangles into which the virtual model to be rendered is divided. Taking triangle 402 as an example, as shown in (a) of Figure 4, O is the centroid of triangle 402, and a, b and c are the midpoints of segments AO, BO and CO respectively; further, as shown in (b) of Figure 4, a, b and c are offset by a preset unit along the normal direction of triangle 402, yielding three candidate probe points a', b' and c'. Similarly, with reference to the way the candidate probe points of triangle 402 are obtained, the candidate probe points of each triangle into which the virtual model to be rendered is divided are obtained.
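The candidate-probe construction just described (midpoints of AO, BO and CO pushed out along the triangle normal) can be sketched as follows. The `offset` value plays the role of the "preset unit" and is a tunable placeholder here, not a value from the source.

```python
def candidate_probes(A, B, C, offset=0.1):
    """Generate three candidate probe points for triangle ABC: the
    midpoints of the segments from each vertex to the centroid O,
    offset along the (unit) triangle normal by `offset`."""
    def sub(u, v): return tuple(ui - vi for ui, vi in zip(u, v))
    def add(u, v): return tuple(ui + vi for ui, vi in zip(u, v))
    def scale(u, s): return tuple(ui * s for ui in u)
    def cross(u, v):
        return (u[1] * v[2] - u[2] * v[1],
                u[2] * v[0] - u[0] * v[2],
                u[0] * v[1] - u[1] * v[0])

    O = scale(add(add(A, B), C), 1.0 / 3.0)   # centroid
    n = cross(sub(B, A), sub(C, A))           # unnormalized normal
    length = sum(c * c for c in n) ** 0.5
    n = scale(n, offset / length)             # normal scaled to `offset`
    # midpoints of AO, BO, CO, each pushed out along the normal
    return [add(scale(add(V, O), 0.5), n) for V in (A, B, C)]
```

The area-dependent variant mentioned below (one probe at the centroid for small triangles, more probes for large ones) would branch on the triangle area before calling a routine like this.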
Moreover, in this embodiment, the triangles into which the virtual model to be rendered is divided may differ in area, and the way candidate probe points are obtained may also differ with area. For example, for performance reasons, a triangle whose area is less than or equal to a target threshold may, but need not, generate a first number of candidate probe points (such as a candidate probe point obtained by offsetting the triangle's centroid along its normal direction), while a triangle whose area is greater than the target threshold may generate a second number of candidate probe points, where the second number is greater than the first number.
In a possible implementation, in this embodiment, the obstruction degree between the virtual model to be rendered and each probe point in the candidate probe point set may be understood, but not limited to, as the degree to which the virtual model to be rendered blocks each probe point in the candidate probe point set, or the degree to which each probe point blocks the virtual model to be rendered, or whether there is any obstruction between the virtual model to be rendered and each probe point. If there is no obstruction, the obstruction degree is 0; if there is obstruction, the obstruction degree is greater than 0, and the specific obstruction degree is positively correlated with the extent of the obstruction. Conversely, if there is no obstruction, the obstruction degree may be 1; if there is obstruction, the obstruction degree is less than 1 and greater than 0, and the specific obstruction degree is inversely correlated with the extent of the obstruction.
In addition, when the virtual model to be rendered is divided into multiple triangles, the obstruction degree between the virtual model to be rendered and each probe point in the candidate probe point set may be understood, but not limited to, as the obstruction degree between each of those triangles and each probe point in the candidate probe point set. For example, if the virtual model to be rendered is divided into N triangles and the candidate probe point set includes M probe points, then the obstruction degrees between the triangles and the probe points may include (N×M) obstruction degrees.
In a possible implementation, in this embodiment, the probe points in the candidate probe point set may be screened, but not limited to, by determining the probe points in the candidate probe point set whose obstruction degree is greater than or equal to an obstruction threshold as probe points in the target probe point set.
In a possible implementation, in this embodiment, the spherical harmonic basis coefficients may be, but are not limited to, the coefficients of the basis functions in spherical harmonic lighting; alternatively, they may be understood as first sampling the lighting into N coefficients and then, at render time, using the spherical harmonic basis coefficients to reconstruct the sampled lighting to complete the rendering.
在一种可能的实现方式中,在本实施例中,相比于不对候选探点集合中的探点进行筛选,获取所有探点的球谐基系数的方式,在对候选探点集合中的探点进行筛选的基础上,再获取目标探点集合中的各个探点的球谐基系数所需付出的计算量得到一定程度的降低,进而提高了对待渲染虚拟模型进行光照渲染的效率。
在一种可能的实现方式中，在本实施例中，在得到目标探点集合之后，将目标探点集合中的探点传递至烘焙器中进行烘焙，在该烘焙器中将目标探点集合中的探点转化为世界空间下的探点，并利用烘焙器所提供的基础功能以确定出目标探点集合中的探点的受光情况，进而得到目标探点集合中的各个探点的球谐基系数；此外，在待渲染虚拟模型为目标场景中的若干模型之一的情况下，为提高数据处理效率，可以但不限于将目标场景中所有模型的探点传递至烘焙器。
在一种可能的实现方式中，在本实施例中，由于目标探点集合中的探点为单一模型空间下的探点，那么在待渲染虚拟模型为目标场景中的若干模型之一的情况下，单一模型空间下的探点并不涉及其他模型空间下的探点，进而当对目标场景中所有模型进行烘焙时，就可能出现目标探点集合中的探点处于其他模型的内部的异常情况，且该异常情况下的探点属于无效探点，如图5所示，虚拟模型504的探点A被同处一个目标场景502中的虚拟模型506遮挡，导致探点A成为无效探点；
进一步针对该异常情况，可以但不限于对目标探点集合中探点的相关数据进行记录，其中，相关数据包括以下至少之一：探点到待渲染虚拟模型的最近距离、探点所能关联的其他探点。进而在烘焙时，如果某个探点因该异常情况导致无效，首先在最近距离范围内查找有效的另一探点；如果找不到，则遍历所有能关联且实际有效的探点，按与距离平方成反比的权重进行加权平均。
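上述无效探点的回退处理可以用如下Python代码示意（valid_probes、max_dist等参数命名为说明用的假设，球谐系数此处以向量表示）：

```python
import numpy as np

def resolve_invalid_probe(probe, valid_probes, max_dist):
    """无效探点的回退处理：先在最近距离范围内找有效探点，
    找不到则按与距离平方成反比的权重对所有可关联的有效探点加权平均。
    valid_probes为[(位置, 球谐系数向量)]列表，参数命名为示例假设。"""
    if not valid_probes:
        return None
    p = np.asarray(probe, float)
    dists = [np.linalg.norm(np.asarray(pos, float) - p) for pos, _ in valid_probes]
    # 步骤一：在记录的最近距离范围内查找有效的另一探点
    in_range = [(d, sh) for d, (_, sh) in zip(dists, valid_probes) if d <= max_dist]
    if in_range:
        return min(in_range, key=lambda t: t[0])[1]
    # 步骤二：按与距离平方成反比的权重进行加权平均
    w = np.array([1.0 / (d * d) for d in dists])
    sh = np.array([s for _, s in valid_probes], float)
    return (w[:, None] * sh).sum(axis=0) / w.sum()
```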
需要说明的是，获取待渲染虚拟模型的候选探点集合，其中，候选探点集合中的探点用于对待渲染虚拟模型进行光照渲染；获取待渲染虚拟模型与候选探点集合中各个探点的阻挡度；根据待渲染虚拟模型与候选探点集合中的各个探点的阻挡度，对候选探点集合中的探点进行筛选，得到目标探点集合；获取目标探点集合中各个探点的球谐基系数，并根据目标探点集合中各个探点的球谐基系数，对待渲染虚拟模型进行光照渲染。利用待渲染虚拟模型与候选探点集合中各个探点的阻挡度对候选探点集合中的探点进行筛选的方式，得到数量更少的重要探点，以降低通过探点的球谐基系数对待渲染虚拟模型进行光照渲染时的计算量，进而实现了提高虚拟模型的光照渲染效率的技术效果。
举例说明，例如图6所示，获取待渲染虚拟模型602的候选探点，以组成候选探点集合，如图6中的(a)所示，其中，候选探点集合中的候选探点用于对待渲染虚拟模型602进行光照渲染；进一步获取待渲染虚拟模型602与候选探点集合中各个候选探点的阻挡度，并根据待渲染虚拟模型602与候选探点集合中的各个候选探点的阻挡度，对候选探点集合中的候选探点进行筛选，得到数量更少的目标探点，以组成目标探点集合，如图6中的(b)所示，其中，各个目标探点与待渲染虚拟模型602被划分成的各个三角形关联（虚线用于表示关联关系）；进一步获取目标探点集合中的各个目标探点的球谐基系数，并根据目标探点集合中的各个目标探点的球谐基系数，对待渲染虚拟模型602进行光照渲染，得到图6中的(c)所示的待渲染虚拟模型604。
通过本申请提供的实施例,利用待渲染虚拟模型与候选探点集合中的各个探点的阻挡度对候选探点集合中的探点进行筛选的方式,得到数量更少的重要探点,进而达到了降低通过探点的球谐基系数对待渲染虚拟模型进行光照渲染时的计算量的目的,从而实现了提高虚拟模型的光照渲染效率的技术效果。
作为一种可选的方案,根据待渲染虚拟模型与候选探点集合中各个探点的阻挡度,对候选探点集合中的探点进行筛选,得到目标探点集合,包括:
S1,将一组三角形中每个三角形分别作为当前三角形,根据当前三角形与候选探点集合中各个探点的阻挡度,确定当前三角形与候选探点集合中各个探点的关联度,其中,待渲染虚拟模型被划分成一组三角形;
S2,根据每个当前三角形与候选探点集合中各个探点的关联度,对候选探点集合中的探点进行筛选,得到目标探点集合。
在一种可能的实现方式中，在本实施例中，当前三角形与候选探点集合中各个探点的阻挡度可以但不限于与当前三角形与候选探点集合中各个探点的关联度之间呈正相关关系。
在一种可能的实现方式中,在本实施例中,S1的实现方式可以是:获取当前三角形与候选探点集合中各个探点之间的距离值;根据当前三角形与候选探点集合中各个探点之间的距离值以及当前三角形与候选探点集合中各个探点的阻挡度,确定当前三角形与候选探点集合中各个探点的关联度。例如获取当前三角形与候选探点集合中各个探点之间的距离值以及当前三角形与候选探点集合中各个探点的阻挡度的乘积结果,并将该乘积结果确定为当前三角形与候选探点集合中各个探点的关联度。
在一种可能的实现方式中,在本实施例中,对候选探点集合中的探点进行筛选的方式可以但不限于将候选探点集合中的关联度大于或等于关联阈值的探点确定为目标探点集合中的探点。
通过本申请提供的实施例,确定每个三角形与候选探点集合中各个探点的关联度,以对候选探点集合中的探点进行筛选,得到目标探点集合,实现了提高虚拟模型的光照渲染效率的效果。
作为一种可选的方案,根据当前三角形与候选探点集合中的各个探点的阻挡度,确定当前三角形与候选探点集合中各个探点的关联度,包括:
S1,确定当前三角形投射至一组检测区域中各个检测区域的投影面积,其中,一组检测区域包括候选探点集合中各个探点分别对应的检测区域;
S2,根据当前三角形投射至一组检测区域中各个检测区域的投影面积以及当前三角形与候选探点集合中各个探点的阻挡度,确定当前三角形与候选探点集合中的各个探点的关联度。
在一种可能的实现方式中，在本实施例中，检测区域可以但不限于理解为以探点为中心的平面区域，如圆形、矩形、多边形等；或，检测区域可以但不限于理解为以探点为中心的三维单位体，如球体、长方体、多面体等。
在一种可能的实现方式中,在本实施例中,关联度可以但不限于为投影面积与阻挡度的整合结果,如乘积结果、求和结果等;关联度还可以但不限于为投影面积、阻挡度以及其他参数的整合结果,其中,其他参数可以但不限于包括以下至少之一:当前三角形与候选探点集合中的各个探点的距离、当前三角形与候选探点集合中的各个探点的夹角角度等。
举例说明，例如图7所示，待渲染虚拟模型702被划分为多个三角形（一组三角形），以确定三角形704（一组三角形中的当前三角形）与探点706（候选探点集合中的探点）的关联度为例说明：首先确定探点706对应的检测区域708；进一步获取三角形704投射至检测区域708的投影710；再根据投影710的投影面积以及三角形704与探点706的阻挡度，确定三角形704与探点706的关联度。
通过本申请提供的实施例,确定当前三角形与候选探点集合中的各个探点的关联度,实现了提高关联度的获取准确性的效果。
作为一种可选的方案,根据当前三角形与候选探点集合中各个探点的关联度,对候选探点集合中的探点进行筛选,得到目标探点集合,包括:
重复执行以下步骤,直到一组三角形中的每个当前三角形都关联有预定数量的探点,其中,目标探点集合包括一组三角形中每个当前三角形关联的探点,当前探点集合被初始化为候选探点集合:
S1,将当前探点集合中每个探点分别作为当前待处理探点,获取当前待处理探点与一组三角形的覆盖度,其中,当前待处理探点与一组三角形的覆盖度是当前待处理探点与一组三角形中每个当前三角形之间的覆盖度之和,当前待处理探点与一个当前三角形之间的覆盖度是根据一个当前三角形与当前待处理探点的关联度确定的;
S2，在当前探点集合中选取覆盖度最大的探点作为目标探点集合中的探点，并从当前探点集合中删除覆盖度最大的探点，其中，目标探点集合中的探点被根据每个探点对应的覆盖度与一组三角形中的当前三角形进行关联，每个探点对应的覆盖度为每个探点与一组三角形中每个三角形之间的覆盖度。
在一种可能的实现方式中,当前待处理探点与一组三角形的覆盖度可以但不限于包括直接覆盖度或间接覆盖度,其中,直接覆盖度可以但不限于理解为根据当前待处理探点与一组三角形的关联度直接计算得到的覆盖度,或者可以但不限于将当前待处理探点与一组三角形的关联度理解为当前待处理探点与一组三角形的覆盖度;间接覆盖度可以但不限于理解为根据当前待处理探点与一组三角形的关联度以及其他参数共同计算得到的覆盖度,其中,其他参数可以但不限于包括以下至少之一:当前待处理探点与一组三角形的阻挡度、当前探点集合中的每两个探点的关联度、当前探点集合中的每两个探点以及一组三角形共同组成的结构的结构参数等。
在一种可能的实现方式中,在本实施例中,一组三角形中的每个三角形都关联有预定数量的探点可以但不限于理解为在对候选探点集合中的探点进行筛选的过程中,不断更新目标探点集合中的探点,直至一组三角形中的每个三角形都关联有预定数量的探点,如新增、删除、替换目标探点集合中的探点等;
进一步举例说明，以预定数量的探点是2个探点为例，重复执行以下步骤，直到一组三角形中的每个三角形都关联2个探点：
获取当前待处理探点与一组三角形的覆盖度;在当前探点集合中选取覆盖度最大的探点作为目标探点集合中的探点,并从当前探点集合中删除覆盖度最大的探点;
以一组三角形中的三角形A为例说明，假设根据该覆盖度最大的探点与一组三角形中的各个三角形的覆盖度，确定该覆盖度最大的探点与一组三角形中的三角形A关联，则首先明确三角形A已关联的探点数量，若三角形A已关联的探点数量未达到2个，则保留该覆盖度最大的探点在目标探点集合中的位置，并建立该覆盖度最大的探点与三角形A的关联关系；
但若三角形A已关联的探点数量已达到2个,则根据该覆盖度最大的探点与三角形A的覆盖度,判断该覆盖度最大的探点的重要程度是否大于三角形A已关联的探点的重要程度,若是,则使用该覆盖度最大的探点代替三角形A已关联的探点(即,三角形A已关联的探点数量仍保持2个,但重要程度较低的探点被该覆盖度最大的探点代替);若否,则保留三角形A已关联的探点,并可以但不限于删除该覆盖度最大的探点在目标探点集合中的位置(即,如果获取到的覆盖度最大的探点与一组三角形中的任一三角形都未建立关联关系,则可以但不限于将该覆盖度最大的探点从目标探点集合中删除,或在选取覆盖度最大的探点作为目标探点集合中的探点之前,就先判断获取到的覆盖度最大的探点是否与一组三角形中的任一三角形建立关联关系,若无则不选取覆盖度最大的探点作为目标探点集合中的探点)。
需要说明的是,目标探点集合中的探点被根据每个探点与一组三角形中的每个三角形之间的覆盖度与一组三角形中的三角形进行关联,且目标探点集合中的探点也被根据每个探点与一组三角形中的每个三角形之间的覆盖度进行更新,或者说目标探点集合中的探点可以但不限于理解为与一组三角形中的三角形关联的探点。
进一步举例说明，可以但不限于通过如下编程方式执行得到目标探点集合的相关步骤：
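以下给出上述贪心筛选步骤的一个Python示意实现（coverage回调、per_tri等名称均为说明用的假设，覆盖度的具体计算可参照上文的关联度与覆盖度定义；并非本申请限定的具体代码）：

```python
def select_probes(triangles, candidates, coverage, per_tri=2):
    """贪心筛选目标探点集合：反复在当前探点集合中选取覆盖度之和最大的探点，
    将其移入目标探点集合并与未满员的三角形关联，直到每个三角形都关联
    per_tri个探点。coverage(probe, tri, linked)回调返回某探点对某三角形的
    覆盖度（linked为该三角形已关联的探点列表）。"""
    linked = {t: [] for t in triangles}
    target, current = [], list(candidates)
    while current and any(len(v) < per_tri for v in linked.values()):
        # 将当前探点集合中每个探点分别作为当前待处理探点，求覆盖度之和
        best = max(current, key=lambda p: sum(coverage(p, t, linked[t]) for t in triangles))
        current.remove(best)        # 从当前探点集合中删除覆盖度最大的探点
        target.append(best)         # 并将其作为目标探点集合中的探点
        for t in triangles:         # 根据覆盖度与未满员的三角形建立关联
            if len(linked[t]) < per_tri and coverage(best, t, linked[t]) > 0:
                linked[t].append(best)
    return target, linked
```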
通过本申请提供的实施例,对候选探点集合进行筛选,实现了提高对候选探点集合中的探点的筛选效率的效果。
作为一种可选的方案,获取当前待处理探点与一组三角形的覆盖度,包括:
S1,若一组三角形中包括未关联探点的第一三角形,获取当前待处理探点与每个第一三角形的第一覆盖度,其中,当前待处理探点与一个第一三角形之间的第一覆盖度是根据一个第一三角形与当前待处理探点的关联度得到的,当前待处理探点与一组三角形的覆盖度是当前待处理探点与每个第一三角形之间的第一覆盖度之和,当前待处理探点与一组三角形的覆盖度包括当前待处理探点与每个第一三角形的第一覆盖度;
S2,若一组三角形中包括已关联探点的第二三角形,获取当前待处理探点与每个第二三角形的第二覆盖度,其中,当前待处理探点与一个第二三角形之间的第二覆盖度是根据一个第二三角形与当前待处理探点的关联度、一个第二三角形与另一个探点的关联度、以及当前夹角确定得到的,当前待处理探点与一组三角形的覆盖度是当前待处理探点与每个第二三角形之间的第二覆盖度之和,当前夹角是当前待处理探点、一个第二三角形的质心以及另一个探点形成的夹角,第二三角形已关联的探点包括另一个探点,当前待处理探点与一组三角形的覆盖度包括当前待处理探点与每个第二三角形的第二覆盖度;
S3,若一组三角形中包括第一三角形以及第二三角形,整合当前待处理探点与每个第一三角形的第一覆盖度以及当前待处理探点与每个第二三角形的第二覆盖度,得到当前待处理探点与一组三角形的覆盖度。
在一种可能的实现方式中,在本实施例中,一组三角形中的三角形可以但不限于被分为第一三角形和第二三角形,其中,第一三角形为一组三角形中未关联任何探点的三角形,第二三角形为一组三角形中已关联至少一个探点的三角形;如一组三角形中的三角形A已关联探点1和探点2,则三角形A为第二三角形;再如一组三角形中的三角形B已关联探点1,则三角形B也为第二三角形;而假设一组三角形中的三角形C还未关联任何探点,则三角形C为第一三角形。
在一种可能的实现方式中，在本实施例中，当前夹角是一个探点、一个第二三角形的质心以及另一个探点形成的夹角，可以但不限于理解为先确定一个探点、一个第二三角形的质心以及另一个探点所形成的目标结构，再获取该目标结构的夹角角度，如图8所示，当前夹角804为一个探点P1、一个第二三角形802的质心O以及另一个探点P2形成的夹角。
在一种可能的实现方式中,在本实施例中,当前待处理探点与一个第一三角形之间的第一覆盖度是根据一个第一三角形与当前待处理探点的关联度得到的,而当前待处理探点与一个第二三角形之间的第二覆盖度是根据一个第二三角形与当前待处理探点的关联度、一个第二三角形与另一个探点的关联度、以及当前夹角确定得到的值;其中,第一覆盖度可以但不限于理解为单个探点与三角形之间的覆盖度(即,单维度),而第二覆盖度可以但不限于理解为单个探点和另一探点共同与三角形之间的覆盖度(即,多维度),且第一覆盖度的计算复杂度可以但不限于低于第二覆盖度的计算复杂度;
进一步举例说明，在一种可能的实现方式中，例如对单个探点P1的情况，关联度为k1，则探点P1的第一覆盖度也可以但不限于为k1；而对两个探点，如探点P1、探点P2的情况，且仍假设探点P1的关联度为k1、探点P2的关联度为k2，那么对探点P2的第二覆盖度的计算，需先获取探点P2与三角形质心以及探点P1所形成的夹角θ，进而计算探点P2的第二覆盖度，或者说计算探点P1以及探点P2对该三角形的覆盖度为k1+(1-cosθ)*k2，其目的在于希望θ越大其覆盖度也越大；此外，还可以但不限于根据多个探点所形成的立体角度对第二覆盖度进行更多维度的计算，如仍假设探点P1的关联度为k1、探点P2的关联度为k2、探点P3的关联度为k3，那么对探点P3的第二覆盖度的计算，需先获取探点P3、探点P2与三角形质心以及探点P1所形成的立体角θ`，进而计算探点P3的第二覆盖度，或者说将探点P1、探点P2以及探点P3对该三角形的覆盖度定义为k1+(1-cosθ`)*k2+(1-cosθ`)*k3。
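上述两个探点对同一三角形的覆盖度计算k1+(1-cosθ)*k2可以用如下Python代码示意（此处假设夹角θ以三角形质心为顶点计算，函数命名为说明用的假设）：

```python
import numpy as np

def pair_coverage(k1, k2, p1, p2, centroid):
    """两个探点对同一三角形的覆盖度：k1 + (1 - cosθ) * k2。
    此处假设夹角θ以三角形质心为顶点、由探点P1与探点P2张成。"""
    u = np.asarray(p1, float) - np.asarray(centroid, float)
    v = np.asarray(p2, float) - np.asarray(centroid, float)
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return k1 + (1.0 - cos_t) * k2   # θ越大，(1-cosθ)越大，覆盖度越大
```

例如两个探点位于质心两侧（θ=180°）时覆盖度为k1+2*k2，方向重合（θ=0°）时退化为k1。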
需要说明的是，对于一组三角形中的不同三角形，覆盖度的获取方式也存在不同，如对于第一三角形，其覆盖度（第一覆盖度）的获取方式为根据一个第一三角形与当前待处理探点的关联度得到；再如对于第二三角形，其覆盖度（第二覆盖度）的获取方式为根据一个第二三角形与当前待处理探点的关联度、一个第二三角形与另一个探点的关联度、以及当前夹角确定得到。此外，一组三角形中可能存在多个不同的三角形，如一组三角形中包括3个第一三角形、4个第二三角形等，那么在计算一组三角形对应的覆盖度时，可以先分别计算3个第一三角形所对应的第一覆盖度，以及4个第二三角形对应的第二覆盖度，再进行求和处理，以得到一组三角形对应的覆盖度。
进一步举例说明，例如图9所示，一组三角形902中包括多个三角形，如三角形A、三角形B、三角形C，候选探点集合904中包括多个探点，如探点P1、探点P2、探点P3……探点Pn；进一步将候选探点集合904确定为当前探点集合，获取当前探点集合中的每个探点与一组三角形902中的每个三角形的覆盖度，其中，在三角形为第一三角形的情况下，根据一个第一三角形与一个探点（例如是当前待处理探点）的关联度得到一个第一三角形与一个探点的覆盖度，如(P1×A)表示探点P1与三角形A的覆盖度，再将每个三角形与探点P1的第一覆盖度相加，以得到覆盖度T1（由于在当前S902中，一组三角形902中的三角形无第二三角形，因此在当前S902中只需相加第一覆盖度）；基于此，对候选探点集合904中的每个探点都进行相同处理，分别得到每个探点与每个三角形的覆盖度，如覆盖度T2、覆盖度T3……覆盖度Tn；再根据得到的覆盖度T1、覆盖度T2、覆盖度T3……覆盖度Tn，将覆盖度最大的探点P1确定为目标探点集合906的探点，并删除候选探点集合904中的探点P1，得到候选探点集合908，其中，探点P1与三角形A关联；
进一步如S904，将候选探点集合908确定为当前探点集合，获取当前探点集合中的每个探点与一组三角形902中的每个三角形的覆盖度，其中，在三角形为第二三角形的情况下，根据一个第二三角形与一个探点的关联度、该第二三角形已关联的探点以及当前夹角得到一个第二三角形与一个探点的覆盖度，如(P2×A×P1)表示在三角形A已关联探点P1的情况下探点P2与三角形A的覆盖度，再将每个三角形与探点P2的第一覆盖度以及第二覆盖度相加，以得到覆盖度T21（由于在当前S904中，三角形A已关联探点P1，则三角形A为第二三角形，因此在当前S904中需相加第一覆盖度以及第二覆盖度）；基于此，对候选探点集合908中的每个探点都进行相同处理，分别得到每个探点与每个三角形的覆盖度，如覆盖度T32……覆盖度Tn(n-1)；再根据得到的覆盖度T21、覆盖度T32……覆盖度Tn(n-1)，将覆盖度最大的探点P3确定为目标探点集合910的探点，并删除候选探点集合908中的探点P3，得到新的候选探点集合（后续步骤同理，在此不再举例说明），其中，探点P3与三角形C关联。
在一组三角形902中的每个三角形都关联有2个探点后，确定目标探点集合912，其中，三角形A与探点P1、探点P3关联，三角形B与探点P1、探点P3关联，三角形C与探点P2、探点P3关联。
通过本申请提供的实施例,确定当前待处理探点与所述一组三角形的覆盖度,实现了提高覆盖度的计算准确性的效果。
作为一种可选的方案,整合当前待处理探点与每个第一三角形的第一覆盖度以及当前待处理探点与每个第二三角形的第二覆盖度,得到当前探点集合中的当前待处理探点与一组三角形的覆盖度,包括:
S1,获取第一三角形对应的第一系数、以及第二三角形对应的第二系数,其中,第一系数大于第二系数;
S2,获取第一覆盖度与第一系数的第一乘积值、以及第二覆盖度与第二系数的第二乘积值;
S3,对第一乘积值和第二乘积值进行求和,得到当前待处理探点与一组三角形的覆盖度。
在一种可能的实现方式中,在本实施例中,第二三角形可以但不限于被分为未满预定数量的三角形以及已满预定数量的三角形,其中,未满预定数量的三角形可以但不限于为已关联探点、但该已关联的探点数量未达到预定数量的三角形,已满预定数量的三角形可以但不限于为已关联探点、且该已关联的探点数量已达到预定数量的三角形。
进一步举例说明,在一种可能的实现方式中假设预定数量为2,则未满预定数量的三角形为已关联1个探点的三角形,已满预定数量的三角形为已关联2个探点的三角形。
在一种可能的实现方式中，在本实施例中，在一组三角形中的三角形被分为第一三角形、未满预定数量的三角形以及已满预定数量的三角形的情况下，第一三角形对应第一系数、未满预定数量的三角形对应第三系数、已满预定数量的三角形对应第四系数，其中，第一系数大于第三系数，第三系数大于第四系数；
进一步举例说明，为提高对候选探点集合中的探点的筛选效率，可以但不限于对不同的三角形分配不同的计算系数，如三角形A（第一三角形）还未关联任何探点，则新关联至该三角形A的探点1的价值较高，进而在计算探点1与三角形A的覆盖度时，会再与一个较高的计算系数（第一系数，如3）相乘；再如三角形B（未满预定数量的三角形）已关联1个探点，则新关联至该三角形B的探点2的价值一般，进而在计算探点2与三角形B的覆盖度时，会再与一个一般的计算系数（第三系数，如2）相乘；再如三角形C（已满预定数量的三角形）已关联2个探点，则新关联至该三角形C的探点3的价值较低，进而在计算探点3与三角形C的覆盖度时，会再与一个较低的计算系数（第四系数，如1）相乘。
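上述按三角形已关联探点数量选择计算系数的过程可以用如下Python代码示意（系数取3、2、1沿用文中的示例取值，函数与参数命名为说明用的假设）：

```python
def weighted_coverage(base_cov, linked_count, per_tri=2, coef=(3, 2, 1)):
    """按三角形已关联探点数量选择计算系数：
    未关联探点取第一系数，未满预定数量取第三系数，已满取第四系数。
    coef=(3, 2, 1)沿用文中示例取值。"""
    if linked_count == 0:
        k = coef[0]              # 第一三角形：新关联探点价值较高
    elif linked_count < per_tri:
        k = coef[1]              # 未满预定数量的三角形：价值一般
    else:
        k = coef[2]              # 已满预定数量的三角形：价值较低
    return base_cov * k
```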
通过本申请提供的实施例,得到当前待处理探点与一组三角形的覆盖度,实现了提高对候选探点集合中的探点的筛选效率的效果。
作为一种可选的方案,获取当前待处理探点与每个第二三角形的第二覆盖度,包括:
S1,若第二三角形包括第一关联三角形,且第一关联三角形已关联的探点数量等于1个,将当前待处理探点与每个第一关联三角形的第三覆盖度确定为当前待处理探点与每个第二三角形的第二覆盖度,其中,当前待处理探点与一个第一关联三角形之间的第三覆盖度是根据一个第一关联三角形与当前待处理探点的关联度、一个第一关联三角形与第一关联三角形已关联的探点的关联度、以及第一夹角确定得到的,第一夹角是当前待处理探点、一个第一关联三角形的质心以及第一关联三角形已关联的探点形成的夹角;
S2,若第二三角形包括第二关联三角形,且第二关联三角形已关联的探点数量大于1个,获取第二关联三角形已关联的探点与第二关联三角形的第四覆盖度、以及当前待处理探点与第二关联三角形的第五覆盖度,并在第五覆盖度大于第四覆盖度的情况下,将当前待处理探点与第二关联三角形的第五覆盖度确定为当前待处理探点与每个第二三角形的第二覆盖度,其中,当前待处理探点与一个第二关联三角形之间的第五覆盖度是根据一个第二关联三角形与当前待处理探点的关联度、一个第二关联三角形与第二关联三角形已关联的探点的关联度、以及第二夹角确定得到的值,第二夹角是当前待处理探点、一个第二关联三角形的质心以及第二关联三角形已关联的探点形成的夹角。
进一步举例说明，例如图10所示，作为第二关联三角形的三角形A，当前已关联探点P1以及探点P2，其中，探点P1为关联度最大的探点，可选择将探点P1设置为固定探点，不做更新；进一步在候选探点集合1002中选取探点P3待与三角形A关联的情况下，获取探点P1、探点P2与三角形A的覆盖度T1、以及探点P1、探点P3与三角形A的覆盖度T2，进而在覆盖度T2大于覆盖度T1的情况下，更新三角形A当前关联的探点，由探点P1以及探点P2调整为探点P1以及探点P3；或，
探点P1以及探点P2为同等价值的探点，进而在候选探点集合1002中选取探点P3待与三角形A关联的情况下，获取探点P1、探点P2与三角形A的覆盖度T1、探点P2、探点P3与三角形A的覆盖度T3、以及探点P1、探点P3与三角形A的覆盖度T2，进而在覆盖度T1、覆盖度T2以及覆盖度T3中确定出最大的覆盖度，如覆盖度T3，进而更新三角形A当前关联的探点，由探点P1以及探点P2调整为探点P2以及探点P3。
通过本申请提供的实施例,实现了提高对候选探点集合中的探点的筛选准确性的效果。
作为一种可选的方案,根据每个当前三角形与候选探点集合中各个探点的关联度,对候选探点集合中的探点进行筛选,得到目标探点集合,包括:
S1,在候选探点集合中确定与当前三角形的关联度最大的第一当前探点;
S2,获取候选探点集合中除第一当前探点之外的各个探点与当前三角形的质心以及第一当前探点形成的夹角,得到一组夹角;
S3,根据一组夹角,以及当前三角形与除第一当前探点之外的各个探点的关联度,从除第一当前探点之外的各个探点中确定出第二当前探点;
S4,将第一当前探点以及第二当前探点确定为目标探点集合中的探点。
需要说明的是,为提高对候选探点集合中的探点的筛选效率,可以但不限于依次或并行对一组三角形中的每个三角形与候选探点集合中的任N个探点关联,其中,N为预定数量;
进一步举例说明，例如图11所示，将一组三角形1102中的三角形A作为当前三角形为例说明，在候选探点集合1104中确定与三角形A的关联度最大的探点P1，并将探点P1确定为目标探点集合1106中的探点；再获取候选探点集合1104中除探点P1之外的各个探点与三角形A的质心以及探点P1形成的夹角，共得到一组夹角；根据一组夹角以及三角形A与除探点P1之外的各个探点的关联度，从除探点P1之外的各个探点中确定出探点P3，并将探点P3也确定为目标探点集合1106中的探点；同理，将一组三角形1102中的全部三角形也作为当前三角形执行上述步骤，以得到目标探点集合1108，其中，三角形A与探点P1、探点P3关联，三角形B与探点P1、探点P3关联，三角形C与探点P2、探点P3关联。
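上述为单个三角形依次选取两个探点的过程可以用如下Python代码示意（其中"夹角越大且关联度越高越优"的打分方式(1-cosθ)*关联度参照上文覆盖度的定义，属于说明用的假设）：

```python
import numpy as np

def pick_two_probes(centroid, probes, relevance):
    """为单个三角形选取两个探点：先取关联度最大的第一当前探点，
    再按夹角与关联度从其余探点中确定第二当前探点。
    打分方式(1 - cosθ) * 关联度为参照上文覆盖度定义的示例假设。"""
    i1 = int(np.argmax(relevance))                   # 第一当前探点
    c = np.asarray(centroid, float)
    u = np.asarray(probes[i1], float) - c
    best, i2 = -np.inf, None
    for j, p in enumerate(probes):
        if j == i1:
            continue
        v = np.asarray(p, float) - c
        cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        score = (1.0 - cos_t) * relevance[j]         # 夹角越大、关联度越高越优
        if score > best:
            best, i2 = score, j
    return i1, i2
```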
通过本申请提供的实施例,实现了提高对候选探点集合中的探点的筛选效率的效果。
作为一种可选的方案,根据目标探点集合中各个探点的球谐基系数,对待渲染虚拟模型进行光照渲染,包括:
S1,将目标探点集合中各个探点的球谐基系数保存为待处理贴图;
S2,在需要对待渲染虚拟模型进行渲染时,从待处理贴图中确定待渲染虚拟模型中各个顶点对应的探点的球谐基系数;
S3,根据待渲染虚拟模型中各个顶点对应的探点的球谐基系数,确定待渲染虚拟模型中各个顶点的球谐基系数;
S4,根据待渲染虚拟模型中各个顶点的球谐基系数,对待渲染虚拟模型进行光照渲染。
在一种可能的实现方式中，在本实施例中，可以但不限于将待渲染虚拟模型所在的模型空间下的探点转化为目标空间下的探点，并利用烘焙器所提供的基础功能计算出各个探点的受光情况，最终得到光照的球谐系数；再将烘焙出来的待渲染虚拟模型中的所有元素的球谐系数保存为贴图（所有元素的球谐系数保存为一个或多个贴图、或一个元素的球谐系数保存为一个贴图等）；运行时根据预先保存的索引和权重数据，采样球谐贴图，将得到的系数与法线所对应的基函数做点积，以得到光照信息，完成对待渲染虚拟模型进行光照渲染。
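上述运行时采样球谐贴图并与基函数做点积的过程可以用如下Python代码示意（以数组模拟贴图采样，参数命名均为说明用的假设）：

```python
import numpy as np

def shade_vertex(sh_tex, indices, weights, basis_at_normal):
    """运行时光照还原：按顶点保存的索引与权重采样球谐贴图，
    将加权后的系数与法线方向上的基函数取值做点积，得到光照信息。
    sh_tex以数组模拟贴图采样，参数命名均为示例假设。"""
    sh = sum(w * np.asarray(sh_tex[i], float) for i, w in zip(indices, weights))
    return float(np.dot(sh, basis_at_normal))
```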
在一种可能的实现方式中，在本实施例中，可以但不限于区分待渲染虚拟模型中的各个顶点与各个探点具有的对应关系的情况，具体分为：在顶点与一个探点具有对应关系的情况下，将该顶点的球谐基系数确定为等于该探点的球谐基系数；在顶点与多个探点具有对应关系的情况下，将该顶点的球谐基系数确定为等于多个探点的球谐基系数的加权之和。
在一种可能的实现方式中，在本实施例中，可以但不限于在顶点的属性里面增加相关探点的索引和权重。为了使光照渲染兼顾效果与效率，对顶点的属性结构进行了限定。如假设每个顶点关联两个探点，且每个顶点的属性空间为32bit（并不固定，32bit仅做举例说明），则在该32bit的属性空间中，可以但不限于分配至少两个探点的属性空间，如为探点A分配8bit的属性空间、为探点B分配8bit的属性空间，至多可支持256个索引；余下的16bit中，8bit用于分配权重（探点之间的权重可关联计算，如探点B的权重可通过探点A的权重计算得到，因此还可节省一个通道实现其他功能），再预留8bit的属性空间，用于保存顶点AO，其中，顶点AO可以但不限于理解为遮挡参数或环境光遮挡系数，进而在不增加内存的情况下，增加了表现细节。
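上述32bit顶点属性的划分可以用如下Python代码示意（字段的排列顺序为说明用的假设，实际布局可按引擎需要调整）：

```python
def pack_vertex_attr(idx_a, idx_b, weight_a, ao):
    """将两个探点索引（各8bit，至多256个索引）、探点A的权重（8bit）
    与顶点AO（8bit）打包进32bit顶点属性；字段排列顺序为示例假设。
    探点B的权重可由探点A的权重计算得到，故无需单独存储。"""
    for v in (idx_a, idx_b, weight_a, ao):
        assert 0 <= v <= 255
    return (idx_a << 24) | (idx_b << 16) | (weight_a << 8) | ao

def unpack_vertex_attr(packed):
    """从32bit顶点属性中还原各字段。"""
    return ((packed >> 24) & 0xFF, (packed >> 16) & 0xFF,
            (packed >> 8) & 0xFF, packed & 0xFF)
```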
作为一种可选的方案,根据待渲染虚拟模型中的各个顶点的球谐基系数,对待渲染虚拟模型进行光照渲染,包括:
S1,在待渲染虚拟模型属于远景虚拟模型的情况下,获取待渲染虚拟模型中各个顶点的第一球谐基子系数,并根据第一球谐基子系数对待渲染虚拟模型进行光照渲染;
S2,在待渲染虚拟模型属于近景虚拟模型的情况下,获取待渲染虚拟模型中各个顶点的第二球谐基子系数,并根据第二球谐基子系数对待渲染虚拟模型进行光照渲染,其中,第二球谐基子系数为根据第一球谐基子系数计算得到的系数。
在一种可能的实现方式中，在本实施例中，球谐基系数可以但不限于包括3阶的球谐基系数，其中，球谐基系数的二阶和三阶系数可以但不限于通过一阶系数做归一化处理得到；
进一步举例说明，在一种可能的实现方式中，3阶的球谐基系数如图12中的公式1202所示，其中，SHl,m为探点的球谐基系数，下标l、m皆为球谐基的通用表示，N为探点所关联的三角形数量，w(i)为权重，L(i)为某方向入射光照；如图12中的公式1204所示，球谐基函数的后缀部分小于1，所以高阶（2阶和3阶）的球谐基系数可由1阶的球谐基系数做归一化处理得到。
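按照上文对公式1202的描述，球谐基系数的计算形式可以示意性地写作如下（其中Y_{l,m}(ω_i)表示方向ω_i上的球谐基函数取值，该符号为说明用的假设，具体形式以图12中的公式1202为准）：

```latex
SH_{l,m} \approx \sum_{i=1}^{N} w(i)\, L(i)\, Y_{l,m}(\omega_i)
```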
在一种可能的实现方式中，在本实施例中，考虑到采样效率、LOD、以及整合其他功能，球谐基系数的数据格式可以但不限于采用如下方式进行编码：首先贴图的格式可以但不限于是Uint4，这种格式是硬件所能支持的，便于编码的进行；进一步对每组球谐基系数，通常需要占用2个像素，进而将低阶的球谐基系数（第一球谐基子系数）与高阶的球谐基系数（第二球谐基子系数）分到两个不同的像素，从而更方便地做LOD，如近处的物体（近景虚拟模型）需要进行完整的、高阶的球谐计算，而远处的物体只需进行低阶的球谐计算即可，这样远处物体的采样只有1次；更进一步，还可以但不限于将高阶的球谐基系数拆分到另一张贴图，于是远处的物体就只用加载一半的贴图量；
进一步举例说明，如图13所示，第一球谐基子系数1302包括第1和2阶系数，第二球谐基子系数1304包括第3阶系数，进而将RGB 3个通道的各阶球谐基系数，分至16byte的两个像素，如将RGB 3个通道的第一球谐基子系数分至第一像素1302，将RGB 3个通道的第二球谐基子系数分至第二像素1304；
在一种可能的实现方式中,对于第一像素1302,16byte的存储空间被分为三部分,第一部分6byte,用于分配RGB 3个通道的1阶球谐基系数;第二部分9byte,用于分配RGB 3个通道的2阶球谐基系数;第三部分1byte,为预留的字节,可用于保存阴影数据,以实现一个基于探点的相对粗糙的阴影效果;
再者，对于第二像素1304，16byte的存储空间被分为两部分，第一部分15byte，用于分配RGB 3个通道的3阶球谐基系数；第二部分1byte，为预留的字节，可用于保存阴影数据，以实现一个基于探点的相对粗糙的阴影效果。
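上述两个16byte像素的编码布局可以用如下Python代码示意（系数需已量化为整数，量化方式与字节序均为说明用的假设）：

```python
def pack_sh_pixels(sh1, sh2, sh3, shadow1=0, shadow2=0):
    """将RGB三通道的1/2/3阶球谐基系数编码为两个16byte像素：
    第一像素 = 6byte的1阶系数 + 9byte的2阶系数 + 1byte预留阴影数据；
    第二像素 = 15byte的3阶系数 + 1byte预留阴影数据。
    系数需已量化为整数，量化方式与字节序均为示例假设。"""
    assert len(sh1) == 3 and len(sh2) == 9 and len(sh3) == 15
    pixel1 = b"".join(v.to_bytes(2, "little") for v in sh1)  # 每个1阶系数占2byte
    pixel1 += bytes(sh2) + bytes([shadow1])                  # 每个2阶系数占1byte
    pixel2 = bytes(sh3) + bytes([shadow2])                   # 每个3阶系数占1byte
    assert len(pixel1) == 16 and len(pixel2) == 16
    return pixel1, pixel2
```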
通过本申请提供的实施例,针对远景虚拟模型或近景虚拟模型的待渲染虚拟模型采用不同的光照渲染方式,实现了兼顾光照渲染的效率以及真实性的效果。
作为一种可选的方案,将目标探点集合中各个探点的球谐基系数保存为待处理贴图,包括:
在获取到目标探点集合中各个探点的第三球谐基子系数以及第四球谐基子系数的情况下,将第三球谐基子系数以第一数据格式保存为待处理贴图,以及将第四球谐基子系数以第二数据格式保存为待处理贴图,其中,第四球谐基子系数是根据第三球谐基子系数计算得到的系数,第一数据格式所占用的字节数大于第二数据格式所占用的字节数。
进一步举例说明，例如图13所示，一个1阶球谐基系数（第三球谐基子系数）占用2个字节，RGB通道的三个1阶球谐基系数总占用6个字节，而一个2阶球谐基系数（第四球谐基子系数）占用1个字节，RGB通道的九个2阶球谐基系数总占用9个字节；此外，一个3阶球谐基系数（第四球谐基子系数）同样也占用1个字节，RGB通道的十五个3阶球谐基系数总占用15个字节。
通过本申请提供的实施例,区别地采用不同数据格式保存得到待处理贴图,实现了提高球谐基系数的处理效率的效果。
作为一种可选的方案,获取待渲染虚拟模型与候选探点集合中各个探点的阻挡度,包括:
S1，将一组三角形中每个三角形分别作为当前三角形，从当前三角形发出一组检测射线，其中，待渲染虚拟模型被划分成所述一组三角形；
S2，确定一组检测射线中与候选探点集合中各个探点接触的检测射线的数量；
S3，根据一组检测射线中与候选探点集合中各个探点接触的检测射线的数量、以及一组检测射线中的检测射线的数量，确定当前三角形与候选探点集合中各个探点的阻挡度。
进一步举例说明，基于图7所示场景，进一步如图14所示，以三角形704为当前三角形举例说明，从三角形704发出一组检测射线；确定一组检测射线中与候选探点集合中各个探点（如探点706）接触的检测射线的数量；根据一组检测射线中与候选探点集合中各个探点接触的检测射线的数量、以及一组检测射线中的检测射线的数量，确定三角形704与候选探点集合中各个探点的阻挡度。
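上述通过检测射线估计阻挡度的过程可以用如下Python代码示意（射线与探点检测区域的求交逻辑依赖具体场景，此处以回调cast_ray抽象，属于说明用的假设）：

```python
def blocking_degree(cast_ray, probe, n_rays=64):
    """通过一组检测射线估计当前三角形与某探点的阻挡度：
    接触该探点（的检测区域）的射线数量占总射线数量的比例。
    cast_ray(k, probe)返回第k条射线是否接触该探点（示例假设）。"""
    hits = sum(1 for k in range(n_rays) if cast_ray(k, probe))
    return hits / n_rays
```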
通过本申请提供的实施例,实现了提高阻挡度的获取准确性的效果。
作为一种可选的方案,获取待渲染虚拟模型与候选探点集合中各个探点的阻挡度,包括:获取待渲染虚拟模型被划分成的各个三角形与候选探点集合中各个探点的阻挡度;
作为一种可选的方案,根据待渲染虚拟模型与候选探点集合中各个探点的阻挡度,对候选探点集合中的探点进行筛选,得到目标探点集合,包括:根据待渲染虚拟模型被划分成的各个三角形与候选探点集合中各个探点的阻挡度,对候选探点集合中的探点进行筛选,得到目标探点集合,其中,目标探点集合中各个探点与待渲染虚拟模型被划分成的各个三角形之间具有索引关系;
作为一种可选的方案,获取目标探点集合中的各个探点的球谐基系数,并根据目标探点集合中各个探点的球谐基系数,对待渲染虚拟模型进行光照渲染,包括:获取目标探点集合中各个探点的球谐基系数,并根据索引关系以及目标探点集合中各个探点的球谐基系数,对待渲染虚拟模型进行光照渲染。
作为一种可选的方案,获取待渲染虚拟模型的候选探点集合,包括:
S1,获取待渲染虚拟模型的原始探点集合,其中,待渲染虚拟模型被划分成一组三角形,原始探点集合包括一组三角形中的各三角形对应的一个或多个探点;
S2,从原始探点集合中过滤掉无效的探点,得到候选探点集合。
在一种可能的实现方式中，在本实施例中，过滤掉无效的探点的方式可以但不限于包括过滤掉位于待渲染虚拟模型的无效区域（如待渲染虚拟模型的内部、背光部等）的探点、过滤掉与待渲染虚拟模型之间的关联程度低于有效阈值的探点等。
需要说明的是,对原始探点集合中的探点进行过滤,可得到相对优质的探点,更利于后续光照渲染的执行。
进一步举例说明，例如图15所示为待渲染虚拟模型1502的侧切图，对待渲染虚拟模型1502的所有候选探点，如探点e、探点d，检查上述候选探点是否位于待渲染虚拟模型1502的内部，具体的，d为一个位于待渲染虚拟模型1502外部的探点，e则为一个位于待渲染虚拟模型1502内部的探点；此外，由于位于待渲染虚拟模型1502内部的探点的光照信息无效，将其从候选探点集合中删除。
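上述"过滤掉位于模型内部的探点"的检查可以用如下Python代码示意（此处采用仅适用于凸模型的简化判定，faces为各面(质心, 外法线)的列表；对一般模型可改用射线求交的奇偶判定。函数与参数命名均为说明用的假设）：

```python
import numpy as np

def is_inside_convex(probe, faces):
    """判断探点是否位于模型内部的简化判定（仅适用于凸模型）：
    探点位于每个面外法线的内侧即视为在内部。faces为[(面质心, 外法线)]列表。"""
    p = np.asarray(probe, float)
    return all(np.dot(p - np.asarray(c, float), np.asarray(n, float)) < 0
               for c, n in faces)

def filter_probes(probes, faces):
    """从原始探点集合中过滤掉位于模型内部（光照信息无效）的探点。"""
    return [pt for pt in probes if not is_inside_convex(pt, faces)]
```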
通过本申请提供的实施例,获取待渲染虚拟模型的原始探点集合,其中,待渲染虚拟模型被划分成一组三角形,原始探点集合包括一组三角形中的各三角形对应的一个或多个探点;在原始探点集合中过滤掉无效的探点,得到候选探点集合,实现了提高光照渲染的执行效率的效果。
作为一种可选的方案,为方便理解,将上述虚拟模型的光照渲染方法应用在3D游戏的光照渲染场景中,以提升游戏画质表现与真实感,同时对于在空间上具有曲面形态的模型有更好的效果;
进一步举例说明,如图16所示,上述虚拟模型的光照渲染方法应用在3D游戏的光照渲染场景中的步骤如下述内容所示:
S1602,获取由若干三角形组成的模型;
S1604,生成所有的候选探点;
S1606,去掉无效的候选探点;
S1608,判断候选探点是否都满足筛选条件,若是,则执行S1610,若否,则执行S1606;
S1610,得到所有三角形与所有探点的关联程度;
S1612,判断选出的探点数量是否满足条件,或所有三角形都已关联到至少两个探点,若是,则执行S1618,若否,则执行S1614;
S1614,从剩下的候选探点中确定覆盖度最大的探点;
S1616,更新剩余候选探点和筛选出的探点列表;
S1618,得到最终的探点列表。
在一种可能的实现方式中,在本实施例中,在模型空间,自动计算出若干个探点,其中,模型空间中每个探点的颜色可以但不限于都不一样,且顶点颜色与所关联的最大权重探点相同,顶点线段可以但不限用于表示法线方向;进一步根据计算出来的探点,计算出模型上每个顶点所关联的若干探点以及权重,并将计算出来的探点索引和权重保存在模型顶点数据中;再者,将场景传递到烘焙器烘焙,其中,场景由若干模型组成,同样的模型可能存在多个实例。即,将模型空间下的探点转化到世界空间,利用烘焙器所提供的基础功能计算出探点的受光情况,最终得到光照的球谐系数。
在一种可能的实现方式中，在本实施例中，将烘焙出来的所有虚拟模型的球谐系数保存为贴图（所有虚拟模型的球谐系数都保存为一个贴图、或一个虚拟模型的球谐系数保存为一个贴图、或多个虚拟模型的球谐系数保存为一个贴图、或多个虚拟模型的球谐系数保存为多个贴图），进一步如图17所示，将虚拟模型1702、虚拟模型1704以及虚拟模型1706的球谐系数保存为待处理贴图1708。具体的，采用一定的压缩算法，将系数组装到若干张贴图里面；运行时根据顶点保存的索引和权重数据，采样球谐贴图，得到的系数与法线所对应的基函数做点积便是光照信息。
可以理解的是,在本申请的具体实施方式中,涉及到用户信息等相关的数据,当本申请以上实施例运用到具体产品或技术中时,需要获得用户许可或者同意,且相关数据的收集、使用和处理需要遵守相关国家和地区的相关法律法规和标准。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本申请所必须的。
根据本申请实施例的另一个方面,还提供了一种用于实施上述虚拟模型的光照渲染方法的虚拟模型的光照渲染装置。如图18所示,该装置包括:
第一获取单元1802,用于获取待渲染虚拟模型的候选探点集合,其中,侯选探点集合中的探点用于对待渲染虚拟模型进行光照渲染;
第二获取单元1804,用于获取待渲染虚拟模型与候选探点集合中各个探点的阻挡度;
筛选单元1806,用于根据待渲染虚拟模型与候选探点集合中各个探点的阻挡度,对候选探点集合中的探点进行筛选,得到目标探点集合;
第三获取单元1808,用于获取目标探点集合中各个探点的球谐基系数,并根据目标探点集合中各个探点的球谐基系数,对待渲染虚拟模型进行光照渲染。
具体实施例可以参考上述虚拟模型的光照渲染方法中所示示例，本示例中在此不再赘述。
作为一种可选的方案,筛选单元1806,包括:
第一确定模块,用于将一组三角形中每个三角形分别作为当前三角形,根据当前三角形与候选探点集合中各个探点的阻挡度,确定当前三角形与候选探点集合中各个探点的关联度,其中,待渲染虚拟模型被划分成一组三角形;
第一筛选模块,用于根据每个当前三角形与候选探点集合中各个探点的关联度,对候选探点集合中的探点进行筛选,得到目标探点集合。
具体实施例可以参考上述虚拟模型的光照渲染方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,第一确定模块,包括:
第一执行子模块,用于对于一组三角形中的每个三角形,执行以下步骤,其中,在执行以下步骤时,每个三角形为当前三角形:
第一确定子模块,用于确定当前三角形投射至一组检测区域中各个检测区域的投影面积,其中,一组检测区域包括候选探点集合中各个探点分别对应的检测区域;
第二确定子模块,用于根据当前三角形投射至一组检测区域中各个检测区域的投影面积以及当前三角形与候选探点集合中各个探点的阻挡度,确定当前三角形与候选探点集合中的各个探点的关联度。
具体实施例可以参考上述虚拟模型的光照渲染方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,第一筛选模块,包括:
第二执行子模块,用于重复执行以下步骤,直到一组三角形中的每个当前三角形都关联有预定数量的探点,其中,目标探点集合包括一组三角形中每个当前三角形关联的探点,当前探点集合被初始化为候选探点集合:
第一获取子模块,用于将当前探点集合中每个探点分别作为当前待处理探点,获取当前待处理探点与一组三角形的覆盖度,其中,当前待处理探点与一组三角形的覆盖度是当前待处理探点与一组三角形中每个当前三角形之间的覆盖度之和,当前待处理探点与一个当前三角形之间的覆盖度是根据一个当前三角形与当前待处理探点的关联度确定的;
第二获取子模块，用于在当前探点集合中选取覆盖度最大的探点作为目标探点集合中的探点，并从当前探点集合中删除覆盖度最大的探点，其中，目标探点集合中的探点被根据每个探点对应的覆盖度与一组三角形中的当前三角形进行关联，每个探点对应的覆盖度为每个探点与一组三角形中每个三角形之间的覆盖度。
具体实施例可以参考上述虚拟模型的光照渲染方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,第一获取子模块,包括:
第一获取子单元,用于若一组三角形中包括未关联探点的第一三角形,获取当前待处理探点与每个第一三角形的第一覆盖度,其中,当前待处理探点与一个第一三角形之间的第一覆盖度是根据一个第一三角形与当前待处理探点的关联度得到的,当前待处理探点与一组三角形的覆盖度是当前待处理探点与每个第一三角形之间的第一覆盖度之和,当前待处理探点与一组三角形的覆盖度包括当前待处理探点与每个第一三角形的第一覆盖度;
第二获取子单元,用于若一组三角形中包括已关联探点的第二三角形,获取当前待处理探点与每个第二三角形的第二覆盖度,其中,当前待处理探点与一个第二三角形之间的第二覆盖度是根据一个第二三角形与当前待处理探点的关联度、一个第二三角形与另一个探点的关联度、以及当前夹角确定得到的,当前待处理探点与一组三角形的覆盖度是当前待处理探点与每个第二三角形之间的第二覆盖度之和,当前夹角是当前待处理探点、一个第二三角形的质心以及另一个探点形成的夹角,第二三角形已关联的探点包括另一个探点,当前待处理探点与一组三角形的覆盖度包括当前待处理探点与每个第二三角形的第二覆盖度;
整合子单元,用于若一组三角形中包括第一三角形以及第二三角形,整合当前待处理探点与每个第一三角形的第一覆盖度以及当前待处理探点与每个第二三角形的第二覆盖度,得到当前待处理探点与一组三角形的覆盖度。
具体实施例可以参考上述虚拟模型的光照渲染方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,整合子单元,包括:
第一子获取单元,用于获取第一三角形对应的第一系数、以及第二三角形对应的第二系数,其中,第一系数大于第二系数;
第二子获取单元,用于获取第一覆盖度与第一系数的第一乘积值、以及第二覆盖度与第二系数的第二乘积值;
子求和单元,用于对第一乘积值和第二乘积值进行求和,得到当前待处理探点与一组三角形的覆盖度。
具体实施例可以参考上述虚拟模型的光照渲染方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,第二获取子单元,包括:
第一子确定单元，用于若第二三角形包括第一关联三角形，且第一关联三角形已关联的探点数量等于1个，将当前待处理探点与每个第一关联三角形的第三覆盖度确定为当前待处理探点与每个第二三角形的第二覆盖度，其中，当前待处理探点与一个第一关联三角形之间的第三覆盖度是根据一个第一关联三角形与当前待处理探点的关联度、一个第一关联三角形与第一关联三角形已关联的探点的关联度、以及第一夹角确定得到的，第一夹角是当前待处理探点、一个第一关联三角形的质心以及第一关联三角形已关联的探点形成的夹角；
第二子确定单元,用于若第二三角形包括第二关联三角形,且第二关联三角形已关联的探点数量大于1个,获取第二关联三角形已关联的探点与第二关联三角形的第四覆盖度、以及当前待处理探点与第二关联三角形的第五覆盖度,并在第五覆盖度大于第四覆盖度的情况下,将当前待处理探点与第二关联三角形的第五覆盖度确定为当前待处理探点与每个第二三角形的第二覆盖度,其中,当前待处理探点与一个第二关联三角形之间的第五覆盖度是根据一个第二关联三角形与当前待处理探点的关联度、一个第二关联三角形与第二关联三角形已关联的探点的关联度、以及第二夹角确定得到的值,第二夹角是当前待处理探点、一个第二关联三角形的质心以及第二关联三角形已关联的探点形成的夹角。
具体实施例可以参考上述虚拟模型的光照渲染方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,第一筛选模块,包括:
第三执行子模块,用于对于一组三角形中的每个三角形,执行以下步骤,其中,在执行以下步骤时,每个三角形为当前三角形:
第三确定子模块,用于在候选探点集合中确定与当前三角形的关联度最大的第一当前探点;
第三获取子模块,用于获取候选探点集合中除第一当前探点之外的各个探点与当前三角形的质心以及第一当前探点形成的夹角,得到一组夹角;
第四确定子模块,用于根据一组夹角,以及当前三角形与除第一当前探点之外的各个探点的关联度,从除第一当前探点之外的各个探点中确定出第二当前探点;
第五确定子模块,用于将第一当前探点以及第二当前探点确定为目标探点集合中的探点。
具体实施例可以参考上述虚拟模型的光照渲染方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,第三获取单元1808,包括:
保存模块,用于将目标探点集合中各个探点的球谐基系数保存为待处理贴图;
第二确定模块,用于在需要对待渲染虚拟模型进行渲染时,从待处理贴图中确定待渲染虚拟模型中各个顶点对应的探点的球谐基系数;
第三确定模块,用于根据待渲染虚拟模型中各个顶点对应的探点的球谐基系数,确定待渲染虚拟模型中各个顶点的球谐基系数;
渲染模块,用于根据待渲染虚拟模型中各个顶点的球谐基系数,对待渲染虚拟模型进行光照渲染。
具体实施例可以参考上述虚拟模型的光照渲染方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,渲染模块,包括:
第四获取子模块,用于在待渲染虚拟模型属于远景虚拟模型的情况下,获取待渲染虚拟模型中各个顶点的第一球谐基子系数,并根据第一球谐基子系数对待渲染虚拟模型进行光照渲染;
第五获取子模块,用于在待渲染虚拟模型属于近景虚拟模型的情况下,获取待渲染虚拟模型中各个顶点的第二球谐基子系数,并根据第二球谐基子系数对待渲染虚拟模型进行光照渲染,其中,第二球谐基子系数为根据第一球谐基子系数计算得到的系数。
具体实施例可以参考上述虚拟模型的光照渲染方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,保存模块,包括:
保存子模块,用于在获取到目标探点集合中各个探点的第三球谐基子系数以及第四球谐基子系数的情况下,将第三球谐基子系数以第一数据格式保存为待处理贴图,以及将第四球谐基子系数以第二数据格式保存为待处理贴图,其中,第四球谐基子系数是根据第三球谐基子系数计算得到的系数,第一数据格式所占用的字节数大于第二数据格式所占用的字节数。
具体实施例可以参考上述虚拟模型的光照渲染方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,第二获取单元1804,包括:
第四执行子模块,用于对于一组三角形中的每个三角形,执行以下步骤,其中,在执行以下步骤时,每个三角形为当前三角形,待渲染虚拟模型被划分成一组三角形:
第一检测子模块,用于从当前三角形发出一组检测射线;
第二检测子模块,用于确定一组检测射线中与候选探点集合中各个探点接触的检测射线的数量;
阻挡子模块,用于根据一组检测射线中与候选探点集合中各个探点接触的检测射线的数量、以及一组检测射线中的检测射线的数量,确定当前三角形与候选探点集合中各个探点的阻挡度。
具体实施例可以参考上述虚拟模型的光照渲染方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,第二获取单元1804,包括:第一获取模块,用于获取待渲染虚拟模型被划分成的各个三角形与候选探点集合中各个探点的阻挡度;
筛选单元1806,包括:第二筛选模块,用于根据待渲染虚拟模型被划分成的各个三角形与候选探点集合中各个探点的阻挡度,对候选探点集合中的探点进行筛选,得到目标探点集合,其中,目标探点集合中的各个探点与待渲染虚拟模型被划分成的各个三角形之间具有索引关系;
第三获取单元1808,包括:第二获取模块,用于获取目标探点集合中各个探点的球谐基系数,并根据索引关系以及目标探点集合中的各个探点的球谐基系数,对待渲染虚拟模型进行光照渲染。
具体实施例可以参考上述虚拟模型的光照渲染方法中所示示例,本示例中在此不再赘述。
作为一种可选的方案,第一获取单元1802,包括:
第三获取模块,用于获取待渲染虚拟模型的原始探点集合,其中,待渲染虚拟模型被划分成一组三角形,原始探点集合包括一组三角形中的各三角形对应的一个或多个探点;
第四获取模块,用于从原始探点集合中过滤掉无效的探点,得到候选探点集合。
具体实施例可以参考上述虚拟模型的光照渲染方法中所示示例,本示例中在此不再赘述。
根据本申请实施例的又一个方面,还提供了一种用于实施上述虚拟模型的光照渲染方法的电子设备,如图19所示,该电子设备包括存储器1902和处理器1904,该存储器1902中存储有计算机程序,该处理器1904被设置为通过计算机程序执行上述任一项方法实施例中的步骤。
在一种可能的实现方式中,在本实施例中,上述电子设备可以位于计算机网络的多个网络设备中的至少一个网络设备。
在一种可能的实现方式中,在本实施例中,上述处理器可以被设置为通过计算机程序执行以下步骤:
S1,获取待渲染虚拟模型的候选探点集合,其中,侯选探点集合中的探点用于对待渲染虚拟模型进行光照渲染;
S2,获取待渲染虚拟模型与候选探点集合中各个探点的阻挡度;
S3,根据待渲染虚拟模型与候选探点集合中各个探点的阻挡度,对候选探点集合中的探点进行筛选,得到目标探点集合;
S4,获取目标探点集合中各个探点的球谐基系数,并根据目标探点集合中各个探点的球谐基系数,对待渲染虚拟模型进行光照渲染。
在一种可能的实现方式中，本领域普通技术人员可以理解，图19所示的结构仅为示意，电子设备也可以是智能手机（如Android手机、iOS手机等）、平板电脑、掌上电脑以及移动互联网设备（Mobile Internet Devices，MID）、PAD等终端设备。图19并不对上述电子设备的结构造成限定。例如，电子设备还可包括比图19中所示更多或者更少的组件（如网络接口等），或者具有与图19所示不同的配置。
其中，存储器1902可用于存储软件程序以及模块，如本申请实施例中的虚拟模型的光照渲染方法和装置对应的程序指令/模块，处理器1904通过运行存储在存储器1902内的软件程序以及模块，从而执行各种功能应用以及数据处理，即实现上述的虚拟模型的光照渲染方法。存储器1902可包括高速随机存储器，还可以包括非易失性存储器，如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中，存储器1902可进一步包括相对于处理器1904远程设置的存储器，这些远程存储器可以通过网络连接至终端。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。其中，存储器1902具体可以但不限于用于存储候选探点集合、阻挡度以及目标探点集合等信息。作为一种示例，如图19所示，上述存储器1902中可以但不限于包括上述虚拟模型的光照渲染装置中的第一获取单元1802、第二获取单元1804、筛选单元1806及第三获取单元1808。此外，还可以包括但不限于上述虚拟模型的光照渲染装置中的其他模块单元，本示例中不再赘述。
在一种可能的实现方式中,上述的传输装置1906用于经由一个网络接收或者发送数据。上述的网络具体实例可包括有线网络及无线网络。在一个实例中,传输装置1906包括一个网络适配器(Network Interface Controller,NIC),其可通过网线与其他网络设备与路由器相连从而可与互联网或局域网进行通讯。在一个实例中,传输装置1906为射频(Radio Frequency,RF)模块,其用于通过无线方式与互联网进行通讯。
此外，上述电子设备还包括：显示器1908，用于显示上述候选探点集合、阻挡度以及目标探点集合等信息；和连接总线1910，用于连接上述电子设备中的各个模块部件。
在其他实施例中,上述终端设备或者服务器可以是一个分布式系统中的一个节点,其中,该分布式系统可以为区块链系统,该区块链系统可以是由该多个节点通过网络通信的形式连接形成的分布式系统。其中,节点之间可以组成点对点(Peer To Peer,简称P2P)网络,任意形式的计算设备,比如服务器、终端等电子设备都可以通过加入该点对点网络而成为该区块链系统中的一个节点。
根据本申请的一个方面,提供了一种计算机程序产品,该计算机程序产品包括计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信部分从网络上被下载和安装,和/或从可拆卸介质被安装。在该计算机程序被中央处理器执行时,执行本申请实施例提供的各种功能。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
需要说明的是,电子设备的计算机系统仅是一个示例,不应对本申请实施例的功能和使用范围带来任何限制。
计算机系统包括中央处理器(Central Processing Unit,CPU)，其可以根据存储在只读存储器(Read-Only Memory,ROM)中的程序或者从存储部分加载到随机访问存储器(Random Access Memory,RAM)中的程序而执行各种适当的动作和处理。在随机访问存储器中，还存储有系统操作所需的各种程序和数据。中央处理器、只读存储器以及随机访问存储器通过总线彼此相连。输入/输出接口(Input/Output接口，即I/O接口)也连接至总线。
以下部件连接至输入/输出接口:包括键盘、鼠标等的输入部分;包括诸如阴极射线管(Cathode Ray Tube,CRT)、液晶显示器(Liquid Crystal Display,LCD)等以及扬声器等的输出部分;包括硬盘等的存储部分;以及包括诸如局域网卡、调制解调器等的网络接口卡的通信部分。通信部分经由诸如因特网的网络执行通信处理。驱动器也根据需要连接至输入/输出接口。可拆卸介质,诸如磁盘、光盘、磁光盘、半导体存储器等等,根据需要安装在驱动器上,以便于从其上读出的计算机程序根据需要被安装入存储部分。
特别地，根据本申请的实施例，各个方法流程图中所描述的过程可以被实现为计算机软件程序。例如，本申请的实施例包括一种计算机程序产品，其包括承载在计算机可读介质上的计算机程序，该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中，该计算机程序可以通过通信部分从网络上被下载和安装，和/或从可拆卸介质被安装。在该计算机程序被中央处理器执行时，执行本申请的系统中限定的各种功能。
根据本申请的一个方面,提供了一种计算机可读存储介质,计算机设备的处理器从计算机可读存储介质读取该计算机程序,处理器执行该计算机程序,使得电子设备执行上述各种可选实现方式中提供的方法。
在一种可能的实现方式中，在本实施例中，本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令终端设备相关的硬件来完成，该程序可以存储于一计算机可读存储介质中，存储介质可以包括：闪存盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁盘或光盘等。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
上述实施例中的集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在上述计算机可读取的存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在存储介质中,包括若干指令用以使得一台或多台计算机设备(可为个人计算机、服务器或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。
在本申请的上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的客户端,可通过其它的方式实现。其中,以上所描述的装置实施例仅仅是示意性的,例如所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,单元或模块的间接耦合或通信连接,可以是电性或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
以上所述仅是本申请的优选实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本申请原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本申请的保护范围。

Claims (18)

  1. 一种虚拟模型的光照渲染方法,所述方法由电子设备执行,包括:
    获取待渲染虚拟模型的候选探点集合，其中，所述候选探点集合中的探点用于对所述待渲染虚拟模型进行光照渲染；
    获取所述待渲染虚拟模型与所述候选探点集合中各个探点的阻挡度;
    根据所述待渲染虚拟模型与所述候选探点集合中各个探点的阻挡度,对所述候选探点集合中的探点进行筛选,得到目标探点集合;
    获取所述目标探点集合中各个探点的球谐基系数,并根据所述目标探点集合中各个探点的球谐基系数,对所述待渲染虚拟模型进行光照渲染。
  2. 根据权利要求1所述的方法,所述根据所述待渲染虚拟模型与所述候选探点集合中各个探点的阻挡度,对所述候选探点集合中的探点进行筛选,得到目标探点集合,包括:
    将一组三角形中每个三角形分别作为当前三角形,根据所述当前三角形与所述候选探点集合中各个探点的阻挡度,确定所述当前三角形与所述候选探点集合中各个探点的关联度,其中,所述待渲染虚拟模型被划分成所述一组三角形;
    根据每个所述当前三角形与所述候选探点集合中各个探点的关联度,对所述候选探点集合中的探点进行筛选,得到所述目标探点集合。
  3. 根据权利要求2所述的方法,所述根据所述当前三角形与所述候选探点集合中各个探点的阻挡度,确定所述当前三角形与所述候选探点集合中各个探点的关联度,包括:
    确定所述当前三角形投射至一组检测区域中各个检测区域的投影面积,其中,所述一组检测区域包括所述候选探点集合中各个探点分别对应的检测区域;
    根据所述当前三角形投射至一组检测区域中各个检测区域的投影面积以及所述当前三角形与所述候选探点集合中各个探点的阻挡度,确定所述当前三角形与所述候选探点集合中的各个探点的关联度。
  4. 根据权利要求2所述的方法,所述根据每个所述当前三角形与所述候选探点集合中各个探点的关联度,对所述候选探点集合中的探点进行筛选,得到所述目标探点集合,包括:重复执行以下步骤,直到所述一组三角形中的每个所述当前三角形都关联有预定数量的探点,其中,所述目标探点集合包括所述一组三角形中每个当前三角形关联的探点,当前探点集合被初始化为所述候选探点集合:
    将所述当前探点集合中每个探点分别作为当前待处理探点,获取所述当前待处理探点与所述一组三角形的覆盖度,其中,所述当前待处理探点与所述一组三角形的覆盖度是所述当前待处理探点与所述一组三角形中每个所述当前三角形之间的覆盖度之和,所述当前待处理探点与一个当前三角形之间的覆盖度是根据所述一个当前三角形与所述当前待处理探点的关联度确定的;
    在所述当前探点集合中选取所述覆盖度最大的探点作为所述目标探点集合中的探点，并从所述当前探点集合中删除所述覆盖度最大的探点，其中，所述目标探点集合中的探点被根据所述每个探点对应的覆盖度与所述一组三角形中的当前三角形进行关联，所述每个探点对应的覆盖度为所述每个探点与所述一组三角形中每个三角形之间的覆盖度。
  5. 根据权利要求4所述的方法,所述获取所述当前待处理探点与所述一组三角形的覆盖度,包括:
    若所述一组三角形中包括未关联探点的第一三角形,获取所述当前待处理探点与每个所述第一三角形的第一覆盖度,其中,所述当前待处理探点与一个所述第一三角形之间的第一覆盖度是根据一个所述第一三角形与所述当前待处理探点的关联度得到的,所述当前待处理探点与所述一组三角形的覆盖度是所述当前待处理探点与每个所述第一三角形之间的第一覆盖度之和,所述当前待处理探点与所述一组三角形的覆盖度包括所述当前待处理探点与每个所述第一三角形的第一覆盖度;
    若所述一组三角形中包括已关联探点的第二三角形,获取所述当前待处理探点与每个所述第二三角形的第二覆盖度,其中,所述当前待处理探点与一个所述第二三角形之间的第二覆盖度是根据一个所述第二三角形与所述当前待处理探点的关联度、一个所述第二三角形与另一个探点的关联度、以及当前夹角确定得到的,所述当前待处理探点与所述一组三角形的覆盖度是所述当前待处理探点与每个所述第二三角形之间的第二覆盖度之和,所述当前夹角是所述当前待处理探点、一个所述第二三角形的质心以及所述另一个探点形成的夹角,所述第二三角形已关联的探点包括所述另一个探点,所述当前待处理探点与所述一组三角形的覆盖度包括所述当前待处理探点与每个所述第二三角形的第二覆盖度;
    若所述一组三角形中包括所述第一三角形以及所述第二三角形,整合所述当前待处理探点与每个所述第一三角形的第一覆盖度以及所述当前待处理探点与每个所述第二三角形的第二覆盖度,得到所述当前待处理探点与所述一组三角形的覆盖度。
  6. 根据权利要求5所述的方法,所述整合所述当前待处理探点与每个所述第一三角形的第一覆盖度以及所述当前待处理探点与每个所述第二三角形的第二覆盖度,得到所述当前待处理探点与所述一组三角形的覆盖度,包括:
    获取所述第一三角形对应的第一系数、以及所述第二三角形对应的第二系数,其中,所述第一系数大于所述第二系数;
    获取所述第一覆盖度与所述第一系数的第一乘积值、以及所述第二覆盖度与所述第二系数的第二乘积值;
    对所述第一乘积值和所述第二乘积值进行求和,得到所述当前待处理探点与所述一组三角形的覆盖度。
  7. 根据权利要求5所述的方法,所述获取所述当前待处理探点与每个所述第二三角形的第二覆盖度,包括:
    若所述第二三角形包括第一关联三角形，且所述第一关联三角形已关联的探点数量等于1个，将所述当前待处理探点与每个所述第一关联三角形的第三覆盖度确定为所述当前待处理探点与每个所述第二三角形的第二覆盖度，其中，所述当前待处理探点与一个所述第一关联三角形之间的第三覆盖度是根据一个所述第一关联三角形与所述当前待处理探点的关联度、一个所述第一关联三角形与所述第一关联三角形已关联的探点的关联度、以及第一夹角确定得到的，所述第一夹角是所述当前待处理探点、所述一个所述第一关联三角形的质心以及所述第一关联三角形已关联的探点形成的夹角；
    若所述第二三角形包括第二关联三角形,且所述第二关联三角形已关联的探点数量大于1个,获取所述第二关联三角形已关联的探点与所述第二关联三角形的第四覆盖度、以及所述当前待处理探点与所述第二关联三角形的第五覆盖度,并在所述第五覆盖度大于所述第四覆盖度的情况下,将所述当前待处理探点与所述第二关联三角形的第五覆盖度确定为所述当前待处理探点与每个所述第二三角形的第二覆盖度,其中,所述当前待处理探点与一个所述第二关联三角形之间的第五覆盖度是根据一个所述第二关联三角形与所述当前待处理探点的关联度、一个所述第二关联三角形与所述第二关联三角形已关联的探点的关联度、以及第二夹角确定得到的值,所述第二夹角是所述当前待处理探点、所述一个所述第二关联三角形的质心以及所述第二关联三角形已关联的探点形成的夹角。
  8. 根据权利要求2所述的方法,所述根据每个所述当前三角形与所述候选探点集合中各个探点的关联度,对所述候选探点集合中的探点进行筛选,得到所述目标探点集合,包括:
    在所述候选探点集合中确定与所述当前三角形的关联度最大的第一当前探点;
    获取所述候选探点集合中除所述第一当前探点之外的各个探点与所述当前三角形的质心以及所述第一当前探点形成的夹角,得到一组夹角;
    根据所述一组夹角,以及所述当前三角形与除所述第一当前探点之外的各个探点的关联度,从除所述第一当前探点之外的各个探点中确定出第二当前探点;
    将所述第一当前探点以及所述第二当前探点确定为所述目标探点集合中的探点。
  9. 根据权利要求1所述的方法,所述根据所述目标探点集合中各个探点的球谐基系数,对所述待渲染虚拟模型进行光照渲染,包括:
    将所述目标探点集合中各个探点的球谐基系数保存为待处理贴图;
    在需要对所述待渲染虚拟模型进行渲染时,从所述待处理贴图中确定所述待渲染虚拟模型中各个顶点对应的探点的球谐基系数;
    根据所述待渲染虚拟模型中各个顶点对应的探点的球谐基系数,确定所述待渲染虚拟模型中各个顶点的球谐基系数;
    根据所述待渲染虚拟模型中各个顶点的球谐基系数,对所述待渲染虚拟模型进行光照渲染。
  10. 根据权利要求9所述的方法,所述根据所述待渲染虚拟模型中各个顶点的球谐基系数,对所述待渲染虚拟模型进行光照渲染,包括:
    在所述待渲染虚拟模型属于远景虚拟模型的情况下,获取所述待渲染虚拟模型中各个顶点的第一球谐基子系数,并根据所述第一球谐基子系数对所述待渲染虚拟模型进行光照渲染;
    在所述待渲染虚拟模型属于近景虚拟模型的情况下,获取所述待渲染虚拟模型中各个顶点的第二球谐基子系数,并根据所述第二球谐基子系数对所述待渲染虚拟模型进行光照渲染,其中,所述第二球谐基子系数为根据所述第一球谐基子系数计算得到的系数。
  11. 根据权利要求9所述的方法,所述将所述目标探点集合中各个探点的球谐基系数保存为待处理贴图,包括:
    在获取到所述目标探点集合中各个探点的第三球谐基子系数以及第四球谐基子系数的情况下,将所述第三球谐基子系数以第一数据格式保存为所述待处理贴图,以及将所述第四球谐基子系数以第二数据格式保存为所述待处理贴图,其中,所述第四球谐基子系数是根据所述第三球谐基子系数计算得到的系数,所述第一数据格式所占用的字节数大于所述第二数据格式所占用的字节数。
  12. 根据权利要求1至11中任一项所述的方法,所述获取所述待渲染虚拟模型与所述候选探点集合中各个探点的阻挡度,包括:
    将一组三角形中每个三角形分别作为当前三角形,从所述当前三角形发出一组检测射线,其中,所述待渲染虚拟模型被划分成所述一组三角形;
    确定所述一组检测射线中与所述候选探点集合中各个探点接触的检测射线的数量;
    根据所述一组检测射线中与所述候选探点集合中各个探点接触的检测射线的数量、以及所述一组检测射线中的检测射线的数量,确定所述当前三角形与所述候选探点集合中各个探点的阻挡度。
  13. 根据权利要求1至11中任一项所述的方法,所述获取所述待渲染虚拟模型与所述候选探点集合中各个探点的阻挡度,包括:获取所述待渲染虚拟模型被划分成的各个三角形与所述候选探点集合中各个探点的阻挡度;
    所述根据所述待渲染虚拟模型与所述候选探点集合中各个探点的阻挡度,对所述候选探点集合中的探点进行筛选,得到目标探点集合,包括:根据所述待渲染虚拟模型被划分成的各个三角形与所述候选探点集合中各个探点的阻挡度,对所述候选探点集合中的探点进行筛选,得到所述目标探点集合,其中,所述目标探点集合中各个探点与所述待渲染虚拟模型被划分成的各个三角形之间具有索引关系;
    所述获取所述目标探点集合中的各个探点的球谐基系数,并根据所述目标探点集合中的各个探点的球谐基系数,对所述待渲染虚拟模型进行光照渲染,包括:获取所述目标探点集合中的各个探点的球谐基系数,并根据所述索引关系以及所述目标探点集合中各个探点的球谐基系数,对所述待渲染虚拟模型进行光照渲染。
  14. 根据权利要求1至11中任一项所述的方法,所述获取待渲染虚拟模型的候选探点集合,包括:
    获取所述待渲染虚拟模型的原始探点集合,其中,所述待渲染虚拟模型被划分成一组三角形,所述原始探点集合包括所述一组三角形中的各三角形对应的一个或多个探点;
    从所述原始探点集合中过滤掉无效的探点,得到所述候选探点集合。
  15. 一种虚拟模型的光照渲染装置,所述装置部署在电子设备上,包括:
    第一获取单元，用于获取待渲染虚拟模型的候选探点集合，其中，所述候选探点集合中的探点用于对所述待渲染虚拟模型进行光照渲染；
    第二获取单元,用于获取所述待渲染虚拟模型与所述候选探点集合中各个探点的阻挡度;
    筛选单元,用于根据所述待渲染虚拟模型与所述候选探点集合中各个探点的阻挡度,对所述候选探点集合中的探点进行筛选,得到目标探点集合;
    第三获取单元,用于获取所述目标探点集合中各个探点的球谐基系数,并根据所述目标探点集合中各个探点的球谐基系数,对所述待渲染虚拟模型进行光照渲染。
  16. 一种计算机可读的存储介质，所述计算机可读的存储介质包括存储的计算机程序，其中，所述计算机程序被电子设备运行时执行所述权利要求1至14任一项中所述的方法。
  17. 一种计算机程序产品,包括计算机程序,该计算机程序被处理器执行时实现权利要求1至14任一项中所述方法的步骤。
  18. 一种电子设备,包括存储器和处理器,所述存储器中存储有计算机程序,所述处理器被设置为通过所述计算机程序执行所述权利要求1至14任一项中所述的方法。
PCT/CN2023/075919 2022-04-02 2023-02-14 虚拟模型的光照渲染方法、装置和存储介质及电子设备 WO2023185287A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210344256.9A CN116934947A (zh) 2022-04-02 2022-04-02 虚拟模型的光照渲染方法、装置和存储介质及电子设备
CN202210344256.9 2022-04-02

Publications (1)

Publication Number Publication Date
WO2023185287A1 true WO2023185287A1 (zh) 2023-10-05


Also Published As

Publication number Publication date
CN116934947A (zh) 2023-10-24

