WO2022188460A1 - Lighting rendering method, apparatus, electronic device and storage medium - Google Patents

Lighting rendering method, apparatus, electronic device and storage medium

Info

Publication number
WO2022188460A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
information
distance field
spatial range
texture
Prior art date
Application number
PCT/CN2021/131872
Other languages
English (en)
French (fr)
Inventor
李文耀
Original Assignee
网易(杭州)网络有限公司
Priority date
Filing date
Publication date
Application filed by 网易(杭州)网络有限公司
Priority to US18/256,055 (published as US20240062449A1)
Publication of WO2022188460A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G06T 15/04 Texture mapping
    • G06T 15/06 Ray-tracing
    • G06T 15/50 Lighting effects
    • G06T 15/506 Illumination models
    • G06T 2210/00 Indexing scheme for image generation or computer graphics
    • G06T 2210/21 Collision detection, intersection
    • G06T 2210/56 Particle system, point based geometry or rendering
    • G06T 2215/00 Indexing scheme for image rendering
    • G06T 2215/12 Shadow map, environment map

Definitions

  • the present disclosure relates to the field of rendering technologies, and in particular, to a lighting rendering method, an apparatus, an electronic device, and a storage medium.
  • Some advanced rendering effects in games, such as diffuse global illumination, specular reflection, soft shadows, and ambient occlusion, are indirect lighting effects that can be achieved through raymarching (ray stepping) technology.
  • the length of each ray step depends on the shortest distance between the current ray position and the virtual models in the scene, that is, the SDF (Signed Distance Field) value.
  • in the related art, the SDF data of each model is calculated offline and stored in a small 3D texture.
  • at runtime, translation, rotation and scaling are applied to the 3D texture containing the model's SDF data, so that the real SDF information of the model is obtained and then merged into the SDF data of the whole scene, as sketched below.
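  • As a concrete illustration of this related-art pipeline, the following Python sketch samples a model's baked SDF by transforming a world-space point into the model's local space, then merges all model SDFs into a scene SDF. The `Model` fields (`inv_rotation`, `translation`, `scale`, `sdf_texture`, `bb_min`, `bb_max`) and helper names are hypothetical stand-ins, not part of the patent.

```python
import numpy as np

def sample_sdf_texture(texture, local_pos, bb_min, bb_max):
    """Nearest-neighbor lookup in a baked 3D SDF texture (trilinear in practice)."""
    res = np.asarray(texture.shape)
    uvw = (np.asarray(local_pos) - bb_min) / (bb_max - bb_min)  # normalize to [0, 1]
    ijk = np.clip((uvw * res).astype(int), 0, res - 1)
    return texture[tuple(ijk)]

def sample_model_sdf(world_pos, model):
    """Sample one model's offline-baked SDF at a world-space position.

    `model` is assumed to carry the placement transform (translation, rotation,
    uniform scale) plus its local-space texture bounds; names are illustrative.
    """
    local = model.inv_rotation @ (np.asarray(world_pos) - model.translation)
    local /= model.scale                                        # undo uniform scaling
    d = sample_sdf_texture(model.sdf_texture, local, model.bb_min, model.bb_max)
    return d * model.scale                                      # distances scale with the model

def scene_sdf(world_pos, models):
    # The whole-scene SDF is the minimum over all models: the nearest surface
    # of any model bounds the safe step length for ray marching.
    return min(sample_model_sdf(world_pos, m) for m in models)
```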
  • an embodiment of the present disclosure provides a lighting rendering method, including:
  • Color information of the scene intersection is determined according to the incident light information and the material information of the scene intersection, where the color information is used for indirect lighting calculation.
  • the directional distance field information of the scene corresponding to the updated current frame is information obtained by reading the directional distance field texture map corresponding to the updated current frame, wherein different map pixels in the directional distance field texture map correspond to different spatial ranges of the game world space determined according to the position of the virtual camera, and the directional distance field information of the scene corresponding to each spatial range is stored in the corresponding map pixel.
  • the directional distance field texture map includes multiple layers of texture maps, and each layer of texture maps uses the same pixel size to store the directional distance field information of a corresponding spatial range of the scene.
  • the spatial range corresponding to each layer of texture maps is determined according to the distance between that spatial range of the game world space and the virtual camera.
  • the directional distance field information of the scene corresponding to the spatial range stored by each map pixel in the directional distance field texture map is calculated in the following manner:
  • according to the multi-layer scene depth information and the depth value of the spatial range corresponding to each texture pixel, or according to the directional distance field information of the scene corresponding to the spatial range stored by each texture pixel in the directional distance field texture map of the previous frame, the directional distance field information of the scene corresponding to the spatial range stored by each map pixel in the directional distance field texture map of the current frame is determined.
  • the generating multi-layer scene depth information according to the current position information of the virtual camera includes:
  • the first depth layer is used to indicate the depth information of the side of the scene close to the virtual camera;
  • the second depth layer is used to indicate the depth information of the side of the scene away from the virtual camera;
  • the multi-layer scene depth information is generated according to the first distance and the second distance of each scene.
  • the generating the multi-layer scene depth information according to the first distance and the second distance of each scene includes:
  • the determining, according to the multi-layer scene depth information and the depth value of the spatial range corresponding to each map pixel, of the directional distance field information of the scene corresponding to the spatial range stored by each map pixel in the directional distance field texture map of the current frame includes:
  • comparing the depth value of the spatial range corresponding to the current texture pixel with the multi-layer scene depth information, and determining, according to the comparison result, the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel.
  • the determining, according to the comparison result, the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel includes:
  • determining, according to the comparison result, the depth value of the spatial range corresponding to the current texture pixel, and the first depth information and the second depth information, the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel.
  • the determining, according to the directional distance field information of the scene corresponding to the spatial range stored by each map pixel in the directional distance field texture map of the previous frame, of the directional distance field information of the scene corresponding to the spatial range stored by each map pixel in the directional distance field texture map of the current frame includes:
  • reusing the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel in the directional distance field texture map of the previous frame, where the current texture pixel is any texture pixel among the texture pixels.
  • the method further includes:
  • determining, according to the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel and the directional distance field information of the scene corresponding to the spatial range stored by the neighbor texture pixels of the current texture pixel, a correction parameter for the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel; and
  • correcting, according to the correction parameter, the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel.
  • the determining, according to the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel and the directional distance field information of the scene corresponding to the spatial range stored by the neighbor texture pixels of the current texture pixel, of the correction parameters for the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel includes:
  • if the sign of the directional distance field information of the scene corresponding to the spatial range stored by any neighbor texture pixel is opposite to that stored by the current texture pixel, the correction parameter of the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel is half of the size of the spatial range.
  • the determining, according to the scene directional distance field information of the corresponding spatial range stored by the current texture pixel and that stored by the neighbor texture pixels of the current texture pixel, of the correction parameters of the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel includes:
  • if the signs of the directional distance field information stored by all neighbor texture pixels are the same as that stored by the current texture pixel, traversing each neighbor texture pixel, calculating the sum of the distance from the neighbor texture pixel to the current texture pixel and the absolute value of the directional distance field information stored by that neighbor, determining the minimum sum value among these sums, and determining the correction parameter as the minimum sum value.
  • an embodiment of the present disclosure further provides a lighting rendering device, including: a determination module and an emission module;
  • the determining module is used to determine the current scene shading point;
  • the emission module is configured to emit virtual stepping rays according to the current scene shading point;
  • the determining module is configured to determine the stepping length corresponding to the virtual stepping ray according to the updated directional distance field information of the scene corresponding to the current frame;
  • the determining module is configured to control the virtual stepping ray to extend into the scene according to the stepping length to determine at least one scene intersection;
  • the determining module is configured to determine the color information of the scene intersection according to the incident light information and the material information of the scene intersection, wherein the color information is used for indirect lighting calculation.
  • the directional distance field information of the scene corresponding to the updated current frame is information obtained by reading the directional distance field texture map corresponding to the updated current frame, wherein different map pixels in the directional distance field texture map correspond to different spatial ranges of the game world space determined according to the position of the virtual camera, and the directional distance field information of the scene corresponding to each spatial range is stored in the corresponding map pixel.
  • the directional distance field texture map includes multiple layers of texture maps, and each layer of texture maps uses the same pixel size to store the directional distance field information of a corresponding spatial range of the scene.
  • the spatial range corresponding to each layer of texture maps is determined according to the distance between that spatial range of the game world space and the virtual camera.
  • the apparatus further includes: an acquiring module and a generating module;
  • the acquisition module is used to acquire the current position information of the virtual camera
  • the generating module is configured to generate multi-layer scene depth information according to the current position information of the virtual camera;
  • the determining module is further configured to determine, according to the multi-layer scene depth information and the depth value of the spatial range corresponding to each texture pixel, or according to the directional distance field information of the scene corresponding to the spatial range stored by each texture pixel in the directional distance field texture map of the previous frame, the directional distance field information of the scene corresponding to the spatial range stored by each map pixel in the directional distance field texture map of the current frame.
  • the generating module is specifically configured to: determine each scene captured by the virtual camera according to the current position information; respectively determine a first depth layer and a second depth layer of each scene, where the first depth layer is used to indicate the depth information of the side of the scene close to the virtual camera, and the second depth layer is used to indicate the depth information of the side of the scene away from the virtual camera; determine a first distance and a second distance from the first depth layer and the second depth layer of each scene to the virtual camera, respectively; and generate the multi-layer scene depth information according to the first distance and the second distance of each scene.
  • the generating module is specifically configured to: sort the first distances of the scenes in order from near to far to determine the first depth information of the multi-layer scene depth information, where the first depth information is used to indicate the front-layer depth information of each scene facing the virtual camera; and sort the second distances of the scenes in order from near to far to determine the second depth information of the multi-layer scene depth information, where the second depth information is used to indicate the reverse-layer depth information of each scene facing away from the virtual camera.
  • the determining module is specifically configured to: determine that the spatial range corresponding to the current texture pixel is within the field of view of the virtual camera; compare the depth value of the spatial range with the first depth information and the second depth information of the multi-layer scene depth information, where the current texture pixel is any texture pixel among the texture pixels; and determine, according to the comparison result, the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel.
  • the determining module is specifically configured to determine, according to the comparison result, the depth value of the spatial range corresponding to the current texture pixel, and the first depth information and the second depth information, the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel.
  • the determining module is specifically configured to: determine that the spatial range corresponding to the current texture pixel is not within the field of view of the virtual camera; and determine the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel in the directional distance field texture map of the previous frame as the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel in the directional distance field texture map of the current frame, where the current texture pixel is any one of the texture pixels.
  • the apparatus further includes: a correction module
  • the determining module is further configured to determine, according to the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel and the directional distance field information of the scene corresponding to the spatial range stored by the neighbor texture pixels of the current texture pixel, the correction parameter for the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel;
  • the correction module is configured to correct the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel according to the correction parameter.
  • the determining module is specifically configured to: if the sign of the directional distance field information of the scene corresponding to the spatial range stored by any neighbor texture pixel is opposite to the sign of the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel, determine that the correction parameter of the scene directional distance field information corresponding to the spatial range stored by the current texture pixel is half the size of the spatial range.
  • the determining module is specifically configured to: if the signs of the directional distance field information of the scene corresponding to the spatial ranges stored by all the neighbor texture pixels are the same as the sign of the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel, traverse each neighbor texture pixel, calculate the sum of the distance from the neighbor texture pixel to the current texture pixel and the absolute value of the directional distance field information of the scene corresponding to the spatial range stored by that neighbor texture pixel, and determine the minimum sum value from the sum values corresponding to the neighbor texture pixels; and if the absolute value of the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel is greater than the minimum sum value, determine the correction parameter of the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel as the minimum sum value.
  • embodiments of the present disclosure provide an electronic device, including: a processor, a storage medium, and a bus, where the storage medium stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the storage medium communicate via the bus, and the processor executes the machine-readable instructions to perform the steps of the method provided in the first aspect.
  • an embodiment of the present disclosure provides a computer-readable storage medium, where a computer program is stored on the storage medium, and when the computer program is run by a processor, the steps of the method provided in the first aspect are executed.
  • FIG. 1 is a first schematic flowchart of a lighting rendering method provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of a directional distance field texture map according to an embodiment of the present disclosure
  • FIG. 3 is a second schematic flowchart of a lighting rendering method provided by an embodiment of the present disclosure
  • FIG. 4 is a third schematic flowchart of a lighting rendering method provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of a multi-layer scene depth provided by an embodiment of the present disclosure.
  • FIG. 6 is a fourth schematic flowchart of a lighting rendering method provided by an embodiment of the present disclosure.
  • FIG. 7 is a fifth schematic flowchart of a lighting rendering method provided by an embodiment of the present disclosure.
  • FIG. 8 is a sixth schematic flowchart of a lighting rendering method provided by an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram of a lighting rendering apparatus according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • SDF (Signed Distance Field, directed distance field): given any position point in space, the SDF returns the closest distance from this position point to the scene objects; if the position point is outside an object, it returns a positive value, and if it is inside an object, it returns a negative value.
  • RayMarching: ray stepping.
  • the ray is emitted from the camera toward each pixel of the screen, and the ray intersects the scene in a step-by-step manner.
  • the distance of each step is determined by the SDF value at the current ray position; if the step length is too large, thin objects may be skipped and the actual intersection point may be missed.
  • the material, texture and other information of the scene surface can be obtained according to the position of the intersection point, and the lighting can be calculated in combination with the light source information.
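  • To make the stepping rule concrete, here is a minimal sphere-tracing sketch in Python. It is a sketch only: `scene_sdf` is assumed to follow the sign convention defined above, and the step limit and thresholds are illustrative values.

```python
def ray_march(origin, direction, scene_sdf, max_steps=128, hit_eps=1e-3, max_dist=1e4):
    """Advance a ray by the SDF value at each step and return the hit point.

    Returns None if the ray leaves the scene without intersecting anything.
    """
    t = 0.0
    for _ in range(max_steps):
        pos = [origin[i] + t * direction[i] for i in range(3)]
        d = scene_sdf(pos)
        if d < hit_eps:     # close enough to a surface: count it as an intersection
            return pos
        t += d              # the SDF value is the largest step that cannot overshoot
        if t > max_dist:
            break           # the ray escaped the scene
    return None
```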
  • Map pixel of the SDF texture map: refers to a pixel grid in the texture map; each pixel grid corresponds to a certain range of the game scene space.
  • Direct lighting: the lighting information generated by light from the light source directly irradiating the scene.
  • Indirect lighting: the lighting information generated by light from the light source illuminating the scene after one or more reflections.
  • FIG. 1 is a first schematic flowchart of a lighting rendering method provided by an embodiment of the present disclosure
  • the execution body of the method may be a terminal or a server.
  • the method may include:
  • the current scene may refer to the target rendering model in the game scene to be rendered.
  • the to-be-rendered game scene may include multiple to-be-rendered models, and the to-be-rendered models may be virtual buildings, virtual characters, virtual terrains, and the like in the to-be-rendered game scene.
  • the virtual camera can emit a virtual stepping ray toward the current scene, and the point on the current scene hit by the virtual stepping ray is determined as the current scene shading point; the shading point is also the point where lighting rendering needs to be performed. After lighting rendering of all the shading points of the current scene, the rendering of the current scene is complete.
  • the shading point can be an area in the scene or a specific point, which is determined according to the actual situation.
  • Some advanced rendering effects of the game can be achieved by indirect lighting rendering of the scene.
  • the method is mainly applied to the calculation of the indirect illumination of the scene, and the calculation of the direct illumination may be calculated according to the known direct illumination information.
  • the lighting information generated by reflection from other scenes in the game scene can be calculated according to information such as the surface material and texture obtained at the intersection of the stepping ray and those scenes.
  • a virtual stepping ray can be randomly emitted into the scene from the current scene shading point, so that the virtual stepping ray finally determines the intersection point with other scenes through stepping.
  • since the virtual stepping rays are emitted randomly, some of them may intersect with the scene, all of them may intersect with the scene, or none of them may intersect with the scene.
  • if a virtual stepping ray intersects with the scene, it means that light emitted by the light source in the game scene is reflected by that scene to the current scene shading point.
  • S103 Determine the stepping length corresponding to the virtual stepping ray according to the updated directional distance field information of the scene corresponding to the current frame.
  • when the virtual stepping ray is emitted forward from the current scene shading point, it advances step by step according to the stepping length, and the stepping length depends on the shortest distance between the current position of the virtual stepping ray and the scene, that is, the directed distance field information at the current position of the ray.
  • since the virtual stepping ray advances step by step, each frame corresponds to a further step, and the traveling length of each step can be determined according to the directional distance field information of the scene in the current frame, until the virtual stepping ray intersects the scene.
  • the virtual stepping ray intersecting the scene means that the distance between the current position of the ray and the scene is within a preset distance, or the position is on the surface of the scene; that is, as long as the ray steps close enough to the scene, or the remaining spacing is acceptable, the virtual stepping ray is considered to intersect the scene.
  • the position of a scene may change; for example, if the scene is a virtual vehicle, its position may change in real time.
  • in this case, the directional distance field information of the scene corresponding to the current frame also changes. The directional distance field information of the scene corresponding to the current frame is therefore updated, and the step length corresponding to the virtual stepping ray is determined according to the updated directional distance field information of the scene corresponding to the current frame, thereby ensuring the accuracy of the obtained step length.
  • in each frame, the step length of the virtual stepping ray for the current frame can be determined, so that the virtual stepping ray can be controlled to extend forward according to the step length determined in each frame until the stepping ray intersects at least one scene, and the intersection point is determined. The at least one scene is a scene other than the current scene (the target scene).
  • S105 Determine color information of the scene intersection according to the incident light information and the material information of the scene intersection, where the color information is used for indirect lighting calculation.
  • the incident light information may include: light color, intensity, angle, material, attenuation, and the like.
  • the material information of the scene intersection point may include information such as material and texture of the scene surface at the scene intersection point position.
  • the color information of each scene intersection point may be determined according to the incident light information and the material information of each scene intersection point.
  • an indirect lighting rendering algorithm can be used to perform indirect lighting rendering on the current scene according to the color information of the intersection points of each scene and the color information of the current scene.
  • the indirect lighting rendering algorithm may be executed with reference to the existing lighting rendering algorithm.
  • the lighting rendering method includes: determining the current scene shading point; emitting a virtual stepping ray according to the current scene shading point; determining the step length corresponding to the virtual stepping ray according to the updated directional distance field information of the scene corresponding to the current frame; controlling the virtual stepping ray to extend into the scene according to the step length to determine at least one scene intersection; and determining the color information of the scene intersection according to the incident light information and the material information of the scene intersection, where the color information is used for indirect lighting calculation.
  • the directional distance field information of the scene corresponding to the current frame is updated in real time, which improves the accuracy of the obtained directional distance field information and makes it possible to obtain an accurate directional distance field in a dynamic scene. Based on this more accurate directional distance field information, the stepping of the virtual stepping ray can be controlled accurately, so that the obtained scene intersection is more accurate and accurate indirect lighting is obtained, thereby improving the rendering effect of indirect lighting in dynamic scenes.
  • the directional distance field information of the scene corresponding to the updated current frame is the information obtained by reading the directional distance field texture map corresponding to the updated current frame, wherein different map pixels in the directional distance field texture map correspond to different spatial ranges of the game world space determined according to the position of the virtual camera, and the directional distance field information of the scene corresponding to each spatial range is stored in the corresponding texture pixel.
  • the directional distance field information of the scene can be stored in a directional distance field texture map, that is, an SDF texture map, so that the information can be obtained efficiently by reading the SDF texture map while the game is running.
  • the SDF texture map is composed of multiple texture pixels; a texture pixel is a pixel grid, and each texture pixel maps a different spatial range of the game world space, so that each texture pixel stores the directional distance field information of the scene in its corresponding spatial range.
  • the directional distance field information of any point in the spatial range can be represented by the directional distance field information of the scene in the spatial range. That is, the directional distance field information of all points in a small spatial range is uniformly represented.
  • the directional distance field texture map includes multi-layer texture maps, each layer of texture maps uses the same pixel size to store the scene directional distance field information corresponding to the spatial range, and the spatial range corresponding to each layer of texture maps is based on the game world space. The distance between the spatial extent and the virtual camera is determined.
  • FIG. 2 is a schematic diagram of a directional distance field texture map provided by an embodiment of the present disclosure.
  • as shown in FIG. 2, the established SDF texture map can be a multi-layer texture map.
  • for example, the SDF texture map can be a three-layer 3D texture map established around the virtual camera, and the layers respectively map spatial ranges of the game world space at different distances from the virtual camera.
  • each layer in FIG. 2 represents the spatial range mapped by that layer of the SDF texture map.
  • the spatial range of the scene directional distance field information stored in each layer of texture maps is determined by the position of the virtual camera.
  • when the position of the virtual camera changes, the spatial range corresponding to the texture maps also changes.
  • the number of texture pixels in each layer of texture maps does not change.
  • Each layer of texture maps uses the same pixel size to store the directional distance field information of the corresponding spatial range. That is, each layer of texture maps is a texture map with the same length, width and height, so the video memory occupied by the SDF texture map is always fixed.
  • the three layers of texture maps adopt the same pixel size but store the scene at different densities.
  • that is, different spatial ranges are mapped using SDF texture maps of the same size (same pixel size) to achieve different mapping precisions, as illustrated below.
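  • The sketch below illustrates this layered layout: three camera-centered bounding boxes whose world coverage grows with distance while the pixel resolution stays fixed, so farther layers have coarser mapping precision. The resolution and extents are invented for the example.

```python
import numpy as np

TEXTURE_RES = 64                            # same pixel size for every layer (illustrative)
LAYER_HALF_EXTENTS = [25.0, 100.0, 400.0]   # world-space half sizes, invented values

def build_layer_bounds(camera_pos):
    """Return (BBMin, BBMax, texel_size) per layer, centered on the camera."""
    cam = np.asarray(camera_pos, dtype=float)
    bounds = []
    for half in LAYER_HALF_EXTENTS:
        bb_min, bb_max = cam - half, cam + half
        texel_size = (2.0 * half) / TEXTURE_RES   # world size per texel grows with distance
        bounds.append((bb_min, bb_max, texel_size))
    return bounds
```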
  • the stepping length can be determined by reading the directional distance field information corresponding to the target space range from the SDF texture map.
  • the coverage range of the SDF texture map in the game world space is defined as an AABB (axis-aligned bounding box); the minimum of the bounding box coordinates is BBMin, and the maximum is BBMax. Each texture pixel (texture coordinate UVW) of the SDF texture map can then be accurately mapped to a spatial range (WorldPos), and vice versa; the mapping formula can be as follows:
  • UVW = (WorldPos - BBMin) / (BBMax - BBMin)
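  • Written out as code, the mapping and its inverse are a direct transcription of this formula (NumPy arrays stand in for the 3D vectors):

```python
import numpy as np

def world_to_uvw(world_pos, bb_min, bb_max):
    # UVW = (WorldPos - BBMin) / (BBMax - BBMin); each component falls in [0, 1]
    return (np.asarray(world_pos) - bb_min) / (bb_max - bb_min)

def uvw_to_world(uvw, bb_min, bb_max):
    # inverse mapping: WorldPos = BBMin + UVW * (BBMax - BBMin)
    return bb_min + np.asarray(uvw) * (bb_max - bb_min)
```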
  • FIG. 3 is a second schematic flowchart of a lighting rendering method provided by an embodiment of the present disclosure. Optionally, the directional distance field information of the scene corresponding to the spatial range stored by each map pixel in the directional distance field texture map is calculated as follows:
  • the current position information of the virtual camera may be acquired in real time, and the combination of the virtual camera and the light source may be equivalent to the eyes of an observer.
  • the virtual stepping ray emitted by the virtual camera toward the current scene can pass through the near clipping plane of the screen and hit the scene, where the near clipping plane can be composed of multiple pixels, and the size of the near clipping plane can be the same as or different from the size of the display area of the terminal.
  • the picture displayed on the near clipping plane can be equivalent to the picture displayed on a mobile phone screen.
  • the multi-layer scene depth information of the virtual camera at the current position can be generated.
  • for example, front-layer depth information and reverse-layer depth information are generated; the specific implementation can refer to the following embodiments.
  • the directional distance field information of the scene corresponding to the spatial range stored by each map pixel in the SDF texture map may change due to the movement of the scene or the movement of the virtual camera; that is, the directional distance field information of the scene corresponding to the spatial range stored by each texture pixel is not fixed from frame to frame. If the stored scene directional distance field information is not updated in time, the directional distance field information that is read will be inaccurate, which makes the finally obtained scene intersection point wrong.
  • the directional distance field information of the scene corresponding to the spatial range stored by each texture pixel in the directional distance field texture map of the previous frame can be used to determine the directional distance field information of the scene corresponding to the spatial range stored by each map pixel in the directional distance field texture map of the current frame.
  • alternatively, the multi-layer scene depth information generated above and the depth value of the spatial range corresponding to each texture pixel can be used to determine the directional distance field information of the scene corresponding to the spatial range stored by each map pixel in the directional distance field texture map of the current frame.
  • the depth value of the spatial range corresponding to each texture pixel may be determined according to the distance between the spatial range and the current position of the virtual camera.
  • FIG. 4 is a third schematic flowchart of a lighting rendering method provided by an embodiment of the present disclosure
  • FIG. 5 is a schematic diagram of a multi-layer scene depth provided by an embodiment of the present disclosure.
  • the multi-layer scene depth information is generated according to the current position information of the virtual camera, which may include:
  • the lens capture range of the virtual camera is limited and cannot capture all the scenes in the game world space. For example, only scenes within 100 meters from the current position of the virtual camera can be captured. Based on the current position of the virtual camera, each scene captured by the virtual camera can be determined.
  • S402 Determine the first depth layer and the second depth layer of each scene respectively, where the first depth layer is used to indicate the depth information of the side of the scene close to the virtual camera, and the second depth layer is used to indicate the depth information of the side of the scene away from the virtual camera.
  • for each scene, the layer close to the virtual camera is determined as the first depth layer, and the layer far from the virtual camera is determined as the second depth layer.
  • the first depth layers of scene A, scene B, and scene C are D1, D2, and D3, respectively, and the second depth layers are D11, D22, and D33, respectively.
  • S403 Determine the first distance and the second distance from the first depth layer and the second depth layer of each scene to the virtual camera, respectively.
  • the first distance of each scene can be obtained according to the distance between the first depth layer of each scene and the current position of the virtual camera. Similarly, the second distance of each scene is obtained according to the distance between the second depth layer of each scene and the current position of the virtual camera.
  • S404 Generate multi-layer scene depth information according to the first distance and the second distance of each scene.
  • the multi-layer scene depth information can be generated by combining the first distance and the second distance of each scene.
  • step S404 generating multi-layer scene depth information according to the first distance and the second distance of each scene, which may include:
  • the multi-layer scene depth information may include first depth information and second depth information.
  • the first depth information can be obtained by sorting the first distances of the scenes in order from near to far according to each scene's distance from the virtual camera. It can be seen from FIG. 5 that scene A, scene B, and scene C are arranged from near to far relative to the virtual camera. Assuming that the first distances of scene A, scene B, and scene C are D1, D2, and D3, respectively, the first depth information of the multi-layer scene depth information is: D1, D2, D3.
  • S602. Sort the second distances of each scene in order from near to far, and determine the second depth information of the multi-layer scene depth information, where the second depth information is used to indicate the reverse-layer depth information of each scene facing away from the virtual camera.
  • the second depth information of the multi-layer scene depth information is: D11, D22, and D33.
  • the first depth information is the front-layer depth information of the multi-layer scene depth information, and the second depth information is the reverse-layer depth information of the multi-layer scene depth information.
  • the depth peeling technology is used to generate the multi-layer scene depth information at the current virtual camera position, so that it can be accurately known whether any spatial point in the current virtual camera field of view is inside or outside a scene; that is, the sign of the calculated directional distance field information of the scene is reliable.
  • by relying on the multi-layer scene depth information to calculate the sign of the directional distance field information of each scene, the deviation of the directional distance field information caused by the parallax problem can be avoided. A sketch of assembling these depth lists follows.
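  • A simplified sketch of assembling the two depth lists, assuming each scene object can already report the distances of its near (first) and far (second) depth layers; real depth peeling would extract these layers on the GPU over several render passes.

```python
def build_multilayer_depth(scenes, camera_pos):
    """Return (first_depth_info, second_depth_info), each sorted near-to-far.

    `scenes` is assumed to expose the distances of its first (front) and
    second (back) depth layers to the camera, as described above.
    """
    first = sorted(s.first_layer_distance(camera_pos) for s in scenes)    # D1, D2, D3...
    second = sorted(s.second_layer_distance(camera_pos) for s in scenes)  # D11, D22, D33...
    return first, second
```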
  • FIG. 7 is a fifth schematic flowchart of a lighting rendering method provided by an embodiment of the present disclosure.
  • the determining, according to the multi-layer scene depth information and the depth value of the spatial range corresponding to each texture pixel, of the directional distance field information of the scene corresponding to the spatial range stored by each map pixel in the directional distance field texture map of the current frame can include:
  • map the current texture pixel back to the game world space to determine the spatial range corresponding to the current texture pixel, and calculate the depth value of the spatial range, so as to determine, according to the depth value, whether the spatial range corresponding to the current texture pixel is within the field of view of the virtual camera.
  • when the depth value of the spatial range is less than or equal to the capture distance of the virtual camera, it can be considered that the spatial range is within the field of view of the virtual camera; otherwise, when the depth value of the spatial range is greater than the capture distance of the virtual camera, it can be considered that the spatial range is not within the field of view of the virtual camera.
  • the depth value of the spatial range may refer to the straight-line distance between the spatial range and the virtual camera.
  • the depth value of the spatial range can be compared with the first depth information and the second depth information of the obtained multi-layer scene depth information to obtain the comparison result.
  • the above-mentioned current texture pixel refers to the texture pixel currently being calculated, which may be any texture pixel among the texture pixels of the SDF texture map.
  • different comparison results may correspond to different calculation methods for the scene directional distance field information, and according to the comparison result, the corresponding method may be used to calculate the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel.
  • the determining of the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel may include: determining, according to the comparison result, the depth value of the spatial range corresponding to the current texture pixel, and the first depth information and the second depth information, the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel.
  • the first depth information is: D1, D2, D3...Dn
  • the second depth information is: D11, D22, D33...Dm.
  • if the spatial range is in front of the scene surface closest to the virtual camera, that is, its depth value is smaller than the nearest layer depth, then according to the formula min(D1, D11) - depth, that is, the depth value of the spatial range is subtracted from the minimum of D1 and D11, the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel is calculated, and the current texture pixel of the current frame is marked as assigned.
  • in other cases, the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel in the previous frame can be reused; that is, the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel of the previous frame is determined as the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel of the current frame, and the current texture pixel of the current frame is marked as not assigned.
  • the spatial range may be located between two scenes; in this case, the layer closest to the spatial range is determined from the first depth information and the second depth information, and its depth is denoted Zn.
  • if Di < depth < Dii, for example D1 < depth < D11, or D2 < depth < D22, this proves that the spatial range is inside a scene; the calculation then follows the formula max(Di - depth, depth - Dii). Taking Di as D1 and Dii as D11 as an example, that is, assuming the spatial range is inside scene A, the first result is obtained by subtracting the depth value of the spatial range from the first distance of scene A, and the second result is obtained by subtracting the second distance of scene A from the depth value of the spatial range; the maximum of the first result and the second result is taken as the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel, and the current texture pixel of the current frame is marked as assigned.
  • otherwise, the spatial range is outside the scene, and the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel can be calculated according to the formula abs(depth - Zn).
  • abs refers to taking the absolute value.
  • the above describes several methods for calculating the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel when the spatial range corresponding to the current texture pixel is within the field of view of the virtual camera.
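  • The sketch below consolidates those cases for a single texel. It assumes the front and back depth lists pair up per scene (Di with Dii), which matches the example in FIG. 5 but is a simplification; all names are illustrative.

```python
def update_texel_sdf(in_view, depth, first, second, prev_value):
    """Per-texel SDF update following the cases described above.

    in_view      : whether the texel's spatial range is inside the camera frustum
    depth        : depth value of the texel's spatial range
    first/second : sorted front-layer depths D1..Dn and back-layer depths D11..Dm
    prev_value   : SDF stored for this texel in the previous frame's texture map
    """
    if not in_view:
        return prev_value                 # outside the view: reuse last frame's value

    # Case 1: in front of the scene surface closest to the camera -> positive value.
    if depth < first[0]:
        return min(first[0], second[0]) - depth

    # Case 2: inside scene i when Di < depth < Dii -> negative value.
    for di, dii in zip(first, second):
        if di < depth < dii:
            return max(di - depth, depth - dii)

    # Case 3: between scenes -> distance to the closest layer Zn, positive value.
    zn = min(first + second, key=lambda layer: abs(depth - layer))
    return abs(depth - zn)
```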
  • optionally, the determining of the directional distance field information of the scene corresponding to the spatial range stored by each map pixel in the directional distance field texture map of the current frame may include: determining that the spatial range corresponding to the current texture pixel is not within the field of view of the virtual camera; and determining the directional distance field information of the scene corresponding to the spatial range stored by the current map pixel in the directional distance field texture map of the previous frame as the directional distance field information of the scene corresponding to the spatial range stored by the current map pixel in the directional distance field texture map of the current frame, where the current map pixel is any one of the texture pixels.
  • the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel in the previous frame may be directly reused.
  • that is, the value of the previous frame is assigned to the current frame. If the absolute value of the value of the current frame is less than the size of the spatial range, the screen lighting information (the diffuse reflection part of the direct lighting color result calculated using the G-buffer and the scene light information) is also written into the SDF texture map, and finally the calculated value of the current frame is updated into the SDF texture map. For each texture pixel in the SDF texture map, the same method as above can be used to calculate the directional distance field information of the scene corresponding to the spatial range stored by that texture pixel in the current frame, so as to obtain the updated SDF texture map.
  • the size of the texture pixel grid can refer to the size of the corresponding spatial range after the texture pixels are mapped back to the game world space. Assuming that the spatial range is 100x100x100, the size of the pixel grid is 100x100x100.
  • FIG. 8 is a sixth schematic flowchart of a lighting rendering method provided by an embodiment of the present disclosure. optionally, the method of the present disclosure may further include:
  • the directional distance field information of the scene corresponding to the spatial range stored in the current map pixel of the current frame obtained by the above calculation may have a parallax problem, which affects the value of the step size when the virtual stepping ray performs ray stepping and may cause the actual surface of the scene to be skipped.
  • therefore, the directional distance field information of the scene corresponding to the spatial range stored by each texture pixel obtained by the above calculation can be corrected, to improve the accuracy of the obtained directional distance field information of the scene corresponding to the spatial range stored by each texture pixel.
  • the sign of the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel can be compared with the signs of the directional distance field information of the scene corresponding to the spatial ranges stored by the neighbor texture pixels of the current texture pixel, so as to determine the correction parameter corresponding to the current texture pixel.
  • the neighbor texture pixels can refer to all texture pixels adjacent to the current texture pixel.
  • in step S801, the determining, according to the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel and that stored by the neighbor texture pixels of the current texture pixel, of the correction parameters of the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel can include:
  • if the sign of the directional distance field information stored by any neighbor texture pixel is opposite to that stored by the current texture pixel, the correction parameter of the directional distance field information of the scene corresponding to the spatial range is half of the size of the spatial range.
  • for example, the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel can be corrected to 0.5 times the texture pixel length (keeping its sign), where the texture pixel length may refer to the size of the spatial range corresponding to the texture pixel, such as the above 100x100x100.
  • in step S801, the determining of the correction parameters of the directional distance field information of the scene corresponding to the spatial range stored by the current texture pixel can further include:
  • if the signs of the directional distance field information of the scene corresponding to the spatial ranges stored by all the neighbor map pixels are the same as the sign of the directional distance field information of the corresponding spatial range stored by the current map pixel, then each neighbor map pixel is traversed, the sum of the distance from the neighbor texture pixel to the current texture pixel and the absolute value of the scene directional distance field information of the corresponding spatial range stored by that neighbor texture pixel is calculated, and the minimum sum value is determined from the sum values corresponding to the neighbor texture pixels.
  • the distance from the neighbor texture pixel to the current texture pixel can be the texture pixel length.
  • the correction parameter of the scene directional distance field information corresponding to the spatial range stored by the current texture pixel is determined as the minimum sum value; that is, the scene directional distance field information corresponding to the spatial range stored by the current texture pixel is corrected to this minimum sum value.
  • the distance from the texture pixel to the scene is the directional distance field information of the scene corresponding to the spatial range stored by the texture pixel. In this way, the directional distance field information of the scene corresponding to the spatial range stored by the texture pixels can be minimized, and the condition of the directional distance field information can be satisfied.
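  • A sketch of the two correction rules for a single texel follows (packaging and names are illustrative; the neighbor distance is taken to be one texel length, as stated above):

```python
def correct_texel_sdf(value, neighbor_values, texel_size):
    """Correct one texel's SDF value using its neighbors.

    value           : SDF stored by the current texel
    neighbor_values : SDF values stored by all adjacent texels
    texel_size      : world-space size of the texel's spatial range
    """
    sign = -1.0 if value < 0 else 1.0

    # Rule 1: any neighbor with the opposite sign means a surface passes between
    # the texels, so the magnitude cannot exceed half the spatial range.
    if any((n < 0) != (value < 0) for n in neighbor_values):
        return sign * 0.5 * texel_size

    # Rule 2: all signs agree; a valid SDF can exceed a neighbor's magnitude by
    # at most the distance to that neighbor, so clamp to the minimum such sum.
    min_sum = min(abs(n) + texel_size for n in neighbor_values)
    return sign * min(abs(value), min_sum)
```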
  • the SDF texture map of the present disclosure may also store the direct lighting information corresponding to each scene; for example, the direct lighting information of the scene is stored directly in the RGB channels of the SDF texture map, and the A channel stores the directional distance field information of the scene in the spatial range corresponding to each texture pixel. In this way, when performing global illumination rendering on the scene, the indirect illumination information and the direct illumination information can be read from the SDF texture map respectively to complete the calculation of the global illumination.
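  • A small sketch of this packing, with a NumPy array standing in for the GPU 3D texture (shape and dtype are illustrative):

```python
import numpy as np

# One RGBA texel per spatial range: RGB = direct lighting color, A = SDF value.
sdf_texture = np.zeros((64, 64, 64, 4), dtype=np.float32)

def write_texel(ijk, direct_rgb, sdf_value):
    sdf_texture[ijk][:3] = direct_rgb    # direct lighting in the RGB channels
    sdf_texture[ijk][3] = sdf_value      # directed distance field in the A channel

def read_texel(ijk):
    texel = sdf_texture[ijk]
    return texel[:3], texel[3]           # (direct lighting, SDF) for global illumination
```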
  • the lighting rendering method includes: determining a current scene shading point; emitting a virtual stepping ray according to the current scene shading point; determining the step length corresponding to the virtual stepping ray according to the updated directional distance field information of the scene corresponding to the current frame; controlling the virtual stepping ray to extend into the scene according to the step length to determine at least one scene intersection; and determining the color information of the scene intersection according to the incident light information and the material information of the scene intersection, where the color information is used for indirect lighting calculation.
  • the directional distance field information of the scene corresponding to the current frame is updated in real time, which improves the accuracy of the obtained directional distance field information and makes it possible to obtain an accurate directional distance field in a dynamic scene. Based on this more accurate directional distance field information, the stepping of the virtual stepping ray can be controlled accurately, so that the obtained scene intersection is more accurate and accurate indirect lighting is obtained, thereby improving the rendering effect of indirect lighting in dynamic scenes.
  • the present disclosure uses multi-layer SDF texture maps to store the directional distance field information, and different spatial ranges are mapped using SDF texture maps of the same size (same pixel size) to achieve different mapping precisions.
  • the farther a layer is from the virtual camera, the lower the required accuracy, and the lower the mapping precision of the SDF texture map.
  • This avoids the high memory usage caused by storing an SDF texture map that is uniform across the entire scene (where far and near areas map the same spatial range per pixel), while still covering distant areas, and the indirect lighting calculated in this way will not differ much from the result calculated with fine directional distance field information of the entire scene.
  • the depth peeling technology is used to generate the multi-layer scene depth information at the current virtual camera position, so that it can be accurately known whether any spatial point in the current virtual camera field of view is inside or outside a scene; that is, the sign of the calculated directional distance field information of the scene is reliable.
  • by relying on the multi-layer scene depth information to calculate the sign of the directional distance field information of each scene, the deviation of the directional distance field information caused by the parallax problem can be avoided.
  • the direct lighting information of the scene is also stored in the SDF texture map, which can efficiently realize the global light calculation of the scene.
  • the apparatus, electronic device, storage medium, etc. used to execute the illumination rendering method provided by the present disclosure will be described below.
  • the specific implementation process and technical effect thereof are referred to above, and will not be repeated below.
  • FIG. 9 is a schematic diagram of a lighting rendering apparatus according to an embodiment of the present disclosure, and the functions implemented by the lighting rendering apparatus correspond to the steps performed by the above method.
  • the device can be understood as the above-mentioned terminal or server, or the processor of the server, and can also be understood as a component independent of the above-mentioned server or processor that implements the functions of the present disclosure under the control of the server.
  • the device can include: a determining module 910 and a transmitting module 920;
  • a determination module 910 configured to determine the current scene shading point
  • an emission module 920 configured to emit virtual stepping rays according to the current scene shading point
  • a determination module 910 configured to determine the step length corresponding to the virtual stepping ray according to the updated directional distance field information of the scene corresponding to the current frame;
  • a determination module 910 configured to control the virtual stepping ray to extend into the scene according to the stepping length to determine at least one scene intersection;
  • the determining module 910 is configured to determine the color information of the scene intersection according to the incident light information and the material information of the scene intersection, wherein the color information is used for indirect lighting calculation.
  • the directional distance field information of the scene corresponding to the updated current frame is the information obtained by reading the directional distance field texture map corresponding to the updated current frame, wherein different map pixels in the directional distance field texture map correspond to different spatial ranges of the game world space determined according to the position of the virtual camera, and the directional distance field information of the scene corresponding to each spatial range is stored in the corresponding texture pixel.
  • the directional distance field texture map includes multi-layer texture maps, each layer of texture maps uses the same pixel size to store the scene directional distance field information corresponding to the spatial range, and the spatial range corresponding to each layer of texture maps is based on the game world space. The distance between the spatial extent and the virtual camera is determined.
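The texel-to-world mapping can be realized with an axis-aligned bounding box (AABB) per texture layer, using the disclosure's mapping formulas WorldPos = BBMin + (BBMax - BBMin) * UVW and UVW = (WorldPos - BBMin) / (BBMax - BBMin). In the sketch below, the three half-extents that cascade the layers around the camera are illustrative values, not taken from the disclosure:

    import numpy as np

    def uvw_to_world(uvw, bb_min, bb_max):
        # WorldPos = BBMin + (BBMax - BBMin) * UVW
        return bb_min + (bb_max - bb_min) * uvw

    def world_to_uvw(world_pos, bb_min, bb_max):
        # UVW = (WorldPos - BBMin) / (BBMax - BBMin)
        return (world_pos - bb_min) / (bb_max - bb_min)

    def layer_bounds(camera_pos, half_extents=(8.0, 32.0, 128.0)):
        """Three layers centred on the virtual camera. Every layer has the same
        texel count, so a nearer (smaller) range is mapped at finer precision
        while the total video memory stays fixed."""
        camera_pos = np.asarray(camera_pos, dtype=np.float32)
        return [(camera_pos - h, camera_pos + h) for h in half_extents]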
Optionally, the apparatus further includes an acquiring module and a generating module. The acquiring module is configured to acquire the current position information of the virtual camera, and the generating module is configured to generate multi-layer scene depth information according to that position. The determining module 910 is further configured to determine the scene directional distance field information stored by each texel of the current frame's directional distance field texture map, either from the multi-layer scene depth information together with the depth value of each texel's spatial range, or from the scene directional distance field information stored by each texel in the previous frame's texture map.

Specifically, the generating module is configured to determine the scenes captured by the virtual camera according to the current position information; to determine, for each scene, a first depth layer indicating the depth information of the side of the scene near the virtual camera and a second depth layer indicating the depth information of the side far from it; to determine the first distance and the second distance of these two layers from the virtual camera; and to generate the multi-layer scene depth information from the first and second distances of all the scenes. The first distances are sorted from near to far to form the first depth information, which indicates the front-layer depths of the scenes facing the virtual camera; the second distances are sorted from near to far to form the second depth information, which indicates the back-layer depths of the scenes facing away from it. A sketch of assembling these depth lists follows.
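This is a CPU-side sketch only; in a real implementation the per-scene near and far depths come from depth-peeling render passes on the GPU, and the scene interface used here (nearest_distance, farthest_distance) is an assumption:

    def multi_layer_depths(scenes, camera_pos):
        """Build (first_depth_info, second_depth_info): the near-to-far sorted
        distances of each scene's front-facing and back-facing depth layers."""
        first = sorted(s.nearest_distance(camera_pos) for s in scenes)
        second = sorted(s.farthest_distance(camera_pos) for s in scenes)
        return first, second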
Specifically, when the spatial range corresponding to the current texel (any one of the texels) lies within the field of view of the virtual camera, the determining module 910 is configured to compare the depth value of that spatial range with the first depth information and the second depth information of the multi-layer scene depth information, and to determine, according to the comparison result, the depth value, and the first and second depth information, the scene directional distance field information to be stored by the current texel. When the spatial range corresponding to the current texel is not within the field of view of the virtual camera, the determining module 910 instead determines the scene directional distance field information stored by that texel in the previous frame's texture map as the value to be stored in the current frame's texture map. These update rules are sketched below.
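Putting the two branches together, a per-texel update might look like the sketch below. The comparison rules (in front of the nearest surface, behind the farthest surface, inside a scene, or outside between scenes) follow the detailed description of the method; the texel and camera objects are assumed for illustration, and pairing the i-th front layer with the i-th back layer mirrors the Di/Dii indexing of the description:

    def update_texel(texel, camera, first, second, prev_value):
        """first/second: sorted front/back depth lists; prev_value: the value
        this texel stored in the previous frame's SDF texture."""
        if not camera.in_view(texel.spatial_range):
            return prev_value                         # out of view: reuse last frame
        depth = camera.depth_of(texel.spatial_range)
        if depth < min(first[0], second[0]):
            return min(first[0], second[0]) - depth   # in front of the nearest surface
        if depth > max(first[-1], second[-1]):
            return prev_value                         # behind the farthest surface
        for d_front, d_back in zip(first, second):
            if d_front < depth < d_back:              # inside this scene: negative value,
                return max(d_front - depth, depth - d_back)  # magnitude = nearer boundary
        nearest = min(first + second, key=lambda z: abs(z - depth))
        return abs(depth - nearest)                   # outside, between scenes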
Optionally, the apparatus further includes a correction module. The determining module 910 is further configured to determine, from the scene directional distance field information stored by the current texel and the scene directional distance field information stored by the current texel's neighbor texels, a correction parameter for the value stored by the current texel, and the correction module is configured to correct that value according to the correction parameter. Specifically, if the sign of the scene directional distance field information stored by any neighbor texel is opposite to the sign of the value stored by the current texel, the correction parameter is determined to be half of the size of the spatial range. If the signs of the values stored by all neighbor texels are the same as the sign of the value stored by the current texel, each neighbor texel is traversed, the sum of its distance to the current texel and the absolute value of its stored scene directional distance field information is computed, and the minimum of these sums is taken; if this minimum sum is smaller than the absolute value of the value stored by the current texel, the correction parameter is determined to be the minimum sum. This correction is sketched below.
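A sketch of this correction over one texel; cell_size is the size of the spatial range a texel maps, neighbors holds (distance to the current texel, stored value) pairs, and the signature itself is an assumption:

    import math

    def corrected_value(value, cell_size, neighbors):
        """neighbors: list of (distance_to_current_texel, neighbor_sdf_value)."""
        if any(v * value < 0 for _, v in neighbors):
            # A sign flip means a surface lies between the two texels:
            # clamp the magnitude to half the spatial range.
            return math.copysign(0.5 * cell_size, value)
        best = min(d + abs(v) for d, v in neighbors)
        if best < abs(value):
            # A neighbor offers a tighter bound: the distance to the neighbor
            # plus the neighbor's own distance to the scene.
            return math.copysign(best, value)
        return value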
The foregoing apparatus is used to execute the method provided by the foregoing embodiments; its implementation principle and technical effects are similar and are not repeated here.
The above modules may be one or more integrated circuits configured to implement the above method, for example one or more application-specific integrated circuits (ASIC), one or more digital signal processors (DSP), or one or more field-programmable gate arrays (FPGA). When a module is implemented in the form of a processing element scheduling program code, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor able to invoke program code. These modules may also be integrated together and implemented as a system-on-a-chip (SoC).

The modules may be connected to or communicate with each other via wired or wireless connections. Wired connections may include metal cables, optical cables, hybrid cables, and the like, or any combination thereof; wireless connections may include connections via LAN, WAN, Bluetooth, ZigBee or NFC, or any combination thereof. Two or more modules may be combined into a single module, and any one module may be divided into two or more units.
FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure; the device may be a computing device with a data processing function, and may include a processor 801 and a memory 802. The memory 802 is used to store a program, and the processor 801 calls the program stored in the memory 802 to execute the above method embodiments; the specific implementation and technical effects are similar and are not repeated here.
The memory 802 stores program code which, when executed by the processor 801, causes the processor 801 to perform the steps of the lighting rendering methods according to the various exemplary embodiments of the present disclosure described in the "Exemplary Methods" section above. The processor 801 may be a general-purpose processor, such as a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or execute the methods, steps and logic block diagrams disclosed in the embodiments of the present disclosure. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in connection with the embodiments of the present disclosure may be embodied directly as being executed by a hardware processor, or by a combination of hardware and software modules within the processor.
As a non-volatile computer-readable storage medium, the memory 802 may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory may include at least one type of storage medium, for example flash memory, a hard disk, a multimedia card, card-type memory, random access memory (RAM), static random access memory (SRAM), programmable read-only memory (PROM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), magnetic memory, a magnetic disk, or an optical disc; more generally, it may be any other medium that can carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 802 in the embodiments of the present disclosure may also be a circuit, or any other device capable of implementing a storage function, for storing program instructions and/or data.
The present disclosure also provides a program product, for example a computer-readable storage medium, including a program which, when executed by a processor, is used to perform the above method embodiments.
In the several embodiments provided by the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be implemented through interfaces, and indirect couplings or communication connections between apparatuses or units may be electrical, mechanical or of other forms.

The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of an embodiment. Furthermore, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
An integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. Such a software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute part of the steps of the methods of the various embodiments of the present disclosure. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, read-only memory (ROM), random access memory (RAM), a magnetic disk, or an optical disc.


Abstract

A lighting rendering method and apparatus, an electronic device, and a storage medium. The method includes: determining a current scene shading point (S101); emitting a virtual stepping ray according to the current scene shading point (S102); determining a step length corresponding to the virtual stepping ray according to updated directional distance field information of the scene corresponding to the current frame (S103); controlling the virtual stepping ray to extend into the scene according to the step length so as to determine at least one scene intersection (S104); and determining color information of the scene intersection according to incident light information and material information of the scene intersection, the color information being used for indirect lighting calculation (S105). The method improves the accuracy of the acquired directional distance field information and controls accurate stepping of the virtual stepping ray, so that the scene intersections obtained are precise and accurate indirect lighting is acquired, improving the rendering of indirect lighting in dynamic scenes.


Claims (15)

  1. A lighting rendering method, comprising:
    determining a current scene shading point;
    emitting a virtual stepping ray according to the current scene shading point;
    determining a step length corresponding to the virtual stepping ray according to updated directional distance field information of the scene corresponding to a current frame;
    controlling the virtual stepping ray to extend into the scene according to the step length, so as to determine at least one scene intersection;
    determining color information of the scene intersection according to incident light information and material information of the scene intersection, wherein the color information is used for indirect lighting calculation.
  2. The method according to claim 1, wherein the updated directional distance field information of the scene corresponding to the current frame is information obtained by reading an updated directional distance field texture map corresponding to the current frame, wherein different texels in the directional distance field texture map correspond to different spatial ranges in the game world space determined according to the position of a virtual camera, and each texel stores the scene directional distance field information of its corresponding spatial range.
  3. The method according to claim 2, wherein the directional distance field texture map comprises multiple layers of texture maps, each layer stores the scene directional distance field information of its corresponding spatial range with the same pixel size, and the spatial range corresponding to each layer is determined according to the distance between that spatial range of the game world space and the virtual camera.
  4. The method according to claim 3, wherein the scene directional distance field information of the corresponding spatial range stored by each texel in the directional distance field texture map is calculated as follows:
    acquiring current position information of the virtual camera;
    generating multi-layer scene depth information according to the current position information of the virtual camera;
    determining the scene directional distance field information of the corresponding spatial range stored by each texel in the directional distance field texture map of the current frame according to the multi-layer scene depth information and the depth value of the spatial range corresponding to each texel, or according to the scene directional distance field information of the corresponding spatial range stored by each texel in the directional distance field texture map of the previous frame.
  5. The method according to claim 4, wherein generating the multi-layer scene depth information according to the current position information of the virtual camera comprises:
    determining the scenes captured by the virtual camera according to the current position information;
    determining a first depth layer and a second depth layer of each scene respectively, wherein the first depth layer indicates depth information of the scene near the virtual camera, and the second depth layer indicates depth information of the scene away from the virtual camera;
    determining a first distance and a second distance of the first depth layer and the second depth layer of each scene from the virtual camera respectively;
    generating the multi-layer scene depth information according to the first distances and the second distances of the scenes.
  6. The method according to claim 5, wherein generating the multi-layer scene depth information according to the first distances and the second distances of the scenes comprises:
    sorting the first distances of the scenes from near to far to determine first depth information of the multi-layer scene depth information, wherein the first depth information indicates front-layer depth information of the scenes facing the virtual camera;
    sorting the second distances of the scenes from near to far to determine second depth information of the multi-layer scene depth information, wherein the second depth information indicates back-layer depth information of the scenes facing away from the virtual camera.
  7. The method according to claim 6, wherein determining, according to the multi-layer scene depth information and the depth value of the spatial range corresponding to each texel, the scene directional distance field information of the corresponding spatial range stored by each texel in the directional distance field texture map of the current frame comprises:
    determining that the spatial range corresponding to a current texel is within the field of view of the virtual camera;
    comparing the depth value of the spatial range corresponding to the current texel with the first depth information and the second depth information of the multi-layer scene depth information, wherein the current texel is any one of the texels;
    determining, according to the comparison result, the scene directional distance field information of the corresponding spatial range stored by the current texel.
  8. The method according to claim 7, wherein determining, according to the comparison result, the scene directional distance field information of the corresponding spatial range stored by the current texel comprises:
    determining the scene directional distance field information of the corresponding spatial range stored by the current texel according to the comparison result, the depth value of the spatial range corresponding to the current texel, and the first depth information and the second depth information.
  9. The method according to claim 4, wherein determining, according to the scene directional distance field information of the corresponding spatial range stored by each texel in the directional distance field texture map of the previous frame, the scene directional distance field information of the corresponding spatial range stored by each texel in the directional distance field texture map of the current frame comprises:
    determining that the spatial range corresponding to the current texel is not within the field of view of the virtual camera;
    determining the scene directional distance field information of the corresponding spatial range stored by the current texel in the directional distance field texture map of the previous frame as the scene directional distance field information of the corresponding spatial range stored by the current texel in the directional distance field texture map of the current frame, wherein the current texel is any one of the texels.
  10. The method according to claim 8, further comprising:
    determining, according to the scene directional distance field information of the corresponding spatial range stored by the current texel and the scene directional distance field information of the corresponding spatial ranges stored by neighbor texels of the current texel, a correction parameter for the scene directional distance field information of the corresponding spatial range stored by the current texel;
    correcting, according to the correction parameter, the scene directional distance field information of the corresponding spatial range stored by the current texel.
  11. The method according to claim 10, wherein determining the correction parameter comprises:
    if the sign of the scene directional distance field information of the corresponding spatial range stored by one of the neighbor texels is opposite to the sign of the scene directional distance field information of the corresponding spatial range stored by the current texel, determining the correction parameter of the scene directional distance field information of the corresponding spatial range stored by the current texel to be half of the size of the spatial range.
  12. The method according to claim 10, wherein determining the correction parameter comprises:
    if the signs of the scene directional distance field information of the corresponding spatial ranges stored by all the neighbor texels are the same as the sign of the scene directional distance field information of the corresponding spatial range stored by the current texel, traversing each neighbor texel, calculating for each neighbor texel the sum of its distance to the current texel and the absolute value of the scene directional distance field information of the corresponding spatial range it stores, and determining the minimum sum among the sums corresponding to the neighbor texels;
    if the minimum sum is smaller than the absolute value of the scene directional distance field information of the corresponding spatial range stored by the current texel, determining the correction parameter of the scene directional distance field information of the corresponding spatial range stored by the current texel to be the minimum sum.
  13. A lighting rendering apparatus, comprising a determining module and an emitting module, wherein:
    the determining module is configured to determine a current scene shading point;
    the emitting module is configured to emit a virtual stepping ray according to the current scene shading point;
    the determining module is configured to determine a step length corresponding to the virtual stepping ray according to updated directional distance field information of the scene corresponding to a current frame;
    the determining module is configured to control the virtual stepping ray to extend into the scene according to the step length, so as to determine at least one scene intersection;
    the determining module is configured to determine color information of the scene intersection according to incident light information and material information of the scene intersection, wherein the color information is used for indirect lighting calculation.
  14. An electronic device, comprising a processor, a storage medium and a bus, wherein the storage medium stores program instructions executable by the processor; when the electronic device runs, the processor communicates with the storage medium via the bus, and the processor executes the program instructions so as to perform the steps of the method according to any one of claims 1 to 12.
  15. A computer-readable storage medium, storing a computer program which, when run by a processor, performs the steps of the method according to any one of claims 1 to 12.
PCT/CN2021/131872, priority 2021-03-09, filed 2021-11-19: Lighting rendering method and apparatus, electronic device and storage medium, WO2022188460A1 (zh)

Priority Applications (1)

  • US18/256,055 (US20240062449A1), priority 2021-03-09, filed 2021-11-19: Illumination rendering method and apparatus, and electronic device and storage medium

Applications Claiming Priority (2)

  • CN202110258473.1A, 2021-03-09: Lighting rendering method and apparatus, electronic device and storage medium
  • CN202110258473.1, 2021-03-09

Publications (1)

  • WO2022188460A1 (zh)

Family

ID=83226301

Family Applications (1)

  • PCT/CN2021/131872 (WO2022188460A1), filed 2021-11-19: Lighting rendering method and apparatus, electronic device and storage medium

Country Status (3)

  • US (1): US20240062449A1
  • CN (1): CN115115747A
  • WO (1): WO2022188460A1

Cited By (1)

  • CN115830208A, 腾讯科技(深圳)有限公司, priority 2023-01-09, published 2023-03-21: Global illumination rendering method and apparatus, computer device and storage medium (cited by examiner)

Families Citing this family (1)

  • CN115546389A, 网易(杭州)网络有限公司, priority 2022-10-08, published 2022-12-30: Soft shadow generation method, apparatus, device and storage medium (cited by examiner)

Citations (6)

  • CN103886636A, 浙江大学, priority 2014-01-28, published 2014-06-25: Real-time smoke rendering algorithm based on ray-casting stepping compensation (cited by examiner)
  • CN104392478A, 无锡梵天信息技术股份有限公司, priority 2014-10-31, published 2015-03-04: Algorithm for volumetric fog in screen space (cited by examiner)
  • US9202291B1, Pixar, priority 2012-06-27, published 2015-12-01: Volumetric cloth shader (cited by examiner)
  • CN107452048A, 网易(杭州)网络有限公司, priority 2016-05-30, published 2017-12-08: Method and apparatus for calculating global illumination (cited by examiner)
  • CN110310356A, 北京奇艺世纪科技有限公司, priority 2019-06-26, published 2019-10-08: Scene rendering method and apparatus (cited by examiner)
  • CN111915712A, 网易(杭州)网络有限公司, priority 2020-08-28, published 2020-11-10: Lighting rendering method and apparatus, computer-readable medium and electronic device (cited by examiner)



Also Published As

  • CN115115747A (zh), published 2022-09-27
  • US20240062449A1 (en), published 2024-02-22


Legal Events

  • 121, EP: the EPO has been informed by WIPO that EP was designated in this application (ref document 21929923, country EP, kind code A1)
  • WWE, WIPO information: entry into national phase (ref document 18256055, country US)
  • NENP, non-entry into the national phase (ref country code DE)
  • 122, EP: PCT application non-entry in European phase (ref document 21929923, country EP, kind code A1)