WO2022127242A1 - Game image processing method and apparatus, program, and readable medium (游戏图像处理方法、装置、程序和可读介质) - Google Patents

Game image processing method and apparatus, program, and readable medium

Info

Publication number
WO2022127242A1
WO2022127242A1 · PCT/CN2021/119152 · CN2021119152W
Authority
WO
WIPO (PCT)
Prior art keywords
reflection
information
pixel
map
texture
Prior art date
Application number
PCT/CN2021/119152
Other languages
English (en)
French (fr)
Inventor
姜博耀
Original Assignee
成都完美时空网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 成都完美时空网络技术有限公司
Publication of WO2022127242A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures
    • G06T15/04: Texture mapping
    • G06T15/50: Lighting effects
    • G06T1/00: General purpose image data processing
    • G06T1/20: Processor architectures; Processor configuration, e.g. pipelining
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress

Definitions

  • the present application relates to the technical field of image processing, and in particular, to a game image processing method, apparatus, program and readable medium.
  • the reflection effect is an indispensable part of the game screen.
  • it can be used to simulate the mirror reflection effect of materials such as metal or glass on the surrounding environment, or the reflection effect of the water surface on the sky and surrounding scenery.
  • Fake Reflection technology can be used to make the reflection plane transparent and to place mirrored copies of objects directly at the symmetrical positions, so that the required reflection result is rendered in a single pass of the main camera.
  • a game image processing method comprising:
  • Image rendering of the game scene is performed by using the texture information.
  • a game image processing device comprising:
  • the acquisition module is used to acquire the reflection plane defined in the game scene
  • an acquisition module which is also used to acquire the structural data of the reflection plane
  • a calculation module configured to perform reflection calculation according to the structural data, the projection data of the current camera, the pixel information of the current screen and the depth map information of the current screen space, and obtain the texture information including the reflection result;
  • a rendering module configured to perform image rendering of the game scene by using the texture information.
  • a computer device/equipment/system comprising a memory, a processor, and a computer program/instruction stored on the memory, wherein the processor implements the steps of the method of the above first aspect when executing the computer program/instruction.
  • a computer-readable medium on which computer programs/instructions are stored, and when the computer programs/instructions are executed by a processor, implement the steps of the method in the first aspect.
  • a computer program product comprising computer programs/instructions which, when executed by a processor, implement the steps of the method of the first aspect above.
  • the present application provides a game image processing method, device, program and readable medium.
  • the present application proposes a new reflection processing scheme, which performs reflection calculation according to the structure data of the reflection plane defined in the game scene, the projection data of the current camera, the pixel information of the current screen, and the depth map information of the current screen space; obtains texture information containing the reflection result; and then uses that texture information for image rendering of the game scene.
  • the present application is not limited to a specific angle at a specific place in the game scene, and there is no need to additionally place mirrored objects at positions symmetrical about the reflection plane, which saves the rendering overhead of the main camera and thus the cost of game image rendering. Being able to draw the reflection results of multiple planes at once not only improves the efficiency of game image rendering but also ensures the correctness of the reflection results.
  • FIG. 1 shows a schematic flowchart of a game image processing method provided by an embodiment of the present application
  • FIG. 2 shows a schematic flowchart of another game image processing method provided by an embodiment of the present application
  • FIG. 3 shows a schematic diagram of an example of a reflection calculation effect provided by an embodiment of the present application
  • FIG. 4 shows a schematic diagram of an example of a Gaussian blur noise reduction effect provided by an embodiment of the present application
  • FIG. 5 shows a schematic diagram of an example of a reflection plane material map provided by an embodiment of the present application
  • FIG. 6 shows a schematic diagram of an effect comparison example of using a roughness sampling Mipmap provided by an embodiment of the present application
  • FIG. 7 shows a schematic diagram of a comparative example of RT effects using normal disturbance reflection provided by an embodiment of the present application
  • FIG. 8 shows a schematic structural diagram of a game image processing apparatus provided by an embodiment of the present application.
  • Figure 9 schematically shows a block diagram of a computer device/equipment/system for implementing the method according to the present invention.
  • Figure 10 schematically shows a block diagram of a computer program product implementing the method according to the invention.
  • This embodiment provides a game image processing method, as shown in FIG. 1 , the method includes:
  • Step 101 Acquire a reflection plane defined in the game scene.
  • the reflection plane to be defined in the game scene can be determined according to the actual situation of the game scene.
  • the water surface, smooth road, glass of buildings, mirrors at the position of the sink, etc. appearing in the game scene can all be defined as reflective planes, which can make the game scene more realistic and improve the game player's game experience.
  • the execution subject may be an apparatus or device for game image processing, which may be configured on the client side or the server side.
  • any number of reflection planes can be defined in the game scene according to actual requirements, such as one, two or more reflection planes. If there are multiple reflection planes defined in the acquired game scene, the following processing procedures are performed for each reflection plane, that is, the procedures shown in steps 102 to 103 .
  • Step 102 Acquire structural data of the reflection plane.
  • the structure data of the reflection plane may include: the specular reflection transformation matrix of a vertex relative to the reflection plane, the plane normal of the reflection plane, the position of the bounding box of the reflection plane, the reflection threshold of the reflection plane, the noise intensity, and other detailed data.
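Of these fields, the specular reflection transformation matrix can be derived from the plane's unit normal and any point on the plane. A minimal pure-Python sketch (the function names are illustrative, not from the patent):

```python
def mirror_matrix(normal, point):
    """4x4 specular-reflection (mirror) transform about the plane with unit
    `normal` passing through `point`: x' = x - 2((n . x) - d) n, d = n . point."""
    nx, ny, nz = normal
    d = nx * point[0] + ny * point[1] + nz * point[2]
    return [
        [1 - 2 * nx * nx,    -2 * nx * ny,    -2 * nx * nz, 2 * d * nx],
        [   -2 * ny * nx, 1 - 2 * ny * ny,    -2 * ny * nz, 2 * d * ny],
        [   -2 * nz * nx,    -2 * nz * ny, 1 - 2 * nz * nz, 2 * d * nz],
        [0.0, 0.0, 0.0, 1.0],
    ]

def transform_point(m, p):
    """Apply the 4x4 matrix to a 3D point (homogeneous w = 1)."""
    x, y, z = p
    return tuple(m[i][0] * x + m[i][1] * y + m[i][2] * z + m[i][3] for i in range(3))
```

For a horizontal water plane at height 1 (normal (0, 1, 0)), the point (3, 4, 5) mirrors to (3, -2, 5); applying the matrix twice returns the original point, as a mirror transform must.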
  • Step 103 Perform reflection calculation according to the structure data of the reflection plane, the projection data of the current camera, the pixel information of the current screen, and the depth map information of the current screen space, and obtain map information including the reflection result.
  • the pixel information of the current screen may include relevant information of each pixel point of the current screen.
  • the depth map information of the current screen space may include the depth information of the current screen. The depth represents the distance of the pixel from the camera in the 3D world. The greater the depth value of the pixel, the farther the pixel is from the camera.
  • the reflection result can be calculated in the forward rendering pipeline; that is, a forward ray is used to find the pixel to which the current pixel will be reflected.
  • According to the structure data, the projection data of the current camera, the pixel information of the current screen, and the depth map information of the current screen space: first, the world coordinates of each pixel are calculated from the depth map information of the current screen space; then, for each reflection plane, the world coordinate to which the pixel's world coordinate is mirrored by the plane reflection is calculated; finally, that point is projected back into the current screen space. After the depth test, the color and depth of each pixel that passes the depth test are written into the texture information, yielding the texture information containing the reflection result.
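The per-pixel flow just described (unproject via depth, mirror about the plane, reproject, depth-test, write color and depth) can be sketched in pure Python on a toy orthographic camera, where pixel (x, y) with depth d unprojects to world (x, y, d) and projection back to the screen is the identity. All names and the camera model are illustrative assumptions, not the patent's compute-shader code:

```python
def compute_sspr(colors, depths, width, height, plane_x):
    """Toy SSPR pass: mirror every scene pixel about the vertical plane
    x = plane_x and splat it into a reflection texture keyed by screen pixel,
    keeping the nearest reflector at each target pixel (the depth test)."""
    reflection_rt = {}                            # (x, y) -> (color, depth)
    for y in range(height):
        for x in range(width):
            d = depths[y][x]
            if d is None:                         # sky: no geometry to reflect
                continue
            wx, wy, wz = float(x), float(y), d        # 1) world coords from depth
            rx, ry, rz = 2.0 * plane_x - wx, wy, wz   # 2) mirror about the plane
            qx, qy = round(rx), round(ry)             # 3) back to screen space
            if not (0 <= qx < width and 0 <= qy < height):
                continue                          # reflected off-screen: discard
            prev = reflection_rt.get((qx, qy))
            if prev is None or rz < prev[1]:      # 4) depth test
                reflection_rt[(qx, qy)] = (colors[y][x], rz)
    return reflection_rt
```

With a 5-pixel-wide row and the plane at x = 2, the pixel at x = 0 lands at x = 4 and vice versa, which is exactly the screen-space mirroring the method relies on.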
  • Step 104 using the calculated texture information to perform image rendering of the game scene.
  • GPU: Graphics Processing Unit
  • this embodiment proposes a new reflection processing solution, which performs reflection calculation based on the structural data of the reflection plane defined in the game scene, the projection data of the current camera, the pixel information of the current screen, and the depth map information of the current screen space to obtain texture information including the reflection result, and then uses the texture information to render the image of the game scene.
  • This embodiment is not limited to a specific angle at a specific place in the game scene, and there is no need to additionally place mirrored objects at positions symmetrical about the reflection plane, which saves the rendering overhead of the main camera and thus the cost of game image rendering. Being able to draw the reflection results of multiple planes at once not only improves the efficiency of game image rendering but also ensures the correctness of the reflection results.
  • this embodiment also provides another game image processing method, as shown in FIG. 2 , the method includes:
  • Step 201 Acquire a reflection plane defined in the game scene.
  • this embodiment may be implemented based on the Unity game engine.
  • PPV: Post-Process-Volume
  • reflection planes such as water surface, smooth road surface, glass, etc.
  • the most important N reflection planes can be selected according to the configuration, and the structure data of these selected reflection planes can be calculated. Specifically, the processes shown in steps 202 to 203 can be performed.
  • Step 202 Determine the distance between the reflection plane defined in the game scene and the camera.
  • Step 203 Acquire structural data of the reflection plane whose distance from the camera meets the preset distance condition.
  • the preset distance condition may be set according to actual needs. For example, the reflection planes may be ordered by their distance from the camera, from near to far.
  • The reflection planes at the front of this ordering are those closest to the camera; calculating their reflection results makes the reflection effect prominent and obvious. For the reflection planes further back, because they are far from the camera, computing their reflections would not only produce little visible effect but would also increase the cost of the reflection calculation. Therefore, it is preferable to take the top preset number (the top N) of reflection planes as the planes whose distances meet the preset distance condition, and then calculate the structure data of those planes.
  • a certain distance threshold can also be preset, and the reflection planes whose distance from the camera is less than or equal to the preset distance threshold are regarded as reflection planes that meet the preset distance condition, and then the structural data of these reflection planes are calculated.
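Both selection strategies just described (keep the nearest N planes, or keep planes within a distance threshold) can be sketched as follows; the function name and tuple layout are illustrative:

```python
def select_reflection_planes(planes, camera_pos, top_n=None, max_distance=None):
    """Order reflection planes by distance to the camera, then keep either the
    nearest `top_n`, the ones within `max_distance`, or both filters combined.
    `planes` is a list of (name, center_xyz) pairs."""
    def distance(plane):
        center = plane[1]
        return sum((c - p) ** 2 for c, p in zip(center, camera_pos)) ** 0.5

    ordered = sorted(planes, key=distance)       # near to far
    if max_distance is not None:
        ordered = [p for p in ordered if distance(p) <= max_distance]
    if top_n is not None:
        ordered = ordered[:top_n]
    return ordered
```

Only the planes returned here get their structure data computed, which is the per-frame cost cap the text describes.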
  • CPU: Central Processing Unit
  • the reflection calculation in this embodiment combines the advantages of two reflection processing technologies, Planar Reflection and Screen-Space Reflection (SSR), and is equivalent to a Screen-Space Planar Reflection (SSPR) calculation.
  • SSR: Screen-Space-Reflection
  • SSPR: Screen-Space-Planar-Reflection
  • the deferred rendering pipeline needs to write data to multiple texture maps in each rendering command, generally called GBuffer0, GBuffer1, and so on.
  • a technology called Multi-Render-Target (MRT) needs to be used.
  • MRT: Multi-Render-Target
  • the forward rendering pipeline only needs to use one or a few GBuffers, and the bandwidth overhead is relatively low.
  • the SSPR reflection calculation in this embodiment is performed in the forward rendering pipeline. The forward rendering pipeline is cheaper and more widely supported than the deferred rendering pipeline, so the SSPR reflection calculation and image rendering of this embodiment are well suited to mobile terminals.
  • Step 204 Perform reflection calculation according to the structure data of the reflection plane, the projection data of the current camera, the pixel information of the current screen, and the depth map information of the current screen space, and obtain map information including the reflection result.
  • the SSPR reflection calculation, that is, the compute shader (ComputeShader, CS) calculation, is performed.
  • the input data of this module includes the DS1 array calculated on the CPU in the preceding step 203 (the reflection plane structure data), the pixels Color0 of the current screen (the pixel information of the current screen), the current screen-space depth Depth0 (the depth map information of the current screen space), and the current camera's projection data CameraData; the output is a texture containing the reflection result, named Ans0.
  • step 204 may specifically include: calculating the world coordinates of each pixel in the pixel information of the current screen from the depth map information of the current screen space and the projection data of the current camera; transforming the pixel's world coordinates in world space by the specular reflection transformation matrix (of the vertex relative to the reflection plane) in the structural data to obtain the world coordinates of the reflection result; converting the world coordinates of the reflection result into the current screen space through the projection data of the current camera, so that a depth test can be performed according to the depth map information of the current screen space and the reflection result; and generating the texture information from the pixels that pass the depth test.
  • performing the depth test according to the depth map information of the current screen space and the reflection result may specifically include: comparing the depth value of the target pixel in the depth map information of the current screen space with the value of the A channel of the target pixel's RGBA in the reflection result. Correspondingly, generating the map information from the pixels that pass the test may specifically include: if the comparison determines that the depth test succeeds, writing the color value and depth value of the target pixel into the texture information containing the reflection result; if the comparison determines that the depth test fails, discarding the target pixel.
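The comparison and write/discard logic can be sketched like this, with the A channel of the RGBA reflection texture carrying depth as the text describes (function and variable names are illustrative):

```python
def depth_test_and_write(reflection_rt, scene_depth, pixel, color_rgb, reflected_depth):
    """Write a candidate reflection into the RGBA texture at `pixel` only if it
    survives the depth test against the scene depth map and against whatever
    reflection (depth stored in the A channel) was written there earlier."""
    if reflected_depth > scene_depth[pixel]:
        return False                          # behind scene geometry: discard
    previous = reflection_rt.get(pixel)
    if previous is not None and previous[3] <= reflected_depth:
        return False                          # an earlier, nearer reflector wins
    r, g, b = color_rgb
    reflection_rt[pixel] = (r, g, b, reflected_depth)  # depth goes into A
    return True
```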
  • the calculation process and steps of the reflection calculation of SSPR include:
  • step 204 may further include: in the forward rendering pipeline, using GPU multi-core to perform reflection calculation to obtain texture information.
  • the multi-core feature of GPU is used to accelerate the calculation of reflections.
  • Step 205 using the texture information including the reflection result to perform image rendering of the game scene.
  • step 205 may specifically include: first performing noise reduction on the reflection result in the map information through a Gaussian blur algorithm, and then using the noise-reduced texture information for image rendering of the game scene.
  • the noise reduction is performed with an open-source Gaussian blur algorithm; to improve the result, multiple Gaussian blur passes can be applied to obtain a higher-quality texture. Figure 4 shows the rendering after noise reduction with the Gaussian blur algorithm.
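A minimal one-dimensional pass of such a Gaussian blur, with the multi-pass option mentioned above. The 3-tap kernel and clamped borders are assumptions; production code would blur both axes of the texture:

```python
def gaussian_blur_1d(row, passes=1):
    """Blur one row of values with a 3-tap Gaussian kernel (1/4, 1/2, 1/4),
    clamping indices at the borders; repeat for stronger noise reduction."""
    for _ in range(passes):
        n = len(row)
        row = [
            0.25 * row[max(i - 1, 0)] + 0.5 * row[i] + 0.25 * row[min(i + 1, n - 1)]
            for i in range(n)
        ]
    return row
```

A single bright noisy texel is spread into its neighbours on each pass, which is why chaining several passes, as the text suggests, smooths the reflection further.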
  • step 205 may further be preceded by obtaining a target map containing the roughness and normal information of the material at each pixel of the reflection plane. Correspondingly, step 205 may specifically include: first Mipmap-processing the texture information to obtain multiple Mipmap textures of different resolutions; then reading the roughness of each pixel's material from the target texture; then determining, according to each pixel's material roughness, the Mipmap texture of the corresponding definition to sample, where materials of different roughness each have a corresponding Mipmap definition; and finally performing image rendering of the game scene using the texture information of the Mipmap textures sampled for the respective pixels.
  • the material of the reflection plane can be drawn separately in advance through the RenderFeature of the Universal Render Pipeline (Universal-Renderer-Pipeline, URP, a programmable rendering pipeline in Unity) to obtain the reflection plane's material map.
  • URP: Universal-Renderer-Pipeline
  • RT: a texture used as a direct drawing target
  • the RT needs to have Mipmaps of different definitions, from which different Mipmaps are selected.
  • the target map containing the roughness and normal information of the reflective plane material is drawn separately in advance.
  • a texture map containing the roughness and normal of the material at each of its pixels is rendered, as shown in Figure 5.
  • the target map can be read later to get the roughness of the corresponding pixel material.
  • Mipmap technology automatically generates, for a given map, a series of textures at progressively lower resolutions (that is, lower definition).
  • the Mipmap definition level from which each pixel samples is determined by the roughness, as shown in Figure 6: the left image in Figure 6 is the rendering without roughness-based Mipmap sampling, and the right image is the rendering with roughness-based Mipmap sampling. Clearly, the reflection effect on the right is better.
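The roughness-to-Mipmap mapping can be sketched as follows; the linear mapping is an assumption, since the text only states that rougher materials sample lower-definition mips:

```python
def mip_level_for_roughness(roughness, mip_count):
    """Map roughness in [0, 1] to a mip index: 0 = full resolution (smooth
    material, sharp reflection), mip_count - 1 = lowest resolution (rough
    material, blurry reflection). Roughness is clamped to [0, 1]."""
    r = min(max(roughness, 0.0), 1.0)
    return round(r * (mip_count - 1))

def sample_reflection(mip_chain, roughness):
    """Pick the Mipmap texture a pixel should sample by its material roughness."""
    return mip_chain[mip_level_for_roughness(roughness, len(mip_chain))]
```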
  • the normal of the material can be used to perturb the reflection result RT to further improve the reflection effect.
  • performing image rendering of the game scene using the texture information of the Mipmap textures sampled for the respective pixels may specifically include: first reading the normal information of each pixel's material from the target texture (drawn separately in advance); perturbing, according to the normal information of each pixel's material, the texture information of the Mipmap textures sampled for the respective pixels; and then rendering the image of the game scene with the perturbed texture information.
  • the perturbation of the map information of the Mipmap textures sampled for the respective pixels may specifically include: first calculating the texture map coordinates from the pixel's current screen coordinates; then performing an overlay calculation on the texture map coordinates according to the normal direction and noise intensity of the pixel's material; and finally sampling the map information at the texture map coordinates after the overlay calculation.
  • a texture map coordinate (UV) is calculated from the pixel's screen coordinates for sampling; from the normal's direction and strength, a result in (-1, 1) can be obtained simply as DetailUV = Normal.RG * 2 - 1. Finally, superimposing DetailUV on the UV and then sampling the reflection map gives the perturbed result.
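The DetailUV superposition, with the plane's noise intensity as a scale factor, can be sketched as (names are illustrative):

```python
def perturb_reflection_uv(screen_uv, normal_rg, noise_intensity):
    """Remap the material normal's R/G channels from [0, 1] to (-1, 1) via
    DetailUV = Normal.RG * 2 - 1, scale by the noise intensity, and add the
    offset to the base UV before sampling the reflection map."""
    detail_u = (normal_rg[0] * 2.0 - 1.0) * noise_intensity
    detail_v = (normal_rg[1] * 2.0 - 1.0) * noise_intensity
    return (screen_uv[0] + detail_u, screen_uv[1] + detail_v)
```

A flat normal encoded as (0.5, 0.5) yields a zero offset, so only pixels with a tilted material normal shift their reflection lookup, which produces the ripple-like disturbance shown in Figure 7.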
  • the left picture in Figure 7 shows the RT without normal perturbation, and the right picture shows the RT with normal perturbation. Clearly, the reflection effect on the right is better.
  • the R and G channels of the target map can record the normal information of the pixel's material, and the B channel of the target map can record the roughness of the pixel's material.
  • Correspondingly, reading the roughness of each pixel's material from the target map may specifically include obtaining it by reading the B channel of the target texture; and reading the normal information of each pixel's material from the target texture may specifically include obtaining it by reading the R and G channels of the target texture.
  • the R and G channels record the normal direction (such as X, Y direction vectors), and the B channel records the roughness.
  • the roughness and normal information of the reflective plane material can be accurately obtained, which is convenient for further superposition optimization of the reflection effect.
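Decoding one texel of that target map according to this channel layout can be sketched as follows; the [0, 1] to (-1, 1) remap of the normal channels mirrors the DetailUV convention above and is an assumption:

```python
def decode_target_map_texel(rgba):
    """Split a target-map texel: R and G carry the material normal's X/Y
    direction (stored remapped into [0, 1]), B carries the roughness."""
    r, g, b, _a = rgba
    normal_xy = (r * 2.0 - 1.0, g * 2.0 - 1.0)
    roughness = b
    return normal_xy, roughness
```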
  • the method of this embodiment may further include: performing frame-by-frame image rendering on the game scene, so that each frame performs either the reflection calculation, the image noise reduction processing, or the image overlay optimization processing.
  • one SSPR rendering can be divided into three parts, the first part is the reflection calculation as shown in Figure 3, and the second part is the image noise reduction, which is the Gaussian blur as shown in Figure 4.
  • the third part is image overlay optimization, that is, the image overlay optimization process shown in Figure 6 and Figure 7.
  • a counter may be set so that only one of the above three parts is executed in each frame, cycling through them; the overhead is optimized by reducing the amount of calculation per frame.
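The counter-driven cycle, one part per frame, can be sketched as (the class and method names are illustrative):

```python
class SSPRFrameScheduler:
    """Run one of the three SSPR parts per frame, cycling through reflection
    calculation, Gaussian-blur noise reduction, and overlay optimization, so
    that only a third of the work lands on any single frame."""
    PASSES = ("reflection_calculation", "noise_reduction", "overlay_optimization")

    def __init__(self):
        self._counter = 0

    def next_pass(self):
        current = self.PASSES[self._counter % len(self.PASSES)]
        self._counter += 1
        return current
```

Each reflection texture therefore refreshes fully every three frames, trading a little latency for a steadier per-frame cost.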
  • the forward rendering pipeline is used instead of the deferred rendering pipeline, and the scheme also needs to be able to efficiently render the reflection results of multiple planes at the same time. On the premise of solving these problems of the prior art, efficiency should be improved as much as possible.
  • This embodiment combines the advantages of two reflection processing technologies, Planar Reflection and SSR, and is equivalent to the reflection calculation of SSPR. Specifically in the forward rendering pipeline, the reflection can be calculated through the screen space.
  • SSPR uses the forward ray to find which pixel the current pixel will reflect to.
  • SSR: a reverse ray finds which pixel is reflected to the current pixel
  • the multi-core feature of GPU is used to speed up the calculation of reflection and improve the efficiency of reflection calculation.
  • a higher-quality Mipmap is obtained, and through the RenderFeature of URP the material of the reflection plane is drawn separately in advance to obtain the roughness and the material normal on the reflection plane; the reflection result is then perturbed with the normal so that the reflection looks better.
  • the overhead of the reflection calculation can be optimized by splitting the calculation across frames.
  • this embodiment provides a game image processing apparatus.
  • the apparatus includes: an acquisition module 31 , a calculation module 32 , and a rendering module 33 .
  • an acquisition module 31 used for acquiring the reflection plane defined in the game scene
  • the acquisition module 31 is also used to acquire the structural data of the reflection plane
  • the calculation module 32 is configured to perform reflection calculation according to the structural data, the projection data of the current camera, the pixel information of the current screen and the depth map information of the current screen space, and obtain the texture information including the reflection result;
  • the rendering module 33 is configured to perform image rendering of the game scene by using the texture information.
  • the structural data includes a specular reflection transformation matrix of the vertex relative to the reflection plane. Correspondingly, the calculation module 32 is specifically configured to: calculate the world coordinates of each pixel in the pixel information of the current screen from the depth map information of the current screen space and the projection data of the current camera; transform the pixel's world coordinates in world space by the specular reflection transformation matrix to obtain the world coordinates of the reflection result; convert the world coordinates of the reflection result into the current screen space through the projection data of the current camera, so as to perform a depth test according to the depth map information of the current screen space and the reflection result; and generate the texture information from the pixels that pass the depth test.
  • the calculation module 32 is further configured to compare the depth value of the target pixel in the depth map information of the current screen space with the value of the A channel of the target pixel's RGBA in the reflection result;
  • the calculation module 32 is further configured to write the color value and depth value of the target pixel into the map information containing the reflection result if the comparison result determines that the depth test succeeds, and to discard the target pixel if the comparison result determines that the depth test fails.
  • the rendering module 33 is specifically configured to perform noise reduction on the reflection result in the texture information through a Gaussian blur algorithm, and to perform image rendering of the game scene with the noise-reduced texture information.
  • the acquiring module 31 is further configured to acquire, before the image rendering of the game scene is performed with the texture information, the target map containing the roughness and normal information of the material at each pixel of the reflection plane;
  • the rendering module 33 is further configured to perform Mipmap processing on the texture information to obtain multiple Mipmap textures of different definitions; read the roughness of each pixel's material from the target texture; determine, according to each pixel's roughness, the Mipmap texture of the corresponding definition to sample, where materials of different roughness each have a Mipmap texture of a corresponding definition; and perform image rendering of the game scene using the texture information of the Mipmap textures sampled for the respective pixels.
  • the rendering module 33 is further configured to read the normal information of each pixel's material from the target map; perturb, according to that normal information, the texture information of the Mipmap textures sampled for the respective pixels; and perform image rendering of the game scene with the perturbed texture information.
  • the rendering module 33 is further configured to calculate the texture map coordinates according to the pixel's current screen coordinates; perform an overlay calculation on the texture map coordinates according to the normal direction and noise intensity of the pixel's material; and sample the map information at the texture map coordinates after the overlay calculation.
  • the R and G channels of the target map record the normal information of the pixel material, and the B channel of the target map records the roughness of the pixel material;
  • the rendering module 33 is further configured to obtain the roughness of each pixel material by reading the B channel of the target map;
  • the rendering module 33 is further configured to acquire the normal information of each pixel point material by reading the R and G channels of the target texture.
  • the rendering module 33 is further configured to perform frame-by-frame image rendering on the game scene, so that each frame of image performs reflection calculation, image noise reduction processing, or image overlay optimization processing.
  • the calculation module 32 is further configured to obtain the texture information by performing reflection calculation using multiple GPU cores in the forward rendering pipeline.
  • the acquiring module 31 is further configured to determine the distance between the reflection plane defined in the game scene and the camera; acquire the distance between the reflection plane whose distance from the camera meets the preset distance condition. structured data.
  • Various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof.
  • a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all functions of some or all components of the game image processing apparatus according to the embodiments of the present invention.
  • the present invention can also be implemented as programs/instructions (e.g., computer programs/instructions and computer program products) for a device or apparatus performing some or all of the methods described herein.
  • Such programs/instructions implementing the present invention may be stored on a computer-readable medium, or may exist in the form of one or more signals; such signals may be downloaded from an Internet website, provided on a carrier signal, or made available in any other form.
  • Computer-readable media include both persistent and non-persistent, removable and non-removable media, and may implement information storage by any method or technology.
  • Information may be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Flash Memory or other memory technology, Compact Disc Read Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic tape cartridges, disk storage, quantum memory, graphene-based storage media or other magnetic storage devices or any other non-transmission media can be used to store information that can be accessed by computing devices.
  • FIG. 9 schematically shows a computer apparatus/device/system that can implement the game image processing method according to the present invention, the computer apparatus/device/system including a processor 410 and a computer-readable medium in the form of a memory 420.
  • The memory 420 is an example of a computer-readable medium, having a storage space 430 for storing computer programs/instructions 431.
  • When the computer programs/instructions 431 are executed by the processor 410, the various steps in the game image processing method described above can be implemented.
  • FIG. 10 schematically shows a block diagram of a computer program product implementing the method according to the invention.
  • The computer program product includes computer programs/instructions 510 which, when executed by a processor such as the processor 410 shown in FIG. 9, can implement the various steps in the game image processing method described above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Image Generation (AREA)

Abstract

A game image processing method, apparatus, program, and readable medium, relating to the technical field of image processing. The method comprises: first acquiring a reflection plane defined in a game scene (101); then acquiring structure data of the reflection plane (102); then performing a reflection computation according to the structure data, projection data of a current camera, pixel information of a current screen, and depth map information of a current screen space, to obtain texture map information containing a reflection result (103); and finally rendering an image of the game scene using the texture map information (104). The method saves rendering overhead of the main camera and thus reduces the cost of game image rendering. Reflection results of multiple planes can be drawn in one pass, which not only improves the efficiency of game image rendering but also guarantees the correctness of the reflection results.

Description

Game image processing method, apparatus, program, and readable medium
Cross-Reference
This application claims priority to Chinese Patent Application No. 202011499787.2, entitled "Game Image Processing Method, Apparatus, and Electronic Device", filed on December 18, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of image processing, and in particular to a game image processing method, apparatus, program, and readable medium.
Background
With the development of the game industry, there are more and more game enthusiasts. To improve players' gaming experience, game production increasingly strives for realistic scenes. Reflection effects are an indispensable part of the game picture; they are generally used to simulate the specular reflection of the surrounding environment by materials such as metal or glass, or the reflection of the sky and surrounding scenery by a water surface.
To simulate reflection effects in games as realistically as possible, the Fake Reflection technique is currently used: the reflection plane is made transparent and mirrored objects are placed directly at symmetric positions, so that the required reflection result can be rendered in a single pass by the main camera.
However, fake reflection only works for specific angles at specific locations; the more accurate the required result, the more objects must be placed, which makes the rendering overhead of the main camera high.
Summary
The present invention proposes the following technical solutions to overcome, or at least partially solve or alleviate, the above problems:
According to one aspect of the present application, a game image processing method is provided, the method comprising:
acquiring a reflection plane defined in a game scene;
acquiring structure data of the reflection plane;
performing a reflection computation according to the structure data, projection data of a current camera, pixel information of a current screen, and depth map information of a current screen space, to obtain texture map information containing a reflection result;
rendering an image of the game scene using the texture map information.
According to another aspect of the present application, a game image processing apparatus is provided, the apparatus comprising:
an acquiring module, configured to acquire a reflection plane defined in a game scene;
the acquiring module, further configured to acquire structure data of the reflection plane;
a computing module, configured to perform a reflection computation according to the structure data, projection data of a current camera, pixel information of a current screen, and depth map information of a current screen space, to obtain texture map information containing a reflection result;
a rendering module, configured to render an image of the game scene using the texture map information.
According to yet another aspect of the present invention, a computer apparatus/device/system is provided, comprising a memory, a processor, and computer programs/instructions stored on the memory, wherein the processor, when executing the computer programs/instructions, implements the steps of the method described in the first aspect above.
According to still another aspect of the present invention, a computer-readable medium is provided, on which computer programs/instructions are stored, wherein the computer programs/instructions, when executed by a processor, implement the steps of the method described in the first aspect above.
According to still another aspect of the present invention, a computer program product is provided, comprising computer programs/instructions, wherein the computer programs/instructions, when executed by a processor, implement the steps of the method described in the first aspect above.
The beneficial effects of the present invention are as follows. By means of the above technical solutions, and compared with existing reflection processing techniques, the game image processing method, apparatus, program, and readable medium provided by this application constitute a new reflection processing scheme: a reflection computation is performed according to the structure data of a reflection plane defined in the game scene, the projection data of the current camera, the pixel information of the current screen, and the depth map information of the current screen space, to obtain texture map information containing a reflection result, and this texture map information is subsequently used to render an image of the game scene. This application is not restricted to specific angles at specific locations in the game scene, and no additional mirrored objects need to be placed at positions symmetric to the reflection plane, which saves the rendering overhead of the main camera and thus reduces the cost of game image rendering. Reflection results of multiple planes can be drawn in one pass, which not only improves the efficiency of game image rendering but also guarantees the correctness of the reflection results.
Brief Description of the Drawings
The above and various other advantages and benefits of the present invention will become clear to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. In the drawings:
FIG. 1 shows a schematic flowchart of a game image processing method provided by an embodiment of the present application;
FIG. 2 shows a schematic flowchart of another game image processing method provided by an embodiment of the present application;
FIG. 3 shows a schematic diagram of an example reflection computation effect provided by an embodiment of the present application;
FIG. 4 shows a schematic diagram of an example Gaussian-blur denoising effect provided by an embodiment of the present application;
FIG. 5 shows a schematic diagram of an example reflection plane material texture provided by an embodiment of the present application;
FIG. 6 shows a schematic diagram of an example comparison of the effect of sampling Mipmaps by roughness provided by an embodiment of the present application;
FIG. 7 shows a schematic diagram of an example comparison of the effect of perturbing the reflection RT with normals provided by an embodiment of the present application;
FIG. 8 shows a schematic structural diagram of a game image processing apparatus provided by an embodiment of the present application;
FIG. 9 schematically shows a block diagram of a computer apparatus/device/system for implementing the method according to the present invention; and
FIG. 10 schematically shows a block diagram of a computer program product implementing the method according to the present invention.
Detailed Description of the Embodiments
The present invention is further described below with reference to the drawings and specific embodiments. The following description merely illustrates the basic principles of the present invention and does not limit it.
To address the technical problem that existing reflection processing techniques are restricted to specific angles at specific locations and increase the cost of game image rendering, this embodiment provides a game image processing method, as shown in FIG. 1, comprising:
Step 101: acquire a reflection plane defined in a game scene.
In this embodiment, the reflection planes to be defined in the game scene can be determined according to the actual situation of the scene. For example, a water surface, a glossy road surface, building glass, or a mirror above a washbasin appearing in the game scene can all be defined as reflection planes, making the game scene more realistic and improving the player's gaming experience.
The executing body of this embodiment may be a game image processing apparatus or device, which can be deployed on the client side or the server side.
It should be noted that any number of reflection planes, such as one, two, or more, can be defined in the game scene according to actual needs. If multiple reflection planes defined in the game scene are acquired, the following process, i.e. the process shown in steps 102 to 103, is performed for each reflection plane.
Step 102: acquire structure data of the reflection plane.
The structure data of the reflection plane may include: a mirror reflection transformation matrix of vertices relative to the reflection plane, the plane normal of the reflection plane, the bounding box position of the reflection plane, and detail data such as the reflection threshold and noise intensity of the reflection plane.
Step 103: perform a reflection computation according to the structure data of the reflection plane, the projection data of the current camera, the pixel information of the current screen, and the depth map information of the current screen space, to obtain texture map information containing a reflection result.
The pixel information of the current screen may contain information about each pixel of the current screen. The depth map information of the current screen space may contain the depth information of the current screen, where depth represents the distance of a pixel from the camera in the 3D world; the larger a pixel's depth value, the farther the pixel is from the camera.
In this embodiment, based on the structure data of the reflection plane, the projection data of the current camera, the pixel information of the current screen, and the depth map information of the current screen space, the reflection result can be computed in the forward rendering pipeline, i.e. by following rays forward to find which pixel the current pixel will be reflected to. For example, the world coordinates of each current pixel are first computed from the depth map information of the current screen space; then, for each reflection plane, the world coordinate to which the pixel's world coordinate is mapped by the planar reflection is computed; finally, the pixel is projected back into the current screen space and, after a depth test, the color and depth of the pixels that pass the depth test are written into the texture map information, yielding texture map information containing the reflection result.
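The unproject/reproject round trip described above can be sketched in NumPy; the function and parameter names (`inv_view_proj`, `view_proj`) are illustrative rather than taken from the patent, and a column-vector clip-space convention is assumed:

```python
import numpy as np

def world_from_depth(uv, depth_ndc, inv_view_proj):
    """Unproject a screen pixel (uv in [0,1]^2, depth in NDC) into world space."""
    ndc = np.array([uv[0] * 2 - 1, uv[1] * 2 - 1, depth_ndc, 1.0])
    world = inv_view_proj @ ndc
    return world[:3] / world[3]                      # perspective divide

def world_to_screen(p_world, view_proj):
    """Project a world-space point back to (uv, NDC depth) in the current screen."""
    clip = view_proj @ np.append(p_world, 1.0)
    ndc = clip[:3] / clip[3]
    return (ndc[:2] + 1) * 0.5, ndc[2]
```

With a real camera, `inv_view_proj` would be the inverse of the camera's view-projection matrix; the identity matrix is used only as a sanity check.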
Step 104: render the image of the game scene using the computed texture map information.
Based on the texture map information, a graphics processing unit (GPU) is used to render the image of the game scene.
Compared with existing reflection processing techniques, the game image processing method provided by this embodiment is a new reflection processing scheme: a reflection computation is performed according to the structure data of a reflection plane defined in the game scene, the projection data of the current camera, the pixel information of the current screen, and the depth map information of the current screen space, to obtain texture map information containing a reflection result, and this texture map information is subsequently used to render an image of the game scene. This embodiment is not restricted to specific angles at specific locations in the game scene, and no additional mirrored objects need to be placed at positions symmetric to the reflection plane, which saves the rendering overhead of the main camera and thus reduces the cost of game image rendering. Reflection results of multiple planes can be drawn in one pass, which not only improves the efficiency of game image rendering but also guarantees the correctness of the reflection results.
Further, as a refinement and extension of the specific implementation of the above embodiment, and in order to fully describe the implementation of this embodiment, this embodiment also provides another game image processing method, as shown in FIG. 2, comprising:
Step 201: acquire a reflection plane defined in a game scene.
Illustratively, this embodiment can be implemented on the basis of the Unity game engine. First, a new post-processing component can be defined and added to the PPV (short for Post-Process-Volume, a post-processing stack supported in Unity's URP), and the reflection is computed in screen space during post-processing; specifically, the processes shown in steps 202 to 204 can be performed.
In this embodiment, several reflection planes, such as water surfaces, glossy road surfaces, and glass, can be predefined in the game scene.
Since a game scene may contain a large number of reflection planes, some of which are far from the camera, performing the reflection computation for those distant planes would not only have little visible effect but would also increase the cost of the reflection computation. Therefore, the N currently most important reflection planes can be selected according to configuration, and the structure data of these selected planes computed. Specifically, the processes shown in steps 202 to 203 can be performed.
Step 202: determine the distance between each reflection plane defined in the game scene and the camera.
Step 203: acquire the structure data of the reflection planes whose distance from the camera meets a preset distance condition.
The preset distance condition can be set in advance according to actual needs. For example, the reflection planes can be sorted by their distance from the camera from near to far; the planes at the front of the ordering are those closest to the camera, and computing their reflection results makes the reflection effect in the game image more prominent, while for the planes further back, which are far from the camera, performing the reflection computation would have little visible effect and would increase its cost. Therefore, a preset number of the top-ranked reflection planes (the top N) can be taken as the planes whose distance meets the preset distance condition, and their structure data computed.
As another example, a distance threshold can be preset, and the reflection planes whose distance from the camera is less than or equal to the preset distance threshold are taken as the planes meeting the preset distance condition, after which their structure data is computed.
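Both selection policies described above (top-N by distance, and a distance threshold) can be sketched as follows; the `planes` records and their field names are hypothetical:

```python
import math

def select_planes(planes, camera_pos, top_n=None, max_dist=None):
    """Keep only the reflection planes worth computing: optionally those within
    `max_dist` of the camera, and/or the `top_n` nearest ones."""
    def dist(p):
        return math.dist(p["position"], camera_pos)
    candidates = sorted(planes, key=dist)            # near-to-far ordering
    if max_dist is not None:
        candidates = [p for p in candidates if dist(p) <= max_dist]
    if top_n is not None:
        candidates = candidates[:top_n]              # take the top-N nearest
    return candidates
```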
Using the central processing unit (CPU), the data structure DS1 of each reflection plane is computed, which may contain the following data:
(a) the mirror reflection transformation matrix of vertices relative to the plane;
(b) the plane normal of the reflection plane;
(c) the bounding box position of the reflection plane;
(d) detail data such as the reflection threshold and noise intensity of the reflection plane.
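A DS1 entry and the mirror transform of item (a) might be sketched as follows; the class layout is an assumption, and the matrix reflects points about the plane n·x + d = 0 for a unit normal n:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ReflectionPlaneData:            # hypothetical layout of one DS1 element
    mirror: np.ndarray                # (a) mirror transform about the plane
    normal: np.ndarray                # (b) plane normal
    bounds: tuple                     # (c) bounding box position (min, max)
    threshold: float = 0.0            # (d) reflection threshold
    noise_strength: float = 0.0       # (d) noise intensity

def mirror_matrix(n, d):
    """4x4 matrix reflecting points about the plane n.x + d = 0 (n unit length)."""
    n = np.asarray(n, dtype=float)
    m = np.eye(4)
    m[:3, :3] -= 2.0 * np.outer(n, n)                # Householder part for directions
    m[:3, 3] = -2.0 * d * n                          # translation for the plane offset
    return m
```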
The reflection computation of this embodiment combines the advantages of two reflection processing techniques, Planar Reflection and Screen-Space Reflection (SSR), and amounts to the reflection computation of Screen-Space Planar Reflection (SSPR). In the forward rendering pipeline, the reflection is computed purely in screen space. This is the opposite of SSR: SSR traces rays backward to find which pixel reflects onto the current pixel, whereas SSPR follows rays forward to find which pixel the current pixel will be reflected to. Compared with SSR, the overhead is lower, and neither SSR's ray-marching technique (RayMarching) nor a deferred rendering pipeline is needed.
A deferred rendering pipeline must, in each draw call, write data to multiple textures, generally called GBuffer0, GBuffer1, and so on. This process requires a technique called Multi-Render-Target (MRT); the more GBuffers required, the greater the bandwidth overhead of each draw call. A forward rendering pipeline, by contrast, only needs one or a few GBuffers, so its bandwidth overhead is relatively lower. In this embodiment, the SSPR reflection computation is performed in the forward rendering pipeline; using the forward rendering pipeline is cheaper and more widely supported than the deferred pipeline, so the SSPR reflection computation and image rendering of this embodiment are better suited to mobile platforms.
The reflection computation process is described in detail below:
Step 204: perform a reflection computation according to the structure data of the reflection plane, the projection data of the current camera, the pixel information of the current screen, and the depth map information of the current screen space, to obtain texture map information containing a reflection result.
For example, the SSPR reflection computation, i.e. a compute shader (ComputeShader, CS) computation, is performed via a ComputeShader. The input data of this module comprise the DS1 array computed on the CPU in step 203 (the reflection plane structure data), the current screen pixels Color0 (the pixel information of the current screen), the current screen-space depth Depth0 (the depth map information of the current screen space), and the current camera's projection data CameraData; the output is a texture (the texture map information containing the reflection result), named Ans0.
Optionally, step 204 may specifically comprise: computing the world coordinates of each pixel in the pixel information of the current screen from the depth map information of the current screen space and the projection data of the current camera; transforming the pixel's world coordinates in world space with reference to the mirror reflection transformation matrix of vertices relative to the reflection plane in the structure data, to obtain the world coordinates of a reflection result; converting the world coordinates of the reflection result into the current screen space using the projection data of the current camera, so that a depth test can be performed according to the depth map information of the current screen space and the reflection result; and generating the texture map information from the pixels that pass the depth test.
Illustratively, performing the depth test according to the depth map information of the current screen space and the reflection result may specifically comprise: comparing the depth value of a target pixel in the depth map information of the current screen space with the value of the A channel of the target pixel in the reflection result RGBA. Correspondingly, generating the texture map information from the pixels that pass the test may specifically comprise: if the depth test is judged successful according to the comparison result, writing the color value and depth value of the target pixel into the texture map information containing the reflection result; if the depth test is judged to have failed according to the comparison result, discarding the target pixel.
For example, the computation process and steps of the SSPR reflection computation comprise:
(1) computing the world coordinates of each pixel from the depth map Depth0 and the camera projection data CameraData;
(2) transforming the pixel's world coordinates in world space using the reflection transformation matrix in DS1, to obtain the world coordinates of the reflection result;
(3) converting the world coordinates of the reflection result into screen space using the camera projection data CameraData, then comparing the depth value in the depth map with the A channel of Ans0, i.e. the depth test; if the depth test succeeds, the color and depth of the pixel are written into Ans0; if it fails, the pixel is discarded;
(4) looping between steps (2) and (3) for each reflection plane, i.e. each element of the DS1 array, terminating when the SSPR reflection computation of every reflection plane is complete. The final Ans0 is the texture map information containing the reflection result obtained by the reflection computation; FIG. 3 shows the result after the SSPR reflection computation, i.e. the reflection result directly computed by the CS.
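Steps (1) to (3) can be sketched as a single-plane CPU pass in NumPy. Ans0's A channel is taken to hold depth, initialized to infinity, and the depth-test direction (write when nearer) is an assumption, since the text only states that the depth value and the A channel are compared:

```python
import numpy as np

def sspr_pass(color0, depth0, mirror, view_proj, inv_view_proj, ans0):
    """One plane's pass of steps (1)-(3): unproject each pixel, mirror it,
    reproject, and depth-test against ans0 (RGBA, A channel = depth)."""
    h, w = depth0.shape
    for y in range(h):
        for x in range(w):
            uv = ((x + 0.5) / w, (y + 0.5) / h)
            ndc = np.array([uv[0] * 2 - 1, uv[1] * 2 - 1, depth0[y, x], 1.0])
            pw = inv_view_proj @ ndc                 # (1) world position
            pw = pw / pw[3]
            rw = mirror @ pw                         # (2) mirrored world position
            clip = view_proj @ rw                    # (3) back to screen space
            nd = clip[:3] / clip[3]
            sx = int((nd[0] + 1) * 0.5 * w)
            sy = int((nd[1] + 1) * 0.5 * h)
            if 0 <= sx < w and 0 <= sy < h and nd[2] < ans0[sy, sx, 3]:
                ans0[sy, sx, :3] = color0[y, x]      # write color on test success
                ans0[sy, sx, 3] = nd[2]              # write depth; else discard pixel
```

Step (4) would simply wrap this call in a loop over the DS1 array; in the actual embodiment this work is dispatched to a ComputeShader rather than run on the CPU.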
To improve reflection computation efficiency, optionally, step 204 may further comprise: performing the reflection computation using multiple GPU cores in the forward rendering pipeline to obtain the texture map information. For example, the multi-core nature of the GPU is exploited via a ComputeShader to accelerate the reflection computation.
Step 205: render the image of the game scene using the texture map information containing the reflection result.
Blind spots may exist in the reflection result after the SSPR reflection computation. Therefore, to remove blind spots, step 205 may optionally comprise: first denoising the reflection result in the texture map information with a Gaussian blur algorithm, and then rendering the image of the game scene using the denoised texture map information. For example, an open-source Gaussian blur algorithm is used for denoising; to improve the denoising effect, Gaussian blur can be applied multiple times to obtain a higher-quality texture. FIG. 4 shows the effect of denoising with the Gaussian blur algorithm.
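A minimal separable Gaussian blur of the kind used for denoising might look like this; the kernel size and edge padding are assumptions, not taken from the patent:

```python
import numpy as np

def gaussian_blur(img, sigma=1.0, radius=2):
    """Separable Gaussian blur: one horizontal and one vertical 1-D pass.
    Running it several times widens the effective kernel, as the text suggests."""
    xs = np.arange(-radius, radius + 1)
    kernel = np.exp(-(xs ** 2) / (2.0 * sigma ** 2))
    kernel /= kernel.sum()                           # flat areas keep their value
    padded = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)
```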
For a better optimization effect, further optionally, before step 205 the method may further comprise: acquiring a target texture containing the roughness and normal information of the material of each pixel corresponding to the reflection plane. Correspondingly, step 205 may specifically comprise: first applying Mipmap processing to the texture map information to obtain multiple Mipmap textures of different resolutions; then reading the material roughness of each pixel from the target texture; then, according to each pixel's material roughness, determining the Mipmap texture of corresponding resolution to be sampled for each pixel, where materials of different roughness each have a Mipmap texture of corresponding resolution; and rendering the image of the game scene using the texture map information containing the Mipmap textures sampled for each pixel.
In this optional embodiment, the material of the reflection plane can be drawn once in advance, separately, via a RenderFeature of the Universal Render Pipeline (Universal-Renderer-Pipeline, URP, a programmable rendering pipeline in Unity), to obtain the roughness and material normals on the reflection plane; this information is used to perturb the reflection RT result, making the reflection effect better.
For example, rough and glossy materials are expected, in practice, to have different reflection results. Therefore, the RT obtained by the CS (RenderTexture, a texture used for direct draw commands) needs Mipmaps of different resolutions, and for pixels of different roughness, different Mipmaps are selected. Specifically, a target texture containing the roughness and normal information of the reflection plane material is drawn separately in advance; for example, for the reflection plane, a texture containing the roughness and normals from its pixels' material texture is rendered, as shown in FIG. 5. The target texture can subsequently be read to obtain the material roughness of the corresponding pixel. The Mipmap technique automatically generates, for a texture, a cascade of several lower-resolution (i.e. lower-clarity) textures. The roughness determines which resolution level each pixel samples. As shown in FIG. 6, the left image shows the effect without roughness-based Mipmap sampling and the right image the effect with it; the reflection effect on the right is clearly better.
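The roughness-to-mip mapping could be as simple as the linear sketch below; the patent does not specify the curve, so this mapping is an assumption:

```python
def mip_for_roughness(roughness, mip_count):
    """Choose which Mipmap level a pixel samples: roughness 0 gets the sharpest
    mip (level 0), roughness 1 the blurriest (level mip_count - 1)."""
    r = min(max(roughness, 0.0), 1.0)                # clamp to [0, 1]
    return round(r * (mip_count - 1))
```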
Further, when sampling the reflection RT, the material normals can additionally be used to perturb the reflection result RT, to further improve the reflection effect. Correspondingly and optionally, rendering the image of the game scene using the texture map information containing the Mipmap textures sampled for each pixel may specifically comprise: first reading the material normal information of each pixel from the (separately pre-drawn) target texture; perturbing, according to each pixel's material normal information, the texture map information containing the Mipmap textures sampled for each pixel; and then rendering the image of the game scene using the perturbed texture map information.
Illustratively, perturbing the texture map information containing the Mipmap textures sampled for each pixel according to each pixel's material normal information may specifically comprise: first computing texture coordinates from the pixel's current screen coordinates; then superimposing, on the texture coordinates, an offset based on the normal direction and noise intensity of the pixel's material; and finally sampling the texture map information at the superimposed texture coordinates.
For example, when sampling the reflection RT, a texture coordinate (UV) for sampling is computed from the pixel's screen coordinates; from the direction and magnitude of the normal, a result in (-1, 1) can simply be obtained: DetailUV = Normal.RG * 2 - 1. Finally, DetailUV is added onto UV and the reflection texture is sampled, giving the perturbed result. As shown in FIG. 7, the left image shows the effect without normal-perturbed reflection RT and the right image the effect with it; the reflection effect on the right is clearly better.
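The DetailUV computation quoted above can be sketched directly; the noise-strength scaling and wrap-around addressing are assumptions:

```python
def perturbed_uv(screen_uv, normal_rg, noise_strength=1.0):
    """DetailUV = Normal.RG * 2 - 1, scaled by the noise strength and added to
    the sampling UV; wrap-around addressing keeps the result in [0, 1)."""
    detail = tuple(c * 2.0 - 1.0 for c in normal_rg)         # map [0,1] -> (-1,1)
    return tuple((u + d * noise_strength) % 1.0
                 for u, d in zip(screen_uv, detail))
```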
To facilitate reading the roughness and normal information of the reflection plane material from the target texture, they can be recorded in the target texture's R, G, and B channels. Illustratively, the R and G channels of the target texture can record the pixel material's normal information, while the B channel records the pixel material's roughness. Correspondingly, reading the material roughness of each pixel from the target texture may specifically comprise: obtaining each pixel's material roughness by reading the B channel of the target texture; and reading the material normal information of each pixel from the target texture may specifically comprise: obtaining each pixel's material normal information by reading the R and G channels of the target texture. As shown in FIG. 5, the R and G channels record the normal direction (e.g. the X- and Y-direction components), and the B channel records the roughness. In this optional manner, the roughness and normal information of the reflection plane material can be obtained accurately, facilitating further stacked optimization of the reflection effect.
Based on the reflection computation process shown in FIG. 3, the Gaussian-blur image denoising process shown in FIG. 4, and the image stacking optimization process shown in FIGS. 6 and 7 of the above embodiments, performing all of these operations in every frame would increase the per-frame computation and thus the rendering overhead. Therefore, to reduce the per-frame computation, this embodiment's method may optionally further comprise: rendering the game scene across frames, so that each frame performs either the reflection computation, or the image denoising, or the image stacking optimization. For example, for the frame-split optimization of SSPR, one SSPR rendering can be divided into three parts: the first is the reflection computation shown in FIG. 3, the second is image denoising, i.e. the Gaussian-blur denoising process shown in FIG. 4, and the third is image stacking optimization, i.e. the process shown in FIGS. 6 and 7. In this embodiment, a counter can be set so that each frame executes only one of the three parts in rotation; by reducing the per-frame computation, the overhead is optimized.
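The per-frame counter described above can be sketched as a small round-robin scheduler; the stage names are illustrative:

```python
class FrameScheduler:
    """Round-robin frame splitting: each frame runs exactly one of the three
    SSPR stages, cutting the per-frame cost to roughly one third."""
    STAGES = ("reflection", "denoise", "stack_optimize")     # illustrative names

    def __init__(self):
        self.counter = 0                                     # the per-frame counter

    def stage_for_frame(self):
        stage = self.STAGES[self.counter % len(self.STAGES)]
        self.counter += 1
        return stage
```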
To solve the problems of existing reflection processing techniques, the correctness of the reflection result must be guaranteed as far as possible, at least for key shots. Since the solution needs to run on mobile platforms, the forward rendering pipeline is used instead of the deferred rendering pipeline, and the reflection results of multiple planes must be renderable simultaneously and efficiently; on the premise that these problems of the prior art are solved, efficiency should be improved as much as possible. This embodiment combines the advantages of the Planar Reflection and SSR techniques, amounting to an SSPR reflection computation. Specifically, in the forward rendering pipeline, the reflection can be computed in screen space; contrary to SSR (which traces rays backward to find which pixel reflects onto the current pixel), SSPR follows rays forward to find which pixel the current pixel will be reflected to; compared with SSR, the overhead is lower, requiring neither SSR's RayMarching nor a deferred rendering pipeline. Moreover, the multi-core nature of the GPU is exploited via a ComputeShader to accelerate the reflection computation and improve its efficiency. Higher-quality Mipmaps are obtained via multiple Gaussian blurs, and the reflection plane's material is drawn once separately in advance via URP's RenderFeature, obtaining the roughness and material normals on the reflection plane; this information is used to perturb the reflection RT result, improving the reflection effect. Finally, the reflection overhead can be optimized via frame-split computation. By applying the scheme of this embodiment, no separate reflection camera is needed to render the scene; the reflection results of multiple planes can be drawn in one pass; no deferred rendering pipeline is relied upon; and the different sampling results of reflections by different materials can be represented.
Further, as a specific implementation of the methods shown in FIGS. 1 and 2, this embodiment provides a game image processing apparatus, as shown in FIG. 8, comprising: an acquiring module 31, a computing module 32, and a rendering module 33.
The acquiring module 31 is configured to acquire a reflection plane defined in a game scene;
the acquiring module 31 is further configured to acquire structure data of the reflection plane;
the computing module 32 is configured to perform a reflection computation according to the structure data, projection data of a current camera, pixel information of a current screen, and depth map information of a current screen space, to obtain texture map information containing a reflection result;
the rendering module 33 is configured to render an image of the game scene using the texture map information.
In a specific application scenario, optionally, the structure data comprises: a mirror reflection transformation matrix of vertices relative to the reflection plane. Correspondingly, the computing module 32 is specifically configured to: compute the world coordinates of each pixel in the pixel information of the current screen from the depth map information of the current screen space and the projection data of the current camera; transform the pixel's world coordinates in world space with reference to the mirror reflection transformation matrix, to obtain the world coordinates of a reflection result; convert the world coordinates of the reflection result into the current screen space using the projection data of the current camera, so that a depth test can be performed according to the depth map information of the current screen space and the reflection result; and generate the texture map information from the pixels that pass the depth test.
In a specific application scenario, the computing module 32 is further specifically configured to compare the depth value of a target pixel in the depth map information of the current screen space with the value of the A channel of the target pixel in the reflection result RGBA;
the computing module 32 is further specifically configured to: if the depth test is judged successful according to the comparison result, write the color value and depth value of the target pixel into the texture map information containing the reflection result; and if the depth test is judged to have failed according to the comparison result, discard the target pixel.
In a specific application scenario, the rendering module 33 is specifically configured to denoise the reflection result in the texture map information with a Gaussian blur algorithm, and to render the image of the game scene using the denoised texture map information.
In a specific application scenario, the acquiring module 31 is further configured to, before the image of the game scene is rendered using the texture map information, acquire a target texture containing the roughness and normal information of the material of each pixel corresponding to the reflection plane;
the rendering module 33 is further specifically configured to: apply Mipmap processing to the texture map information to obtain multiple Mipmap textures of different resolutions; read the material roughness of each pixel from the target texture; determine, according to each pixel's material roughness, the Mipmap texture of corresponding resolution to be sampled for each pixel, where materials of different roughness each have a Mipmap texture of corresponding resolution; and render the image of the game scene using the texture map information containing the Mipmap textures sampled for each pixel.
In a specific application scenario, the rendering module 33 is further specifically configured to: read the material normal information of each pixel from the target texture; perturb, according to each pixel's material normal information, the texture map information containing the Mipmap textures sampled for each pixel; and render the image of the game scene using the perturbed texture map information.
In a specific application scenario, the rendering module 33 is further specifically configured to: compute texture coordinates from a pixel's current screen coordinates; superimpose, on the texture coordinates, an offset based on the normal direction and noise intensity of the pixel's material; and sample the texture map information at the superimposed texture coordinates.
In a specific application scenario, optionally, the R and G channels of the target texture record the pixel material's normal information, and the B channel of the target texture records the pixel material's roughness;
the rendering module 33 is further specifically configured to obtain each pixel's material roughness by reading the B channel of the target texture;
the rendering module 33 is further specifically configured to obtain each pixel's material normal information by reading the R and G channels of the target texture.
In a specific application scenario, the rendering module 33 is further configured to render the game scene across frames, so that each frame performs either the reflection computation, or the image denoising, or the image stacking optimization.
In a specific application scenario, the computing module 32 is further specifically configured to perform the reflection computation using multiple GPU cores in the forward rendering pipeline to obtain the texture map information.
In a specific application scenario, the acquiring module 31 is further specifically configured to determine the distance between each reflection plane defined in the game scene and the camera, and to acquire the structure data of the reflection planes whose distance from the camera meets a preset distance condition.
It should be noted that, for other corresponding descriptions of the functional units involved in the game image processing apparatus provided by this embodiment, reference may be made to the corresponding descriptions of FIGS. 1 and 2, which are not repeated here.
The various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art should understand that, in practice, a microprocessor or a digital signal processor (DSP) may be used to implement some or all of the functions of some or all of the components of the game image processing apparatus according to the embodiments of the present invention. The present invention may also be implemented as programs/instructions (for example, computer programs/instructions and computer program products) of a device or apparatus for performing part or all of the methods described herein. Such programs/instructions implementing the present invention may be stored on a computer-readable medium, or may exist in the form of one or more signals; such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
Computer-readable media include persistent and non-persistent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, modules of programs, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media, or other magnetic storage devices or any other non-transmission media that can be used to store information accessible by computing devices.
FIG. 9 schematically shows a computer apparatus/device/system that can implement the game image processing method according to the present invention, comprising a processor 410 and a computer-readable medium in the form of a memory 420. The memory 420 is an example of a computer-readable medium and has a storage space 430 for storing computer programs/instructions 431. When the computer programs/instructions 431 are executed by the processor 410, the steps of the game image processing method described above can be implemented.
FIG. 10 schematically shows a block diagram of a computer program product implementing the method according to the present invention. The computer program product comprises computer programs/instructions 510 which, when executed by a processor such as the processor 410 shown in FIG. 9, can implement the steps of the game image processing method described above.
Specific embodiments of this specification have been described above; together with other embodiments, they fall within the scope of the appended claims. In some cases, the actions or steps recited in the claims can be performed in an order different from that of the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In some implementations, multitasking and parallel processing are also possible or advantageous.
It should also be noted that the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article, or device comprising the element.
It should be understood that the above embodiments are merely for the purpose of illustrating the present invention and are not intended to limit it. Those skilled in the art may also implement the present invention in other ways without departing from its basic spirit and characteristics. The scope of the present invention shall be determined by the appended claims, and any modifications, equivalent replacements, improvements, etc. made within the spirit and principles of one or more embodiments of this specification shall be covered thereby.

Claims (15)

  1. A game image processing method, characterized by comprising:
    acquiring a reflection plane defined in a game scene;
    acquiring structure data of the reflection plane;
    performing a reflection computation according to the structure data, projection data of a current camera, pixel information of a current screen, and depth map information of a current screen space, to obtain texture map information containing a reflection result;
    rendering an image of the game scene using the texture map information.
  2. The method according to claim 1, characterized in that the structure data comprises: a mirror reflection transformation matrix of vertices relative to the reflection plane;
    the performing a reflection computation according to the structure data, projection data of a current camera, pixel information of a current screen, and depth map information of a current screen space, to obtain texture map information containing a reflection result, specifically comprises:
    computing the world coordinates of each pixel in the pixel information of the current screen from the depth map information of the current screen space and the projection data of the current camera;
    transforming the world coordinates of the pixel in world space with reference to the mirror reflection transformation matrix, to obtain world coordinates of a reflection result;
    converting the world coordinates of the reflection result into the current screen space using the projection data of the current camera, so that a depth test can be performed according to the depth map information of the current screen space and the reflection result;
    generating the texture map information from pixels that pass the depth test.
  3. The method according to claim 2, characterized in that the performing a depth test according to the depth map information of the current screen space and the reflection result specifically comprises:
    comparing the depth value of a target pixel in the depth map information of the current screen space with the value of the A channel of the target pixel in the reflection result RGBA;
    the generating the texture map information from pixels that pass the test specifically comprises:
    if the depth test is judged successful according to the comparison result, writing the color value and depth value of the target pixel into the texture map information containing the reflection result;
    if the depth test is judged to have failed according to the comparison result, discarding the target pixel.
  4. The method according to claim 1, characterized in that the rendering an image of the game scene using the texture map information specifically comprises:
    denoising the reflection result in the texture map information with a Gaussian blur algorithm;
    rendering the image of the game scene using the denoised texture map information.
  5. The method according to claim 1, characterized in that before the rendering an image of the game scene using the texture map information, the method further comprises:
    acquiring a target texture containing roughness and normal information of the material of each pixel corresponding to the reflection plane;
    the rendering an image of the game scene using the texture map information specifically comprises:
    applying Mipmap processing to the texture map information to obtain multiple Mipmap textures of different resolutions;
    reading the material roughness of each pixel from the target texture;
    determining, according to the material roughness of each pixel, a Mipmap texture of corresponding resolution to be sampled for each pixel, wherein materials of different roughness each have a Mipmap texture of corresponding resolution;
    rendering the image of the game scene using texture map information containing the Mipmap textures sampled for each pixel.
  6. The method according to claim 5, characterized in that the rendering the image of the game scene using texture map information containing the Mipmap textures sampled for each pixel specifically comprises:
    reading the material normal information of each pixel from the target texture;
    perturbing, according to the material normal information of each pixel, the texture map information containing the Mipmap textures sampled for each pixel;
    rendering the image of the game scene using the perturbed texture map information.
  7. The method according to claim 6, characterized in that the perturbing, according to the material normal information of each pixel, the texture map information containing the Mipmap textures sampled for each pixel specifically comprises:
    computing texture coordinates from the current screen coordinates of a pixel;
    superimposing, on the texture coordinates, an offset based on the normal direction and noise intensity of the pixel's material;
    sampling the texture map information at the superimposed texture coordinates.
  8. The method according to claim 6, characterized in that the R and G channels of the target texture record the normal information of the pixel material, and the B channel of the target texture records the roughness of the pixel material;
    the reading the material roughness of each pixel from the target texture specifically comprises:
    obtaining the material roughness of each pixel by reading the B channel of the target texture;
    the reading the material normal information of each pixel from the target texture specifically comprises:
    obtaining the material normal information of each pixel by reading the R and G channels of the target texture.
  9. The method according to claim 1, characterized in that the method further comprises:
    rendering the game scene across frames, so that each frame performs either the reflection computation, or the image denoising, or the image stacking optimization.
  10. The method according to claim 1, characterized in that the performing a reflection computation according to the structure data, projection data of a current camera, pixel information of a current screen, and depth map information of a current screen space, to obtain texture map information containing a reflection result, specifically comprises:
    performing the reflection computation using multiple GPU cores in the forward rendering pipeline to obtain the texture map information.
  11. The method according to claim 1, characterized in that the acquiring structure data of the reflection plane specifically comprises:
    determining the distance between each reflection plane defined in the game scene and the camera;
    acquiring the structure data of reflection planes whose distance from the camera meets a preset distance condition.
  12. A game image processing apparatus, characterized by comprising:
    an acquiring module, configured to acquire a reflection plane defined in a game scene;
    the acquiring module, further configured to acquire structure data of the reflection plane;
    a computing module, configured to perform a reflection computation according to the structure data, projection data of a current camera, pixel information of a current screen, and depth map information of a current screen space, to obtain texture map information containing a reflection result;
    a rendering module, configured to render an image of the game scene using the texture map information.
  13. A computer apparatus/device/system, comprising a memory, a processor, and computer programs/instructions stored on the memory, wherein the processor, when executing the computer programs/instructions, implements the steps of the game image processing method according to any one of claims 1-11.
  14. A computer-readable medium, on which computer programs/instructions are stored, wherein the computer programs/instructions, when executed by a processor, implement the steps of the game image processing method according to any one of claims 1-11.
  15. A computer program product, comprising computer programs/instructions, wherein the computer programs/instructions, when executed by a processor, implement the steps of the game image processing method according to any one of claims 1-11.
PCT/CN2021/119152 2020-12-18 2021-09-17 Game image processing method, apparatus, program, and readable medium WO2022127242A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011499787.2 2020-12-18
CN202011499787.2A CN112233216B (zh) 2020-12-18 Game image processing method and apparatus, and electronic device

Publications (1)

Publication Number Publication Date
WO2022127242A1 true WO2022127242A1 (zh) 2022-06-23

Family

ID=74124910

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/119152 WO2022127242A1 (zh) 2020-12-18 2021-09-17 Game image processing method, apparatus, program, and readable medium

Country Status (2)

Country Link
CN (1) CN112233216B (zh)
WO (1) WO2022127242A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797226A (zh) * 2023-01-09 2023-03-14 腾讯科技(深圳)有限公司 Image denoising method, apparatus, computer device, and storage medium
CN116630486A (zh) * 2023-07-19 2023-08-22 山东锋士信息技术有限公司 Semi-automated animation production method based on Unity3D rendering

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112233216B (zh) * 2020-12-18 2021-03-02 成都完美时空网络技术有限公司 Game image processing method and apparatus, and electronic device
CN112973121B (zh) * 2021-04-30 2021-07-20 成都完美时空网络技术有限公司 Reflection effect generation method and apparatus, storage medium, and computer device
CN113283543B (zh) * 2021-06-24 2022-04-15 北京优锘科技有限公司 WebGL-based image projection fusion method, apparatus, storage medium, and device
CN113570696B (zh) * 2021-09-23 2022-01-11 深圳易帆互动科技有限公司 Mirror image processing method and apparatus for a dynamic model, and readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150146971A1 (en) * 2013-11-27 2015-05-28 Autodesk, Inc. Mesh reconstruction from heterogeneous sources of data
CN106056661A (zh) * 2016-05-31 2016-10-26 钱进 Three-dimensional graphics rendering engine based on Direct3D 11
CN112233216A (zh) * 2020-12-18 2021-01-15 成都完美时空网络技术有限公司 Game image processing method and apparatus, and electronic device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7768523B2 (en) * 2006-03-09 2010-08-03 Microsoft Corporation Shading using texture space lighting and non-linearly optimized MIP-maps
CN103544731B (zh) * 2013-09-30 2016-08-17 北京航空航天大学 Fast reflection rendering method based on multiple cameras
CN104463944B (zh) * 2014-07-10 2017-09-29 无锡梵天信息技术股份有限公司 Physically based specular highlight computation method
CN104240286A (zh) * 2014-09-03 2014-12-24 无锡梵天信息技术股份有限公司 Real-time reflection method based on screen space
CN105261059B (zh) * 2015-09-18 2017-12-12 浙江大学 Rendering method based on computing indirect reflection highlights in screen space
WO2017168038A1 (en) * 2016-03-31 2017-10-05 Umbra Software Oy Virtual reality streaming
CN109064533B (zh) * 2018-07-05 2023-04-07 奥比中光科技集团股份有限公司 3D roaming method and system
CN111768473B (zh) * 2020-06-28 2024-03-22 完美世界(北京)软件科技发展有限公司 Image rendering method, apparatus, and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150146971A1 (en) * 2013-11-27 2015-05-28 Autodesk, Inc. Mesh reconstruction from heterogeneous sources of data
CN106056661A (zh) * 2016-05-31 2016-10-26 钱进 Three-dimensional graphics rendering engine based on Direct3D 11
CN112233216A (zh) * 2020-12-18 2021-01-15 成都完美时空网络技术有限公司 Game image processing method and apparatus, and electronic device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DEMON COUSIN: "Screen Space Planar Reflection (SSPR) trips for Unity URP mobile platforms", 3 August 2020 (2020-08-03), pages 1 - 3, XP055944446, Retrieved from the Internet <URL:https://zhuanlan.zhihu.com/p/150890059> *
TIANSHU SOUL SEAT_XUEFENG: "The self-study HLSL road of the URP pipeline 35th SSPR screen space plane reflection", 25 August 2020 (2020-08-25), pages 1 - 16, XP055944450, Retrieved from the Internet <URL:https://www.bilibili.com/read/cv7332339/> *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797226A (zh) * 2023-01-09 2023-03-14 腾讯科技(深圳)有限公司 Image denoising method, apparatus, computer device, and storage medium
CN115797226B (zh) * 2023-01-09 2023-04-25 腾讯科技(深圳)有限公司 Image denoising method, apparatus, computer device, and storage medium
CN116630486A (zh) * 2023-07-19 2023-08-22 山东锋士信息技术有限公司 Semi-automated animation production method based on Unity3D rendering
CN116630486B (zh) * 2023-07-19 2023-11-07 山东锋士信息技术有限公司 Semi-automated animation production method based on Unity3D rendering

Also Published As

Publication number Publication date
CN112233216B (zh) 2021-03-02
CN112233216A (zh) 2021-01-15

Similar Documents

Publication Publication Date Title
WO2022127242A1 (zh) Game image processing method, apparatus, program, and readable medium
US9569885B2 (en) Technique for pre-computing ambient obscurance
KR101082215B1 (ko) 하이브리드 광선 추적 시스템을 위한 프래그먼트 쉐이더 및 동작 방법
US9342919B2 (en) Image rendering apparatus and method for preventing pipeline stall using a buffer memory unit and a processor
CN111127623B (zh) Model rendering method and apparatus, storage medium, and terminal
US7924281B2 (en) System and method for determining illumination of a pixel by shadow planes
US7551178B2 (en) Apparatuses and methods for processing graphics and computer readable mediums storing the methods
US9501860B2 (en) Sparse rasterization
WO2022057598A1 (zh) Image rendering method and apparatus
US20110141112A1 (en) Image processing techniques
US20080143715A1 (en) Image Based Rendering
US11954830B2 (en) High dynamic range support for legacy applications
US8854392B2 (en) Circular scratch shader
US20070097118A1 (en) Apparatus and method for a frustum culling algorithm suitable for hardware implementation
US20210012562A1 (en) Probe-based dynamic global illumination
KR20180023856A (ko) 그래픽 처리 시스템 및 그래픽 프로세서
KR20230073222A (ko) 깊이 버퍼 프리-패스
KR102477265B1 (ko) 그래픽스 프로세싱 장치 및 그래픽스 파이프라인의 텍스쳐링을 위한 LOD(level of detail)를 결정하는 방법
TWI417808B (zh) 可重建幾何陰影圖的方法
WO2022193941A1 (zh) Image rendering method, apparatus, device, medium, and computer program product
CN113129420B (zh) Ray tracing rendering method based on depth buffer acceleration
US20240177394A1 (en) Motion vector optimization for multiple refractive and reflective interfaces
EP1227443A2 (en) System and method for fast and smooth rendering of lit, textured spheres
US11875478B2 (en) Dynamic image smoothing based on network conditions
US10212406B2 (en) Image generation of a three-dimensional scene using multiple focal lengths

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21905174

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21905174

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 21905174

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 21905174

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 18.01.2024)