WO2023134625A1 - Special effect optimization method, apparatus, storage medium, and program product - Google Patents

Special effect optimization method, apparatus, storage medium, and program product

Info

Publication number
WO2023134625A1
Authority
WO
WIPO (PCT)
Prior art keywords
images
special effect
rendering
texture
frame image
Prior art date
Application number
PCT/CN2023/071311
Other languages
English (en)
French (fr)
Inventor
张志博
李婷
徐良成
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2023134625A1 publication Critical patent/WO2023134625A1/zh

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F13/35 - Details of game servers
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 - Methods for processing data by generating or executing the game program
    • A63F2300/66 - Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 - Methods for processing data by generating or executing the game program
    • A63F2300/66 - Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6646 - Methods for processing data by generating or executing the game program for rendering three dimensional images for the computation and display of the shadow of an object or character

Definitions

  • the present application relates to the field of image processing, and in particular to a special effect optimization method, device, storage medium and program product.
  • Game frame rate and power consumption have always been the difficulty and focus of game manufacturers.
  • Many popular 3D games, such as Genshin Impact, Honkai Impact 3, and NBA2019, usually create multiple rendering passes when rendering each frame of the image to be displayed in the game, in order to render shadow, bloom, and other special effect textures, and then obtain each frame image based on those special effect textures.
  • However, creating multiple rendering passes for every frame consumes substantial memory bandwidth and a certain amount of processor load, which increases game power consumption and decreases the game frame rate, degrading the user's game experience.
  • the present application proposes a special effect optimization method, device, storage medium, and program product.
  • With the special effect optimization method of the embodiments of the present application, when the objects in the image are static or move only slightly, the special effect rendering passes for each frame can be skipped, which increases the game frame rate and reduces game power consumption.
  • In a first aspect, an embodiment of the present application provides a special effect optimization method, the method including: obtaining rendering instructions and identifying the purpose of the rendering instructions; when the rendering instructions are used to obtain multiple frames of images and the multiple frames include a special effect texture, selecting two adjacent frames from the multiple frames and identifying the similarity of the special effect textures of the two adjacent frames; and, when it is determined that the similarity meets a condition, using the special effect texture of the later of the two adjacent frames as the special effect texture of the frame that follows the two adjacent frames.
  • In the special effect optimization method of the embodiments of the present application, obtaining the rendering instructions and identifying their purpose makes it possible to determine whether the instructions are used to obtain multi-frame images and whether those frames include special effect textures, that is, whether the instructions render a special effect scene. When they do, two adjacent frames are selected from the multi-frame images and the similarity of their special effect textures is compared.
  • When the similarity meets the condition, the special effect texture of the later of the two adjacent frames is used as the special effect texture of the frame that follows them, so that this texture can be obtained without the traditional rendering-pass approach, avoiding the associated memory occupation and processor load.
  • Because memory usage and processor load consumption are reduced, the special effect optimization method of the present application can effectively increase the running frame rate of the game and may reduce the power consumption of the game at the same time. If the running frame rate of the game has already reached the maximum frame rate of the terminal device, the method can effectively reduce the power consumption of the game, thereby effectively prolonging the duration for which the game can keep running on the terminal device at that maximum frame rate.
  • In a possible implementation manner, obtaining rendering instructions and identifying their purpose includes: intercepting the rendering instructions from an application program; for each instruction among the rendering instructions, analyzing at least one of the parameters, the type, and the semantics of the instruction segment in which the instruction is located; and determining the purpose of the rendering instructions according to the analysis result.
  • In this way, it can be determined whether the rendering instructions are used to obtain multi-frame images and whether the multi-frame images include special effect textures, that is, whether the instructions render a special effect scene, completing the recognition of the special effect scene.
  • Since various kinds of data in the rendering instructions can be analyzed to determine their purpose, the flexibility of the manner of determining the purpose of the rendering instructions is improved.
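As an illustrative, non-authoritative sketch of the purpose analysis described above, intercepted commands could be classified by their parameters, type, and surrounding instruction segment. All command and field names below are hypothetical stand-ins, not taken from the patent or from any real graphics API binding:

```python
# Hypothetical sketch: deciding whether a segment of intercepted rendering
# instructions renders a special effect scene. The heuristic: a segment
# that binds an off-screen framebuffer and draws at reduced resolution is
# treated as a special effect (bloom/shadow) pass. Names are illustrative.
from dataclasses import dataclass

@dataclass
class RenderCommand:
    name: str       # e.g. "glBindFramebuffer", "glViewport", "glDrawElements"
    params: dict    # raw parameters of the call

def is_effect_pass(commands):
    """Return True when the instruction segment looks like an effect pass."""
    offscreen = any(c.name == "glBindFramebuffer" and
                    c.params.get("framebuffer", 0) != 0
                    for c in commands)
    downsampled = any(c.name == "glViewport" and
                      c.params.get("width", 0) < c.params.get("screen_width", 1920)
                      for c in commands)
    return offscreen and downsampled

cmds = [
    RenderCommand("glBindFramebuffer", {"framebuffer": 7}),
    RenderCommand("glViewport", {"width": 960, "height": 540, "screen_width": 1920}),
    RenderCommand("glDrawElements", {"count": 6}),
]
print(is_effect_pass(cmds))  # True: off-screen draw at half resolution
```

A real implementation would key off the actual intercepted API stream; this only shows the shape of a parameter/type/semantics analysis.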
  • In a possible implementation manner, identifying the similarity of the special effect textures of the two adjacent frames includes: comparing the special effect textures of the two adjacent frames to determine a grayscale difference value; and determining a similarity value of the special effect textures of the two adjacent frames according to the grayscale difference value.
  • Because the gray values of a special effect texture are relatively easy to obtain, determining the similarity of the special effect textures of two adjacent frames in this way is simpler, and the efficiency of the special effect optimization method is higher.
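The grayscale comparison above can be sketched as follows. This is a minimal illustration, not the patent's exact formula: textures are modeled as nested lists of 0-255 gray values, and the "similarity value" is taken to be the mean absolute grayscale difference, so a lower value means more similar, matching the "lower than a first threshold" condition. The threshold is illustrative:

```python
# Sketch: similarity of two special effect textures via grayscale difference.
def grayscale_similarity(tex_a, tex_b):
    """Mean absolute per-pixel grayscale difference of two equal-size textures."""
    total, count = 0, 0
    for row_a, row_b in zip(tex_a, tex_b):
        for ga, gb in zip(row_a, row_b):
            total += abs(ga - gb)
            count += 1
    return total / count

def meets_condition(tex_a, tex_b, first_threshold=8.0):
    """Condition from the text: similarity value lower than a first threshold."""
    return grayscale_similarity(tex_a, tex_b) < first_threshold

frame_n  = [[10, 12], [200, 198]]
frame_n1 = [[11, 12], [201, 197]]   # almost static scene
print(meets_condition(frame_n, frame_n1))  # True -> reuse the effect texture
```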
  • In a possible implementation manner, the condition includes: the similarity value of the special effect textures of the two adjacent frames being lower than a first threshold.
  • When the similarity value of the special effect textures of two adjacent frames is lower than the first threshold, i.e., the similarity meets the condition, it can be determined that the objects in the image are static or move only slightly. Using the special effect texture of the later of the two adjacent frames as the special effect texture of the frame that follows them therefore has little influence on the accuracy of that following frame, ensuring the quality of image rendering.
  • In a possible implementation manner, selecting two adjacent frames from the multi-frame images includes: selecting the first two of the multiple frames as the two adjacent frames.
  • In a possible implementation manner, when it is determined that the similarity meets the condition, the method further includes: sampling the special effect texture of the later of the two adjacent frames to the display buffer; and, in the display buffer, rendering with that special effect texture and the main scene texture of the frame that follows the two adjacent frames, to obtain that following frame.
  • In this way, rendering of the frame that follows the two adjacent frames can be completed without using rendering passes to obtain its special effect texture, so that frame can still be displayed normally without affecting the user's gaming experience.
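The display-buffer composition step can be sketched as below. The patent does not specify the blend mode, so additive blending, a common choice for bloom, is assumed here purely for illustration; textures are again nested lists of 0-255 gray values:

```python
# Sketch: composing the (possibly reused) bloom texture with the main scene
# texture in the display buffer, using an additive blend clamped to 0-255.
def compose(main_scene, bloom, weight=1.0):
    """Per-pixel additive blend of a bloom texture over the main scene."""
    return [[min(255, int(m + weight * b))
             for m, b in zip(mrow, brow)]
            for mrow, brow in zip(main_scene, bloom)]

main_scene = [[100, 120], [80, 200]]
bloom      = [[30, 0], [10, 90]]
print(compose(main_scene, bloom))  # [[130, 120], [90, 255]]
```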
  • In a possible implementation manner, when it is determined that the similarity meets the condition, the method further includes: intercepting, among the rendering instructions, the rendering instructions that would create rendering passes to obtain the special effect texture of the frame that follows the two adjacent frames, so that the intercepted instructions are not passed to the execution object of the rendering instructions.
  • In a possible implementation manner, when it is determined that the similarity does not meet the condition, the method further includes: repeating the step of selecting two adjacent frames from the multi-frame images and the steps thereafter; wherein, when the selecting step is repeated, the later of the previously selected two adjacent frames and the frame that follows them are used as the newly selected two adjacent frames.
  • When the similarity does not meet the condition, it can be determined that the objects in the image move over a large range, and the special effect textures of subsequent frames continue to be compared until the similarity meets the condition, i.e., until it is determined that the objects are static or move only slightly, at which point the special effect texture is multiplexed. This further ensures the quality of image rendering.
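The repeated-comparison flow above can be sketched as a sliding window over the frame sequence. What happens to the comparison chain after a reuse is not fully specified in this excerpt, so this sketch simply slides the window forward one frame per step; textures and the threshold are illustrative:

```python
# Sketch: per frame, decide whether its special effect texture would be
# 'rendered' via rendering passes or 'reused' from the previous frame.
# effect_textures[i] stands for the effect-pass output of frame i.
def plan_effect_rendering(effect_textures, first_threshold=8.0):
    def mean_abs_diff(a, b):
        flat_a = [g for row in a for g in row]
        flat_b = [g for row in b for g in row]
        return sum(abs(x - y) for x, y in zip(flat_a, flat_b)) / len(flat_a)

    decisions = ["rendered"] * min(2, len(effect_textures))  # first pair rendered
    for i in range(len(effect_textures) - 2):   # window (i, i+1) decides frame i+2
        if mean_abs_diff(effect_textures[i], effect_textures[i + 1]) < first_threshold:
            decisions.append("reused")          # reuse frame i+1's texture
        else:
            decisions.append("rendered")        # motion too large: render passes
    return decisions

static_scene = [[[10, 10]], [[11, 10]], [[12, 10]], [[12, 11]]]
print(plan_effect_rendering(static_scene))
# ['rendered', 'rendered', 'reused', 'reused']
```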
  • In a possible implementation manner, the special effect texture includes at least one of a bloom special effect texture and a shadow special effect texture.
  • In a second aspect, an embodiment of the present application provides a special effect optimization apparatus, the apparatus including: a purpose identification module, configured to obtain rendering instructions and identify their purpose; a texture identification module, configured, when the rendering instructions are used to obtain multiple frames of images and the multiple frames include special effect textures, to select two adjacent frames from the multiple frames and identify the similarity of their special effect textures; and a module configured, when it is determined that the similarity meets the condition, to use the special effect texture of the later of the two adjacent frames as the special effect texture of the frame that follows the two adjacent frames.
  • In a possible implementation manner, obtaining rendering instructions and identifying their purpose includes: intercepting the rendering instructions from an application program; for each instruction among the rendering instructions, analyzing at least one of the parameters, the type, and the semantics of the instruction segment in which the instruction is located; and determining the purpose of the rendering instructions according to the analysis result.
  • In a possible implementation manner, identifying the similarity of the special effect textures of the two adjacent frames includes: comparing the special effect textures of the two adjacent frames to determine a grayscale difference value; and determining a similarity value of the special effect textures of the two adjacent frames according to the grayscale difference value.
  • In a possible implementation manner, the condition includes: the similarity value of the special effect textures of the two adjacent frames being lower than a first threshold.
  • In a possible implementation manner, selecting two adjacent frames from the multi-frame images includes: selecting the first two of the multiple frames as the two adjacent frames.
  • In a possible implementation manner, the apparatus further includes: a sampling module, configured to sample the special effect texture of the later of the two adjacent frames to the display buffer, and, in the display buffer, to render with that special effect texture and the main scene texture of the frame that follows the two adjacent frames, obtaining that following frame.
  • In a possible implementation manner, the apparatus further includes: an instruction interception module, configured to intercept, among the rendering instructions, the rendering instructions that would create rendering passes to obtain the special effect texture of the frame that follows the two adjacent frames, so that the intercepted instructions are not passed to the execution object of the rendering instructions.
  • In a possible implementation manner, when it is determined that the similarity does not meet the condition, the apparatus repeatedly executes the functions of the texture identification module and the subsequent modules; wherein, when the texture identification module repeats the step of selecting two adjacent frames from the multi-frame images, the later of the previously selected two adjacent frames and the frame that follows them are used as the newly selected two adjacent frames.
  • In a possible implementation manner, the special effect texture includes at least one of a bloom special effect texture and a shadow special effect texture.
  • In a third aspect, an embodiment of the present application provides a special effect optimization apparatus, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured, when executing the instructions, to implement the special effect optimization method of the above first aspect or of one or several possible implementation manners of the first aspect.
  • The embodiments of the present application further provide a non-volatile computer-readable storage medium on which computer program instructions are stored; when the computer program instructions are executed by a processor, the special effect optimization method of the above first aspect or of one or several possible implementation manners of the first aspect is realized.
  • The embodiments of the present application further provide a computer program product, including computer-readable code or a non-volatile computer-readable storage medium bearing computer-readable code; when the computer-readable code runs in an electronic device, the processor in the electronic device executes the special effect optimization method of the above first aspect or of one or several possible implementation manners of the first aspect.
  • FIG. 1 shows a schematic diagram of the architecture of a game system in the first prior art.
  • FIG. 2 shows a schematic flowchart of an image rendering method in the first prior art.
  • Fig. 3a shows a schematic diagram of the rendering passes created when a bloom special effect texture is obtained in the prior art.
  • Fig. 3b shows a schematic diagram of the textures involved when a bloom special effect texture is obtained in the prior art.
  • Fig. 4 shows a schematic diagram of an exemplary architecture of a game system according to an embodiment of the present application.
  • Fig. 5 shows an exemplary flowchart of a special effect optimization method according to an embodiment of the present application.
  • Fig. 6 shows an application example of the special effect optimization method according to the embodiment of the present application.
  • Fig. 7a shows a schematic diagram of a rendering channel created when an N+2th frame image is obtained in the prior art.
  • Fig. 7b shows a schematic diagram of a rendering channel created when an N+2th frame image is obtained according to an embodiment of the present application.
  • Fig. 8 shows an exemplary structural diagram of an apparatus for optimizing special effects according to an embodiment of the present application.
  • Fig. 9 shows an exemplary structural diagram of an apparatus for optimizing special effects according to an embodiment of the present application.
  • Bloom: a computer graphics effect used in video games, presentations, and high-dynamic-range rendering to reproduce the imaging artifacts of a real-world camera.
  • Shadow: a shadow effect used in game rendering.
  • Render pass: describes a rendering process by specifying information about the framebuffer attachments used for rendering.
  • Frame rate: the frequency (rate) at which images, in units of frames, appear continuously on the display.
  • Draw command: an instruction call involved in a render pass.
  • FIG. 1 shows a schematic diagram of the architecture of a game system in the first prior art.
  • The game system of prior art 1 mainly includes the following parts: a game application program (Application); a game engine (Game Engine; common ones include Unity, Unreal, etc.); a system framework layer (Framework); a graphics driver layer (driver development kit, DDK); and a graphics hardware layer (graphics processing unit, GPU), which executes the rendering instructions.
  • The user operates through the game application program, and the game system performs image rendering in response to the user's operation.
  • The game application program issues OpenGL rendering instructions to the system framework layer.
  • The system framework layer intercepts the OpenGL rendering instructions, establishes a connection with the local window system, and passes the OpenGL rendering instructions on to the graphics driver layer.
  • The graphics driver layer processes the OpenGL rendering instructions into instructions that the graphics hardware layer can execute, and the processed instructions are finally submitted to the graphics hardware layer for execution.
  • FIG. 2 shows a schematic flowchart of an image rendering method in the first prior art.
  • Suppose the OpenGL rendering instructions indicate that special effect rendering starts from the Nth frame.
  • When the game system renders the Nth frame, it first obtains the shadow special effect texture of the Nth frame, then the main scene texture of the Nth frame (also called the non-special-effect texture), and then the bloom special effect texture of the Nth frame. Finally, the shadow special effect texture, the bloom special effect texture, and the main scene texture are each sampled to the display buffer and overlaid and drawn there, after which the graphics hardware layer displays the image drawn in the display buffer.
  • The game system may then perform the image rendering of frame N+1, frame N+2, and so on, in the same manner as frame N.
  • Fig. 3a and Fig. 3b respectively show schematic diagrams of the rendering passes created when obtaining a bloom special effect texture in the prior art.
  • The game system creates 4 rendering passes to generate the bloom special effect texture.
  • The bloom special effect texture is obtained from the main scene texture.
  • the resolution of the main scene texture can be 1920*1080.
  • the main scene texture is sampled to obtain the first intermediate texture Level1.
  • the resolution of the first intermediate texture Level1 may be 960*540.
  • the first intermediate texture Level1 is sampled to obtain the second intermediate texture Level2, and the resolution of the second intermediate texture Level2 may be 480*270.
  • the second intermediate texture Level2 is sampled to obtain the third intermediate texture Level3, and the resolution of the third intermediate texture Level3 may be 240*135.
  • Among the 1st to 3rd rendering passes, each successive pass produces a texture at half the resolution of the previous one.
  • In the 4th rendering pass, the first intermediate texture Level1 and the third intermediate texture Level3 are mixed and sampled to obtain the bloom special effect texture texture0.
  • The resolution of the bloom special effect texture texture0 can be 960*540, the same as that of the first intermediate texture Level1.
  • The bloom special effect texture texture0 and the main scene texture can then each be sampled to the display buffer; when they are superimposed and drawn in the display buffer, a texture to be displayed is obtained whose resolution, 1920*1080, equals that of the main scene texture. This texture to be displayed is then displayed by the graphics hardware layer.
  • The main scene texture, each intermediate special effect texture, and the texture to be displayed are shown in Figure 3b.
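The four-pass bloom chain described above can be sketched as resolution bookkeeping: three downsampling passes halving the resolution each time, then a mix pass combining Level1 and Level3 into a bloom texture at Level1's resolution. Filtering and blurring are omitted; this only checks the arithmetic in the text:

```python
# Sketch: resolutions produced by the prior-art bloom rendering-pass chain.
def bloom_chain(main_w=1920, main_h=1080, levels=3):
    res = [(main_w, main_h)]
    for _ in range(levels):          # passes 1-3: downsample by 2 each time
        w, h = res[-1]
        res.append((w // 2, h // 2))
    bloom_res = res[1]               # pass 4: mix Level1 + Level3 at Level1's size
    return res[1:], bloom_res

levels, bloom = bloom_chain()
print(levels)  # [(960, 540), (480, 270), (240, 135)]
print(bloom)   # (960, 540)
```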
  • The image rendering scheme of prior art 1 has the following disadvantages. On the one hand, the increased number of rendering passes heavily consumes the memory bandwidth of the electronic device running the game system. On the other hand, when the objects in the image are static or move only slightly, rendering each frame still requires calling a certain number of draw commands (such as the sampling commands in Figure 3a) to repeatedly draw special effect textures that differ only slightly, which consumes part of the load of the electronic device's processors (such as the central processing unit, CPU, and the graphics processing unit, GPU).
  • These shortcomings eventually cause problems such as a decreased game frame rate and increased game power consumption, which directly affect the user's game experience.
  • the present application proposes a special effect optimization method, device, storage medium, and program product.
  • With the special effect optimization method of the embodiments of the present application, when the objects in the image are static or move only slightly, the special effect rendering passes for each frame can be skipped, which increases the game frame rate and reduces game power consumption.
  • the special effect optimization method of the embodiment of the present application can be applied to a game system, and the game system can be set on a terminal device.
  • The terminal device of the present application can be a smartphone, a netbook, a tablet computer, a notebook computer, a television, a virtual reality device, and so on.
  • Fig. 4 shows a schematic diagram of an exemplary architecture of a game system according to an embodiment of the present application.
  • The system architecture of the present application includes an application program 1001, a system framework 1002, a special effect optimization framework 1003, a driver 1004, and a graphics processor 1005.
  • The application program 1001 can be implemented based on the existing technology and can include a game application to be optimized or other application programs; it can use the OpenGL or OpenGL ES rendering graphics application programming interface (API). According to the user's operation on the application program 1001, the application program 1001 generates corresponding rendering instructions.
  • The system framework 1002 can be implemented based on the existing technology, can be a framework structure based on the Android operating system and its framework code, and can be used to intercept the rendering instructions generated by the application program 1001.
  • The special effect optimization framework 1003 mainly implements the special effect optimization method of the embodiments of the present application: special effect scene recognition, special effect texture similarity comparison, and special effect optimization, realizing the reuse of special effects such as bloom and shadow in some scenes.
  • the driver 1004 can be implemented based on existing technologies, and can include drivers for rendering graphics application programming interfaces such as OpenGL/OpenGL ES/Vulkan.
  • the graphics processor 1005 may be implemented based on existing technologies, and may include a specific hardware graphics processor, GPU, etc. on the terminal device, for completing specific rendering tasks.
  • Fig. 5 shows an exemplary flowchart of a special effect optimization method according to an embodiment of the present application.
  • the present application proposes a special effect optimization method, the method comprising:
  • The rendering instructions may be rendering instructions generated by the application program 1001 and intercepted by the system framework 1002.
  • The purpose of the rendering instructions may indicate whether they are used to obtain multi-frame images and whether the multi-frame images include special effect textures; optionally, it may also indicate what kind of special effect texture (such as a shadow special effect texture or a bloom special effect texture) the multi-frame images include.
  • The rendering instructions may have further purposes; the present application does not limit which purposes of the rendering instructions can be identified in step S1.
  • When the rendering instructions are used to obtain multi-frame images, and the multi-frame images include special effect textures, select two adjacent frames from the multi-frame images and identify the similarity of the special effect textures of the two adjacent frames.
  • the multi-frame image including the special effect texture obtained by the rendering instruction may be a continuous multi-frame image or a discontinuous multi-frame image, which is not limited in the present application.
  • The selection may follow the time sequence of the multi-frame images, for example starting from the earliest two frames, or two frames at preset positions in the multi-frame images may be selected.
  • this application does not limit the selection method of two adjacent frame images.
  • When the similarity of the special effect textures of the two adjacent frames meets the condition, the motion range of the objects in the two adjacent frames is relatively small.
  • In that case, the motion range of the objects in the frame that follows the two adjacent frames is usually also relatively small.
  • If a rendering pass were used to generate the special effect texture of that following frame, the difference between that texture and the special effect texture of the later of the two adjacent frames would also be very small. Therefore, the special effect texture of the following frame does not need to be generated by a rendering pass; the special effect texture of the later of the two adjacent frames can be used directly.
  • Conversely, when the similarity does not meet the condition, the motion range of the objects in the two adjacent frames is relatively large.
  • In that case, the motion range of the objects in the following frame is usually also relatively large, and its special effect texture can be generated using the rendering passes.
  • With the special effect optimization method of the embodiment of this application, by obtaining the rendering instruction and identifying its purpose, it can be determined whether the rendering instruction is used to obtain multiple frames of images and whether those frames include a special effect texture, that is, whether the rendering instruction is used for rendering a special effect scene. When the rendering instruction is used to obtain multiple frames of images that include special effect textures, two adjacent frames are selected and the similarity of their special effect textures is identified, so that the degree of similarity can be determined by comparison. When the similarity satisfies the condition, the special effect texture of the latter of the two adjacent frames is used as the special effect texture of the frame following the two adjacent frames.
  • This makes it possible to obtain the special effect texture of the frame following the two adjacent frames without the traditional render-pass approach, avoiding the associated memory occupation and processor load.
  • Because memory usage and processor load are reduced, the special effect optimization method of this application can effectively increase the running frame rate of a game and may also reduce its power consumption. If the game already runs at the maximum frame rate of the terminal device, the method can effectively reduce the game's power consumption, thereby prolonging the time the game can run continuously at that maximum frame rate.
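The flow of steps S1-S3 described above can be sketched as follows. This is a minimal, illustrative Python sketch; the function names and the frame-index/gray-value data layout are assumptions, not part of the embodiment.

```python
def render_effect_pass(frame_idx):
    # Stand-in for obtaining a special effect texture through a render
    # pass (hypothetical; returns a list of gray values).
    return [0, 0, 0, 0]

def effect_texture_for(frame_idx, effect_textures, first_threshold=8.0):
    """Return the special effect texture to use for frame frame_idx.

    effect_textures maps a frame index to the gray values of its special
    effect texture. When the two preceding adjacent frames have very
    similar effect textures (mean gray difference below the first
    threshold), the latter frame's texture is reused and the render pass
    for frame_idx is skipped; otherwise a render pass is used.
    """
    prev2, prev1 = frame_idx - 2, frame_idx - 1
    if prev2 in effect_textures and prev1 in effect_textures:
        a, b = effect_textures[prev2], effect_textures[prev1]
        diff = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
        if diff < first_threshold:          # similarity condition met
            return effect_textures[prev1]   # multiplex the texture
    return render_effect_pass(frame_idx)    # fall back to a render pass
```

With effect textures for frames N and N+1 already rendered, a small gray difference lets frame N+2 skip its effect render pass entirely.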
  • The special effect texture includes at least one of a bloom special effect texture and a shadow special effect texture.
  • Bloom special effects and shadow special effects are common in game images, so the special effect optimization method of the embodiment of this application mainly optimizes the acquisition of these two kinds of special effect textures.
  • However, the special effect textures supported by the special effect optimization method of the embodiment of this application should not be limited to these two; any special effect texture used in game images or other images falls within its scope of optimization, and this application does not limit this.
  • step S1, step S2, and step S3 are described below.
  • In some examples, step S1 includes: for each instruction in the rendering instructions, analyzing at least one of the instruction's parameters, its type, and the semantics of the instruction segment where the instruction is located; and determining the purpose of the rendering instruction according to the analysis result.
  • It can thereby be determined whether the rendering instruction is an instruction for obtaining multiple frames of images, and whether the multiple frames include special effect textures, so as to determine whether the rendering instruction is used for rendering a special effect scene and to complete the recognition of the special effect scene.
  • There are various kinds of data in the rendering instruction that can be analyzed to determine its purpose, which improves the flexibility of the manner of determining the purpose of the rendering instruction.
  • The rendering instructions intercepted from the application program may include OpenGL rendering instructions, OpenGL ES rendering instructions, Vulkan rendering instructions, and the like.
  • The semantics of the generated rendering instructions may be: first create one or more render passes to obtain the shadow special effect texture; then, according to the game logic, create further render passes in turn to obtain the main scene texture (also called the non-special-effect texture) and the bloom special effect texture; and finally sample the shadow special effect texture, the main scene texture, and the bloom special effect texture into the display buffer, where the drawing of the Nth frame image is completed.
  • Before step S2 and step S3 are executed, for the (N+1)th frame image and subsequent images, the rendering process of the game system can be the same as for the Nth frame: the shadow special effect texture, the main scene texture, and the bloom special effect texture are finally sampled into the display buffer, where a complete image is drawn.
  • The purpose of each instruction can be determined by analyzing at least one of the instruction's parameters, its type, and the semantics of the instruction segment where it is located.
  • For example, a parameter of an instruction may indicate the resolution of the texture obtained in a render pass;
  • the type of an instruction may indicate the object involved in executing it, such as a specific intermediate texture (a first, second, or third intermediate texture, etc.);
  • and the semantics of the instruction segment where an instruction is located may indicate which render-pass sampling process is completed when that segment is executed. Take the process of obtaining a bloom special effect texture as an example: a bloom texture is generally produced by sequentially sampling several textures whose resolutions are in proportion.
  • Each render pass can correspond to one sampling, so the bloom special effect texture can be identified from the resolution sizes. That is, if the analysis shows that the resolutions of the multiple textures indicated by multiple instructions in the rendering instructions satisfy the bloom-texture rule (after sorting by resolution, every two adjacent textures have the same resolution ratio), it can be determined that the purpose of the rendering instructions is to obtain multiple frames of images, and that the multiple frames include bloom special effect textures.
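As an illustration, the proportional-resolution rule can be checked roughly as follows. This is a sketch only; the function name and the requirement of at least three textures in the chain are assumptions, not from the embodiment.

```python
def looks_like_bloom_chain(resolutions):
    """Return True if the textures, sorted by size, form a chain in
    which every two adjacent textures have the same resolution ratio,
    e.g. successive downsamples 1920x1080 -> 960x540 -> 480x270."""
    if len(resolutions) < 3:
        return False
    res = sorted(resolutions, reverse=True)
    if res[1][0] == 0:
        return False
    ratio = res[0][0] / res[1][0]   # ratio of the two largest widths
    for (wa, ha), (wb, hb) in zip(res, res[1:]):
        if wb == 0 or hb == 0:
            return False
        # widths and heights of each adjacent pair must share the ratio
        if wa / wb != ratio or ha / hb != ratio:
            return False
    return True
```

A chain such as `[(1920, 1080), (960, 540), (480, 270)]` satisfies the rule, while mixed downsampling factors do not.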
  • The above takes obtaining an analysis result by analyzing the parameters of the rendering instruction, so as to determine the purpose of the rendering instruction, as an example.
  • Analyzing the type of the rendering instruction, or the semantics of the instruction segment where it is located, can likewise determine the purpose of the rendering instruction.
  • In some examples, in step S2, selecting two adjacent frames of images from the multiple frames includes: selecting the first two frames of the multiple frames as the two adjacent frames.
  • In step S2, the purpose of selecting two adjacent frames from the multiple frames is to identify the similarity of their special effect textures, so that in step S3, when the similarity satisfies the condition, the special effect texture of the latter of the two adjacent frames can be used as the special effect texture of the frame following the two adjacent frames, making a render pass for that texture unnecessary. Therefore, if the special effect texture of the frame following the selected two adjacent frames has already been obtained through a render pass, executing step S2 and step S3 would have no effect on reducing the power consumption and memory usage of the game system.
  • Accordingly, the selection condition for the two adjacent frames may be: the game system has not yet obtained the special effect texture of the frame following the two adjacent frames.
  • If the special effect optimization method of the embodiment of this application selects the first two frames of the multiple frames (such as the Nth and (N+1)th frame images) as the two adjacent frames, then the third frame (such as the (N+2)th frame image) and subsequent frames have the opportunity to obtain their special effect textures without using a render pass.
  • This selection maximizes the benefit of the special effect optimization method: the reduction in the game system's power consumption and memory usage, and the improvement in frame rate, are relatively large.
  • Alternatively, the two adjacent frames may be selected at preset positions in the multiple frames, such as the (N+5)th and (N+6)th frame images. In this manner, the data-processing cost of executing the special effect optimization method of the embodiment of this application is relatively low.
  • the present application does not limit the selection method of two adjacent frames of images in step S2.
  • In some examples, identifying the similarity of the special effect textures of the two adjacent frames in step S2 includes: comparing the special effect textures of the two adjacent frames to determine their grayscale difference value;
  • and determining the similarity value of the special effect textures of the two adjacent frames according to the grayscale difference value.
  • The gray value of a special effect texture is relatively easy to obtain, so determining the similarity in this way is simple and makes the special effect optimization method more efficient.
  • For example, what is to be identified in step S2 is the similarity between the special effect texture of the Nth frame image and that of the (N+1)th frame image.
  • The special effect textures compared should be of the same type, for example both shadow special effect textures or both bloom special effect textures.
  • An exemplary way to determine the similarity is to perform Y-diff detection on the Y component values (that is, the gray values) of the special effect textures of the Nth and (N+1)th frame images. Y-diff detection can be implemented based on existing technologies.
  • The result of the Y-diff detection is the grayscale difference between the special effect texture of the Nth frame image and that of the (N+1)th frame image.
  • This grayscale difference can be used as the similarity value between the special effect textures of the Nth and (N+1)th frame images, and is used to determine whether the frame following the (N+1)th frame image can reuse the special effect texture of the (N+1)th frame image. In this case, the smaller the similarity value, the greater the similarity between the special effect textures of the Nth and (N+1)th frame images.
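For illustration, Y-diff detection on two effect textures might look like the following sketch, assuming the textures are available as lists of (R, G, B) pixels. The BT.601 luma weights are one common way to obtain the Y component and are an assumption here, not mandated by the embodiment.

```python
def luma(pixel):
    # Y component (gray value) of an (R, G, B) pixel, BT.601 weights.
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def y_diff(texture1, texture2):
    """Mean absolute difference of the Y components of two same-sized
    textures; the smaller the result, the more similar the textures."""
    assert len(texture1) == len(texture2)
    total = sum(abs(luma(p) - luma(q)) for p, q in zip(texture1, texture2))
    return total / len(texture1)
```

The result can then be compared against the first threshold, e.g. `y_diff(texture1, texture2) < first_threshold`.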
  • In some examples, the condition includes: the similarity value of the special effect textures of the two adjacent frames is lower than a first threshold.
  • When the similarity value of the special effect textures of the two adjacent frames is lower than the first threshold, that is, when the similarity satisfies the condition, it can be determined that the objects in the images are stationary or moving only slightly. This ensures that using the special effect texture of the latter of the two adjacent frames as the special effect texture of the frame following them has little influence on the accuracy of that following frame, thereby ensuring the quality of image rendering.
  • In this example, the first threshold is set so that a similarity value lower than the first threshold serves as the condition in step S3.
  • When the similarity value satisfies the condition, the special effect textures of the two adjacent frames are very similar, and it can be considered that the objects move little across the two adjacent frames and the frame that follows them. The special effect texture of the latter of the two adjacent frames (such as the (N+1)th frame image) can then be used as the special effect texture of the frame following them (such as the (N+2)th frame image); in other words, the frame following the two adjacent frames multiplexes the special effect texture of the latter of the two adjacent frames.
  • When the special effect textures include more than one type, the frame following the two adjacent frames may respectively multiplex each type of special effect texture of the latter of the two adjacent frames.
  • The similarity value can also be obtained in other ways, such as detecting the similarity of two textures with a neural network, which is not limited in this application. Note that if a smaller similarity value indicates greater texture similarity, the condition in step S3 can be that the similarity value of the special effect textures of the two adjacent frames is lower than the first threshold; conversely, if a smaller similarity value indicates less similarity, the condition can be that the similarity value is higher than a second threshold.
  • The specific form of the condition can thus be adjusted according to the relationship between the similarity value and the degree of similarity, which is not limited in this application.
  • The first threshold (or the second threshold) can be preset according to the hardware conditions of the terminal device and user requirements; this application does not limit its specific value.
  • In some examples, when it is determined that the similarity satisfies the condition, the method further includes: sampling the special effect texture of the latter of the two adjacent frames into the display buffer;
  • and, in the display buffer, rendering with that special effect texture and the main scene texture of the frame following the two adjacent frames, so as to obtain the frame following the two adjacent frames.
  • In this way, the rendering of the frame following the two adjacent frames can be completed without using a render pass to obtain its special effect texture.
  • This allows that frame to be displayed normally without affecting the user's gaming experience.
  • Since the special effect texture of the latter of the two adjacent frames (such as the (N+1)th frame image) is used as the special effect texture of the frame following them (such as the (N+2)th frame image), sampling the special effect texture corresponding to the (N+2)th frame image into the display buffer means sampling the special effect texture of the (N+1)th frame image into the display buffer. Likewise, in the display buffer, overlay-drawing the special effect texture of the (N+2)th frame image together with its corresponding main scene texture means overlay-drawing the special effect texture of the (N+1)th frame image together with the main scene texture of the (N+2)th frame image, so that the frame following the two adjacent frames (the (N+2)th frame image) is obtained.
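The overlay drawing in the display buffer can be pictured with the following sketch, which additively blends a gray effect texture over a gray main scene texture. Additive blending is a common choice for bloom, and the clamp to 255 assumes 8-bit values; neither detail is specified by the embodiment.

```python
def overlay_draw(main_scene, effect):
    """Combine the main scene texture of frame N+2 with the reused
    effect texture of frame N+1 in the display buffer (additive blend,
    clamped to the 8-bit range)."""
    return [min(m + e, 255) for m, e in zip(main_scene, effect)]
```

For example, `overlay_draw([100, 200], [50, 100])` brightens each pixel by the effect contribution, saturating at 255.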
  • In some examples, when it is determined that the similarity satisfies the condition, the method further includes: intercepting, from the rendering instructions, the instructions used to create the render pass for obtaining the special effect texture of the frame following the two adjacent frames, so that the intercepted instructions are not passed to the execution object of the rendering instructions.
  • As described above, the special effect texture of the latter of the two adjacent frames (such as the (N+1)th frame image) can be directly used as the special effect texture of the frame following them (such as the (N+2)th frame image) and sampled into the display buffer, obtaining the corresponding frame following the two adjacent frames (the (N+2)th frame image). Therefore, among the original rendering instructions, those that create a render pass to obtain the special effect texture of the frame following the two adjacent frames (such as the (N+2)th frame image) need not be executed.
  • These instructions can therefore be intercepted so that they are not delivered to the execution object of the rendering instructions (the graphics processor in the embodiment of this application).
  • Likewise, the drawing commands corresponding to these instructions need not be submitted to their execution objects (the graphics processor and the central processor in the embodiment of this application).
  • For example, the special effect optimization framework layer intercepts the rendering instructions used to create the render pass for the special effect texture of the (N+2)th frame image and skips them all; that is, the steps of creating and switching the render passes related to that special effect texture are omitted.
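The interception performed by the special effect optimization framework layer can be illustrated as follows. The dictionary-based instruction tagging is purely an assumption for the sketch; real instruction streams would be OpenGL/OpenGL ES/Vulkan calls.

```python
def intercept(instructions, reuse_effect_texture):
    """Filter the instruction stream before it reaches the driver/GPU.

    When the similarity condition is met (reuse_effect_texture=True),
    the instructions that create or switch the render pass for the next
    frame's effect texture are dropped and never passed to the
    execution object of the rendering instructions."""
    if not reuse_effect_texture:
        return list(instructions)
    return [ins for ins in instructions
            if ins["purpose"] != "create_effect_pass"]
```

When reuse is not possible, the stream is forwarded unchanged, matching the fallback to the existing render-pass approach.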
  • Using the special effect texture of the latter of the two adjacent frames (such as the (N+1)th frame image) directly as the special effect texture of the frame following them (such as the (N+2)th frame image) completes one round of special effect texture multiplexing.
  • After the frame following the two adjacent frames (such as the (N+2)th frame image) is obtained, further rounds of special effect texture multiplexing can be completed by again selecting two adjacent frames, identifying the similarity of their special effect textures, and judging whether the similarity satisfies the condition.
  • In this case, the selection condition for the two adjacent frames may further include: the special effect texture of at least one of the selected frames was obtained through a render pass.
  • For example, when the (N+2)th and (N+3)th frame images are selected, the special effect texture of the (N+2)th frame image was obtained by multiplexing while that of the (N+3)th frame image was obtained through a render pass; the similarity of the special effect textures is then identified and judged against the condition as before.
  • In this way, the selection of two adjacent frames, the identification of the similarity of their special effect textures, and the judgment of whether the similarity satisfies the condition can be repeated, so that the special effect textures of multiple frames can be multiplexed many times.
  • In some examples, when it is determined that the similarity does not satisfy the condition, the method further includes: repeatedly performing step S2 and step S3, where, when the step of selecting two adjacent frames is repeated, the latter of the previously selected two adjacent frames and the frame following them are used as the newly selected two adjacent frames.
  • When the similarity does not satisfy the condition, it can be determined that the objects in the images move significantly, and the special effect texture similarity comparison continues on subsequent frames until the similarity satisfies the condition, that is, until it is determined that the objects are stationary or moving only slightly. Only then is special effect texture multiplexing performed, further ensuring the quality of image rendering.
  • Taking the condition of step S3 as the similarity value of the special effect textures of the two adjacent frames being lower than the first threshold as an example: if the result of the Y-diff detection, that is, the similarity value, is higher than or equal to the first threshold, the similarity does not satisfy the condition. The objects in the two adjacent frames and in the frame following them may be moving significantly, so the frame following the two adjacent frames is obtained by the render-pass approach of the existing technology.
  • In this case, the rendering instructions for creating the render pass to obtain the special effect texture of the frame following the two adjacent frames are still passed to the execution object of the rendering instructions.
  • step S2 and step S3 are repeatedly performed.
  • When the step of selecting two adjacent frames from the multiple frames is repeated, the latter of the previously selected two adjacent frames and the frame following them are used as the newly selected two adjacent frames.
  • For example, if the previously selected two adjacent frames are the Nth and (N+1)th frame images, the frame following them is the (N+2)th frame image.
  • At this point, the frame whose special effect texture the game system has not yet obtained is updated to the (N+3)th frame image, that is, the frame following the (N+1)th and (N+2)th frame images.
  • In other words, the newly selected two adjacent frames overlap with the previously selected two adjacent frames.
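The repeated execution of steps S2 and S3 with a sliding pair of adjacent frames can be sketched as below. This is illustrative only; `effect_of` is a hypothetical accessor returning a frame's effect-texture gray values, and the threshold value is an assumption.

```python
def find_reusable_pair(effect_of, start, last, first_threshold=8.0):
    """Slide the adjacent-frame window forward one frame at a time until
    the similarity condition is met; the newly selected pair is always
    the latter of the previous pair plus the frame that follows it.
    Returns the matching pair (n, n + 1), or None if none qualifies."""
    n = start
    while n + 1 <= last:
        a, b = effect_of(n), effect_of(n + 1)
        diff = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
        if diff < first_threshold:
            return (n, n + 1)
        n += 1  # slide: previous latter frame becomes the new first frame
    return None
```

Once a pair qualifies, the frame after it can multiplex the latter frame's effect texture instead of running a render pass.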
  • Fig. 6 shows an application example of the special effect optimization method according to the embodiment of the present application.
  • the application program (1001) may be a game application on the Android platform, and correspondingly, the system framework (1002) may be the Android framework.
  • After the game application loads the Android framework and the special effect optimization framework, it first executes step S1 to complete the acquisition of rendering instructions (instruction interception) and the identification of their purpose (instruction analysis and scene recognition).
  • Step S1 can identify a special-effect rendering scene through, for example, the resolutions indicated by the parameters of the rendering instructions, that is, determine that the rendering instructions are used to obtain multiple frames of images and that those frames include special effect textures.
  • In this example, the game system first obtains the Nth and (N+1)th frame images by rendering through render passes, while the (N+2)th frame image is temporarily not rendered through a render pass.
  • Then step S2 is performed: the Nth and (N+1)th frame images are selected as the two adjacent frames among the multiple frames, and the similarity of their special effect textures is identified (special effect similarity comparison).
  • The special effect textures of the two adjacent frames can be obtained during the render-pass rendering of the Nth and (N+1)th frame images respectively.
  • the special effect similarity comparison method may be Y-diff detection, and the result of the Y-diff detection is used as the similarity value.
  • For example, if the bloom special effect texture of the Nth frame image is texture1 and that of the (N+1)th frame image is texture2, the difference of the Y components (gray values) of texture1 and texture2 obtained through Y-diff detection is used as the similarity value.
  • step S3 After identifying the similarity of the special effect texture of two adjacent frames of images, that is, the Nth frame image and the N+1th frame image, step S3 can be performed.
  • When the similarity satisfies the condition, for example when the similarity value is lower than the first threshold, the rendering instructions and related drawing commands originally used to obtain the bloom special effect texture of the (N+2)th frame image are skipped, and the special effect texture of the (N+1)th frame image, the latter of the two adjacent frames, is used as the special effect texture of the frame following them, that is, the (N+2)th frame image (special effect optimization). Otherwise, the special effect texture of the (N+2)th frame image is still obtained through render-pass rendering.
  • Then, the special effect texture of the (N+2)th frame image is sampled into the display buffer and overlay-drawn with the main scene texture sampled into the display buffer, completing the rendering of the (N+2)th frame image.
  • Finally, the driver (corresponding to the driver 1004 in FIG. 4) is called to convert the rendering instructions into hardware GPU instructions and submit them to the GPU (corresponding to the graphics processor 1005 in FIG. 4), and the GPU completes the rendering task.
  • Fig. 7a and Fig. 7b respectively show schematic diagrams of the render passes created when obtaining the (N+2)th frame image according to the first prior art and according to the embodiment of the present application.
  • As shown, the special effect optimization method of the embodiment of this application reduces the number of render passes (the eliminated passes are, for example, those in the box in Fig. 7a). The method can therefore greatly reduce memory bandwidth consumption in scenes with heavy memory load, and can also skip the drawing commands of some frames and the submission of the rendering instructions used to create render passes for special effect textures, reducing processor load, improving game performance, and lowering the power consumption of the system on chip. In tests at high-definition picture quality in some games, the method yields a gain of about 3-5 frames in game frame rate and a reduction of 30-80 mA in game power consumption across different game scenes.
  • Fig. 8 shows an exemplary structural diagram of an apparatus for optimizing special effects according to an embodiment of the present application.
  • the present application provides a device for optimizing special effects, which includes:
  • a usage identification module 101, configured to acquire a rendering instruction and identify the purpose of the rendering instruction;
  • a texture recognition module 102, configured to, when the rendering instruction is used to obtain multiple frames of images and the multiple frames include special effect textures, select two adjacent frames from the multiple frames and identify the similarity of the special effect textures of the two adjacent frames;
  • a texture multiplexing module 103, configured to, when it is determined that the similarity satisfies the condition, use the special effect texture of the latter of the two adjacent frames as the special effect texture of the frame following the two adjacent frames.
  • In one example, obtaining the rendering instruction and identifying its purpose includes: intercepting the rendering instruction from the application program; for each instruction in the rendering instruction, analyzing at least one of the instruction's parameters, its type, and the semantics of the instruction segment where the instruction is located; and determining the purpose of the rendering instruction according to the analysis result.
  • In one example, identifying the similarity of the special effect textures of the two adjacent frames includes: comparing the special effect textures of the two adjacent frames to determine their grayscale difference value; and determining the similarity value of the special effect textures of the two adjacent frames according to the grayscale difference value.
  • the condition includes: the similarity value of the special effect texture of the two adjacent frames of images is lower than a first threshold.
  • selecting two adjacent frames of images in the multiple frames of images includes: selecting the first two frames of images in the multiple frames of images as the two adjacent frames of images.
  • In one example, when it is determined that the similarity satisfies the condition, the device further includes: a sampling module, configured to sample the special effect texture of the latter of the two adjacent frames into the display buffer and, in the display buffer, render with that special effect texture and the main scene texture of the frame following the two adjacent frames, so as to obtain the corresponding frame following the two adjacent frames.
  • In one example, when it is determined that the similarity satisfies the condition, the device further includes: an instruction interception module, configured to intercept, from the rendering instruction, the instructions used to create the render pass for obtaining the special effect texture of the frame following the two adjacent frames, so that the intercepted instructions are not passed to the execution object of the rendering instruction.
  • In one example, when it is determined that the similarity does not satisfy the condition, the device repeatedly executes the functions of the texture recognition module and the subsequent modules; when the step of selecting two adjacent frames is repeated, the latter of the previously selected two adjacent frames and the frame following them are used as the newly selected two adjacent frames.
  • In one example, the special effect texture includes at least one of a bloom special effect texture and a shadow special effect texture.
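The three modules described above can be grouped into one object as an illustration. The class and method names are not from the embodiment; the gray-difference similarity metric, the dictionary-based instruction tagging, and the threshold value are assumptions for the sketch.

```python
class EffectOptimizationDevice:
    """Sketch of the usage identification (101), texture recognition
    (102) and texture multiplexing (103) modules described above."""

    def __init__(self, first_threshold=8.0):
        self.first_threshold = first_threshold

    def identify_usage(self, instructions):
        # Module 101: does the instruction stream render an effect scene?
        return any(i.get("purpose") == "create_effect_pass"
                   for i in instructions)

    def similarity_value(self, tex_a, tex_b):
        # Module 102: gray-difference similarity of two effect textures.
        return sum(abs(a - b) for a, b in zip(tex_a, tex_b)) / len(tex_a)

    def multiplex(self, tex_n, tex_n1):
        # Module 103: reuse the latter frame's texture when the
        # similarity value is below the first threshold; None signals
        # the caller to fall back to a render pass.
        if self.similarity_value(tex_n, tex_n1) < self.first_threshold:
            return tex_n1
        return None
```

A caller would run `identify_usage` on the intercepted stream, then `multiplex` on each adjacent pair of effect textures.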
  • Fig. 9 shows an exemplary structural diagram of an apparatus for optimizing special effects according to an embodiment of the present application.
  • an embodiment of the present application provides a device for optimizing special effects, including: a processor and a memory for storing processor-executable instructions; wherein, the processor is configured to implement the above method.
  • the special effect optimization device can be provided in an electronic device, which may include at least one of a mobile phone, a foldable electronic device, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, a smart speaker with a screen, an ultra-mobile personal computer (UMPC), a netbook, an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a drone, a vehicle-mounted device, a smart home device, or a smart city device.
  • the embodiment of the present application does not specifically limit the specific type of the special effect optimization device.
  • the device for optimizing special effects may include a processor 110, an internal memory 121, a communication module 160, and the like.
  • the processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The different processing units may be independent devices or may be integrated into one or more processors. For example, the processor 110 may perform the Y-diff detection of the embodiment of the present application, so as to implement the special effect optimization method of the embodiment of the present application.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 may be a cache memory.
  • the memory may store instructions or data that the processor 110 has just used or uses frequently, such as the rendering instructions and special effect textures in the embodiment of the present application. If the processor 110 needs to use such an instruction or data again, it can be called directly from this memory, avoiding repeated accesses and reducing the waiting time of the processor 110, thereby improving system efficiency.
  • processor 110 may include one or more interfaces.
  • the interface may include an integrated circuit (inter-integrated circuit, I2C) interface, a universal asynchronous receiver/transmitter (universal asynchronous receiver/transmitter, UART) interface, a general-purpose input/output (general-purpose input/output, GPIO) interface, and the like.
  • the processor 110 may be connected to modules such as a wireless communication module, a display, and a camera through at least one of the above interfaces.
  • the memory 121 may be used to store computer-executable program code including instructions.
  • the memory 121 may include an area for storing programs and an area for storing data.
  • the storage program area may store an operating system, at least one application program required by a function (such as an application program for identifying the purpose of a rendering instruction, etc.) and the like.
  • the storage data area can store data created during the use of the special effect optimization device (such as special effect textures, etc.).
  • the memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
  • the processor 110 executes various functional methods or data processing of the special effect optimization device by executing the instructions stored in the memory 121 and/or the instructions stored in the memory provided in the processor.
  • the communication module 160 can be used to receive data from other apparatuses or devices (such as the application program 1001 in the embodiment of this application) through wired or wireless communication, or to send data to other apparatuses or devices (such as the graphics processor 1005 in the embodiment of this application). For example, it can provide wireless communication solutions including WLAN (such as a Wi-Fi network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR).
  • the communication module 160 may also use a wired communication scheme.
  • the structure shown in the embodiment of the present application does not constitute a specific limitation on the special effect optimization device.
  • the device for optimizing special effects may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • An embodiment of the present application provides a non-volatile computer-readable storage medium, on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the foregoing method is implemented.
  • An embodiment of the present application provides a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code; when the computer-readable code runs in a processor of an electronic device, the processor in the electronic device executes the above method.
  • a computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device.
  • a computer readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of computer-readable storage media includes: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random-access memory (SRAM), compact disc read-only memory (CD-ROM), digital video disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punched card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
  • Computer readable program instructions or codes described herein may be downloaded from a computer readable storage medium to a respective computing/processing device, or downloaded to an external computer or external storage device over a network, such as the Internet, local area network, wide area network, and/or wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium within the respective computing/processing device.
  • Computer program instructions for performing the operations of the present application may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • Computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • electronic circuits, such as programmable logic circuits, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA), can execute the computer-readable program instructions, thereby implementing various aspects of the present application.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce an apparatus for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, a programmable data processing apparatus, and/or other devices to work in a specific way, so that the computer-readable medium storing the instructions constitutes an article of manufacture that includes instructions implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in a flowchart or block diagram may represent a module, a program segment, or a portion of instructions that includes one or more executable instructions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented with hardware (such as circuits or an application-specific integrated circuit (ASIC)), or can be implemented by a combination of hardware and software, such as firmware.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Display Devices Of Pinball Game Machines (AREA)

Abstract

This application relates to a special effect optimization method, apparatus, storage medium and program product. The method includes: obtaining rendering instructions and identifying the purpose of the rendering instructions; when the rendering instructions are used to obtain multiple frames of images and the multiple frames include special effect textures, selecting two adjacent frames among the multiple frames and identifying the similarity of the special effect textures of the two adjacent frames; and, when it is determined that the similarity meets a condition, using the special effect texture of the later of the two adjacent frames as the special effect texture of the frame following the two adjacent frames. The special effect optimization method of the embodiments of this application can increase the game frame rate and reduce game power consumption when effect-related rendering is performed for every frame while objects in the images are stationary or move only slightly.

Description

Special effect optimization method, apparatus, storage medium and program product
This application claims priority to Chinese patent application No. 202210028509.1, entitled "Special effect optimization method, apparatus, storage medium and program product", filed with the China National Intellectual Property Administration on January 11, 2022, which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of image processing, and in particular to a special effect optimization method, apparatus, storage medium and program product.
Background
Game frame rate and power consumption have long been key difficulties and priorities for game vendors. To present good visual effects, many currently popular 3D games, such as Genshin Impact, Honkai Impact 3rd and NBA 2019, usually set up multiple render passes to render shadow, bloom and other special effect textures when rendering each frame to be displayed, and then obtain each frame based on those effect textures. When objects in the image are stationary or move only slightly, setting up multiple render passes for every frame consumes a large amount of memory bandwidth and a certain amount of processor load, which on the one hand raises game power consumption and on the other hand lowers the game frame rate, degrading the user's gaming experience.
Summary
In view of this, this application proposes a special effect optimization method, apparatus, storage medium and program product. The special effect optimization method of the embodiments of this application can increase the game frame rate and reduce game power consumption when effect-related rendering is performed for every frame while objects in the images are stationary or move only slightly.
In a first aspect, an embodiment of this application provides a special effect optimization method, including: obtaining rendering instructions and identifying the purpose of the rendering instructions; when the rendering instructions are used to obtain multiple frames of images and the multiple frames include special effect textures, selecting two adjacent frames among the multiple frames and identifying the similarity of the special effect textures of the two adjacent frames; and, when it is determined that the similarity meets a condition, using the special effect texture of the later of the two adjacent frames as the special effect texture of the frame following the two adjacent frames.
According to the special effect optimization method of the embodiments of this application, by obtaining the rendering instructions and identifying their purpose, it can be determined whether the rendering instructions are used to obtain multiple frames of images and whether those frames include special effect textures, that is, whether the rendering instructions are used to render an effect scene. By selecting two adjacent frames among the multiple frames and identifying the similarity of their effect textures when the rendering instructions are used to obtain multiple frames that include effect textures, the degree of similarity of the effect textures can be determined by comparison. By using the effect texture of the later of the two adjacent frames as the effect texture of the frame following them when the similarity meets the condition, the effect texture of the following frame can be obtained without the traditional render-pass approach, avoiding the memory occupation and processor load that the traditional approach incurs when generating effect textures. Since both memory occupation and processor load are reduced, the game frame rate can be increased and game power consumption reduced when rendering effect images.
If the game's running frame rate has not reached the maximum frame rate of the terminal device, the special effect optimization method of this application can effectively increase the running frame rate while possibly reducing game power consumption. If the game's running frame rate has already reached the maximum frame rate of the terminal device, the method can effectively reduce game power consumption, thereby effectively extending the time during which the game on the terminal device keeps running at the device's maximum frame rate.
According to the first aspect, in a first possible implementation of the special effect optimization method, obtaining rendering instructions and identifying the purpose of the rendering instructions includes: intercepting the rendering instructions from an application program; for each instruction among the rendering instructions, analyzing at least one of the instruction's parameters, its type, and the semantics of the instruction segment in which it is located; and determining the purpose of the rendering instructions according to the result of the analysis.
In this way, it can be determined whether the rendering instructions are used to obtain multiple frames of images that include special effect textures, and therefore whether they are used to render an effect scene, completing effect-scene identification. Moreover, since multiple kinds of data in the rendering instructions are available for analysis, the flexibility of determining the purpose of the rendering instructions is improved.
According to the first aspect or its first possible implementation, in a second possible implementation of the special effect optimization method, identifying the similarity of the special effect textures of the two adjacent frame images includes: comparing the effect textures of the two adjacent frame images to determine their grayscale difference value; and determining a similarity value of the effect textures of the two adjacent frame images according to the grayscale difference value.
The grayscale values of effect textures are easy to obtain; this makes determining the similarity of the effect textures of the two adjacent frames simpler to implement and the special effect optimization method more efficient.
According to the second possible implementation of the first aspect, in a third possible implementation of the special effect optimization method, the condition includes: the similarity value of the special effect textures of the two adjacent frame images is lower than a first threshold.
By setting the condition that the similarity value of the effect textures of the two adjacent frames is lower than a first threshold, when the similarity meets the condition it can be determined that objects in the images are stationary or move only slightly. This ensures that using the effect texture of the later of the two adjacent frames as the effect texture of the frame following them has very little impact on the accuracy of that following frame, thereby guaranteeing rendering quality.
According to the first aspect or any of the above possible implementations of the first aspect, in a fourth possible implementation of the special effect optimization method, selecting two adjacent frame images among the multiple frame images includes: selecting the first two frames of the multiple frame images as the two adjacent frame images.
In this way, more of the multiple frames have the opportunity to reuse effect textures, which further improves the method's effect of increasing the game frame rate and reducing device power consumption.
According to the first aspect or any of the above possible implementations of the first aspect, in a fifth possible implementation of the special effect optimization method, when it is determined that the similarity meets the condition, the method further includes: sampling the effect texture of the later of the two adjacent frame images to a display buffer; in the display buffer, the effect texture of the later of the two adjacent frame images and the main scene texture of the frame following the two adjacent frame images are used for rendering to obtain the frame following the two adjacent frame images.
In this way, rendering of the frame following the two adjacent frames can be completed without using render passes to obtain that frame's effect texture, so the following frame can be displayed normally without affecting the user's gaming experience.
According to the first aspect or any of the above possible implementations of the first aspect, in a sixth possible implementation of the special effect optimization method, when it is determined that the similarity meets the condition, the method further includes: intercepting, among the rendering instructions, the rendering instructions for creating render passes to obtain the effect texture of the frame following the two adjacent frame images, so that the intercepted instructions are not passed to the execution object of the rendering instructions.
In this way, on the one hand, the number of render passes created while rendering the multiple frames is reduced, lowering memory bandwidth; on the other hand, the submission of unnecessary, repeated, nearly identical draw commands related to obtaining effect textures can be skipped, lowering processor load.
According to the first aspect or any of the above possible implementations of the first aspect, in a seventh possible implementation of the special effect optimization method, when it is determined that the similarity does not meet the condition, the method further includes: repeating the step of selecting two adjacent frame images among the multiple frame images and the subsequent steps; when the selecting step is repeated, the later of the previously selected two adjacent frame images and the frame following them are used as the newly selected two adjacent frame images.
In this way, when the similarity does not meet the condition it can be determined that objects in the images move substantially, and effect-texture similarity comparison continues on subsequent images until the similarity meets the condition, at which point it is determined that objects are stationary or move only slightly and effect-texture reuse is performed, further guaranteeing rendering quality.
According to the first aspect or any of the above possible implementations of the first aspect, in an eighth possible implementation of the special effect optimization method, the special effect texture includes at least one of a bloom effect texture and a shadow effect texture.
In a second aspect, an embodiment of this application provides a special effect optimization apparatus, including: a purpose identification module configured to obtain rendering instructions and identify the purpose of the rendering instructions; a texture identification module configured to, when the rendering instructions are used to obtain multiple frames of images and the multiple frames include special effect textures, select two adjacent frames among the multiple frames and identify the similarity of the special effect textures of the two adjacent frames; and a texture reuse module configured to, when it is determined that the similarity meets a condition, use the special effect texture of the later of the two adjacent frames as the special effect texture of the frame following the two adjacent frames.
According to the second aspect, in a first possible implementation of the apparatus, obtaining rendering instructions and identifying their purpose includes: intercepting the rendering instructions from an application program; for each instruction among the rendering instructions, analyzing at least one of the instruction's parameters, its type, and the semantics of the instruction segment in which it is located; and determining the purpose of the rendering instructions according to the result of the analysis.
According to the second aspect or its first possible implementation, in a second possible implementation of the apparatus, identifying the similarity of the effect textures of the two adjacent frame images includes: comparing the effect textures of the two adjacent frame images to determine their grayscale difference value; and determining a similarity value of the effect textures of the two adjacent frame images according to the grayscale difference value.
According to the second possible implementation of the second aspect, in a third possible implementation of the apparatus, the condition includes: the similarity value of the effect textures of the two adjacent frame images is lower than a first threshold.
According to the second aspect or any of the above possible implementations of the second aspect, in a fourth possible implementation of the apparatus, selecting two adjacent frame images among the multiple frame images includes: selecting the first two frames of the multiple frame images as the two adjacent frame images.
According to the second aspect or any of the above possible implementations of the second aspect, in a fifth possible implementation of the apparatus, when it is determined that the similarity meets the condition, the apparatus further includes: a sampling module configured to sample the effect texture of the later of the two adjacent frame images to a display buffer; in the display buffer, the effect texture of the later of the two adjacent frame images and the main scene texture of the frame following the two adjacent frame images are used for rendering to obtain the frame following the two adjacent frame images.
According to the second aspect or any of the above possible implementations of the second aspect, in a sixth possible implementation of the apparatus, when it is determined that the similarity meets the condition, the apparatus further includes: an instruction interception module configured to intercept, among the rendering instructions, the rendering instructions for creating render passes to obtain the effect texture of the frame following the two adjacent frame images, so that the intercepted instructions are not passed to the execution object of the rendering instructions.
According to the second aspect or any of the above possible implementations of the second aspect, in a seventh possible implementation of the apparatus, when it is determined that the similarity does not meet the condition, the apparatus repeats the functions of the texture identification module and subsequent modules; when the texture identification module repeats the step of selecting two adjacent frame images, the later of the previously selected two adjacent frame images and the frame following them are used as the newly selected two adjacent frame images.
According to the second aspect or any of the above possible implementations of the second aspect, in an eighth possible implementation of the apparatus, the special effect texture includes at least one of a bloom effect texture and a shadow effect texture.
In a third aspect, an embodiment of this application provides a special effect optimization apparatus, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to, when executing the instructions, implement the special effect optimization method of the first aspect or of one or more of its possible implementations.
In a fourth aspect, an embodiment of this application provides a non-volatile computer-readable storage medium having computer program instructions stored thereon, which, when executed by a processor, implement the special effect optimization method of the first aspect or of one or more of its possible implementations.
In a fifth aspect, an embodiment of this application provides a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes the special effect optimization method of the first aspect or of one or more of its possible implementations.
These and other aspects of the present application will become clearer in the following description of the embodiment(s).
Brief Description of Drawings
The drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features and aspects of the present application together with the specification, and serve to explain the principles of the present application.
Fig. 1 shows a schematic architecture diagram of a game system of prior art 1.
Fig. 2 shows a schematic flowchart of an image rendering method of prior art 1.
Fig. 3a shows a schematic diagram of the render passes created in prior art 1 to obtain a bloom effect texture.
Fig. 3b shows a schematic diagram of the render passes created in prior art 1 to obtain a bloom effect texture.
Fig. 4 shows an exemplary architecture diagram of a game system according to an embodiment of the present application.
Fig. 5 shows an exemplary flowchart of a special effect optimization method according to an embodiment of the present application.
Fig. 6 shows an application example of the special effect optimization method according to an embodiment of the present application.
Fig. 7a shows a schematic diagram of the render passes created in prior art 1 to obtain the (N+2)-th frame image.
Fig. 7b shows a schematic diagram of the render passes created to obtain the (N+2)-th frame image according to an embodiment of the present application.
Fig. 8 shows an exemplary structural diagram of a special effect optimization apparatus according to an embodiment of the present application.
Fig. 9 shows an exemplary structural diagram of a special effect optimization apparatus according to an embodiment of the present application.
Detailed Description of Embodiments
Various exemplary embodiments, features and aspects of the present application will be described in detail below with reference to the drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless specifically noted.
The dedicated word "exemplary" here means "serving as an example, embodiment or illustration". Any embodiment described here as "exemplary" need not be construed as superior to or better than other embodiments.
In addition, numerous specific details are given in the following detailed description in order to better explain the present application. Those skilled in the art should understand that the present application can also be practiced without certain specific details. In some instances, methods, means, elements and circuits well known to those skilled in the art are not described in detail, so as to highlight the gist of the present application.
The terms appearing herein are explained below.
Bloom: a computer graphics effect used in video games, demos and high-dynamic-range rendering to reproduce the imaging artifacts of real-world cameras.
Shadow: a shadow effect used in game rendering.
Render pass: describes one rendering process by specifying information about the framebuffer attachments used during rendering.
Frame rate: the frequency (rate) at which consecutive images, measured in frames, appear on a display.
Draw call: an instruction invocation involving a render pass.
The image rendering method of the prior art is introduced below.
Fig. 1 shows a schematic architecture diagram of a game system of prior art 1.
As shown in Fig. 1, the game system of prior art 1 mainly includes the following parts: a game application (Application), a game engine (Game Engine, common ones being Unity, Unreal, etc.), a system framework layer (Framework), a graphics driver layer (driver development kit, DDK), and a graphics hardware layer (graphics processing unit, GPU) that executes the rendering instructions. The user operates through the game application, and the game system renders images in response to the user's operations. First, when the user operates through the game application, the application can issue OpenGL rendering instructions to the system framework layer. The system framework layer intercepts the OpenGL rendering instructions, establishes a connection with the local window system, and continues to pass the OpenGL rendering instructions to the graphics driver layer. The graphics driver layer processes the OpenGL rendering instructions into instructions that can be executed by the graphics hardware layer, and the processed instructions are finally submitted to the graphics hardware layer for execution.
At present, to guarantee perfect game scene effects, most game vendors render a large number of repeated shadow and/or bloom effects during the rendering of every frame. Fig. 2 shows a schematic flowchart of an image rendering method of prior art 1.
As shown in Fig. 2, taking shadow and bloom effect rendering as an example, suppose the OpenGL rendering instructions indicate that effect rendering starts from the N-th frame. When rendering the N-th frame, the game system first obtains the shadow effect texture of the N-th frame, then the main scene texture of the N-th frame (which may also be called a non-effect texture), and then the bloom effect texture of the N-th frame. Finally, the shadow effect texture, the bloom effect texture and the main scene texture of the N-th frame are each sampled to the display buffer, where they are overlay-drawn, and the graphics hardware layer displays the image overlay-drawn in the display buffer. The game system can then render the (N+1)-th frame, the (N+2)-th frame, and so on, in the same way as the N-th frame.
Obtaining the shadow effect texture and the bloom effect texture of each frame each requires creating multiple render passes. Fig. 3a and Fig. 3b each show a schematic diagram of the render passes created in prior art 1 to obtain a bloom effect texture.
As shown in Fig. 3a, to add a bloom effect to the rendered image, the game system creates four render passes to generate the bloom effect texture. In the example of Fig. 3a, the bloom effect texture is derived from the main scene texture, whose resolution may be 1920*1080. In the first render pass, the main scene texture is sampled to obtain a first intermediate texture Level1, whose resolution may be 960*540. In the second render pass, the first intermediate texture Level1 is sampled to obtain a second intermediate texture Level2, whose resolution may be 480*270. In the third render pass, the second intermediate texture Level2 is sampled to obtain a third intermediate texture Level3, whose resolution may be 240*135. That is, the resolutions of the textures produced by each two adjacent passes among the first three render passes are successively in a 1/2 relationship. In the fourth render pass, the first intermediate texture Level1 and the third intermediate texture Level3 are mixed and sampled to obtain the bloom effect texture texture0, whose resolution may be 960*540, the same as that of the first intermediate texture Level1. The bloom effect texture texture0 and the main scene texture can each be sampled to the display buffer; when they are overlay-drawn in the display buffer, a to-be-displayed texture with a resolution equal to the main scene texture's 1920*1080 is obtained, which will be displayed by the graphics hardware layer. The main scene texture, each intermediate effect texture and the to-be-displayed texture are shown in Fig. 3b.
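The progressive half-resolution downsampling described above can be sketched as follows. This is an illustrative sketch only: the function name `bloom_mip_chain`, the default level count, and the use of Python are not part of the patent, which describes the passes at the level of render-pass creation.

```python
def bloom_mip_chain(width, height, levels=3):
    """Resolutions produced by successive half-resolution bloom
    downsample passes: one render pass per level, each half the
    previous level's size (as in 1920*1080 -> 960*540 -> ...)."""
    chain = [(width, height)]
    for _ in range(levels):
        w, h = chain[-1]
        chain.append((w // 2, h // 2))
    return chain
```

For the 1920*1080 main scene texture of the example, this yields the Level1/Level2/Level3 resolutions 960*540, 480*270 and 240*135.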
However, the image rendering solution of prior art 1 has the following drawbacks. On the one hand, the increased number of render passes is extremely costly in terms of the memory bandwidth of the electronic device running the game system. On the other hand, when objects in the image are stationary or move only slightly, repeatedly drawing nearly identical effect textures during every frame's rendering requires invoking a certain number of draw commands (such as the sampling instructions in Fig. 3a), which consumes part of the load of the device's processors (such as the central processing unit CPU and the graphics processor GPU). These drawbacks ultimately cause problems such as a lower game frame rate and higher game power consumption, directly affecting the user's gaming experience.
In view of this, this application proposes a special effect optimization method, apparatus, storage medium and program product. The special effect optimization method of the embodiments of this application can increase the game frame rate and reduce game power consumption when effect-related rendering is performed for every frame while objects in the images are stationary or move only slightly.
An exemplary application scenario of the special effect optimization method of the embodiments of this application is introduced below.
The special effect optimization method of the embodiments of this application can be applied to a game system, which can be provided on a terminal device. For example, the terminal device of this application may be a smartphone, a netbook, a tablet computer, a notebook computer, a TV, a virtual reality device, and so on. Fig. 4 shows an exemplary architecture diagram of a game system according to an embodiment of the present application.
As shown in Fig. 4, the system architecture of this application includes an application program 1001, a system framework 1002, a special effect optimization framework 1003, a driver 1004 and a graphics processor 1005.
The application program 1001 can be implemented based on the prior art and may include a game application to be optimized or other applications; it may use the OpenGL or OpenGL ES rendering graphics application programming interface (API). According to the user's operations on the application program 1001, the application program 1001 can generate corresponding rendering instructions.
The system framework 1002 can be implemented based on the prior art; it may be a framework structure based on the Android operating system and its framework code, and can be used to intercept the rendering instructions generated by the application program 1001.
The special effect optimization framework 1003 is mainly used to execute the special effect optimization method of the embodiments of this application, chiefly for effect-scene identification, effect-texture similarity comparison and effect optimization, achieving reuse of bloom, shadow and other effects in some scenes. For exemplary implementations, see the descriptions of Figs. 5-6 below.
The driver 1004 can be implemented based on the prior art and may include drivers for rendering graphics APIs such as OpenGL/OpenGL ES/Vulkan.
The graphics processor 1005 can be implemented based on the prior art and may include the specific hardware graphics processor GPU on the terminal device, used to complete specific rendering tasks.
Fig. 5 shows an exemplary flowchart of a special effect optimization method according to an embodiment of the present application.
As shown in Fig. 5, in one possible implementation, this application proposes a special effect optimization method, including:
S1, obtaining rendering instructions and identifying the purpose of the rendering instructions.
The rendering instructions may be the rendering instructions generated by the application program 1001 and intercepted by the system framework 1002. The purpose of the rendering instructions can indicate whether the rendering instructions are used to obtain multiple frames of images and whether the multiple frames include special effect textures. Optionally, it can also indicate which kind of effect texture the multiple frames include (such as a shadow effect texture, a bloom effect texture, etc.). Those skilled in the art should understand that rendering instructions may have more purposes; this application does not limit the purposes of the rendering instructions that can be identified in step S1.
S2, when the rendering instructions are used to obtain multiple frames of images and the multiple frames include special effect textures, selecting two adjacent frames among the multiple frames and identifying the similarity of the special effect textures of the two adjacent frames.
The multiple frames, including effect textures, that the rendering instructions are used to obtain may be consecutive or non-consecutive frames; this application does not limit this. The two adjacent frames may be selected in the temporal order of the frames, for example starting from the earliest two frames, or from two frames at a preset position among the multiple frames, as long as, at the time of selection, the effect texture of the frame following the selected two adjacent frames has not yet been obtained. This application does not limit the way the two adjacent frames are selected.
S3, when it is determined that the similarity meets a condition, using the special effect texture of the later of the two adjacent frames as the special effect texture of the frame following the two adjacent frames.
When the similarity meets the condition, it can be considered that objects in the two adjacent frames move only slightly; in this case, objects in the frame following the two adjacent frames can also be considered to move only slightly. Even if render passes were used to generate the effect texture of the following frame, the difference between the effect textures of the later of the two adjacent frames and of the following frame would likely be very small. Therefore, the effect texture of the following frame need not be generated through render passes, and the effect texture of the later of the two adjacent frames can be used directly.
Correspondingly, when the similarity does not meet the condition, it can be considered that objects in the two adjacent frames, and likewise in the following frame, move substantially; the effect texture of the following frame can then be generated through render passes.
According to the special effect optimization method of the embodiments of this application, by obtaining the rendering instructions and identifying their purpose, it can be determined whether the rendering instructions are used to obtain multiple frames of images that include special effect textures, that is, whether they are used to render an effect scene. By selecting two adjacent frames and identifying the similarity of their effect textures, the degree of similarity of the effect textures can be determined by comparison. By using the effect texture of the later of the two adjacent frames as the effect texture of the frame following them when the similarity meets the condition, the following frame's effect texture can be obtained without the traditional render-pass approach, avoiding the memory occupation and processor load that that approach incurs when generating effect textures. Since both memory occupation and processor load are reduced, the game frame rate can be increased and game power consumption reduced when rendering effect images.
If the game's running frame rate has not reached the maximum frame rate of the terminal device, the special effect optimization method of this application can effectively increase the running frame rate while possibly reducing game power consumption. If the game's running frame rate has already reached the maximum frame rate of the terminal device, the method can effectively reduce game power consumption, thereby effectively extending the time during which the game on the terminal device keeps running at the device's maximum frame rate.
In one possible implementation, the special effect texture includes at least one of a bloom effect texture and a shadow effect texture.
Bloom and shadow effects are common effects used in game images, so the special effect optimization method of the embodiments of this application mainly optimizes how these two kinds of effect textures are obtained. Those skilled in the art should understand that the effect textures whose optimization is supported are not limited to these; any effect texture used in game images or other images falls within the supported scope, and this application does not limit this.
Exemplary implementations of step S1, step S2 and step S3 are described below.
In one possible implementation, step S1 includes:
intercepting the rendering instructions from an application program;
for each instruction among the rendering instructions, analyzing at least one of the instruction's parameters, its type, and the semantics of the instruction segment in which it is located;
determining the purpose of the rendering instructions according to the result of the analysis.
In this way, it can be determined whether the rendering instructions are used to obtain multiple frames of images that include special effect textures, and therefore whether they are used to render an effect scene, completing effect-scene identification. Moreover, since multiple kinds of data in the rendering instructions are available for analysis, the flexibility of determining the purpose of the rendering instructions is improved.
For example, the intercepted rendering instructions from the application program may include OpenGL rendering instructions, OpenGL ES rendering instructions, Vulkan rendering instructions, and so on. For examples of the work completed by executing rendering instructions, refer to the work completed in each render pass in Fig. 3a and its related description above.
For example, suppose the N-th frame and subsequent frames rendered by the game system include shadow and bloom effects. When the game system renders the N-th frame, the semantics of the generated rendering instructions may be: first create multiple render passes to obtain the shadow effect texture; then, according to the game logic, successively create multiple render passes to obtain the main scene texture (which may also be called a non-effect texture) and the bloom effect texture; finally sample the shadow effect texture, the main scene texture and the bloom effect texture to the display buffer and complete drawing the N-th frame in the display buffer. Before steps S2 and S3 are executed, for the (N+1)-th frame and subsequent frames, the game system's rendering flow may be the same as for the N-th frame: the shadow effect texture, main scene texture and bloom effect texture are drawn in turn according to the game logic and finally sampled to the display buffer, where the complete image is drawn.
For each intercepted instruction, its purpose can be determined by analyzing at least one of the instruction's parameters, its type, and the semantics of the instruction segment in which it is located. The instruction's parameters can indicate the resolution of the texture obtained in a render pass; the instruction's type can indicate the objects involved when executing it, such as specifically the first intermediate texture, or the second intermediate texture, or the third and first intermediate textures, and so on; the semantics of the instruction segment can indicate which render pass's sampling process the current segment completes when executed. Taking the bloom effect texture as an example, a bloom texture is generally produced by successively sampling several textures whose resolutions are proportional, each render pass corresponding to one sampling, so the bloom effect texture can be identified from the resolution sizes. That is, if the analysis shows that the resolutions of the multiple textures indicated by multiple instructions follow the bloom-texture pattern (after sorting by resolution, each two adjacent textures have the same resolution ratio), it can be determined that the purpose of the rendering instructions is to obtain multiple frames of images that include bloom effect textures.
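The resolution-ratio heuristic described above can be sketched as follows. The function name and the fixed 1/2 ratio are illustrative assumptions; the description only requires that adjacent textures in the chain share the same proportional relationship.

```python
def looks_like_bloom_chain(resolutions):
    """Heuristic purpose identification: bloom effect textures are
    typically produced by sampling through textures whose resolutions
    shrink by a constant ratio (assumed 1/2 per pass here).
    Returns True when every adjacent pair is exactly halved."""
    if len(resolutions) < 2:
        return False
    for (w1, h1), (w2, h2) in zip(resolutions, resolutions[1:]):
        if (w2, h2) != (w1 // 2, h1 // 2):
            return False
    return True
```

Applied to the resolutions carried by the instruction parameters, a match suggests the instructions are used to obtain frames that include a bloom effect texture.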
Determining the purpose of the rendering instructions by analyzing their parameters is taken here as an example. Those skilled in the art should understand that analyzing the type or the semantics of the rendering instructions can also determine their purpose, as long as the analysis can determine whether its result follows the pattern of some kind of effect texture; this application does not limit the specific implementation of determining the purpose of the rendering instructions.
In one possible implementation, in step S2, selecting two adjacent frame images among the multiple frame images includes:
selecting the first two frames of the multiple frame images as the two adjacent frame images.
In this way, more of the multiple frames have the opportunity to reuse effect textures, which further improves the method's effect of increasing the game frame rate and reducing device power consumption.
For example, in step S2, the purpose of selecting two adjacent frames is to identify the similarity of their effect textures; further, in step S3, when the similarity meets the condition, the effect texture of the later of the two adjacent frames is used as the effect texture of the frame following them, so that render passes need not be used to obtain that texture. Therefore, if the effect texture of the frame following the selected two adjacent frames has already been obtained through render passes, executing steps S2 and S3 would do nothing to lower the game system's power consumption and memory usage. On this basis, the selection condition for the two adjacent frames can be: the game system has not yet obtained the effect texture of the frame following the two adjacent frames. On this basis, when the special effect optimization method of the embodiments of this application starts executing, if the first two frames of the multiple frames (for example the N-th and (N+1)-th frames) are selected as the two adjacent frames, then the third frame (for example the (N+2)-th frame) and subsequent frames all have the opportunity to obtain their effect textures without render passes. This approach yields a relatively large improvement in the method's effect, that is, in reducing the game system's power consumption and memory usage and increasing the frame rate.
Those skilled in the art should understand that in step S2 the two adjacent frames may also be selected from two frames at a preset position among the multiple frames, for example the (N+5)-th and (N+6)-th frames, which lowers the data-processing cost of executing the method. This application does not limit the way the two adjacent frames are selected in step S2.
In one possible implementation, in step S2, identifying the similarity of the special effect textures of the two adjacent frame images includes:
comparing the effect textures of the two adjacent frame images to determine their grayscale difference value;
determining a similarity value of the effect textures of the two adjacent frame images according to the grayscale difference value.
The grayscale values of effect textures are easy to obtain; this makes determining the similarity of the effect textures of the two adjacent frames simpler to implement and the special effect optimization method more efficient.
For example, suppose the first two frames of the multiple frames (for example the N-th and (N+1)-th frames) are selected as the two adjacent frames; then step S2 identifies the similarity between the effect texture of the N-th frame and that of the (N+1)-th frame. The effect textures that can be compared may be of the same type, for example both shadow effect textures or both bloom effect textures. One exemplary way to determine the similarity is to perform Y-diff detection on the Y-component values (i.e. grayscale values) of the effect textures of the N-th and (N+1)-th frames. Y-diff detection can be implemented based on the prior art. The result of Y-diff detection is the grayscale difference value between the two effect textures, which can serve as their similarity value, used to judge whether the frame after the (N+1)-th frame can reuse the (N+1)-th frame's effect texture. In this case, the smaller the similarity value, the more similar the effect textures of the N-th and (N+1)-th frames.
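A minimal sketch of the Y-diff computation described above, assuming textures are given as flat lists of 0-255 luma values and that the grayscale difference is the mean absolute difference; the description leaves the exact metric to the prior art, so both assumptions are illustrative.

```python
def y_diff(tex_a, tex_b):
    """Mean absolute difference of the Y (luma) components of two
    equally sized effect textures, given as flat lists of 0-255
    values. Smaller values mean more similar textures."""
    if len(tex_a) != len(tex_b):
        raise ValueError("textures must have the same size")
    return sum(abs(a - b) for a, b in zip(tex_a, tex_b)) / len(tex_a)
```

The returned value plays the role of the similarity value compared against the first threshold in step S3.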
In one possible implementation, the condition includes:
the similarity value of the special effect textures of the two adjacent frame images is lower than a first threshold.
By setting this condition, when the similarity meets it, it can be determined that objects in the images are stationary or move only slightly, which ensures that using the effect texture of the later of the two adjacent frames as the effect texture of the frame following them has very little impact on the accuracy of that following frame, thereby guaranteeing rendering quality.
As described above, when the grayscale difference value serves as the similarity value of the effect textures of the two adjacent frames, a smaller similarity value indicates a greater degree of similarity. Therefore, a first threshold can be preset, with the similarity value being lower than the first threshold as the condition in step S3. In that case, a similarity value that meets the condition indicates a relatively large degree of similarity between the effect textures of the two adjacent frames; objects in the two adjacent frames and in the frame following them can be considered to move only slightly, and the effect texture of the later of the two adjacent frames (for example the (N+1)-th frame) can be used as the effect texture of the following frame (for example the (N+2)-th frame), that is, the following frame reuses the later frame's effect texture. When the effect textures include more than one type, the following frame can separately reuse each type of effect texture of the later of the two adjacent frames.
Those skilled in the art should understand that the similarity value may also be obtained in other ways, for example by detecting the degree of similarity of the two textures through a neural network; this application does not limit this. Note that if a smaller similarity value indicates more similar textures, the condition in step S3 can be that the similarity value of the effect textures of the two adjacent frames is lower than a first threshold; conversely, if a smaller similarity value indicates less similar textures, the condition can be that the similarity value is higher than a second threshold. The specific way of setting the condition can be adjusted according to the relationship between the similarity value and the degree of similarity; this application does not limit this. The first threshold (or second threshold) can be preset according to the hardware conditions of the terminal device and user requirements; this application does not limit the specific value of the first threshold (or second threshold).
In one possible implementation, when it is determined that the similarity meets the condition, the method further includes:
sampling the effect texture of the later of the two adjacent frame images to a display buffer;
in the display buffer, the effect texture of the later of the two adjacent frame images and the main scene texture of the frame following the two adjacent frame images are used for rendering to obtain the frame following the two adjacent frame images.
In this way, rendering of the frame following the two adjacent frames can be completed without using render passes to obtain that frame's effect texture, so the following frame can be displayed normally without affecting the user's gaming experience.
For example, as described above, the effect texture corresponding to each frame is sampled to the display buffer and overlay-drawn there with the frame's main scene texture, finally producing the frame that can be shown on the display. When the similarity meets the condition, the effect texture of the later of the two adjacent frames (for example the (N+1)-th frame) serves as the effect texture of the following frame (for example the (N+2)-th frame). Therefore, sampling the following frame's effect texture to the display buffer can mean sampling the later frame's (for example the (N+1)-th frame's) effect texture to the display buffer; in the display buffer, overlay-drawing the following frame's effect texture with that frame's main scene texture can mean overlay-drawing the later frame's (for example the (N+1)-th frame's) effect texture together with the following frame's (for example the (N+2)-th frame's) main scene texture, rendering the following frame (for example the (N+2)-th frame).
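As an illustration of the overlay drawing in the display buffer, the following sketch assumes a simple additive blend over luma values; the description only specifies that the two textures are sampled to the display buffer and overlay-drawn, so the blend mode and the `weight` parameter are assumptions, not the patent's method.

```python
def compose_frame(main_scene, effect, weight=1.0):
    """Overlay-draw a (possibly reused) effect texture onto the
    frame's own main scene texture: an assumed additive blend over
    flat lists of 0-255 luma values, clamped to 255."""
    return [min(255, round(m + weight * e))
            for m, e in zip(main_scene, effect)]
```

Note that the reused effect texture from frame N+1 is blended with frame N+2's own main scene texture, so the non-effect content of the new frame is unaffected by the reuse.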
In one possible implementation, when it is determined that the similarity meets the condition, the method further includes:
intercepting, among the rendering instructions, the rendering instructions for creating render passes to obtain the effect texture of the frame following the two adjacent frame images, so that the intercepted instructions are not passed to the execution object of the rendering instructions.
In this way, on the one hand, the number of render passes created while rendering the multiple frames is reduced, lowering memory bandwidth; on the other hand, the submission of unnecessary, repeated, nearly identical draw commands related to obtaining effect textures can be skipped, lowering processor load.
For example, as described above, when the similarity meets the condition, the effect texture of the later of the two adjacent frames (for example the (N+1)-th frame) can directly serve as the effect texture of the following frame (for example the (N+2)-th frame) and be sampled to the display buffer, where, together with the following frame's main scene texture, it yields the following frame. Therefore, the rendering instructions originally used to create render passes to obtain the following frame's effect texture need not be executed. In this case, that part of the instructions can be intercepted so that it is not passed to the execution object of the rendering instructions (the graphics processor in the embodiments of this application). Correspondingly, the draw commands corresponding to that part of the instructions also need not be submitted to their execution objects (the graphics processor and the central processing unit in the embodiments of this application). At the level of the system architecture, this appears as: when rendering the (N+2)-th frame, the special effect optimization framework layer intercepts and entirely skips the rendering instructions for creating render passes to obtain the (N+2)-th frame's effect texture, that is, the steps of creating and switching the render passes related to the (N+2)-th frame's effect texture are omitted.
When the similarity meets the condition, directly using the effect texture of the later of the two adjacent frames (for example the (N+1)-th frame) as the effect texture of the following frame (for example the (N+2)-th frame) amounts to completing one effect-texture reuse. For frames after the (N+2)-th, more reuses can be completed by continuing to select two adjacent frames, identify the similarity of their effect textures, and judge whether the similarity meets the condition. Because the similarity is identified by performing Y-diff detection on the grayscale values of the effect textures of the two adjacent frames, and the effect textures of the (N+1)-th and (N+2)-th frames are exactly the same (their Y-diff result is constantly 0), selecting the (N+1)-th and (N+2)-th frames as the two adjacent frames would be meaningless. Therefore, optionally, the selection condition for the two adjacent frames can further include: the effect texture of at least one of the selected two adjacent frames was obtained through render passes. For example, when continuing to select two adjacent frames, the (N+2)-th and (N+3)-th frames can be selected (the (N+2)-th frame's effect texture was obtained through reuse, the (N+3)-th frame's through render passes), and the similarity identification and condition judgment are then performed based on the (N+2)-th and (N+3)-th frames. By analogy, after each effect-texture reuse, the selection of two adjacent frames, the identification of effect-texture similarity, and the judgment of whether the similarity meets the condition can follow, thereby achieving multiple reuses of the effect textures of the multiple frames.
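The interception step can be sketched as a filter over the command stream forwarded to the driver. The dictionary-based command representation and the "effect_pass" label are hypothetical conveniences for illustration; a real implementation would match concrete API calls (for example, render-pass creation and the associated draw calls) instead.

```python
def forward_commands(commands, reuse_effect_texture):
    """Forward rendering commands to the driver, dropping the ones
    that would create render passes for the effect texture when that
    texture is being reused from the previous frame."""
    if not reuse_effect_texture:
        return list(commands)
    return [cmd for cmd in commands if cmd.get("kind") != "effect_pass"]
```

When reuse is active, the effect-pass commands are consumed at the optimization framework layer and never reach the GPU or contribute CPU draw-call overhead.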
In one possible implementation, when it is determined that the similarity does not meet the condition, the method further includes:
repeating the step of selecting two adjacent frame images among the multiple frame images and the subsequent steps;
when the step of selecting two adjacent frame images is repeated, the later of the previously selected two adjacent frame images and the frame following them are used as the newly selected two adjacent frame images.
In this way, when the similarity does not meet the condition, it can be determined that objects in the images move substantially, and effect-texture similarity comparison continues on subsequent images until the similarity meets the condition, at which point it is determined that objects are stationary or move only slightly and effect-texture reuse is performed, further guaranteeing rendering quality.
For example, take determining the similarity through Y-diff detection, with the condition in step S3 being that the similarity value of the effect textures of the two adjacent frames is lower than a first threshold. If the Y-diff result, i.e. the similarity value, is higher than or equal to the first threshold, the similarity does not meet the condition; objects in the two adjacent frames and in the frame following them may be considered to move substantially, and the following frame can still be obtained through the prior-art render-pass approach. In that case, the rendering instructions for creating render passes to obtain the following frame's effect texture are still passed to the execution object of the rendering instructions, so as to guarantee the accuracy of the effect texture of the following frame obtained by the game system.
In this case, the step of selecting two adjacent frames among the multiple frames and the subsequent steps, i.e. steps S2 and S3, can be repeated. When the selecting step is repeated, the later of the previously selected two adjacent frames and the frame following them are used as the newly selected two adjacent frames. The reason is that after the previous execution of step S2 selected the two adjacent frames (for example the N-th and (N+1)-th frames), since the similarity did not meet the condition, the frame following them (for example the (N+2)-th frame) has already been obtained in the prior-art manner. The image the game system has not yet obtained is now the (N+3)-th frame, i.e. the frame following the (N+1)-th and (N+2)-th frames. According to the selection condition above (the effect texture of the frame following the selected two adjacent frames has not yet been obtained), the newly selected two adjacent frames can include the later of the previously selected two adjacent frames (for example the (N+1)-th frame) and the frame following them (for example the (N+2)-th frame). For the way steps S2 and S3 are repeated, see the description above, which is not repeated here.
Fig. 6 shows an application example of the special effect optimization method according to an embodiment of the present application.
As shown in Fig. 6, in the game system, the application program (1001) may be a game application on the Android platform, and accordingly the system framework (1002) may be the Android framework. After the game application loads the Android framework and the special effect optimization framework, step S1 is first executed, completing the acquisition of the rendering instructions (i.e. instruction interception) and the identification of their purpose (i.e. instruction analysis and scene identification). Suppose in the example of Fig. 6 the rendering flow corresponding to the acquired rendering instructions is as shown in Fig. 3a: sampling with proportional resolutions is performed through four render passes, finally yielding the bloom effect texture. Then executing step S1 can, for example, identify from the resolutions indicated by the parameters of the rendering instructions that this is an effect-image rendering scene, i.e. the rendering instructions are used to obtain multiple frames of images that include effect textures.
Suppose the rendering instructions indicate that effect rendering starts from the N-th frame, and that the rendering instructions are used to obtain multiple frames that include effect textures. When the rendering instructions start executing, the game system can first obtain the N-th and (N+1)-th frames through render-pass rendering, and for the moment not obtain the (N+2)-th frame through render-pass rendering. In this case, executing step S2 can select the N-th and (N+1)-th frames as the two adjacent frames among the multiple frames and identify the similarity of their effect textures (i.e. effect similarity comparison). The effect textures of the two adjacent frames, i.e. of the N-th and (N+1)-th frames, can be obtained respectively in the course of obtaining those frames through render-pass rendering. Effect similarity comparison may be Y-diff detection, with the Y-diff result as the similarity value. For example, the bloom effect texture of the N-th frame is texture1 and that of the (N+1)-th frame is texture2; Y-diff detection yields the difference value of the Y components (i.e. grayscale values) of textures texture1 and texture2 as the similarity value.
After the similarity of the effect textures of the two adjacent frames, i.e. the N-th and (N+1)-th frames, is identified, step S3 can be executed. When the similarity meets the condition, for example when the similarity value is lower than the first threshold, the rendering instructions originally used to obtain the (N+2)-th frame's bloom effect texture and their related draw commands are skipped, and the effect texture of the later of the two adjacent frames, i.e. the (N+1)-th frame, is used as the effect texture of the following frame, i.e. the (N+2)-th frame (i.e. effect optimization). Otherwise, the (N+2)-th frame's effect texture is still obtained through render-pass rendering. The (N+2)-th frame's effect texture is sampled to the display buffer and overlay-drawn with the main scene texture sampled to the display buffer to complete the rendering of the (N+2)-th frame.
When the similarity does not meet the condition, the step of selecting two adjacent frames among the multiple frames and the subsequent steps are executed again, for example selecting the (N+1)-th and (N+2)-th frames as the newly selected two adjacent frames and performing effect similarity comparison and effect optimization on them (i.e. repeating steps S2 and S3).
After effect optimization is completed, the driver (corresponding to the driver 1004 in Fig. 4) is invoked to convert the rendering instructions into hardware GPU instructions, which are submitted to the GPU (corresponding to the graphics processor 1005 in Fig. 4), and the GPU completes the rendering task.
Figs. 7a and 7b show schematic diagrams of the render passes created to obtain the (N+2)-th frame image in prior art 1 and according to an embodiment of the present application, respectively.
As can be seen from Figs. 7a and 7b, compared with the solution of prior art 1, the special effect optimization method of the embodiments of this application can reduce the number of render passes (the reduced part being, for example, the render passes in the box of Fig. 7a). Therefore, in memory-heavy scenarios, the method can greatly reduce memory bandwidth consumption, and can also skip the submission of some frames' draw commands and of the rendering instructions for creating render passes to obtain effect textures, reducing processor load and achieving improved game performance and lower system-on-chip power consumption. According to tests, at high-definition image quality in some games, the benefit of the method in different game scenes is approximately a 3-5 frame increase in game frame rate and a 30-80 mA reduction in game power consumption.
Fig. 8 shows an exemplary structural diagram of a special effect optimization apparatus according to an embodiment of the present application.
As shown in Fig. 8, this application provides a special effect optimization apparatus, including:
a purpose identification module 101, configured to obtain rendering instructions and identify the purpose of the rendering instructions;
a texture identification module 102, configured to, when the rendering instructions are used to obtain multiple frames of images and the multiple frames include special effect textures, select two adjacent frames among the multiple frames and identify the similarity of the special effect textures of the two adjacent frames;
a texture reuse module 103, configured to, when it is determined that the similarity meets a condition, use the special effect texture of the later of the two adjacent frames as the special effect texture of the frame following the two adjacent frames.
In one possible implementation, obtaining rendering instructions and identifying their purpose includes: intercepting the rendering instructions from an application program; for each instruction among the rendering instructions, analyzing at least one of the instruction's parameters, its type, and the semantics of the instruction segment in which it is located; and determining the purpose of the rendering instructions according to the result of the analysis.
In one possible implementation, identifying the similarity of the effect textures of the two adjacent frame images includes: comparing the effect textures of the two adjacent frame images to determine their grayscale difference value; and determining a similarity value of the effect textures of the two adjacent frame images according to the grayscale difference value.
In one possible implementation, the condition includes: the similarity value of the effect textures of the two adjacent frame images is lower than a first threshold.
In one possible implementation, selecting two adjacent frame images among the multiple frame images includes: selecting the first two frames of the multiple frame images as the two adjacent frame images.
In one possible implementation, when it is determined that the similarity meets the condition, the apparatus further includes: a sampling module configured to sample the effect texture of the later of the two adjacent frame images to a display buffer; in the display buffer, the effect texture of the later of the two adjacent frame images and the main scene texture of the frame following the two adjacent frame images are used for rendering to obtain the frame following the two adjacent frame images.
In one possible implementation, when it is determined that the similarity meets the condition, the apparatus further includes: an instruction interception module configured to intercept, among the rendering instructions, the rendering instructions for creating render passes to obtain the effect texture of the frame following the two adjacent frame images, so that the intercepted instructions are not passed to the execution object of the rendering instructions.
In one possible implementation, when it is determined that the similarity does not meet the condition, the apparatus repeats the functions of the texture identification module and subsequent modules; when the texture identification module repeats the step of selecting two adjacent frame images, the later of the previously selected two adjacent frame images and the frame following them are used as the newly selected two adjacent frame images.
In one possible implementation, the special effect texture includes at least one of a bloom effect texture and a shadow effect texture.
Fig. 9 shows an exemplary structural diagram of a special effect optimization apparatus according to an embodiment of the present application.
As shown in Fig. 9, an embodiment of this application provides a special effect optimization apparatus, including: a processor and a memory for storing processor-executable instructions, wherein the processor is configured to implement the above method when executing the instructions.
The special effect optimization apparatus can be provided in an electronic device, which may include at least one of a mobile phone, a foldable electronic device, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, a smart speaker with a screen, an ultra-mobile personal computer (UMPC), a netbook, an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a drone, a vehicle-mounted device, a smart home device, or a smart city device. The embodiments of this application place no special limitation on the specific type of the special effect optimization apparatus.
The special effect optimization apparatus may include a processor 110, an internal memory 121, a communication module 160, and the like.
The processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The different processing units may be independent devices or may be integrated into one or more processors. For example, the processor 110 may perform the Y-diff detection of the embodiments of this application, so as to implement the special effect optimization method of the embodiments of this application.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 may be a cache. This memory can hold instructions or data that the processor 110 has just used or uses frequently, such as the rendering instructions and effect textures of the embodiments of this application. If the processor 110 needs to use such an instruction or data again, it can be called directly from this memory, avoiding repeated accesses and reducing the waiting time of the processor 110, thereby improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, a universal asynchronous receiver/transmitter (UART) interface, a general-purpose input/output (GPIO) interface, and the like. The processor 110 can connect to modules such as a wireless communication module, a display and a camera through at least one of the above interfaces.
The memory 121 can be used to store computer-executable program code, which includes instructions. The memory 121 may include a program storage area and a data storage area. The program storage area can store an operating system, an application program required by at least one function (such as an application program for identifying the purpose of rendering instructions), and the like. The data storage area can store data created during use of the special effect optimization apparatus (such as effect textures). In addition, the memory 121 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). The processor 110 executes the various functional methods or data processing of the special effect optimization apparatus by running the instructions stored in the memory 121 and/or the instructions stored in the memory provided in the processor.
The communication module 160 can be used to receive data from other apparatuses or devices (such as the application program 1001 in the embodiments of this application) through wired or wireless communication, or to send data to other apparatuses or devices (such as the graphics processor 1005 in the embodiments of this application). For example, it can provide wireless communication solutions applied to the special effect optimization apparatus, including WLAN (such as a Wi-Fi network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC) and infrared (IR). When the special effect optimization apparatus connects to other apparatuses or devices, the communication module 160 can also use a wired communication scheme.
It can be understood that the structure illustrated in the embodiments of this application does not constitute a specific limitation on the special effect optimization apparatus. In other embodiments of this application, the special effect optimization apparatus may include more or fewer components than shown, combine certain components, split certain components, or arrange the components differently. The illustrated components can be implemented in hardware, software, or a combination of software and hardware.
An embodiment of the present application provides a non-volatile computer-readable storage medium on which computer program instructions are stored, and the computer program instructions, when executed by a processor, implement the above method.
An embodiment of the present application provides a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code. When the computer-readable code runs in a processor of an electronic device, the processor in the electronic device performs the above method.
The computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction-executing device. The computer-readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the above. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random-access memory (SRAM), compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove on which instructions are stored, and any suitable combination of the above.
The computer-readable program instructions or code described here can be downloaded from the computer-readable storage medium to the respective computing/processing devices, or downloaded to an external computer or external storage device over a network, for example the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium within the respective computing/processing device.
The computer program instructions used to carry out the operations of the present application may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In scenarios involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), is personalized by using state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions to implement the various aspects of the present application.
Aspects of the present application are described here with reference to flowcharts and/or block diagrams of methods, apparatuses (systems), and computer program products according to embodiments of the present application. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium, where they cause a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, so that the computer-readable medium storing the instructions comprises an article of manufacture that includes instructions implementing aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable data processing apparatus, or other devices to produce a computer-implemented process, so that the instructions executed on the computer, other programmable data processing apparatus, or other devices implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the drawings illustrate the architecture, functionality, and operation of possible implementations of apparatuses, systems, methods, and computer program products according to multiple embodiments of the present application. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical function. In some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functionality involved.
It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by hardware that performs the corresponding function or act (such as a circuit or an application-specific integrated circuit (ASIC)), or by a combination of hardware and software, such as firmware.
Although the present application has been described here in connection with various embodiments, those skilled in the art can, in practicing the claimed application, understand and effect other variations of the disclosed embodiments by studying the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other components or steps, and "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
The embodiments of the present application have been described above. The foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or improvements over technologies in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (13)

  1. A special effect optimization method, characterized in that the method comprises:
    obtaining rendering commands and identifying the purpose of the rendering commands;
    when the rendering commands are used to obtain multiple frames of images and the multiple frames of images include special effect textures, selecting two adjacent frames from the multiple frames of images, and identifying the similarity of the special effect textures of the two adjacent frames;
    when the similarity is determined to satisfy a condition, using the special effect texture of the later of the two adjacent frames as the special effect texture of the frame following the two adjacent frames.
  2. The method according to claim 1, characterized in that obtaining rendering commands and identifying the purpose of the rendering commands comprises:
    intercepting the rendering commands from an application;
    for each command among the rendering commands, analyzing at least one of the command's parameters, its type, and the semantics of the command segment in which it appears;
    determining the purpose of the rendering commands from the result of the analysis.
  3. The method according to claim 1 or 2, characterized in that identifying the similarity of the special effect textures of the two adjacent frames comprises:
    comparing the special effect textures of the two adjacent frames to determine a grayscale difference value;
    determining a similarity value of the special effect textures of the two adjacent frames from the grayscale difference value.
  4. The method according to claim 3, characterized in that the condition comprises:
    the similarity value of the special effect textures of the two adjacent frames being below a first threshold.
  5. The method according to any one of claims 1-4, characterized in that selecting two adjacent frames from the multiple frames of images comprises:
    selecting the first two of the multiple frames of images as the two adjacent frames.
  6. The method according to any one of claims 1-5, characterized in that, when the similarity is determined to satisfy the condition, the method further comprises:
    sampling the special effect texture of the later of the two adjacent frames into a display buffer;
    in the display buffer, the special effect texture of the later of the two adjacent frames and the main-scene texture of the frame following the two adjacent frames being used for rendering to obtain the frame following the two adjacent frames.
  7. The method according to any one of claims 1-6, characterized in that, when the similarity is determined to satisfy the condition, the method further comprises:
    intercepting, among the rendering commands, the rendering commands used to create a render pass for obtaining the special effect texture of the frame following the two adjacent frames, so that the intercepted commands are not passed to the execution target of the rendering commands.
  8. The method according to any one of claims 1-7, characterized in that, when the similarity is determined not to satisfy the condition, the method further comprises:
    repeating the step of selecting two adjacent frames from the multiple frames of images and the subsequent steps;
    wherein, when the step of selecting two adjacent frames from the multiple frames of images is repeated, the later of the previously selected two adjacent frames and the frame following those two adjacent frames are taken as the newly selected two adjacent frames.
  9. The method according to any one of claims 1-8, characterized in that the special effect texture comprises at least one of a bloom effect texture and a shadow effect texture.
  10. A special effect optimization apparatus, characterized in that the apparatus comprises:
    a purpose identification module, configured to obtain rendering commands and identify the purpose of the rendering commands;
    a texture identification module, configured to: when the rendering commands are used to obtain multiple frames of images and the multiple frames of images include special effect textures, select two adjacent frames from the multiple frames of images, and identify the similarity of the special effect textures of the two adjacent frames;
    a texture reuse module, configured to: when the similarity is determined to satisfy a condition, use the special effect texture of the later of the two adjacent frames as the special effect texture of the frame following the two adjacent frames.
  11. A special effect optimization apparatus, characterized by comprising:
    a processor;
    a memory for storing processor-executable instructions;
    wherein the processor is configured to implement the method according to any one of claims 1-9 when executing the instructions.
  12. A non-volatile computer-readable storage medium on which computer program instructions are stored, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1-9.
  13. A computer program product, comprising computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code, characterized in that, when the computer-readable code runs in an electronic device, a processor in the electronic device performs the method according to any one of claims 1-9.
PCT/CN2023/071311 2022-01-11 2023-01-09 Special effect optimization method and apparatus, storage medium, and program product WO2023134625A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210028509.1 2022-01-11
CN202210028509.1A CN116459511A (zh) 2022-01-11 2022-01-11 Special effect optimization method and apparatus, storage medium, and program product

Publications (1)

Publication Number Publication Date
WO2023134625A1 true WO2023134625A1 (zh) 2023-07-20

Family ID: 87172200


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116761018A (zh) * 2023-08-18 2023-09-15 湖南马栏山视频先进技术研究院有限公司 A real-time rendering system based on a cloud platform

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140219499A1 (en) * 2009-02-18 2014-08-07 Lucasfilm Entertainment Company Ltd. Visual tracking framework
CN108648259A (zh) * 2018-03-27 2018-10-12 广东欧珀移动通信有限公司 图像绘制方法、装置、存储介质及智能终端
US20190213778A1 (en) * 2018-01-05 2019-07-11 Microsoft Technology Licensing, Llc Fusing, texturing, and rendering views of dynamic three-dimensional models
CN112184856A (zh) * 2020-09-30 2021-01-05 广州光锥元信息科技有限公司 支持多图层特效及动画混合的多媒体处理装置
CN113160244A (zh) * 2021-03-24 2021-07-23 北京达佳互联信息技术有限公司 视频处理方法、装置、电子设备及存储介质
CN113506298A (zh) * 2021-09-10 2021-10-15 北京市商汤科技开发有限公司 图像检测与渲染方法及装置、设备、存储介质


Also Published As

Publication number Publication date
CN116459511A (zh) 2023-07-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23739967

Country of ref document: EP

Kind code of ref document: A1