WO2023093792A1 - Image frame rendering method and related apparatus - Google Patents

Image frame rendering method and related apparatus

Info

Publication number
WO2023093792A1
Authority
WO
WIPO (PCT)
Prior art keywords
image frame
image
rendering
frame
similarity
Prior art date
Application number
PCT/CN2022/133959
Other languages
English (en)
French (fr)
Inventor
郑天季
杨程云
吴江铮
冯绍波
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2023093792A1

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering

Definitions

  • the present application relates to the field of computer technology, and in particular to a method for rendering an image frame and a related device.
  • the present application provides a method for rendering an image frame, which can reduce the rendering power consumption of an electronic device while ensuring the frame rate of the rendered image frame.
  • the first aspect of the present application provides a method for rendering an image frame, which is applied to an electronic device performing image frame rendering.
  • the method includes: acquiring a first rendering instruction, where the first rendering instruction is used to instruct rendering of a first target image frame.
  • the first rendering instruction may be intercepted by an instruction reorganization layer in the electronic device.
  • the application program initiates the first rendering instruction to instruct the hardware in the electronic device to perform a rendering operation;
  • the first rendering instruction is intercepted before the hardware performing the rendering operation.
  • the first image frame is the frame preceding the second image frame, and the second image frame is the frame preceding the first target image frame.
  • the first image frame and the second image frame are image frames that have already been rendered before the first rendering instruction is acquired.
  • the first image frame, the second image frame, and the first target image frame indicated by the first rendering instruction are three consecutive image frames.
  • the first target image frame is obtained according to the second image frame, wherein the content of the first target image frame is the same as the content of the second image frame. That is, the electronic device can copy the rendered second image frame and use the copy as the first target image frame, so that the second image frame and the first target image frame continuously displayed on the electronic device are two frames with the same content.
  • the latter image frame of the two image frames is multiplexed to replace the rendering of a new image frame, reducing the number of image frames the electronic device needs to render and thereby reducing the power consumption of the electronic device for rendering image frames.
  • the method of judging the similarity of the first two image frames is used to determine whether to reuse the latter image frame, which can ensure the continuity of the picture, so as not to affect the final rendering effect.
  • the first rendering instruction includes a first observation point position in the three-dimensional model to be rendered; that is, the first rendering instruction is used to instruct rendering the three-dimensional model based on the first observation point position to obtain the first target image frame.
  • the method further includes: acquiring a second observation point position corresponding to the second image frame, where the second image frame is obtained by rendering the three-dimensional model based on the second observation point position.
  • in a case where the distance between the first observation point position and the second observation point position is less than or equal to a second threshold, the electronic device further determines the similarity between the first image frame and the second image frame; when the distance between the first observation point position and the second observation point position is greater than the second threshold, the electronic device no longer determines the similarity between the first image frame and the second image frame, and instead executes the first rendering instruction.
  • the distance between the observation point positions corresponding to the two image frames is determined first to preliminarily judge the similarity between the two image frames, so as to avoid, where possible, determining the similarity between image frames by directly calculating it, reducing the overhead of the similarity calculation.
  • the method further includes: acquiring a second rendering instruction, where the second rendering instruction is used to instruct rendering of a second target image frame and includes a third observation point position in the three-dimensional model to be rendered; and acquiring, according to the second rendering instruction, a third image frame, a fourth image frame, and a fourth observation point position corresponding to the fourth image frame, where the third image frame is the frame preceding the fourth image frame, the fourth image frame is the frame preceding the second target image frame, and the fourth image frame is obtained by rendering the three-dimensional model based on the fourth observation point position.
  • based on the distance between the third observation point position and the fourth observation point position being greater than the second threshold, the second rendering instruction is executed to render the second target image frame; or, based on the distance between the third observation point position and the fourth observation point position being less than or equal to the second threshold and the similarity between the third image frame and the fourth image frame being smaller than the first threshold, the second rendering instruction is executed to render the second target image frame.
  • the similarity of the image frames is preliminarily judged by calculating the distance between the observation point positions of the image frames, and the similarity between the image frames is further calculated only when that distance meets the requirement, which can reduce the frequency of calculating image-frame similarity, thereby reducing the calculation overhead of the similarity and the power consumption of the electronic device.
  • the method further includes: dividing the first image frame and the second image frame respectively into a plurality of image blocks. A first image block and a second image block with a corresponding relationship can be divided into one group; six first image blocks and six second image blocks can thus be divided into six groups, and each group includes one first image block and one second image block.
  • the target similarity among the multiple groups of similarities is the similarity between the first image frame and the second image frame, the target similarity being the similarity with the smallest value among the multiple groups of similarities.
  • the image frame is divided into multiple image blocks, the similarity of each group of image blocks in the two image frames is calculated separately, and the similarity corresponding to the group of image blocks with the lowest similarity among the multiple groups is taken as the final similarity, so that changes of dynamic objects in the image frames are highlighted and the similarity of the two image frames can reflect small but important changes.
  • in this way, when a small but important region changes, the electronic device finally determines to execute the rendering instruction and renders a new image frame, which ensures the continuity of the picture.
  • the first threshold is determined based on a third threshold, and the third threshold is a preset fixed value; wherein, if a third target image frame is obtained by rendering, the first threshold is the same as the third threshold, the third target image frame is located before the first target image frame, and the rendering method of the third target image frame is determined based on the similarity between image frames; if the third target image frame is obtained by multiplexing image frames, the first threshold is the difference between the third threshold and a fourth threshold, and the fourth threshold is a preset fixed value.
  • the electronic device may determine a target duration, where the target duration is the difference between the display duration of two image frames and the duration of computing the first target image frame. Then, the electronic device stops running the rendering thread, wherein the duration for which the rendering thread stops running is the target duration, and the rendering thread is used to render image frames based on rendering instructions. That is to say, within the target duration after the first target image frame is obtained, the electronic device suspends the rendering thread and does not execute the rendering of image frames.
  • the electronic device may perform reduction processing on the first image frame and the second image frame to obtain a reduced first image frame and a reduced second image frame, and calculate the similarity between the reduced first image frame and the reduced second image frame to obtain the similarity between the first image frame and the second image frame.
  • the second aspect of the present application provides a rendering device, including: an acquisition unit configured to acquire a first rendering instruction, where the first rendering instruction is used to instruct rendering of a first target image frame; the acquisition unit is also configured to acquire a first image frame and a second image frame according to the first rendering instruction, where the first image frame is the previous frame of the second image frame and the second image frame is the previous frame of the first target image frame; and a processing unit configured to obtain the first target image frame according to the second image frame based on the similarity between the first image frame and the second image frame being greater than or equal to a first threshold, wherein the content of the first target image frame is the same as the content of the second image frame.
  • the first rendering instruction includes a first observation point position in the three-dimensional model to be rendered; the processing unit is further configured to: acquire a second observation point position corresponding to the second image frame, where the second image frame is obtained by rendering the three-dimensional model based on the second observation point position; and, based on the distance between the first observation point position and the second observation point position being less than or equal to a second threshold, determine the similarity between the first image frame and the second image frame.
  • the acquisition unit is further configured to acquire a second rendering instruction, where the second rendering instruction is used to instruct rendering of a second target image frame and includes a third observation point position in the three-dimensional model; the acquisition unit is further configured to acquire, according to the second rendering instruction, a third image frame, a fourth image frame, and a fourth observation point position corresponding to the fourth image frame, where the third image frame is the previous frame of the fourth image frame, the fourth image frame is the previous frame of the second target image frame, and the fourth image frame is obtained by rendering the three-dimensional model based on the fourth observation point position; the processing unit is further configured to: based on the distance between the third observation point position and the fourth observation point position being greater than the second threshold, execute the second rendering instruction to render the second target image frame; or, based on the distance between the third observation point position and the fourth observation point position being less than or equal to the second threshold and the similarity between the third image frame and the fourth image frame being smaller than the first threshold, execute the second rendering instruction to render the second target image frame.
  • the processing unit is specifically configured to: divide the first image frame and the second image frame respectively into a plurality of image blocks to obtain a plurality of first image blocks corresponding to the first image frame and a plurality of second image blocks corresponding to the second image frame, where the plurality of first image blocks are in one-to-one correspondence with the plurality of second image blocks; calculate the similarities between the first image blocks and the corresponding second image blocks to obtain multiple groups of similarities; and determine the target similarity among the multiple groups of similarities as the similarity between the first image frame and the second image frame, the target similarity being the similarity with the smallest value among the multiple groups of similarities.
  • the first threshold is determined based on a third threshold, and the third threshold is a preset fixed value; wherein, if a third target image frame is obtained by rendering, the first threshold is the same as the third threshold, the third target image frame is located before the first target image frame, and the rendering method of the third target image frame is determined based on the similarity between image frames; if the third target image frame is obtained by multiplexing image frames, the first threshold is the difference between the third threshold and a fourth threshold, and the fourth threshold is a preset fixed value.
  • the processing unit is further configured to: determine a target duration, where the target duration is the difference between the display duration of two image frames and the duration of computing the first target image frame; and stop running the rendering thread, where the duration for which the rendering thread stops running is the target duration, and the rendering thread is used to render image frames based on rendering instructions.
  • the processing unit is specifically configured to: perform reduction processing on the first image frame and the second image frame to obtain a reduced first image frame and a reduced second image frame; and calculate the similarity between the reduced first image frame and the reduced second image frame to obtain the similarity between the first image frame and the second image frame.
  • the third aspect of the present application provides an electronic device, which includes a memory and a processor; the memory stores code, the processor is configured to execute the code, and when the code is executed, the electronic device executes the method in any one of the implementations of the first aspect.
  • a fourth aspect of the present application provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is run on a computer, the computer executes the method according to any one of the implementation manners in the first aspect.
  • the fifth aspect of the present application provides a computer program product, which, when running on a computer, causes the computer to execute the method in any one of the implementation manners in the first aspect.
  • a sixth aspect of the present application provides a chip, including one or more processors. Some or all of the processors are configured to read and execute the computer program stored in the memory, so as to execute the method in any possible implementation of any of the above aspects.
  • the chip includes a memory, and the processor is connected to the memory through a circuit or wires.
  • the chip further includes a communication interface, and the processor is connected to the communication interface.
  • the communication interface is used to receive data and/or information to be processed, and the processor obtains the data and/or information from the communication interface, processes the data and/or information, and outputs the processing result through the communication interface.
  • the communication interface may be an input-output interface.
  • Fig. 1 is a flow chart of a frame multiplexing method in the related art
  • FIG. 2 is a schematic structural diagram of an electronic device 101 provided in an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of an image frame rendering method 300 provided in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an image frame similarity provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of division of an image block provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a system architecture provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a system component provided by an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of an image frame rendering method 800 provided by an embodiment of the present application.
  • FIG. 9 is a schematic flow chart of intercepting a graphics instruction data stream provided by an embodiment of the present application.
  • FIG. 10 is a schematic flow diagram of intercepting and caching rendered frames provided by an embodiment of the present application.
  • FIG. 11 is a schematic flow chart of calculating a camera position distance provided by an embodiment of the present application.
  • Fig. 12 is a schematic flow chart of calculating the similarity of image frames provided by the embodiment of the present application.
  • FIG. 13 is a schematic diagram of a SAT table provided by the embodiment of the present application.
  • FIG. 14 is a schematic flowchart of a similarity judgment provided in the embodiment of the present application.
  • FIG. 15 is a schematic flowchart of enabling frame multiplexing provided by an embodiment of the present application.
  • FIG. 16 is a schematic structural diagram of a device 1600 provided in an embodiment of the present application.
  • FIG. 17 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 18 is a schematic structural diagram of a computer-readable storage medium provided by an embodiment of the present application.
  • GPU Graphics processing unit
  • It is a kind of special hardware specially used to process graphics in the computer.
  • the advantage of the GPU is that it can process multiple similar tasks in parallel. For example, when rendering a picture, it will render multiple pixels at the same time, and the GPU can accelerate this process.
  • General-purpose computing pipeline (Compute Shader, CS): it uses the parallel computing characteristics of the GPU to let the GPU handle work other than rendering pixels, such as simulating the trajectories of all particles in a space, or dividing an image into hundreds of blocks and processing those hundreds of blocks at the same time.
  • the general-purpose computing pipeline can be executed during graphics rendering, supporting real-time computing.
  • Real-time rendering refers to rendering the required image through real-time computing. For example, the game screen you see is essentially the result of uninterrupted display of multiple consecutive rendered frames. Each rendered frame is obtained through complex calculations by computer processors and graphics processors. Generally speaking, rendering that takes less than 100ms per rendering frame is called real-time rendering, and rendering that takes more than 500ms per rendering frame is called non-real-time rendering, or offline rendering.
  • Rendered frame It is an image rendered by a graphics processor, also called an image frame. Continuous playback of multiple rendered frames can form a dynamic effect.
  • Frame rate In real-time rendering, the number of rendered frames per second is called frame rate, and the unit is frame per second. For example, "frame rate 60" means that 60 rendered frames are currently produced in one second. The higher the frame rate, the smoother the displayed effect.
  • Screen refresh rate In real-time rendering, the rendered frame generated by rendering is finally sent to the display buffer by the graphics processor, and the screen takes the rendered frame from the display buffer for display. Each time the screen refreshes, it fetches the latest rendered frame from the display buffer. The number of times the screen refreshes per second is called the screen refresh rate. The higher the screen refresh rate, the higher the frame rate supported by the screen. The final display effect depends on the minimum of the screen refresh rate and the frame rate.
  • the process of rendering is to simulate the process of camera recording in the real world.
  • the camera is a rendering term, and the camera in the rendering is similar to the camera in the real world, referring to the eyes viewing the scene.
  • a camera is an object that records scene information, and the rendered frame is formed by projecting the seen scene onto the camera through the observation of the camera.
  • Power Consumption A measure of the efficiency with which a computing process consumes power, in milliamperes (mA). On the premise that the battery power of the mobile terminal is constant, the higher the operating power consumption of the mobile terminal, the faster the power consumption and the shorter the use time.
  • Interpolation The action of inserting a new frame between two consecutive rendered frames.
  • the frame interpolation action can interpolate to generate the middle video frame according to the information of the front and rear frames; in the case of a game, the frame interpolation action can generate the image information of the third frame based on the image information of the first two frames .
  • Frame multiplexing The operation of continuously transmitting two identical rendered frames to the display buffer. This method generally reduces the frequency of rendering, thereby reducing the power consumption of rendering.
  • Motion Estimation and Motion Compensation (MEMC) It is a frame interpolation technology whose principle is that hardware quickly calculates the optical flow between two frames to generate a third frame.
  • SSIM Structural Similarity, an index that measures the similarity between two images.
  • Summed Area Table is an algorithm for quickly calculating the sum of all pixels in a block in an image, and is used to accelerate the calculation of SSIM in the embodiment of the present application.
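  • As an illustration of how a summed area table turns block pixel sums into constant-time lookups, consider the following NumPy sketch; the function names are ours, not the patent's, and the patent builds the table on the GPU rather than on the CPU as here:

```python
import numpy as np

def summed_area_table(img):
    # sat[i, j] holds the sum of img[:i, :j]; the extra zero row and
    # column remove boundary checks from box queries.
    sat = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    sat[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return sat

def box_sum(sat, top, left, bottom, right):
    # Sum of img[top:bottom, left:right] from four table lookups.
    return (sat[bottom, right] - sat[top, right]
            - sat[bottom, left] + sat[top, left])
```

  • With tables built over Y, Y squared, and the product of the two frames' Y channels, the per-window means, variances, and covariance that SSIM needs each reduce to a handful of lookups, which is why the table accelerates the SSIM calculation.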
  • Pixel Buffer Object A technique used to save rendered frames. In the embodiment of the present application, it is used to save two consecutive rendered frames, so as to facilitate calculation of the similarity between rendered frames.
  • Driver A type of program that enables the computer operating system and the underlying hardware to communicate with each other. Enables operating systems or developers to take advantage of hardware features.
  • Game engine It is a game production tool that fully integrates and utilizes the underlying drivers to allow game developers to quickly create games.
  • Convolution kernel In image processing, given an input image, the weighted average of pixels in a small area of the input image becomes each corresponding pixel in the output image, where the weight is defined by a function, which is called a convolution kernel.
  • Frequency limiting and frame limiting A mandatory means of limiting the operating frequency of the chip and the frame rate of the game in order to prevent the mobile phone from overheating during gaming.
  • Instruction reorganization layer It is a component between the application layer and the driver layer, which can intercept the driver instruction flow called by the application program and optimize it.
  • YUV color space It is a digital encoding method for color: Y stands for luminance and U and V stand for chrominance, and the three values together represent a color.
  • the popularity of mobile phones with high refresh rate screens has made game manufacturers increase the game frame rate to adapt to the high refresh rate.
  • although increasing the frame rate improves the fluency of the game, it also wastes many rendered frames and raises the power consumption of the mobile phone, which leads to serious heating and affects the battery life of the mobile phone.
  • the mobile phone chip and operating system will limit the frequency and frame rate of the mobile phone to reduce the power consumption of the chip and the phone. This frequency and frame limiting operation can halve the game frame rate or worse, thus affecting the player's game experience.
  • in order to reduce the power consumption of games on high refresh rate screens, game manufacturers usually limit the game frame rate in certain scenarios. For example, a certain 3D (3 Dimensions) game limits the game frame rate to 10 frames after the player has not operated for one minute.
  • mobile phone manufacturers use technical solutions such as frame interpolation to reduce the rendering frequency. For example, a certain model of mobile phone is equipped with a MEMC chip used to accelerate game frame interpolation; this frame interpolation technology can raise a rendering frame rate of 60 frames to a display frame rate of 120 frames.
  • however, frame interpolation schemes such as MEMC distort the interpolated image, which can make players feel dizzy, while scene-based frame rate limiting lacks versatility, and manufacturers have no way to identify all scenes that need to limit the frame rate.
  • FIG. 1 is a flow chart of a frame multiplexing method in the related art.
  • the scheme shown in Figure 1 is a scene-based frame multiplexing scheme.
  • the frame multiplexing scheme mainly provides a frame multiplexing method based on scene recognition on the game engine side. This method performs frame multiplexing by identifying low-change scenes with low frame rate requirements, with the goal of reducing the power consumption of mobile games.
  • the frame multiplexing method has two main modules, namely a scene recognition module and a frame multiplexing enabling module.
  • the scene recognition module extracts scene features that require enabling frame multiplexing, such as specific drawing instructions or long periods without user feedback, and judges whether to perform frame multiplexing based on these features.
  • when the scene recognition module confirms that frame multiplexing needs to be enabled, the frame multiplexing enabling module copies the picture information of the previous frame and sends it to the display buffer, and at the same time sends a message to the rendering module to suspend the rendering calculation of one or more frames, thus reducing the frame rate by half or more.
  • the scene recognition module in the related art requires manual intervention and enumerates features one by one; to achieve a better optimization effect, a large amount of labor cost is required for scene recognition and feature extraction. Moreover, this frame multiplexing method has poor versatility and scalability and cannot be used across games: when porting to a new game, the scene recognition module needs to be redone. In addition, the optimization effect of this frame multiplexing method is limited, because it is impossible to manually traverse and identify all scenes that can be frame multiplexed, so a large number of rendered frames are still wasted.
  • the embodiment of the present application provides a method for rendering an image frame: when the similarity between two adjacent rendered image frames is high, the latter of the two image frames is multiplexed to replace the rendering of a new image frame, reducing the number of image frames the electronic device needs to render and thereby reducing the power consumption of the electronic device for rendering image frames.
  • the image similarity has continuity, that is, the similarity of the first two image frames is very close to the similarity of the next two image frames, so in this scheme the method of judging the similarity of the first two image frames is used to determine whether to reuse the latter image frame, which can ensure the continuity of the picture, so as not to affect the final rendering effect.
  • the electronic device in the embodiment of the present application may be a smart phone (mobile phone), a personal computer (PC), a notebook computer, a tablet computer, a smart TV, a mobile internet device (MID), a wearable device (such as a smart watch, smart glasses, or a smart helmet), a virtual reality (VR) device, an augmented reality (AR) device, a wireless electronic device in industrial control, a wireless electronic device in self driving, a wireless electronic device in remote medical surgery, a wireless electronic device in smart grid, a wireless electronic device in transportation safety, a wireless electronic device in a smart city, a wireless electronic device in a smart home, and the like.
  • the following embodiments do not specifically limit the specific form of the electronic device.
  • FIG. 2 is a schematic structural diagram of an electronic device 101 provided in an embodiment of the present application.
  • the electronic device 101 includes a processor 103 , and the processor 103 is coupled to a system bus 105 .
  • the processor 103 may be one or more processors, each of which may include one or more processor cores.
  • the electronic device 101 further includes a display adapter (video adapter) 107, which can drive a display 109, and the display 109 is coupled to the system bus 105.
  • the system bus 105 is coupled to an input-output (I/O) bus through a bus bridge 111 .
  • the I/O interface 115 is coupled to the I/O bus.
  • the I/O interface 115 communicates with various I/O devices, such as an input device 117 (such as a touch screen), an external memory 121 (such as a hard disk, a floppy disk, a CD, or a flash drive), a multimedia interface, a transceiver 123 (which can send and/or receive radio communication signals), a camera 155 (which can capture still and moving digital video images), and an external USB port 125.
  • the interface connected to the I/O interface 115 may be a USB interface.
  • the processor 103 may be any conventional processor, including a reduced instruction set computing (reduced instruction set computing, RISC) processor, a complex instruction set computing (complex instruction set computing, CISC) processor or a combination of the above.
  • the processor may be a special purpose device such as an ASIC.
  • the electronic device 101 can communicate with the software deployment server 149 through the network interface 129 .
  • the network interface 129 is a hardware network interface, such as a network card.
  • the network 127 may be an external network, such as the Internet, or an internal network, such as Ethernet or a virtual private network (virtual private network, VPN).
  • the network 127 may also be a wireless network, such as a WiFi network, a cellular network, and the like.
  • Hard disk drive interface 131 is coupled to system bus 105 .
  • the hard disk drive interface 131 is connected to the hard disk drive 133.
  • Internal memory 135 is coupled to system bus 105 .
  • Data running on the internal memory 135 may include an operating system (OS) 137 of the electronic device 101 , application programs 143 and a scheduler.
  • the processor 103 can communicate with the internal memory 135 through the system bus 105, and fetches instructions and data in the application program 143 from the internal memory 135, thereby implementing program execution.
  • the operating system includes a Shell 139 and a kernel (kernel) 141.
  • Shell 139 is an interface between the user and the kernel of the operating system.
  • Shell 139 is the outermost layer of the operating system.
  • Shell 139 manages the interaction between the user and the operating system: waiting for user input, interpreting user input to the operating system, and processing various operating system output.
  • Kernel 141 consists of those parts of the operating system that manage memory, files, peripherals, and system resources.
  • the kernel 141 directly interacts with the hardware.
  • the operating system kernel usually runs processes, provides inter-process communication, and provides CPU time slice management, interrupt, memory management, IO management, and the like.
  • the application program 143 includes programs related to instant messaging.
  • the electronic device 101 can download the application program 143 from the software deployment server 149 .
  • FIG. 3 is a schematic flowchart of an image frame rendering method 300 provided in an embodiment of the present application. As shown in Fig. 3, the method 300 includes the following steps 301-303.
  • Step 301 acquire a first rendering instruction, where the first rendering instruction is used to instruct rendering of a first target image frame.
  • the electronic device may obtain a first rendering instruction initiated by the application program, where the first rendering instruction is used to instruct rendering of a first target image frame.
  • the application program may be, for example, a game application program, a navigation application program, an industrial application program, or a medical application program that needs to render a 3D model to obtain an image frame.
  • the first target image frame to be rendered instructed by the first rendering instruction is used for displaying on the display screen of the electronic device, so as to form a continuous picture together with other image frames.
  • the first rendering instruction may be intercepted by an instruction reorganization layer in the electronic device.
  • the application program initiates the first rendering instruction to instruct the hardware in the electronic device to perform a rendering operation;
  • the first rendering instruction is intercepted before the hardware performing the rendering operation.
  • Step 302: obtain a first image frame and a second image frame according to the first rendering instruction, where the first image frame is the previous frame of the second image frame, and the second image frame is the previous frame of the first target image frame.
  • the first image frame and the second image frame are image frames that have already been rendered before the first rendering instruction is acquired.
  • the first image frame, the second image frame, and the first target image frame indicated by the first rendering instruction are three consecutive image frames; that is, the first image frame and the second image frame are the two frames preceding the first target image frame.
  • for example, if the first image frame is the 5th image frame during the running of a game application, then the second image frame is the 6th image frame, and the first target image frame is the 7th image frame.
  • the first image frame and the second image frame are rendered image frames, while the first target image frame is an image frame yet to be rendered.
  • an image frame buffer may be preset in the electronic device, and the buffer is used to save rendered image frames.
  • after the electronic device acquires the first rendering instruction for instructing to render the first target image frame, it can acquire, from the buffer according to the first rendering instruction, the two image frames preceding the first target image frame, that is, the first image frame and the second image frame.
  • the electronic device may be provided with two buffers, and each buffer is used to store one image frame.
  • the two buffers can store the latest two image frames rendered by the electronic device.
  • the electronic device can determine the buffer that stores the earlier image frame among the two buffers, and save the new image frame in the determined buffer, Thus, the two buffers can store the latest two image frames.
  • for example, after the electronic device renders the Nth image frame, it saves the Nth image frame to buffer A; after rendering the (N+1)th image frame, it saves the (N+1)th image frame to buffer B. Then, after the (N+2)th image frame is rendered, since the Nth image frame in buffer A is earlier than the (N+1)th image frame in buffer B, the electronic device saves the (N+2)th image frame to buffer A to replace the original Nth image frame. Similarly, after rendering the (N+3)th image frame, since the (N+1)th image frame in buffer B is earlier than the (N+2)th image frame in buffer A, the electronic device saves the (N+3)th image frame to buffer B to replace the original (N+1)th image frame.
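  • A minimal sketch of this two-buffer alternation; the class and method names are illustrative, not from the patent:

```python
class FrameHistory:
    """Buffers A and B always hold the two most recently rendered frames."""

    def __init__(self):
        self.buffers = [None, None]  # slot 0 = buffer A, slot 1 = buffer B
        self.oldest = 0              # slot holding the earlier of the two frames

    def push(self, frame):
        # Overwrite whichever buffer holds the earlier frame.
        self.buffers[self.oldest] = frame
        self.oldest = 1 - self.oldest

    def latest_two(self):
        # (previous frame, newest frame); None until two frames are cached.
        older = self.buffers[self.oldest]
        newer = self.buffers[1 - self.oldest]
        return None if older is None else (older, newer)
```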
  • Step 303 based on the similarity between the first image frame and the second image frame being greater than or equal to a first threshold, the first target image frame is obtained according to the second image frame, wherein the first The content of the target image frame is the same as the content of the second image frame.
  • the electronic device may calculate a similarity between the first image frame and the second image frame.
  • in a case where the similarity is greater than or equal to the first threshold, the electronic device may directly obtain the first target image frame based on the second image frame, that is, perform frame multiplexing on the second image frame, so that the content of the first target image frame is the same as that of the second image frame.
  • the electronic device can copy the rendered second image frame and use the copied second image frame as the first target image frame, so that the second image frame and the first target image frame continuously displayed on the electronic device are two frames with the same content. In this way, the electronic device no longer executes the first rendering instruction, that is, it avoids rendering the first target image frame.
  • in a case where the similarity between the first image frame and the second image frame is smaller than the first threshold, the electronic device may execute the first rendering instruction, so as to render the first target image frame.
  • the first threshold may be a threshold determined according to actual conditions, for example, the first threshold may be 0.99 or 0.98.
  • when the requirement on picture quality is higher, the value of the first threshold can be a relatively high value; when the requirement on reducing power consumption is higher, the value can be a lower value.
  • in short, if the similarity between the previous two image frames is high, the electronic device can use the latter of the two image frames as the new image frame, that is, multiplex the latter image frame of the previous two; if the similarity between the previous two image frames is not high, the electronic device executes the rendering instruction to render a new image frame.
  • FIG. 4 is a schematic diagram of image frame similarity provided by an embodiment of the present application.
  • Figure 4 shows the continuity of the similarity of image frames in three game applications.
  • the abscissa in Fig. 4 represents the frame number of the image frame
  • the ordinate in FIG. 4 represents the similarity between two adjacent image frames, so the curve in FIG. 4 continuously represents the similarity of adjacent image frames.
  • the curves in FIG. 4 sequentially represent the similarity between the 1st image frame and the 2nd image frame, the similarity between the 2nd image frame and the 3rd image frame, ..., the similarity between the Nth image frame and the (N+1)th image frame, and so on.
  • the similarity of images is continuous, that is, the similarity of the first two image frames is very close to the similarity of the next two image frames, and there are no abrupt changes. Therefore, when the similarity of the first two image frames is high, it can be considered that the similarity of the next two image frames is usually also high.
  • when the similarity between the first two image frames is high, the latter of the first two image frames is multiplexed instead of rendering a new image frame. Since the similarity of the latter two image frames is also relatively high, the similarity between the image frame obtained by multiplexing and the image frame that would actually have been rendered is also relatively high, and there is no sudden change in the picture.
  • whether to multiplex the latter image frame is determined by judging the similarity of the first two image frames, which can ensure the continuity of the picture, so as not to affect the final rendering effect.
  • multiplexing the latter image frame of the two image frames instead of rendering a new image frame can reduce the number of image frames the electronic device needs to render, thereby reducing the power consumption of the electronic device for rendering image frames.
  • the electronic device determines to obtain the first target image frame by multiplexing the second image frame, the electronic device needs to stop executing the above first rendering instruction, so as to reduce power consumption caused by rendering the image frame.
  • the first threshold may be determined based on a third threshold, and the third threshold is a preset fixed value, for example, the third threshold may be 0.99.
  • if the third target image frame is obtained by rendering, the first threshold is the same as the third threshold; the third target image frame is located before the first target image frame, and the rendering method of the third target image frame is determined based on the similarity between image frames.
  • the third target image frame is the image frame, located before the first target image frame, whose rendering method needs to be determined by judging the similarity between image frames.
  • the rendering method refers to either obtaining the image frame by executing the rendering instruction or obtaining the image frame through frame multiplexing.
  • if the third target image frame is obtained by multiplexing image frames, the first threshold is the difference between the third threshold and a fourth threshold,
  • the fourth threshold is a preset fixed value.
  • the fourth threshold may be, for example, 0.005.
  • for example, the electronic device can judge the rendering method of the 3rd image frame based on the 1st and 2nd image frames, and judge the rendering method of the 6th image frame based on the 4th and 5th image frames. If the 3rd image frame is obtained by rendering, the first threshold corresponding to the 6th image frame may be the same as the third threshold above; if the 3rd image frame is obtained by multiplexing the 2nd image frame, the first threshold corresponding to the 6th image frame may be the difference between the third threshold and the fourth threshold.
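  • The threshold adjustment just described can be sketched as follows, assuming the example values 0.99 and 0.005 given above for the third and fourth thresholds; the names are illustrative:

```python
THIRD_THRESHOLD = 0.99    # preset fixed value (example value from the text)
FOURTH_THRESHOLD = 0.005  # preset fixed value (example value from the text)

def current_first_threshold(last_judged_frame_was_multiplexed: bool) -> float:
    # If the previous similarity-judged frame was obtained by rendering,
    # keep the third threshold; if it was obtained by multiplexing, use
    # the third threshold minus the fourth threshold.
    if last_judged_frame_was_multiplexed:
        return THIRD_THRESHOLD - FOURTH_THRESHOLD
    return THIRD_THRESHOLD
```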
  • the electronic device may calculate the similarity once every multiple image frames, or multiplex multiple image frames after calculating the similarity once, and the frequency of similarity calculation may be determined according to the category of the game application. For example, the electronic device can calculate the similarity between the first image frame and the second image frame to determine the rendering method of the third image frame; then, the electronic device renders the fourth image frame and the fifth image frame, and calculate the similarity between the two image frames to determine the rendering method of the sixth image frame, and so on, the electronic device calculates the similarity every two image frames.
  • for example, the electronic device may calculate the similarity between the 1st image frame and the 2nd image frame to determine the rendering method of the 3rd, 4th, and 5th image frames; the electronic device can then proceed to calculate the similarity between the 6th image frame and the 7th image frame to determine the rendering method of the 8th, 9th, and 10th image frames, and so on. In this way, the electronic device determines the rendering method of three image frames each time the similarity is calculated.
  • the electronic device may determine a target duration, where the target duration is the difference between the display duration of two image frames and the duration of computing the first target image frame. For example, in a case where the current frame rate of the electronic device is 60, that is, 60 image frames are displayed in one second, the display duration of two image frames is 2/60 second; the computing duration of the first target image frame is the duration of executing the above steps 301-303.
  • the electronic device may obtain the target duration by making a difference between the display duration of the two image frames and the calculated duration of the first target image frame.
  • the electronic device then stops running the rendering thread, where the duration for which the rendering thread stops running is the target duration, and the rendering thread is used to render image frames based on rendering instructions. That is to say, within the target duration after the first target image frame is obtained, the electronic device suspends the rendering thread and does not execute the rendering of image frames.
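  • A minimal sketch of this pause, assuming a frame rate in frames per second and a measured computation time in seconds; the function name is illustrative:

```python
import time

def pause_render_thread(frame_rate: float, compute_seconds: float) -> None:
    # Target duration = display time of two frames (e.g. 2/60 s at 60 FPS)
    # minus the time already spent producing the multiplexed frame.
    target = 2.0 / frame_rate - compute_seconds
    if target > 0:
        time.sleep(target)  # the rendering thread idles for the target duration
```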
  • the rendering process of an image frame is actually to observe a three-dimensional 3D model from a certain position and display the observed scene in the form of a 2D image, thereby obtaining the image frame.
  • the rendering of a 3D model can be understood as a 2D image obtained by shooting a 3D model at a certain position with a camera. Therefore, after the position of the observation point in the 3D model is determined, the object to be rendered in the 3D model can be determined, so that the content in the 2D image can be determined.
  • the content in the rendered image frame is determined by the position of the observation point in the 3D model.
  • the contents of two image frames rendered based on two closer observation point positions have a high probability of being similar; the contents of two image frames rendered based on two farther observation point positions have a high probability of being dissimilar.
  • the distance between the observation point positions corresponding to the two image frames can be judged first to preliminarily determine the similarity between the two image frames, thereby avoiding, as much as possible, determining the similarity between image frames by directly calculating it, and reducing the overhead of calculating similarity.
  • the first rendering instruction may include a first observation point position in the 3D model to be rendered; that is, the first rendering instruction is used to instruct rendering the 3D model based on the first observation point position to obtain the first target image frame.
  • the electronic device can acquire the second observation point position corresponding to the second image frame, where the second image frame is obtained by rendering the 3D model based on the second observation point position.
  • the electronic device may determine the distance between the first observation point position and the second observation point position, and judge the magnitude relationship between that distance and the second threshold. In a case where the distance is less than or equal to the second threshold, the electronic device further determines the similarity between the first image frame and the second image frame; when the distance is greater than the second threshold, the electronic device no longer determines the similarity between the first image frame and the second image frame, and instead executes the first rendering instruction.
  • the second threshold may be a threshold determined according to actual conditions, for example, the second threshold may be 0.3 meters or 0.4 meters.
  • when the requirement on picture quality is higher, the value of the second threshold can be a lower value; when the requirement on reducing power consumption is higher, the value can be a higher value.
  • the distance between the first observation point position and the second observation point position can be calculated by the following formula 1:
  • Distance = Sqrt((X1 - X2)^2 + (Y1 - Y2)^2 + (Z1 - Z2)^2)    (Formula 1)
  • where Distance represents the distance between the first observation point position and the second observation point position, Sqrt represents the square root, (X1, Y1, Z1) are the coordinates of the first observation point position, and (X2, Y2, Z2) are the coordinates of the second observation point position.
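  • A direct transcription of formula 1, with positions given as (X, Y, Z) tuples; the function name is illustrative:

```python
import math

def observation_point_distance(p1, p2):
    # Formula 1: Distance = Sqrt((X1-X2)^2 + (Y1-Y2)^2 + (Z1-Z2)^2).
    return math.sqrt((p1[0] - p2[0]) ** 2
                     + (p1[1] - p2[1]) ** 2
                     + (p1[2] - p2[2]) ** 2)
```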
  • the electronic device can first determine the distance between the observation point position of the image frame to be rendered and the observation point position of the previous image frame, so as to preliminarily judge the similarity between the two image frames. When the observation point positions of the two image frames are far apart, it can be considered that the two image frames are likely not similar, so the rendering instruction is executed to obtain the image frame to be rendered; when the observation point positions of the two image frames are relatively close, it can be considered that the two image frames are likely similar, so the electronic device continues to calculate the similarity between the first two image frames of the image frame to be rendered, to determine whether the image frame to be rendered is similar to its previous image frame.
  • the above method 300 may further include the following steps.
  • the electronic device may acquire a second rendering instruction, where the second rendering instruction is used to instruct rendering of a second target image frame, where the second rendering instruction includes a third viewpoint position in the three-dimensional model to be rendered.
  • the electronic device obtains the third image frame, the fourth image frame, and the fourth observation point position corresponding to the fourth image frame according to the second rendering instruction, where the third image frame is the previous frame of the fourth image frame, the fourth image frame is the previous frame of the second target image frame, and the fourth image frame is obtained by rendering the 3D model based on the fourth observation point position.
  • the electronic device may determine whether the distance between the third observation point position and the fourth observation point position is greater than the second threshold, and, based on the distance between the third observation point position and the fourth observation point position being greater than the second threshold, execute the second rendering instruction to render the second target image frame.
  • the electronic device may trigger calculation of the similarity between the third image frame and the fourth image frame based on the distance between the third observation point position and the fourth observation point position being less than or equal to the second threshold . Furthermore, based on the similarity between the third image frame and the fourth image frame being less than the first threshold, the electronic device executes the second rendering instruction to render the second target image frame.
  • the similarity of the image frames is preliminarily judged by calculating the distance between the observation point positions of the image frames, and the similarity between the image frames is further calculated only when that distance meets the requirement. Therefore, the frequency of calculating the similarity of image frames can be reduced, thereby reducing the calculation overhead of the similarity and the power consumption of the electronic device.
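  • The two-stage decision can be sketched as follows, reusing observation_point_distance from the sketch above; should_multiplex and similarity_fn are illustrative names, and the 0.3 m value is just the example mentioned earlier for the second threshold:

```python
SECOND_THRESHOLD = 0.3  # metres; example value from the text

def should_multiplex(prev2_frame, prev_frame, prev_pos, new_pos,
                     first_threshold, similarity_fn):
    """Two-stage test: a cheap camera-distance check first, and the full
    frame-similarity computation only when the viewpoints are close."""
    if observation_point_distance(prev_pos, new_pos) > SECOND_THRESHOLD:
        return False  # viewpoints far apart: execute the rendering instruction
    # Viewpoints close: decide by the similarity of the two cached frames.
    return similarity_fn(prev2_frame, prev_frame) >= first_threshold
```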
  • the electronic device may perform reduction processing on the two image frames whose similarity is to be calculated, and then calculate the similarity between the two reduced image frames, so as to increase the speed of calculating the similarity and save the power consumption of computing the similarity.
  • the electronic device may perform reduction processing on the first image frame and the second image frame to obtain a reduced first image frame and a reduced second image frame.
  • the electronic device may reduce the length and width of the first image frame and the second image frame to 1/9 of the original, that is, reduce the area of the first image frame and the second image frame to 1/81 of the original.
  • the reduction ratio of the first image frame and the second image frame can be determined according to the actual situation; when the requirement on reducing the power consumption of the electronic device is higher, the reduction ratio of the first image frame and the second image frame can take a higher value.
  • the electronic device then calculates the similarity between the reduced first image frame and the reduced second image frame, and uses the calculated similarity as the similarity between the first image frame and the second image frame.
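  • A minimal sketch of the reduction step, using plain subsampling as a stand-in for whatever scaling the device actually performs:

```python
def shrink(frame, factor=9):
    # Reduce length and width to 1/factor (area to 1/factor**2) by plain
    # subsampling; a real implementation would filter, likely on the GPU.
    return frame[::factor, ::factor]
```

  • The similarity is then computed on shrink(frame_a) and shrink(frame_b) instead of the full-size frames.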
  • the above method 300 further includes the following multiple steps of calculating the similarity between the first image frame and the second image frame.
  • the first image frame and the second image frame are respectively divided into a plurality of image blocks to obtain a plurality of first image blocks corresponding to the first image frame and a plurality of image blocks corresponding to the second image frame A second image block, wherein the plurality of first image blocks has a one-to-one correspondence with the plurality of second image blocks.
  • the electronic device may perform image block division on the first image frame and the second image frame, or may perform image block division on the first image frame and the second image frame that have been reduced. , which is not specifically limited in this embodiment.
  • image block division may be performed on the first image frame and the second image frame based on the same division manner.
  • the electronic device may divide the first image frame into 6 first image blocks, where the length of each image block is 1/3 of the length of the first image frame and the width of each image block is 1/2 of the width of the first image frame; that is, the length of the first image frame is divided into 3 parts and the width into 2 parts.
  • the electronic device may divide the second image frame in the same manner to obtain six second image blocks corresponding to the second image frame.
  • the plurality of first image blocks corresponding to the first image frame are in one-to-one correspondence with the plurality of second image blocks corresponding to the second image frame; that is, any image block in the plurality of first image blocks corresponds to the second image block located at the same position in the second image frame.
  • FIG. 5 is a schematic diagram of division of an image block provided by an embodiment of the present application.
  • As shown in FIG. 5, the first image frame is divided into six image blocks: image block A1 in the upper-left position, image block A2 in the upper-middle position, image block A3 in the upper-right position, image block A4 in the lower-left position, image block A5 in the lower-middle position, and image block A6 in the lower-right position. The second image frame is likewise divided into six image blocks: image block B1 in the upper-left position, image block B2 in the upper-middle position, image block B3 in the upper-right position, image block B4 in the lower-left position, image block B5 in the lower-middle position, and image block B6 in the lower-right position.
  • Image block A1 corresponds to image block B1, image block A2 corresponds to image block B2, image block A3 corresponds to image block B3, image block A4 corresponds to image block B4, image block A5 corresponds to image block B5, and image block A6 corresponds to image block B6.
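  • A sketch of this 2x3 division into corresponding block pairs (assuming the two frames are numpy arrays of the same size; the helper names are illustrative):

```python
import numpy as np

def split_into_blocks(frame: np.ndarray, rows: int = 2, cols: int = 3) -> list:
    """Divide a frame into rows*cols image blocks, ordered A1..A6 (row-major)."""
    h, w = frame.shape[0], frame.shape[1]
    return [frame[r * h // rows:(r + 1) * h // rows,
                  c * w // cols:(c + 1) * w // cols]
            for r in range(rows) for c in range(cols)]

def corresponding_pairs(frame1: np.ndarray, frame2: np.ndarray) -> list:
    """Pair each first-frame block with the second-frame block at the same
    position: (A1, B1), (A2, B2), ..., (A6, B6)."""
    return list(zip(split_into_blocks(frame1), split_into_blocks(frame2)))
```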
  • The similarities between the plurality of first image blocks and the corresponding image blocks among the plurality of second image blocks are calculated respectively, to obtain multiple groups of similarities.
  • A first image block and its corresponding second image block can be divided into one group; thus the six first image blocks and six second image blocks can be divided into six groups, each group including one first image block and one second image block.
  • For each group, the SSIM values of the two image blocks can be calculated in parallel on the GPU using 7x7 convolution kernels: each convolution kernel outputs an SSIM value, the SSIM values output by all convolution kernels are then averaged, and one averaged SSIM value is output.
  • a total of six averaged SSIM values can be output, that is, six groups of similarities can be obtained.
  • The calculation of the similarity of each group of image blocks can be expressed by Formulas 2-6 below (reconstructed here from the symbol definitions; the per-kernel SSIM takes the standard form, with small stabilizing constants $c_1$ and $c_2$):

$$\mu_{xjk} = \frac{1}{N_{jk}} \sum_{i=1}^{N_{jk}} x_{ijk} \qquad \text{(Formula 2)}$$

$$\sigma_{xjk} = \sqrt{\frac{1}{N_{jk}} \sum_{i=1}^{N_{jk}} \left(x_{ijk} - \mu_{xjk}\right)^2} \qquad \text{(Formula 3)}$$

$$\sigma_{xyjk} = \frac{1}{N_{jk}} \sum_{i=1}^{N_{jk}} \left(x_{ijk} - \mu_{xjk}\right)\left(y_{ijk} - \mu_{yjk}\right) \qquad \text{(Formula 4)}$$

$$SSIM_{jk} = \frac{\left(2\mu_{xjk}\mu_{yjk} + c_1\right)\left(2\sigma_{xyjk} + c_2\right)}{\left(\mu_{xjk}^2 + \mu_{yjk}^2 + c_1\right)\left(\sigma_{xjk}^2 + \sigma_{yjk}^2 + c_2\right)} \qquad \text{(Formula 5)}$$

$$SSIM_j = \frac{1}{M_j} \sum_{k=1}^{M_j} SSIM_{jk} \qquad \text{(Formula 6)}$$

  • Here X represents the first image frame and Y represents the second image frame; $x_{ijk}$ represents the YUV-space Y value of the i-th pixel in the k-th convolution kernel of the j-th image block of the first image frame; $y_{ijk}$ represents the YUV-space Y value of the i-th pixel in the k-th convolution kernel of the j-th image block of the second image frame; $N_{jk}$ represents the number of pixels in the k-th convolution kernel of the j-th image block (for a 7x7 convolution kernel, $N_{jk}$ equals 49); $\mu_{xjk}$ and $\sigma_{xjk}$ represent the mean and the standard deviation of the pixel YUV-space Y values in the k-th convolution kernel of the j-th image block of the first image frame; $\mu_{yjk}$ and $\sigma_{yjk}$ represent the mean and the standard deviation of the pixel YUV-space Y values in the k-th convolution kernel of the j-th image block of the second image frame; $\sigma_{xyjk}$ represents the covariance of the pixel YUV-space Y values between the k-th convolution kernels of the j-th image blocks of the first and second image frames; $M_j$ represents the number of convolution kernels used by the j-th image block, which depends on the size of the image block and the size of the convolution kernel; and $SSIM_{jk}$ represents the SSIM value of the k-th convolution kernel of the j-th image blocks of the first and second image frames.
  • The target similarity among the multiple groups of similarities is determined as the similarity between the first image frame and the second image frame, the target similarity being the smallest similarity among the multiple groups of similarities.
  • That is, after the multiple groups of similarities corresponding to the multiple groups of image blocks are obtained, the group with the smallest similarity value (the target similarity) is determined, and this similarity is taken as the similarity between the first image frame and the second image frame.
  • The process of determining the target similarity can be expressed by Formula 7 below:

$$SSIM = \min\left(SSIM_1, SSIM_2, SSIM_3, SSIM_4, SSIM_5, SSIM_6\right) \qquad \text{(Formula 7)}$$

  • Here $SSIM_j$ represents the average of the SSIM values of all convolution kernels of the j-th image blocks of the first and second image frames, which is also the SSIM value of the j-th image block, and $SSIM$ represents the SSIM value between the first image frame and the second image frame.
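  • The per-block SSIM computation and the final minimum of Formula 7 can be sketched as follows (a simplified CPU version of what the embodiment runs in parallel on the GPU; the stabilizing constants and the non-overlapping 7x7 window stride are assumptions, as the application does not fix them):

```python
import numpy as np

C1, C2 = (0.01 * 255) ** 2, (0.03 * 255) ** 2       # standard SSIM stabilizers (assumed)

def block_ssim(x: np.ndarray, y: np.ndarray, k: int = 7) -> float:
    """SSIM_j: the average SSIM over the 7x7 convolution kernels of one pair of
    corresponding Y-channel image blocks (Formulas 2-6)."""
    vals = []
    for r in range(0, x.shape[0] - k + 1, k):       # non-overlapping windows (assumed)
        for c in range(0, x.shape[1] - k + 1, k):
            xw, yw = x[r:r + k, c:c + k], y[r:r + k, c:c + k]
            mx, my = xw.mean(), yw.mean()           # Formula 2 (and its y counterpart)
            vx, vy = xw.var(), yw.var()             # squared Formula 3
            cov = ((xw - mx) * (yw - my)).mean()    # Formula 4
            vals.append(((2 * mx * my + C1) * (2 * cov + C2)) /
                        ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2)))  # Formula 5
    return float(np.mean(vals))                     # Formula 6

def frame_ssim(pairs) -> float:
    """Formula 7: the frame-level similarity is the minimum block SSIM."""
    return min(block_ssim(xb, yb) for xb, yb in pairs)
```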
  • It can be understood that, in some rendering scenarios, even though the observation point position corresponding to the first image frame is the same as that corresponding to the second image frame, the movement of a dynamic object in the 3D model can still make the picture content of the first image frame and the second image frame different.
  • For example, where the picture content of the first image frame is a nearby building and the distant sky, a bird in the 3D model may move, so that the picture content of the second image frame becomes the nearby building, the distant sky, and a bird in the sky; that is, a bird is added to the picture content of the second image frame.
  • Generally, when the bird occupies only a small area of the second image frame, calculating the similarity between the entire first image frame and the entire second image frame will yield a high similarity value; that is, the first image frame and the second image frame are still very similar as a whole.
  • However, an image frame in which a dynamically changing new object appears (i.e., the second image frame) is often a relatively important image frame, and usually needs to be reflected in the continuously displayed pictures. Therefore, in this case, if the similarity between the first image frame and the second image frame is calculated directly, the final calculated similarity may be high and the second image frame may be reused, so that the dynamic change of the object, for example the process of a bird flying across the sky, cannot be reflected on the display screen.
  • For this reason, in this solution the image frame is divided into multiple image blocks, the similarity of each group of image blocks in the two image frames is calculated separately, and the similarity of the group with the lowest similarity among the multiple groups is taken as the final similarity. In this way the changes of dynamic objects in the image frames are highlighted, so that the similarity of the two image frames can reflect small but important changes in the image frames.
  • As a result, the electronic device ultimately decides to execute the rendering instruction and renders a new image frame, which ensures the continuity of the picture.
  • FIG. 6 is a schematic diagram of a system architecture provided by an embodiment of the present application.
  • Multiple applications such as game application 1, game application 2, and game application 3 can run in the application layer; the graphics application programming interface (Application Programming Interface, API) layer runs graphics drivers capable of drawing, such as OpenGL and Vulkan; the instruction reorganization layer (i.e., the layer implementing the present invention) is used to execute the above image frame rendering method;
  • the operating system (OS) kernel layer includes a system kernel and related drivers for driving the hardware chips;
  • the chip layer includes hardware chips such as CPU and GPU.
  • The instruction reorganization layer includes an instruction interception module, an inter-frame image region (Range of Image, ROI) similarity calculation module, and a frame multiplexing enabling module.
  • the instruction interception module is used to intercept graphics API call instructions, and cache rendering instruction streams and associated data.
  • The inter-frame ROI similarity calculation module is used to pre-screen image frames according to the camera position changes in the instruction stream and determine the image frames whose similarity needs to be calculated; it also scales and divides the image frames, triggers the GPU to calculate the SSIM values of the image blocks in parallel, and outputs the inter-frame ROI similarity.
  • The frame multiplexing enabling module is used to decide, based on the similarity value described above, whether to enable frame multiplexing, and to optimize and restructure the rendering instruction data stream based on the decision result.
  • FIG. 7 is a schematic diagram of a system component architecture provided by an embodiment of the present application.
  • the image frame rendering method provided by this embodiment may be embedded in the instruction recombination layer component of the OS in the form of software.
  • In the entire OS framework, the modules involved in this embodiment include the system kernel and driver module (1007), the instruction reorganization layer module (1008), the graphics API module (1009), the display buffer module (1010), and the screen display module (1011).
  • Within the instruction reorganization layer module (1008), this embodiment modifies the graphics API interception module (1001) and adds the camera motion module (1002), the rendering frame buffer management module (1003), the similarity calculation module (1004), the decision module (1005), and the frame multiplexing enabling module (1006).
  • FIG. 8 is a schematic flowchart of a method 800 for rendering an image frame according to an embodiment of the present application. As shown in FIG. 8, the image frame rendering method 800 includes the following steps 801-808.
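  • Before walking through the individual steps, the overall control flow of steps 801-808 can be summarized in the following sketch (the cached-state arguments and the similarity callback are hypothetical stand-ins; 0.99 follows the reference similarity threshold discussed below, and 0.3 is an example value for the camera-distance threshold, which the application leaves configurable):

```python
import math

def decide_rendering(cached_frames, cached_cam_pos, instr_cam_pos,
                     roi_similarity, cam_threshold=0.3, sim_threshold=0.99):
    """Steps 804-808: return 'render' to issue the rendering instruction,
    or 'multiplex' to reuse the previous rendered frame."""
    if len(cached_frames) < 2:                       # not enough history yet
        return "render"
    dx, dy, dz = (a - b for a, b in zip(instr_cam_pos, cached_cam_pos))
    if math.sqrt(dx * dx + dy * dy + dz * dz) > cam_threshold:
        return "render"                              # step 804 -> 806: fast camera motion
    if roi_similarity(cached_frames[0], cached_frames[1]) >= sim_threshold:
        return "multiplex"                           # steps 805, 807 -> 808
    return "render"                                  # step 807 -> 806
```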
  • Step 801: intercept and cache the graphics instruction data stream.
  • FIG. 9 is a schematic flow chart of intercepting a graphics instruction data flow provided by an embodiment of the present application.
  • Before the game engine calls driver-layer instructions by issuing graphics instructions, the graphics API interception module (1001) judges whether the subject issuing the graphics instructions is a target optimization game; if it is not, this method flow is not executed; if it is, all graphics instructions are intercepted and cached as an instruction stream.
  • the graphics instruction is used to call the driver layer instruction to trigger the GPU to render the image frame.
  • Some graphics instructions include the location information of the camera during the drawing process. The electronic device can analyze the instruction stream, obtain the camera's location information, and cache it for subsequent use.
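  • A sketch of this interception logic (the hook mechanism by which the API call is captured is platform-specific and omitted; instructions are represented here as dicts, and all names are hypothetical):

```python
class GraphicsApiInterceptor:
    """Caches the instruction stream of a target optimization game and
    extracts the camera positions carried by some draw instructions."""

    def __init__(self, target_games, forward_to_driver):
        self.target_games = set(target_games)
        self.forward_to_driver = forward_to_driver   # the real driver-layer call
        self.instruction_stream = []
        self.camera_positions = []

    def on_api_call(self, process_name: str, instr: dict):
        if process_name not in self.target_games:    # not a target optimization game
            return self.forward_to_driver(instr)     # pass through unchanged
        self.instruction_stream.append(instr)        # cache the instruction stream
        cam = instr.get("camera_pos")                # present in some draw instructions
        if cam is not None:
            self.camera_positions.append(cam)        # cache for later distance checks
        return self.forward_to_driver(instr)
```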
  • Step 802: cache the rendered image frames in the game.
  • FIG. 10 is a schematic flowchart of intercepting and caching rendered frames provided by an embodiment of the present application.
  • After the system kernel and driver (1007) issue rendering instructions to the GPU to complete rendering and generate rendered frames (for example, the first image frame and the second image frame above), the rendering frame buffer management module (1003) intercepts each rendered frame before it is placed into the display buffer and saves a reduced copy; the purpose of reducing the rendered frame is to save power consumption in the subsequent similarity calculation.
  • In the rendering frame buffer management module there are two buffers for caching rendered frames. Each time one buffer finishes saving a rendered frame, the buffer used for the next save is switched. In this way, rendered frames are saved alternately in the two buffers, which saves storage space.
  • After the copy is saved, the intercepted original rendered frame is handed over to the display buffer, which completes the entire process.
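  • A sketch of the alternating two-buffer cache (the downscale helper is the reducer sketched earlier; the slot handling is illustrative):

```python
class RenderedFrameCache:
    """Two buffers saved alternately, so that the two most recently rendered
    frames are always available for the similarity calculation."""

    def __init__(self):
        self.buffers = [None, None]
        self.next_slot = 0

    def intercept(self, rendered_frame, downscale):
        self.buffers[self.next_slot] = downscale(rendered_frame)  # save a reduced copy
        self.next_slot = 1 - self.next_slot   # switch slots: the older frame is replaced next
        return rendered_frame                 # the original goes on to the display buffer

    def last_two(self):
        """Return (older frame, newer frame)."""
        return self.buffers[self.next_slot], self.buffers[1 - self.next_slot]
```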
  • Step 803: cache the camera positions of the rendered image frames.
  • In addition, for the rendered frames cached in the buffers, the camera positions corresponding to these rendered frames can be further cached.
  • Step 804: does the camera position change quickly?
  • FIG. 11 is a schematic flow chart of calculating a camera position distance provided by an embodiment of the present application.
  • The camera motion module (1002) obtains the camera positions cached in the above steps, that is, the camera position corresponding to the rendered frame and the camera position in the rendering instruction, then calculates the distance between the cached camera positions and compares this distance with the second threshold to judge whether the camera position changes quickly. If the distance is greater than the second threshold, the camera position changes quickly; if the distance is less than or equal to the second threshold, the camera position does not change quickly.
  • If the camera position changes quickly, the scene is currently one with rapid motion changes. In such scenes, the similarity between two adjacent image frames is generally not high, so the similarity calculation is skipped to save overhead. That is, if the camera position changes quickly, go to step 806 and issue the rendering instruction to render the image frame.
  • If the camera position does not change quickly, the scene is not one with rapid motion changes, so the similarity calculation can be further performed on the image frames.
  • Step 805: the GPU calculates the inter-frame ROI similarity in parallel.
  • FIG. 12 is a schematic flowchart of calculating the similarity of image frames provided by the embodiment of the present application.
  • the process of calculating the similarity of image frames is executed in the similarity calculation module (1004).
  • First, the two reduced rendered frames are fetched from the buffers and passed into the GPU. Each of the two reduced rendered frames is then divided into 2*3 image blocks, forming six groups of image blocks, each group including one image block from each of the two reduced rendered frames.
  • In addition, the color space of the six groups of image blocks is converted from the RGB color space to the YUV color space for the subsequent calculation of the SSIM values.
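  • Only the Y (luminance) channel is needed by the SSIM computation sketched above; a conversion using the common BT.601 weights (assumed here, since the application does not name the exact conversion matrix) is:

```python
import numpy as np

def rgb_to_y(rgb: np.ndarray) -> np.ndarray:
    """Extract the YUV-space Y channel from an RGB image block (BT.601 weights)."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
```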
  • Next, the SAT tables of the two reduced rendered frames are calculated to accelerate the subsequent SSIM calculation.
  • Finally, the convolution kernels are used to calculate the SSIM values of each group of image blocks; the SSIM values calculated by all the convolution kernels corresponding to each group are averaged to obtain the SSIM value of that group, and the smallest SSIM value among the groups is taken as the similarity between the two rendered frames. After the similarity of the two rendered frames is obtained, it is sent to the decision module to determine the subsequent rendering method.
  • FIG. 13 is a schematic diagram of a SAT table provided by an embodiment of the present application.
  • the image is actually composed of many pixels, and each pixel can be abstracted into a value.
  • the abstracted value of each pixel is the Y value in the YUV space.
  • the upper figure in Figure 13 represents the value of each pixel in the image
  • the lower figure in Figure 13 is the SAT table corresponding to the image.
  • The SAT table is formed by accumulating, for each pixel, all pixel values in the rectangle starting from the pixel in the upper-left corner and ending at the current pixel. For example, the value 101 in the SAT table is the accumulation of all pixel values above and to the left of that position, i.e., 31+2+12+26+13+17=101. The benefit of the SAT is that a box sum such as 15+16+14+28+27+11 can be obtained from just four corner values, i.e., 101+450-254-186=111; with a 7x7 convolution kernel, which would normally require accumulating 49 values, this saves 49-4=45 accumulations per window. The cost of generating the SAT table must also be considered, but overall it still reduces the amount of similarity computation.
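  • A sketch of building and querying the SAT table (the inclusive-rectangle convention matches the corner arithmetic in the example above):

```python
import numpy as np

def build_sat(y: np.ndarray) -> np.ndarray:
    """SAT: entry (r, c) is the sum of all pixel values in the rectangle from
    the upper-left pixel (0, 0) through the current pixel (r, c)."""
    return y.cumsum(axis=0).cumsum(axis=1)

def box_sum(sat: np.ndarray, r0: int, c0: int, r1: int, c1: int) -> float:
    """Sum over the inclusive rectangle [r0..r1, c0..c1] using four corner
    lookups instead of accumulating every pixel (49 adds for a 7x7 kernel)."""
    total = sat[r1, c1]
    if r0 > 0:
        total -= sat[r0 - 1, c1]
    if c0 > 0:
        total -= sat[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += sat[r0 - 1, c0 - 1]
    return float(total)
```

  • With the SAT in place, the window means needed by the SSIM formulas reduce to a handful of table lookups per 7x7 kernel.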
  • Step 806: issue the rendering instruction.
  • In this step, when the decision module (1005) determines not to perform frame multiplexing, a rendering instruction is issued to the GPU, so that the GPU renders the target image frame based on the rendering instruction.
  • Step 807: judge whether the inter-frame ROI similarity is high.
  • FIG. 14 is a schematic flowchart of a similarity judgment provided by an embodiment of the present application.
  • After the SSIM value of the two rendered frames is output in step 805, the decision module (1005) judges whether the SSIM value is greater than or equal to the similarity threshold. If the SSIM value is greater than or equal to the similarity threshold, the similarity between the two rendered frames is high, and the decision is to enable frame multiplexing; if the SSIM value is smaller than the similarity threshold, the similarity between the two rendered frames is not high, and the decision is not to enable frame multiplexing.
  • Regarding the similarity threshold: at a high frame rate (for example, greater than 30 FPS), manual operations by a human cannot quickly change the inter-frame similarity, so the inter-frame similarity has continuity in the time domain, as shown in FIG. 4 above. The similarity threshold therefore needs to be determined according to this characteristic when making a decision. If frame multiplexing was enabled last time, the similarity threshold is the difference between the reference threshold and 0.005, making frame multiplexing more likely to be enabled this time. If frame multiplexing was not enabled last time, it is less likely to be enabled for this frame, and the threshold remains the reference threshold, which may be 0.99. The initial decision defaults to not enabling frame multiplexing.
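  • The threshold update can be sketched as follows (the 0.99 reference threshold and the 0.005 step are the values given in this embodiment):

```python
REFERENCE_THRESHOLD = 0.99   # reference threshold of this embodiment
STEP = 0.005                 # relaxation applied after an enabled multiplex

def similarity_threshold(frame_multiplexing_enabled_last_time: bool) -> float:
    """Exploit the temporal continuity of inter-frame similarity: if the last
    decision enabled frame multiplexing, this decision lowers the bar slightly."""
    if frame_multiplexing_enabled_last_time:
        return REFERENCE_THRESHOLD - STEP    # more likely to multiplex again
    return REFERENCE_THRESHOLD               # unchanged reference threshold
```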
  • Step 808: enable frame multiplexing.
  • FIG. 15 is a schematic flowchart of enabling frame multiplexing provided by an embodiment of the present application.
  • This step is implemented in the frame multiplexing enabling module (1006). First, a copy of the current rendered frame is taken from the display buffer and delivered to the position from which the display next fetches a frame, thereby realizing frame multiplexing. The rendering thread is then suspended: the time consumed since the end of the previous frame is calculated and subtracted from the display duration of two image frames to obtain the time for which the rendering thread needs to be suspended, thereby delaying the issuing of the rendering instruction stream for the next frame.
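  • A sketch of this enabling step (the display-buffer interface is hypothetical, and time.sleep stands in for suspending the rendering thread):

```python
import time

def enable_frame_multiplexing(display_buffer, frame_duration_s, decision_started_at):
    """Reuse the current frame, then suspend rendering for the target duration:
    the display duration of two image frames minus the time already consumed."""
    display_buffer.push(display_buffer.current())    # deliver a copy of the current frame
    consumed = time.monotonic() - decision_started_at
    target_duration = max(0.0, 2 * frame_duration_s - consumed)
    time.sleep(target_duration)                      # suspend the rendering thread
```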
  • As can be seen from the above method, compared with the existing related art, this solution only needs two consecutive rendered frames to decide whether to enable frame multiplexing, requiring no manual scene enumeration and thus saving a large amount of labor cost.
  • In addition, the decision based on the similarity of two image frames conforms to human visual perception habits, so the scene recognition rate can be well guaranteed.
  • Moreover, this solution depends only on two rendered frames, which means it depends neither on the platform nor on the game, ensuring that it can be ported to any platform and any game application, with high portability and generality.
  • This solution can also dynamically adjust the threshold according to the user experience in different scenarios, so as to expand the benefit and increase the proportion of frames for which frame multiplexing is enabled.
  • At the same time, frame multiplexing is a technique whose pictures cannot go wrong, which solves the dizziness problem easily caused by frame-interpolation solutions.
  • FIG. 16 is a schematic structural diagram of an apparatus 1600 provided by an embodiment of the present application.
  • The data processing apparatus 1600 includes an acquisition unit 1601 and a processing unit 1602. The acquisition unit 1601 is used to acquire a first rendering instruction, the first rendering instruction being used to instruct rendering of a first target image frame; the acquisition unit 1601 is also configured to acquire a first image frame and a second image frame according to the first rendering instruction, the first image frame being the frame preceding the second image frame and the second image frame being the frame preceding the first target image frame; the processing unit 1602 is configured to, based on the similarity between the first image frame and the second image frame being greater than or equal to a first threshold, obtain the first target image frame according to the second image frame, wherein the content of the first target image frame is the same as the content of the second image frame.
  • The first rendering instruction includes a first observation point position in the 3D model to be rendered. The processing unit 1602 is further configured to: acquire a second observation point position corresponding to the second image frame, the second image frame being obtained by rendering the 3D model based on the second observation point position; and, based on the distance between the first observation point position and the second observation point position being less than or equal to a second threshold, determine the similarity between the first image frame and the second image frame.
  • The acquisition unit 1601 is further configured to acquire a second rendering instruction, the second rendering instruction being used to instruct rendering of a second target image frame and including a third observation point position in the 3D model to be rendered. The acquisition unit 1601 is further configured to acquire, according to the second rendering instruction, a third image frame, a fourth image frame, and a fourth observation point position corresponding to the fourth image frame, the third image frame being the frame preceding the fourth image frame, the fourth image frame being the frame preceding the second target image frame, and the fourth image frame being obtained by rendering the 3D model based on the fourth observation point position. The processing unit 1602 is further configured to: based on the distance between the third observation point position and the fourth observation point position being greater than the second threshold, execute the second rendering instruction to render the second target image frame; or, based on the distance between the third observation point position and the fourth observation point position being less than or equal to the second threshold and the similarity between the third image frame and the fourth image frame being smaller than the first threshold, execute the second rendering instruction to render the second target image frame.
  • The processing unit 1602 is specifically configured to: respectively divide the first image frame and the second image frame into a plurality of image blocks, to obtain a plurality of first image blocks corresponding to the first image frame and a plurality of second image blocks corresponding to the second image frame, wherein the plurality of first image blocks and the plurality of second image blocks have a one-to-one correspondence; respectively calculate the similarities between the plurality of first image blocks and the corresponding image blocks among the plurality of second image blocks, to obtain multiple groups of similarities; and determine the target similarity among the multiple groups of similarities as the similarity between the first image frame and the second image frame, the target similarity being the smallest similarity among the multiple groups of similarities.
  • The first threshold is determined based on a third threshold, the third threshold being a preset fixed value. If a third target image frame is obtained by rendering, the first threshold is the same as the third threshold, the third target image frame being located before the first target image frame and its rendering method being determined based on the similarity between image frames; if the third target image frame is obtained by multiplexing an image frame, the first threshold is the difference between the third threshold and a fourth threshold, the fourth threshold being a preset fixed value.
  • The processing unit 1602 is further configured to: determine a target duration, the target duration being the difference between the display duration of two image frames and the duration taken to obtain the first target image frame; and stop running the rendering thread, wherein the duration for which the rendering thread stops running is the target duration, and the rendering thread is used to render image frames based on rendering instructions.
  • The processing unit 1602 is specifically configured to: perform reduction processing on the first image frame and the second image frame to obtain a reduced first image frame and a reduced second image frame; and calculate the similarity between the reduced first image frame and the reduced second image frame, to obtain the similarity between the first image frame and the second image frame.
  • FIG. 17 is a schematic structural diagram of the electronic device provided by the embodiment of the present application. The electronic device 1700 may specifically be a mobile phone, a tablet, a laptop computer, a smart wearable device, a server, etc., which is not limited here.
  • The rendering apparatus described in the embodiment corresponding to FIG. 16 may be deployed on the electronic device 1700 to realize the rendering functions of that embodiment.
  • The electronic device 1700 includes a receiver 1701, a transmitter 1702, a processor 1703, and a memory 1704 (the number of processors 1703 in the electronic device 1700 may be one or more; one processor is taken as an example in FIG. 17).
  • the processor 1703 may include an application processor 17031 and a communication processor 17032 .
  • the receiver 1701 , the transmitter 1702 , the processor 1703 and the memory 1704 may be connected through a bus or in other ways.
  • the memory 1704 may include read-only memory and random-access memory, and provides instructions and data to the processor 1703 .
  • a part of the memory 1704 may also include a non-volatile random access memory (non-volatile random access memory, NVRAM).
  • The memory 1704 stores operating instructions, executable modules or data structures, or a subset or an extended set thereof, wherein the operating instructions may include various operating instructions for implementing various operations.
  • the processor 1703 controls the operation of the electronic device.
  • various components of an electronic device are coupled together through a bus system, where the bus system may include a power bus, a control bus, and a status signal bus in addition to a data bus.
  • the various buses are referred to as bus systems in the figures.
  • the methods disclosed in the foregoing embodiments of the present application may be applied to the processor 1703 or implemented by the processor 1703 .
  • the processor 1703 may be an integrated circuit chip, which has a signal processing capability. In the implementation process, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 1703 or instructions in the form of software.
  • The above processor 1703 may be a general-purpose processor, a digital signal processor (digital signal processing, DSP), a microprocessor, or a microcontroller, and may further include an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, or discrete hardware components.
  • the processor 1703 may implement or execute various methods, steps, and logic block diagrams disclosed in the embodiments of the present application.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor.
  • The software module can be located in a storage medium mature in the field, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory 1704, and the processor 1703 reads the information in the memory 1704, and completes the steps of the above method in combination with its hardware.
  • the receiver 1701 can be used to receive input digital or character information, and generate signal input related to related settings and function control of electronic equipment.
  • The transmitter 1702 can be used to output digital or character information through the first interface; the transmitter 1702 can also be used to send instructions to the disk group through the first interface to modify the data in the disk group; and the transmitter 1702 can also include a display device such as a display screen.
  • The present application also provides a computer-readable storage medium. In some embodiments, the method disclosed in FIG. 3 above may be implemented as computer program instructions encoded in a machine-readable format on a computer-readable storage medium or encoded on other non-transitory media or articles of manufacture.
  • Figure 18 schematically illustrates a conceptual partial view of an example computer-readable storage medium comprising a computer program for executing a computer process on a computing device, arranged in accordance with at least some embodiments presented herein.
  • In one embodiment, the computer-readable storage medium 1800 is provided using a signal-bearing medium 1801.
  • The signal-bearing medium 1801 may include one or more program instructions 1802 which, when executed by one or more processors, may provide the functions, or portions of the functions, described above with respect to FIG. 5. In addition, the program instructions 1802 in FIG. 18 also describe example instructions.
  • signal bearing media 1801 may include computer readable media 1803 such as, but not limited to, a hard drive, compact disc (CD), digital video disc (DVD), digital tape, memory, ROM or RAM, and the like.
  • signal bearing media 1801 may comprise computer recordable media 1804 such as, but not limited to, memory, read/write (R/W) CDs, R/W DVDs, and the like.
  • The signal-bearing medium 1801 may include a communication medium 1805, such as, but not limited to, digital and/or analog communication media (e.g., fiber optic cables, waveguides, wired communication links, wireless communication links, etc.).
  • Thus, for example, the signal-bearing medium 1801 may be conveyed by a wireless form of the communication medium 1805 (e.g., a wireless communication medium conforming to the IEEE 802 standard or another transmission protocol).
  • One or more program instructions 1802 may be, for example, computer-executable instructions or logic-implemented instructions.
  • In some examples, a computing device may be configured to provide various operations, functions, or actions in response to program instructions 1802 conveyed to the computing device through one or more of the computer-readable medium 1803, the computer-recordable medium 1804, and/or the communication medium 1805.
  • the disclosed system, device and method can be implemented in other ways.
  • The device embodiments described above are only illustrative. For example, the division of the units is only a logical function division, and in actual implementation there may be other division methods; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present application.
  • The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a mobile hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disk.


Abstract

本申请公开了一种图像帧的渲染方法,应用于执行图像帧渲染的电子设备。本申请方法包括: 获取第一渲染指令,所述第一渲染指令用于指示渲染第一目标图像帧; 根据所述第一渲染指令获取第一图像帧和第二图像帧,所述第一图像帧为所述第二图像帧的前一帧,所述第二图像帧为所述第一目标图像帧的前一帧;基于所述第一图像帧和所述第二图像帧之间的相似度大于或等于第一阈值,根据所述第二图像帧得到所述第一目标图像帧,其中所述第一目标图像帧的内容与所述第二图像帧的内容相同。基于本方案,能够在保证渲染得到的图像帧的帧率的同时,降低电子设备的渲染功耗。

Description

一种图像帧的渲染方法及相关装置
本申请要求于2021年11月26日提交中国专利局、申请号为202111424170.9、发明名称为“一种图像帧的渲染方法及相关装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机技术领域,尤其涉及一种图像帧的渲染方法及相关装置。
背景技术
移动终端和移动互联网的快速发展给人们的生活带来了极大的便利,其中手机游戏的出现更是丰富了人们的娱乐生活。人们可以随时随地打开手机,进入虚拟的游戏世界,畅玩游戏。随着时代的发展,玩家对游戏高画质、高帧率的呼声越来越强。而手机产商为了满足玩家的诉求,也在不断努力地提升硬件性能,使得具有高刷新率屏幕的手机层出不穷。
目前,高刷新率屏幕手机的普及,使得游戏厂商提高游戏帧率以适应高刷新率。在提高游戏帧率后,虽然能够很好地提升游戏流畅性,但也带来了大量渲染帧的浪费以及过高的手机功耗,进而导致手机发热严重以及影响手机的续航能力。
因此,目前亟需一种能够在保证游戏帧率的同时,降低手机渲染功耗的方法。
发明内容
本申请提供了一种图像帧的渲染方法,能够在保证渲染得到的图像帧的帧率的同时,降低电子设备的渲染功耗。
本申请第一方面提供一种图像帧的渲染方法,应用于执行图像帧渲染的电子设备。该方法包括:获取第一渲染指令,所述第一渲染指令用于指示渲染第一目标图像帧。其中,第一渲染指令可以是由电子设备中的指令重组层截获得到的。在电子设备运行应用程序的过程中,应用程序发起所述第一渲染指令,以指示电子设备中的硬件执行渲染操作;电子设备中的指令重组层则可以在该第一渲染指令在到达电子设备中执行渲染操作的硬件之前截获该第一渲染指令。
根据所述第一渲染指令获取第一图像帧和第二图像帧,所述第一图像帧为所述第二图像帧的前一帧,所述第二图像帧为所述第一目标图像帧的前一帧。其中,所述第一图像帧和所述第二图像帧为已渲染的图像帧,且所述第一图像帧和所述第二图像帧是在获取到所述第一渲染指令之前就已经渲染好的图像帧。并且,所述第一图像帧、所述第二图像帧以及所述第一渲染指令所指示的第一目标图像帧是连续的三个图像帧。
基于所述第一图像帧和所述第二图像帧之间的相似度大于或等于第一阈值,根据所述第二图像帧得到所述第一目标图像帧,其中所述第一目标图像帧的内容与所述第二图像帧的内容相同。即电子设备可以拷贝已经渲染得到的第二图像帧,并将拷贝得到的第二图像帧作为第一目标图像帧,从而使得电子设备上连续显示的第二图像帧和第一目标图像帧为内容相同的两帧。
本方案中,通过在已渲染的两帧相邻图像帧的相似度较高的情况下,复用该两帧图像 帧中的后一帧图像帧,以代替渲染新的图像帧,减少电子设备所需渲染的图像帧,从而降低电子设备渲染图像帧的功耗。由于图像相似度具有连续性,即前两帧图像帧的相似度与后两帧图像帧的相似度大概率非常接近,因此本方案中通过判断前两帧图像帧相似度的方式来决定是否复用后一帧图像帧,能够保证画面的连续性,从而不影响最终的渲染效果。
在一种可能的实现方式中,所述第一渲染指令中包括待渲染的三维模型中的第一观察点位置,即所述第一渲染指令用于指示基于所述第一观察点位置对所述三维模型进行渲染以得到第一目标图像帧。
该方法还包括:获取所述第二图像帧对应的第二观察点位置,所述第二图像帧是基于所述第二观察点位置对所述三维模型进行渲染得到的。
在所述第一观察点位置和所述第二观察点位置之间的距离小于或等于第二阈值的情况下,电子设备再进一步确定第一图像帧与第二图像帧之间的相似度;在所述第一观察点位置和所述第二观察点位置之间的距离大于第二阈值的情况下,电子设备则不再确定第一图像帧与第二图像帧之间的相似度,而是转为执行第一渲染指令。
本方案中,通过在计算图像帧的相似度之前,先判断两个图像帧对应的观察点位置之间的距离,来初步确定两个图像帧之间的相似度,从而尽量避免通过计算相似度的方式来确定图像帧之间的相似度,降低计算相似度的开销。
在一种可能的实现方式中,所述方法还包括:获取第二渲染指令,所述第二渲染指令用于指示渲染第二目标图像帧,所述第二渲染指令中包括待渲染的三维模型中的第三观察点位置;根据所述第二渲染指令获取第三图像帧、第四图像帧以及所述第四图像帧对应的第四观察点位置,所述第三图像帧为所述第四图像帧的前一帧,所述第四图像帧为所述第二目标图像帧的前一帧,所述第四图像帧是基于所述第四观察点位置对所述三维模型进行渲染得到的;基于所述第三观察点位置与所述第四观察点位置之间的距离大于第二阈值,执行所述第二渲染指令,以渲染得到所述第二目标图像帧;或,基于所述第三观察点位置与所述第四观察点位置之间的距离小于或等于所述第二阈值且所述第三图像帧和所述第四图像帧之间的相似度小于所述第一阈值,执行所述第二渲染指令,以渲染得到所述第二目标图像帧。
本方案中,先通过计算图像帧之间的观察点位置的距离来初步判断图像帧的相似度,然后再在图像帧之间的观察点位置的距离满足要求的情况下,进一步计算图像帧之间的相似度,从而能够降低计算图像帧相似度的频率,进而降低相似度的计算开销,减少电子设备的功耗。
在一种可能的实现方式中,所述方法还包括:
将所述第一图像帧和所述第二图像帧分别划分为多个图像块,得到所述第一图像帧对应的多个第一图像块和所述第二图像帧对应的多个第二图像块,其中所述多个第一图像块与所述多个第二图像块具有一一对应的关系。由于第一图像帧和第二图像帧是相同大小的图像,因此可以基于相同的图像分块方式对第一图像帧和第二图像帧执行图像分块处理。
分别计算所述多个第一图像块与所述多个第二图像块中具有对应关系的图像块之间的相似度,得到多组相似度。例如,具有对应关系的第一图像块和第二图像块可以被划分为 一组,那么六个第一图像块和六个第二图像块则可以划分为六组,每组包括一个第一图像块和一个第二图像块。
将所述多组相似度中的目标相似度确定为所述第一图像帧和所述第二图像帧之间的相似度,所述目标相似度为所述多组相似度中值最小的相似度。
本方案中通过将图像帧划分为多个图像块,并且分别计算两个图像帧中各组图像块的相似度,并取多组图像块中相似度最低的一组图像块对应的相似度为最终的相似度,从而能够重点凸显图像帧中所发生的动态物体的变化,进而使得两个图像帧的相似度中能够体现图像帧所发生的微小但重要的变化。这样一来,电子设备最终确定执行渲染指令,而渲染得到新的图像帧,保证了画面的连续性。
在一种可能的实现方式中,所述第一阈值是基于第三阈值确定的,所述第三阈值为预先设定的固定值;其中,若第三目标图像帧是通过渲染得到的,则所述第一阈值与所述第三阈值相同,所述第三目标图像帧为位于所述第一目标图像帧之前且所述第三目标图像帧的渲染方式是基于图像帧之间的相似度确定的;若第三目标图像帧是通过复用图像帧得到的,则所述第一阈值为所述第三阈值与第四阈值之间的差值,所述第四阈值为预先设定的固定值。
本方案中,由于图像帧的相似度具有连续性,因此执行过一次帧复用之后,下一次判断是否需要执行帧复用时也有很大的可能确定需要执行帧复用。因此通过基于上一次是否帧复用的决策信息来调整上述的第一阈值,可以使得判断是否帧复用的过程更为合理。
在一种可能的实现方式中,在基于第二图像帧得到第一目标图像帧后,电子设备可以确定目标时长,所述目标时长为两帧图像帧的显示时长与计算得到所述第一目标图像帧的时长之间的差值。然后,电子设备停止运行渲染线程,其中所述渲染线程停止运行的时长为所述目标时间,所述渲染线程用于基于渲染指令渲染得到图像帧。也就是说,在得到第一目标图像帧之后的目标时长内,电子设备都将渲染线程挂起,不再执行图像帧的渲染。
在一种可能的实现方式中,在执行相似度计算的过程中,电子设备可以对所述第一图像帧和所述第二图像帧执行缩小处理,得到缩小后的第一图像帧和缩小后的第二图像帧;计算所述缩小后的第一图像帧和所述缩小后的第二图像帧之间的相似度,得到所述第一图像帧和所述第二图像帧之间的相似度。
本方案中,通过将待计算相似度的两个图像帧缩小处理后,再计算缩小后的两个图像帧之间的相似度,能够提高计算相似度的速度以及节省计算相似度的功耗。
本申请第二方面提供一种渲染装置,包括:获取单元,用于获取第一渲染指令,所述第一渲染指令用于指示渲染第一目标图像帧;所述获取单元还用于根据所述第一渲染指令获取第一图像帧和第二图像帧,所述第一图像帧为所述第二图像帧的前一帧,所述第二图像帧为所述第一目标图像帧的前一帧;处理单元,用于基于所述第一图像帧和所述第二图像帧之间的相似度大于或等于第一阈值,根据所述第二图像帧得到所述第一目标图像帧,其中所述第一目标图像帧的内容与所述第二图像帧的内容相同。
在一种可能的实现方式中,所述第一渲染指令中包括待渲染的三维模型中的第一观察 点位置;所述处理单元还用于:获取所述第二图像帧对应的第二观察点位置,所述第二图像帧是基于所述第二观察点位置对所述三维模型进行渲染得到的;基于所述第一观察点位置与所述第二观察点位置之间的距离小于或等于第二阈值,确定所述第一图像帧和所述第二图像帧之间的相似度。
在一种可能的实现方式中,所述获取单元,还用于获取第二渲染指令,所述第二渲染指令用于指示渲染第二目标图像帧,所述第二渲染指令中包括待渲染的三维模型中的第三观察点位置;所述获取单元,还用于根据所述第二渲染指令获取第三图像帧、第四图像帧以及所述第四图像帧对应的第四观察点位置,所述第三图像帧为所述第四图像帧的前一帧,所述第四图像帧为所述第二目标图像帧的前一帧,所述第四图像帧是基于所述第四观察点位置对所述三维模型进行渲染得到的;所述处理单元,还用于:基于所述第三观察点位置与所述第四观察点位置之间的距离大于第二阈值,执行所述第二渲染指令,以渲染得到所述第二目标图像帧;或,基于所述第三观察点位置与所述第四观察点位置之间的距离小于或等于所述第二阈值且所述第三图像帧和所述第四图像帧之间的相似度小于所述第一阈值,执行所述第二渲染指令,以渲染得到所述第二目标图像帧。
在一种可能的实现方式中,所述处理单元,具体用于:将所述第一图像帧和所述第二图像帧分别划分为多个图像块,得到所述第一图像帧对应的多个第一图像块和所述第二图像帧对应的多个第二图像块,其中所述多个第一图像块与所述多个第二图像块具有一一对应的关系;分别计算所述多个第一图像块与所述多个第二图像块中具有对应关系的图像块之间的相似度,得到多组相似度;将所述多组相似度中的目标相似度确定为所述第一图像帧和所述第二图像帧之间的相似度,所述目标相似度为所述多组相似度中值最小的相似度。
在一种可能的实现方式中,所述第一阈值是基于第三阈值确定的,所述第三阈值为预先设定的固定值;其中,若第三目标图像帧是通过渲染得到的,则所述第一阈值与所述第三阈值相同,所述第三目标图像帧为位于所述第一目标图像帧之前且所述第三目标图像帧的渲染方式是基于图像帧之间的相似度确定的;若第三目标图像帧是通过复用图像帧得到的,则所述第一阈值为所述第三阈值与第四阈值之间的差值,所述第四阈值为预先设定的固定值。
在一种可能的实现方式中,所述处理单元,还用于:确定目标时长,所述目标时长为两帧图像帧的显示时长与计算得到所述第一目标图像帧的时长之间的差值;停止运行渲染线程,其中所述渲染线程停止运行的时长为所述目标时间,所述渲染线程用于基于渲染指令渲染得到图像帧。
在一种可能的实现方式中,所述处理单元,具体用于:对所述第一图像帧和所述第二图像帧执行缩小处理,得到缩小后的第一图像帧和缩小后的第二图像帧;计算所述缩小后的第一图像帧和所述缩小后的第二图像帧之间的相似度,得到所述第一图像帧和所述第二图像帧之间的相似度。
本申请第三方面提供一种电子设备,该电子设备包括:存储器和处理器;所述存储器存储有代码,所述处理器被配置为执行所述代码,当所述代码被执行时,所述电子设备执 行如第一方面中的任意一种实现方式的方法。
本申请第四方面提供一种计算机可读存储介质,计算机可读存储介质中存储有计算机程序,当其在计算机上运行时,使得计算机执行如第一方面中的任意一种实现方式的方法。
本申请第五方面提供一种计算机程序产品,当其在计算机上运行时,使得计算机执行如第一方面中的任意一种实现方式的方法。
本申请第六方面提供一种芯片,包括一个或多个处理器。处理器中的部分或全部用于读取并执行存储器中存储的计算机程序,以执行上述任一方面任意可能的实现方式中的方法。
可选地,该芯片该包括存储器,该存储器与该处理器通过电路或电线与存储器连接。可选地,该芯片还包括通信接口,处理器与该通信接口连接。通信接口用于接收需要处理的数据和/或信息,处理器从该通信接口获取该数据和/或信息,并对该数据和/或信息进行处理,并通过该通信接口输出处理结果。该通信接口可以是输入输出接口。本申请提供的方法可以由一个芯片实现,也可以由多个芯片协同实现。
附图说明
图1是相关技术中的一种帧复用方法流程图;
图2为本申请实施例提供的一种电子设备101的结构示意图;
图3为本申请实施例提供的一种图像帧的渲染方法300的流程示意图;
图4为本申请实施例提供的一种图像帧相似度的示意图;
图5为本申请实施例提供的一种图像块的划分示意图;
图6为本申请实施例提供的一种系统架构示意图;
图7为本申请实施例提供的一种系统组件的架构示意图;
图8为本申请实施例提供的一种图像帧的渲染方法800的流程示意图;
图9为本申请实施例提供的一种截获图形指令数据流的流程示意图;
图10为本申请实施例提供的一种拦截并缓存渲染帧的流程示意图;
图11为本申请实施例提供的一种计算相机位置距离的流程示意图;
图12为本申请实施例提供的一种计算图像帧相似度的流程示意图;
图13为本申请实施例提供的一种SAT表的示意图;
图14为本申请实施例提供的一种判断相似度的流程示意图;
图15为本申请实施例提供的一种使能帧复用的流程示意图;
图16为本申请实施例提供的一种装置1600的结构示意图;
图17为本申请实施例提供的电子设备的一种结构示意图;
图18为本申请实施例提供的计算机可读存储介质的一种结构示意图。
具体实施方式
下面结合附图,对本申请的实施例进行描述,显然,所描述的实施例仅仅是本申请一部分的实施例,而不是全部的实施例。本领域普通技术人员可知,随着技术的发展和新场 景的出现,本申请实施例提供的技术方案对于类似的技术问题,同样适用。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的实施例能够以除了在这里图示或描述的内容以外的顺序实施。
此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或模块的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或模块,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或模块。在本申请中出现的对步骤进行的命名或者编号,并不意味着必须按照命名或者编号所指示的时间/逻辑先后顺序执行方法流程中的步骤,已经命名或者编号的流程步骤可以根据要实现的技术目的变更执行次序,只要能达到相同或者相类似的技术效果即可。
为便于理解,以下先对本实施例所提出的技术术语进行介绍。
图形处理器(graphics processing unit,GPU):是计算机中专门用来处理图形的一类专用硬件。GPU的优势在于可以并行处理相似的多个任务,例如在渲染一张图时会同时渲染多个像素点,GPU则可以加速此过程。
通用计算管线(Compute Shader,CS):是利用GPU的并行计算特性,让GPU处理渲染像素点之外的工作,例如模拟空间中所有粒子的运动轨迹、将图形分成几百个块并同时处理几百个块。通用计算管线可以在图形渲染过程中执行,支持实时计算。
实时渲染:指的是通过实时运算来渲染出所需要的图像。例如,对于所看到的游戏画面,本质是由多个连续渲染帧不间断地显示出来的结果。每一个渲染帧都是通过计算机处理器和图形处理器经过复杂计算得到的。通常来说,每一个渲染帧耗时<100ms的渲染叫做实时渲染,每一个渲染帧耗时>500ms的渲染过程被称为非实时渲染,或离线渲染。
渲染帧:是一张由图形处理器渲染出来的图像,也可以称为图像帧。多个渲染帧连续播放可以形成动态的效果。
帧率:在实时渲染中,每一秒生产渲染帧的数量叫做帧率,单位是帧每秒。例如,“帧率60”指的是当前在一秒的时间内产生60个渲染帧。帧率越高,显示的效果越流畅。
屏幕刷新率:在实时渲染中,渲染产生的渲染帧最终要由图形处理器发送给显示缓冲区,最终屏幕从显示缓冲区取渲染帧显示,屏幕每刷新一次,就从渲染缓冲区取一个最新的渲染帧。屏幕每秒刷新的次数称为屏幕刷新率。屏幕刷新率越高,屏幕支持的帧率越高。最终的显示效果取决于屏幕刷新率和帧率的最低值。
相机:渲染的过程是模拟现实世界中相机录像的过程。其中,相机是一种渲染术语,渲染中的相机和现实世界中的相机类似,指的是查看场景的眼睛。简单来说,相机是一种来记录场景信息的物体,渲染帧是通过相机的观察,将看到的场景投影到相机上形成的。
功耗:用来衡量计算过程消耗电量的效率,单位是毫安(mA)。在移动终端的电池电量一定的前提下,移动终端的运行功耗越高,耗电量越快,使用时间越短。
插帧:在两个连续渲染帧之间插入一个新帧的动作。在应用于视频的情况下,插帧动作可以依据前后帧信息插值生成中间的视频帧;在应用于游戏的情况下,插帧动作可以依 据前两帧画面的图像信息生成第三帧的图像信息。
帧复用:连续传两个相同的渲染帧给显示缓冲区的操作。此方法通常会降低渲染的频率,从而降低渲染产生的功耗。
运动估计及运动补偿(Motion Estimation Motion Compensation,MEMC):是一种插帧技术,其原理是硬件快速计算两帧之间的光流,从而生成第三帧。
结构相似度算法(Structural Similarity,SSIM):是一种计算两个图像之间相似度的算法。
加和汇总表(Summed Area Table,SAT):加和汇总表,是一种快速计算图像中一个块里面所有像素和的算法,本申请实施例中用来加速SSIM的计算。
像素缓冲对象(Pixel Buffer Object,PBO):用来保存渲染帧的一种技术。本申请实施例中用来保存两个连续渲染帧,从而方便计算渲染帧之间的相似度。
驱动:是一种可以使计算机操作系统和底层硬件相互通信的一类程序。能够让操作系统或开发者利用硬件的特性。
游戏引擎:是一种游戏制作工具,其充分集成并利用底层的驱动程序,让游戏开发者快速的做出游戏。
卷积核:图像处理时,给定输入图像,输入图像中一个小区域中像素加权平均后成为输出图像中的每个对应像素,其中权值由一个函数定义,这个函数称为卷积核。
限频限帧:为了防止手机在游戏过程中过热,采取的限制芯片运行频率和游戏运行帧率的一种强制方式。
指令重组层:是一种介于应用层和驱动层之间的组件,能够拦截应用程序调用的驱动指令流,并对其进行优化。
YUV颜色空间:是一种颜色的数字编码方式,Y代表亮度,U代表色度,V代表浓度,三个值可以一起表示一种颜色。
目前,高刷新率屏幕手机的普及,使得游戏厂商提高游戏帧率以适应高刷新率。在提高游戏帧率后,虽然能够很好地提升游戏流畅性,但也带来了大量渲染帧的浪费以及过高的手机功耗,进而导致手机发热严重以及影响手机的续航能力。为了保证用户的安全,避免用户被过高的温度烫伤,手机芯片和操作系统会对手机做限频限帧操作,降低芯片和手机运行功耗。对手机做该操作限频限帧会使得游戏帧率减半或更多,从而影响玩家的游戏体验。此外,为了降低游戏在高刷新率屏幕下的功耗,游戏厂商通常会在特定场景下限制游戏帧率。例如,某一个三维(3 Dimensions,3D)游戏会在玩家一分钟没有操作后将游戏帧率限制到10帧。手机厂商则会使用插帧等技术方案降低渲染频率,例如某一个型号下的手机中加入了MEMC芯片,使用MEMC芯片来加速游戏插帧,这种插帧技术可以将60帧的渲染帧率提升到120帧的显示帧率。
然而,MEMC等插帧方案会使插帧产生的图像发生扭曲,进而导致玩家产生眩晕的感觉,而基于场景的帧率限制则缺乏通用性,厂商没有办法识别所有需要限制帧率的场景。
可以参阅图1,图1是相关技术中的一种帧复用方法流程图。如图1所示,图1所示 的方案是一种基于场景的帧复用方案。该帧复用方案主要是在游戏引擎端提供了一种基于场景识别的帧复用方法,该方法通过识别对帧率需求较低的低变化场景进行帧复用,以达到降低手机游戏功耗的目的。
该帧复用方法有两个主要模块,分别是场景识别模块和帧复用使能模块。场景识别模块会提取需要使能帧复用的场景特征,例如特定绘制指令、长时间用户无反馈等特征,并根据这些特征判断是否进行帧复用。帧复用使能模块则会在场景识别模块确认需要使能帧复用时,复制上一帧的画面信息并交给显示缓冲区,同时发消息给渲染模块,暂停一帧或多帧时间的渲染计算,从而实现帧率减半或更多。
然而,相关技术中的场景识别模块需要人工干预并一一列举其特征,想要达到比较好的优化效果则需要在场景识别和特征提取上耗费大量的人工成本。并且,该帧复用方法通用性和可扩展性较差,无法做到跨游戏通用,移植到新游戏时需要重做场景识别模块。此外,该帧复用方法的优化效果有限,因为人工无法遍历识别所有可帧复用的场景,所以仍然有大量被浪费的渲染帧。
有鉴于此,本申请实施例提供了一种图像帧的渲染方法,通过在已渲染的两帧相邻图像帧的相似度较高的情况下,复用该两帧图像帧中的后一帧图像帧,以代替渲染新的图像帧,减少电子设备所需渲染的图像帧,从而降低电子设备渲染图像帧的功耗。由于图像相似度具有连续性,即前两帧图像帧的相似度与后两帧图像帧的相似度大概率非常接近,因此本方案中通过判断前两帧图像帧相似度的方式来决定是否复用后一帧图像帧,能够保证画面的连续性,从而不影响最终的渲染效果。
示例性地,本申请实施例中的电子设备可以是智能手机(mobile phone)、个人电脑(personal computer,PC)、笔记本电脑、平板电脑、智慧电视、移动互联网设备(mobile internet device,MID)、可穿戴设备(如智能手表、智能眼镜或者智能头盔等),虚拟现实(virtual reality,VR)设备、增强现实(augmented reality,AR)设备、工业控制(industrial control)中的无线电子设备、无人驾驶(self driving)中的无线电子设备、远程手术(remote medical surgery)中的无线电子设备、智能电网(smart grid)中的无线电子设备、运输安全(transportation safety)中的无线电子设备、智慧城市(smart city)中的无线电子设备、智慧家庭(smart home)中的无线电子设备等。以下实施例对该电子设备的具体形式不做特殊限制。
可以参阅图2,图2为本申请实施例提供的一种电子设备101的结构示意图。如图2所示,电子设备101包括处理器103,处理器103和系统总线105耦合。处理器103可以是一个或者多个处理器,其中每个处理器都可以包括一个或多个处理器核。显示适配器(video adapter)107,显示适配器可以驱动显示器109,显示器109和系统总线105耦合。系统总线105通过总线桥111和输入输出(I/O)总线耦合。I/O接口115和I/O总线耦合。I/O接口115和多种I/O设备进行通信,比如输入设备117(如:触摸屏等),外存储器121,(例如,硬盘、软盘、光盘或优盘),多媒体接口等)。收发器123(可以发送和/或接收无线电通信信号),摄像头155(可以捕捉静态和动态数字视频图像)和外部USB端口125。其中, 可选地,和I/O接口115相连接的接口可以是USB接口。
其中,处理器103可以是任何传统处理器,包括精简指令集计算(reduced instruction set Computing,RISC)处理器、复杂指令集计算(complex instruction set computing,CISC)处理器或上述的组合。可选地,处理器可以是诸如ASIC的专用装置。
电子设备101可以通过网络接口129和软件部署服务器149通信。示例性的,网络接口129是硬件网络接口,比如,网卡。网络127可以是外部网络,比如因特网,也可以是内部网络,比如以太网或者虚拟私人网络(virtual private network,VPN)。可选地,网络127还可以是无线网络,比如WiFi网络,蜂窝网络等。
硬盘驱动器接口131和系统总线105耦合。硬件驱动接口和硬盘驱动器133相连接。内存储器135和系统总线105耦合。运行在内存储器135的数据可以包括电子设备101的操作系统(OS)137、应用程序143和调度表。
处理器103可以通过系统总线105与内存储器135通信,从内存储器135中取出应用程序143中的指令和数据,从而实现程序的执行。
操作系统包括Shell 139和内核(kernel)141。Shell 139是介于使用者和操作系统的内核间的一个接口。Shell 139是操作系统最外面的一层。Shell 139管理使用者与操作系统之间的交互:等待使用者的输入,向操作系统解释使用者的输入,并且处理各种各样的操作系统的输出结果。
内核141由操作系统中用于管理存储器、文件、外设和系统资源的那些部分组成。内核141直接与硬件交互,操作系统内核通常运行进程,并提供进程间的通信,提供CPU时间片管理、中断、内存管理和IO管理等等。
示例性地,在电子设备101为智能手机的情况下,应用程序143包括即时通讯相关的程序。在一个实施例中,在需要执行应用程序143时,电子设备101可以从软件部署服务器149下载应用程序143。
可以参阅图3,图3为本申请实施例提供的一种图像帧的渲染方法300的流程示意图。如图3所示,该方法300包括以下的步骤301-303。
步骤301,获取第一渲染指令,所述第一渲染指令用于指示渲染第一目标图像帧。
在电子设备运行需要持续渲染图像帧的应用程序的过程中,电子设备可以获取到应用程序所发起的第一渲染指令,该第一渲染指令用于指示渲染第一目标图像帧。示例性地,该应用程序例如可以为游戏应用程序、导航应用程序、工业应用程序或医疗应用程序等需要对三维模型进行渲染以得到图像帧的应用程序。该第一渲染指令所指示渲染的第一目标图像帧用于在所述电子设备的显示屏幕上显示,以与其他的图像帧共同构成连续的画面。
可选的,第一渲染指令可以是由电子设备中的指令重组层截获得到的。在电子设备运行应用程序的过程中,应用程序发起所述第一渲染指令,以指示电子设备中的硬件执行渲染操作;电子设备中的指令重组层则可以在该第一渲染指令在到达电子设备中执行渲染操作的硬件之前截获该第一渲染指令。
步骤302,根据所述第一渲染指令获取第一图像帧和第二图像帧,所述第一图像帧为 所述第二图像帧的前一帧,所述第二图像帧为所述第一目标图像帧的前一帧。
其中,所述第一图像帧和所述第二图像帧为已渲染的图像帧,且所述第一图像帧和所述第二图像帧是在获取到所述第一渲染指令之前就已经渲染好的图像帧。并且,所述第一图像帧、所述第二图像帧以及所述第一渲染指令所指示的第一目标图像帧是连续的三个图像帧,即所述第一图像帧和所述第二图像帧为所述第一目标图像帧的前两帧。
示例性地,在电子设备运行游戏应用的过程中,所述第一图像帧为游戏应用运行过程中的第5帧图像帧,所述第二图像帧为第6帧图像帧,所述第一目标图像帧则为第7帧图像帧。并且,第一图像帧和第二图像帧是已渲染好的图像帧,而第一目标图像帧则是待渲染的图像帧。
可选的,为了便于电子设备能够快速获取到第一图像帧和第二图像帧,电子设备中可以预先设置图像帧的缓冲区,该缓冲区用于保存已经渲染得到的图像帧。这样一来,电子设备在获取到用于指示渲染第一目标图像帧的所述第一渲染指令的情况下,可以根据所述第一渲染指令从缓冲区获取所述第一目标图像帧的前两帧图像帧,即所述第一图像帧和所述第二图像帧。
进一步地,由于后续计算图像帧之间的相似度时只需要用到两帧图像帧,因此电子设备可以是设置两个缓冲区,每个缓冲区分别用于存储一帧图像帧。其中,通过交替地往电子设备的两个缓冲区中存储最新渲染得到的图像帧,可以使得该两个缓冲区能够存储电子设备所渲染得到的最新的两个图像帧。简单来说,在电子设备渲染得到新的图像帧之后,电子设备可以确定两个缓冲区中保存较早的图像帧的缓冲区,并将该新的图像帧保存在所确定的缓冲区中,从而使得两个缓冲区能够存储最新的两个图像帧。
示例性地,假设电子设备中有A缓冲区和B缓冲区,在电子设备渲染得到第N帧图像帧之后,将第N帧图像帧保存到A缓冲区;在渲染得到第N+1帧图像帧之后,将第N+1帧图像帧保存到B缓冲区。然后,在渲染得到第N+2帧图像帧之后,由于A缓冲区中的第N帧图像帧相对于B缓冲区中的第N+1帧图像帧为较早的图像帧,因此电子设备将第N+2帧图像帧保存到A缓冲区,以代替原来的第N帧图像帧。类似地,在渲染得到第N+3帧图像帧之后,由于此时B缓冲区中的第N+1帧图像帧相对于A缓冲区中的第N+2帧图像帧为较早的图像帧,因此电子设备将第N+3帧图像帧保存到B缓冲区,以代替原来的第N+1帧图像帧。
步骤303,基于所述第一图像帧和所述第二图像帧之间的相似度大于或等于第一阈值,根据所述第二图像帧得到所述第一目标图像帧,其中所述第一目标图像帧的内容与所述第二图像帧的内容相同。
在基于第一渲染指令获取到第一图像帧和第二图像帧之后,电子设备可以计算所述第一图像帧和所述第二图像帧之间的相似度。
在第一图像帧和第二图像帧之间的相似度大于或等于第一阈值的情况下,电子设备可以基于所述第二图像帧直接得到所述第一目标图像帧,即对所述第二图像帧进行帧复用,使得第一目标图像帧的内容与第二图像帧的内容相同。简单来说,电子设备可以拷贝已经渲染得到的第二图像帧,并将拷贝得到的第二图像帧作为第一目标图像帧,从而使得电子 设备上连续显示的第二图像帧和第一目标图像帧为内容相同的两帧。这样,电子设备可以不再执行第一渲染指令,即避免渲染第一目标图像帧。
在第一图像帧和第二图像帧之间的相似度小于第一阈值的情况下,电子设备则可以执行所述第一渲染指令,从而渲染得到所述第一目标图像帧。其中,第一阈值可以是根据实际情况而定的一个阈值,例如第一阈值可以为0.99或0.98。例如,在对画面要求较高的情况下,第一阈值的取值可以为较高的值;在对画面要求不高且对电子设备的功耗要求较高的情况下,第一阈值的取值可以为较低的值。
也就是说,在前两帧图像帧相似度较高的情况下,电子设备可以将前两帧图像帧中的后一帧图像帧作为新的图像帧,即复用前两帧图像帧中的后一帧图像帧;在前两帧图像帧相似度不高的情况下,电子设备则执行渲染指令,以渲染得到新的图像帧。
经申请人研究发现,在大部分需要渲染图像帧的应用中,图像帧的相似度具有连续性,即前两帧图像帧的相似度与后两帧图像帧的相似度大概率非常接近。示例性地,可以参阅图4,图4为本申请实施例提供的一种图像帧相似度的示意图。如图4所示,图4中展示了三个游戏应用中的图像帧相似度的连续性。具体地,图4中的横坐标表示图像帧的帧号,图4中的纵坐标表示相邻的两个图像帧的相似度,因此图4中的曲线则用于连续表示相邻的两个图像帧的相似度。例如,图4中的曲线依次表示第1个图像帧和第2个图像帧的相似度、第2个图像帧和第3个图像帧的相似度...第N个图像帧和第N+1个图像帧的相似度等等。由图4可以看出,在游戏应用1、游戏应用2和游戏应用3中,图像的相似度是具有连续性的,即前两帧图像帧的相似度与后两帧图像帧的相似度并不会出现突变。因此,在前两帧图像帧的相似度较高的情况下,可以认为后两帧图像帧的相似度通常也是较高的。
基于此,本实施例中在前两帧图像帧的相似度较高的情况下,将前两帧图像帧中的后一帧图像帧复用,以代替渲染新的图像帧。由于后两帧图像帧的相似度也是较高的,因此通过复用前两帧图像帧中的后一帧图像帧而得到的图像帧与实际的图像帧之间的相似度也是较高的,从而不会出现画面突变的情况。
也就是说,本实施例中通过判断前两帧图像帧相似度的方式来决定是否复用后一帧图像帧,能够保证画面的连续性,从而不影响最终的渲染效果。并且,通过在已渲染的两帧相邻图像帧的相似度较高的情况下,复用该两帧图像帧中的后一帧图像帧,以代替渲染新的图像帧,能够减少电子设备所需渲染的图像帧,从而降低电子设备渲染图像帧的功耗。
在电子设备确定通过复用所述第二图像帧来得到所述第一目标图像帧的情况下,电子设备需要停止执行上述的第一渲染指令,以降低渲染图像帧所带来的功耗。
由于图像帧的相似度具有连续性,因此执行过一次帧复用之后,下一次判断是否需要执行帧复用时也有很大的可能确定需要执行帧复用,因此可以基于上一次是否帧复用的决策信息来调整上述的第一阈值,以使得判断是否帧复用的过程更为合理。
示例性地,所述第一阈值可以是基于第三阈值确定的,所述第三阈值为预先设定的固定值,例如第三阈值可以为0.99。
其中,若第三目标图像帧是通过渲染得到的,则所述第一阈值与所述第三阈值相同,所述第三目标图像帧为位于所述第一目标图像帧之前且所述第三目标图像帧的渲染方式是 基于图像帧之间的相似度确定的。简单来说,第三目标图像帧是第一目标图像帧的前一个需要判断图像帧之间的相似度来确定渲染方式的图像帧,该渲染方式则是指通过执行渲染指令渲染得到图像帧的方式或者通过帧复用得到图像帧的方式。
若第三目标图像帧是通过复用图像帧得到的,则所述第一阈值为所述第三阈值与第四阈值之间的差值,所述第四阈值为预先设定的固定值。其中,所述第四阈值例如可以为0.005。
例如,假设电子设备每隔两帧判断一次图像帧的渲染方式,则电子设备可以是基于第1帧图像帧、第2帧图像判断第3帧图像帧的渲染方式,以及基于第4帧图像帧、第5帧图像判断第6帧图像帧的渲染方式。如果第3帧图像帧是通过执行渲染指令而渲染得到的(即第3帧图像帧不是通过复用第2帧图像帧得到的),则第6帧图像帧对应的第一阈值则可以是与上述的第三阈值相同;如果第3帧图像帧是通过复用第2帧图像帧得到的,则第6帧图像帧对应的第一阈值则可以是上述的第三阈值与第四阈值之间的差值。
本实施例中,电子设备可以是每隔多个图像帧计算一次相似度,也可以是计算一次相似度则复用多个图像帧,计算相似度的频率可以根据游戏应用的类别来确定。例如,电子设备可以计算第1个图像帧和第2个图像帧之间的相似度,以确定第3个图像帧的渲染方式;然后,电子设备渲染得到第4个图像帧和第5个图像帧,并计算这两个图像帧之间的相似度,以确定第6个图像帧的渲染方式,以此类推,电子设备每隔2个图像帧计算一次相似度。又例如,在电子设备渲染一些2D游戏的情况下,电子设备可以是计算第1个图像帧和第2个图像帧之间的相似度,以确定第3个图像帧、第4个图像帧和第5个图像帧的渲染方式;然后,电子设备可以继续计算第6个图像帧和第7个图像帧之间的相似度,以确定第8个图像帧、第9个图像帧和第10个图像帧的渲染方式,以此类推,电子设备每隔计算一次相似度则可以确定3个图像帧的渲染方式。
可选的,在基于第二图像帧得到第一目标图像帧后,电子设备可以确定目标时长,所述目标时长为两帧图像帧的显示时长与计算得到所述第一目标图像帧的时长之间的差值。例如,在电子设备当前的帧率为60的情况下,即一秒显示60个图像帧,两帧图像帧的显示时长则为2/60秒;计算得到所述第一目标图像帧的时长则为执行上述的步骤301-303的时长。
通常来说,电子设备在渲染得到一个图像帧之后,往往需要等待一个图像帧的显示时长再渲染下一个图像帧;而本实施例中在渲染得到第二图像帧之后,则执行上述的步骤301-303而得到第一目标图像帧,而第一目标图像帧是不再需要渲染的,电子设备所需渲染的图像帧为第一目标图像帧的下一个图像帧。因此,电子设备则可以是通过将两帧图像帧的显示时长与计算得到第一目标图像帧的时长作差,得到目标时长。
然后,电子设备停止运行渲染线程,其中所述渲染线程停止运行的时长为所述目标时间,所述渲染线程用于基于渲染指令渲染得到图像帧。也就是说,在得到第一目标图像帧之后的目标时长内,电子设备都将渲染线程挂起,不再执行图像帧的渲染。
可以理解的是,对于一些通过渲染三维模型而得到图像帧的应用程序而言,图像帧的 渲染过程实际上就是基于某一个位置观察立体的三维模型,并将观察到的景物以二维图像的形式进行展示,从而得到图像帧。简单来说,三维模型的渲染可以理解为通过一个相机在某一个位置上对三维模型进行拍摄得到的二维图像。因此,在确定了三维模型中的观察点位置之后,即可确定三维模型中需要渲染的物体,从而能够确定二维图像中的内容。
也就是说,对于同一个三维模型而言,渲染得到的图像帧中的内容是由三维模型中的观察点位置所决定的。基于距离较近的两个观察点位置渲染得到的两个图像帧中的内容大概率是相似的;基于距离较远的两个观察点位置渲染得到的两个图像帧中的内容大概率是不相似的。
基于此,本实施例中可以在计算图像帧的相似度之前,先判断两个图像帧对应的观察点位置之间的距离,来确定两个图像帧之间的相似度,从而尽量避免通过计算相似度的方式来确定图像帧之间的相似度,降低计算相似度的开销。
示例性地,所述第一渲染指令中可以包括待渲染的三维模型中的第一观察点位置,即所述第一渲染指令用于指示基于所述第一观察点位置对所述三维模型进行渲染以得到第一目标图像帧。
由于所述第二图像帧为已渲染得到的图像帧,因此电子设备可以获取所述第二图像帧对应的第二观察点位置,所述第二图像帧是基于所述第二观察点位置对所述三维模型进行渲染得到的。
然后,电子设备可以确定所述第一观察点位置和所述第二观察点位置之间的距离,并且判断所述第一观察点位置和所述第二观察点位置之间的距离与第二阈值之间的大小关系。在所述第一观察点位置和所述第二观察点位置之间的距离小于或等于第二阈值的情况下,电子设备再进一步确定第一图像帧与第二图像帧之间的相似度;在所述第一观察点位置和所述第二观察点位置之间的距离大于第二阈值的情况下,电子设备则不再确定第一图像帧与第二图像帧之间的相似度,而是转为执行第一渲染指令。
具体地,在电子设备确定第一观察点位置和第二观察点位置之间的距离后,可以基于所述第一观察点位置与所述第二观察点位置之间的距离小于或等于第二阈值,确定所述第一图像帧和所述第二图像帧之间的相似度。其中,第二阈值可以是根据实际情况而定的一个阈值,例如第二阈值可以为0.3米或0.4米。例如,在对画面要求较高的情况下,第二阈值的取值可以为较低的值;在对画面要求不高且对电子设备的功耗要求较高的情况下,第二阈值的取值可以为较高的值。
第一观察点位置和第二观察点位置之间的距离可以通过以下的公式1计算得到。
Distance=Sqrt((X1-X2) 2+(Y1-Y2) 2+(Z1-Z2) 2)      公式1
其中,Distance表示第一观察点位置和第二观察点位置之间的距离;Sqrt表示开方;X1,Y1,Z1为第一观察点位置的坐标;X2,Y2,Z2为第二观察点位置的坐标。
也就是说,电子设备可以先确定待渲染图像帧的观察点位置与前一帧图像帧的观察点位置之间的距离,以初步确定两个图像帧之间的相似度;在两个图像帧的观察点位置距离较远的情况下,可以认为两个图像帧大概率并不相似,因此执行渲染指令以得到待渲染图像帧;在两个图像帧的观察点位置距离较近的情况下,可以认为两个图像帧大概率相似,因此继续通过计算待渲染图像帧的前两帧图像帧之间的相似度,来确定待渲染图像帧是否 与其前一帧图像帧相似。
示例性地,上述的方法300还可以包括以下的步骤。
首先,电子设备可以获取第二渲染指令,所述第二渲染指令用于指示渲染第二目标图像帧,所述第二渲染指令中包括待渲染的三维模型中的第三观察点位置。
然后,电子设备根据所述第二渲染指令获取第三图像帧、第四图像帧以及所述第四图像帧对应的第四观察点位置,所述第三图像帧为所述第四图像帧的前一帧,所述第四图像帧为所述第二目标图像帧的前一帧,所述第四图像帧是基于所述第四观察点位置对所述三维模型进行渲染得到的。
其次,电子设备可以判断第三观察点位置与第四观察点位置之间的距离是否大于第二阈值,并且基于所述第三观察点位置与所述第四观察点位置之间的距离大于第二阈值,执行所述第二渲染指令,以渲染得到所述第二目标图像帧。
或者,电子设备可以基于所述第三观察点位置与所述第四观察点位置之间的距离小于或等于所述第二阈值,触发计算第三图像帧和第四图像帧之间的相似度。并且,电子设备基于所述第三图像帧和所述第四图像帧之间的相似度小于所述第一阈值,执行所述第二渲染指令,以渲染得到所述第二目标图像帧。
本实施例中,先通过计算图像帧之间的观察点位置的距离来初步判断图像帧的相似度,然后再在图像帧之间的观察点位置的距离满足要求的情况下,进一步计算图像帧之间的相似度,从而能够降低计算图像帧相似度的频率,进而降低相似度的计算开销,减少电子设备的功耗。
在一些可能的实施例中,电子设备可以是将待计算相似度的两个图像帧缩小处理后,再计算缩小后的两个图像帧之间的相似度,以提高计算相似度的速度以及节省计算相似度的功耗。
示例性地,电子设备可以对所述第一图像帧和所述第二图像帧执行缩小处理,得到缩小后的第一图像帧和缩小后的第二图像帧。例如,电子设备可以将第一图像帧和第二图像帧的长和宽均缩小为原来的1/9,即第一图像帧和第二图像帧的面积均缩小为原来的1/81。其中,第一图像帧和第二图像帧的缩小比例可以根据实际情况来确定,在电子设备的功耗要求较高的情况下,第一图像帧和第二图像帧的缩小比例可以为较高的值。
然后,电子设备计算所述缩小后的第一图像帧和所述缩小后的第二图像帧之间的相似度,并将计算得到的相似度作为所述第一图像帧和所述第二图像帧之间的相似度。
本方案中,通过以牺牲计算相似度的少量精度为代价,大大地减少了待计算相似度的图像帧的面积,从而降低了相似度的计算开销,有效地降低了电子设备的功耗。
为了便于理解,以下将详细介绍计算图像帧之间的相似度的过程。
具体地,上述的方法300中还包括以下计算第一图像帧和第二图像帧之间的相似度的多个步骤。
首先,将所述第一图像帧和所述第二图像帧分别划分为多个图像块,得到所述第一图 像帧对应的多个第一图像块和所述第二图像帧对应的多个第二图像块,其中所述多个第一图像块与所述多个第二图像块具有一一对应的关系。
本实施例中,电子设备可以是对第一图像帧和第二图像帧进行图像块的划分,也可以是对执行缩小处理后的所述第一图像帧和第二图像帧进行图像块的划分,本实施例并不做具体限定。
由于第一图像帧和第二图像帧是相同大小的图像,因此可以基于相同的图像分块方式对第一图像帧和第二图像帧执行图像分块处理。例如,电子设备可以将第一图像帧划分为6个第一图像块,其中每个图像块的长为第一图像帧的长的1/3,每个图像块的宽为第一图像帧的宽的1/2,即将第一图像帧的长划分为3份以及将第一图像帧的宽划分为2份。类似的,电子设备可以基于相同的方式对第二图像帧进行划分,得到第二图像帧对应的6个第二图像块。并且,第一图像帧对应的多个第一图像块与第二图像帧对应的多个第二图像块具有一一对应的关系,即多个第一图像块中的任意一个图像块均与第二图像帧中位于相同位置的一个第二图像块对应。
示例性地,可以参阅图5,图5为本申请实施例提供的一种图像块的划分示意图。如图5所示,第一图像帧被划分为6个图像块,分别包括:位于左上位置的图像块A1,位于中上位置的图像块A2,位于右上位置的图像块A3,位于左下位置的图像块A4,位于中下位置的图像块A5,位于右下位置的图像块A6;第二图像帧被划分为6个图像块,分别包括:位于左上位置的图像块B1,位于中上位置的图像块B2,位于右上位置的图像块B3,位于左下位置的图像块B4,位于中下位置的图像块B5,位于右下位置的图像块B6。其中,图像块A1与图像块B1对应,图像块A2与图像块B2对应,图像块A3与图像块B3对应,图像块A4与图像块B4对应,图像块A5与图像块B5对应,图像块A6与图像块B6对应。
然后,分别计算所述多个第一图像块与所述多个第二图像块中具有对应关系的图像块之间的相似度,得到多组相似度。
本实施例中,具有对应关系的第一图像块和第二图像块可以被划分为一组,那么六个第一图像块和六个第二图像块则可以划分为六组,每组包括一个第一图像块和一个第二图像块。
具体地,对于具有对应关系的第一图像块与第二图像块,可以分别使用7x7卷积核在GPU上并行计算两个图像块的SSIM值,每一个卷积核输出一个SSIM值,然后将所有卷积核输出的SSIM值取平均,输出一个平均后的SSIM值。这样,对于六组具有对应关系的第一图像块和第二图像块,一共可以输出六个平均后的SSIM值,即得到六组相似度。
其中,计算各组图像块的相似度的过程可以是通过以下的公式2-公式6来表示。
Figure PCTCN2022133959-appb-000001
Figure PCTCN2022133959-appb-000002
Figure PCTCN2022133959-appb-000003
Figure PCTCN2022133959-appb-000004
Figure PCTCN2022133959-appb-000005
其中,X表示第一图像帧,公式中以X i的形式出现;Y表示第二图像帧,公式中以Y i的形式出现;x ijk表示第一图像帧中第j个图像块第k个卷积核中的第i个像素点YUV空间的Y值;y ijk表示第二图像帧第j个图像块第k个卷积核中的第i个像素点YUV空间的Y值;N jk表示第j图像块第k个卷积核中拥有像素点的数量,在7x7卷积核中,N jk等于49;μ xjk表示第一图像帧第j个图像块第k个卷积核中,像素点YUV空间Y的平均值;σ xjk表示第一图像帧第j个渲染小块第k个卷积核中,像素点YUV空间Y的标准差;σ xyjk表示第一图像帧和第二图像帧的第j个图像块第k个卷积核之间,像素点YUV空间Y的协方差;μ yjk:表示第二图像帧第j个渲染小块第k个卷积核中,像素点YUV空间Y的平均值;σ yjk表示第二图像帧第j个渲染小块第k个卷积核中,像素点YUV空间Y的标准差;M j表示第j个图像块使用卷积核的数量,这个数量取决于图像块的大小和卷积核的大小;SSIM jk表示第一图像帧和第二图像帧第j个图像块第k个卷积核的SSIM值;SSIM j表示第一图像帧和第二图像帧第j个图像块所有卷积核的SSIM的平均值,也是第j个图像块的SSIM值。
最后,将所述多组相似度中的目标相似度确定为所述第一图像帧和所述第二图像帧之间的相似度,所述目标相似度为所述多组相似度中值最小的相似度。
在得到多组图像块所对应的多组相似度之后,可以确定多组相似度中相似度值最小的一组相似度(即目标相似度),并将该组相似度确定为所述第一图像帧和所述第二图像帧之间的相似度。
具体地,确定目标相似度的过程可以基于以下的公式7来表示。
SSIM=min(SSIM 1,SSIM 2,SSIM 3,SSIM 4,SSIM 5,SSIM 6)   公式7
其中,SSIM j表示第一图像帧和第二图像帧第j个图像块所有卷积核的SSIM的平均值,也是第j个图像块的SSIM值;SSIM表示第一图像帧和第二图像帧之间的SSIM值。
可以理解的是,在一些渲染场景下,尽管第一图像帧对应的观察点位置和第二图像帧对应的观察点位置相同,但是由于三维模型中的动态物体发生移动,也会使得第一图像帧和第二图像帧的画面内容不相同。例如,在第一图像帧对应的画面内容为近处的建筑和远处的天空的情况下,可能会由于三维模型中的小鸟发生移动,使得第二图像帧的画面内容 中为近处的建筑、远处的天空以及在天空上的小鸟,即第二图像帧中的画面内容中多了小脑。一般来说,第二图像帧中的小鸟在第二图像帧中占较少面积的情况下,在计算整个第一图像帧和整个第二图像帧之间的相似度时,会得到较高的相似度值,即第一图像帧和第二图像帧整体上还是非常相似的。然而,出现动态变化的新物体的图像帧(及第二图像帧)往往是较为重要的图像帧,通常是需要在连续显示的画面中体现的。因此,在这种情况下,如果直接计算第一图像帧和第二图像帧的相似度,可能会导致最终计算得到的相似度较高,而复用第二图像帧,从而无法在显示画面中体现物体动态变化的过程,例如无法体现小鸟在天空中飞翔的过程。
基于此,本方案中通过将图像帧划分为多个图像块,并且分别计算两个图像帧中各组图像块的相似度,并取多组图像块中相似度最低的一组图像块对应的相似度为最终的相似度,从而能够重点凸显图像帧中所发生的动态物体的变化,进而使得两个图像帧的相似度中能够体现图像帧所发生的微小但重要的变化。这样一来,电子设备最终确定执行渲染指令,而渲染得到新的图像帧,保证了画面的连续性。
为便于理解,以下将结合具体例子详细介绍本申请实施例所提供的图像帧的渲染方法。
可以参阅图6,图6为本申请实施例提供的一种系统架构示意图。如图6所示,在应用层可以运行有游戏应用1、游戏应用2和游戏应用3等多个应用;图形应用程序接口(Application Programming Interface,API)层则可以运行有opengl、vulkan等能够绘制图形的驱动程序;指令重组层(即本发明实施层)则用于执行上述的图像帧的渲染方法;操作系统(operating system,OS)内核层包括系统内核以及用于驱动硬件芯片的相关驱动;芯片层则包括CPU和GPU等硬件芯片。
其中,指令重组层包括指令截获模块、帧间图像部分区域(Range of Image,ROI)相似度计算模块和使能帧复用模块。指令截获模块用于截获图形API调用指令,并缓存渲染指令流以及关联数据。帧间ROI相似度计算模块用于根据指令流中的相机位置变化情况进行预筛选,确定需要计算相似度的图像帧;以及对图像帧进行缩放和分块,并且触发GPU并行计算各个图像块的SSIM值,输出帧间ROI的相似度。使能帧复用模块则用于结合立式相似度值,决策是否使能帧复用,并且基于决策结果对渲染指令数据流进行优化重构。
可以参阅图7,图7为本申请实施例提供的一种系统组件的架构示意图。如图7所示,本实施例所提供的图像帧的渲染方法可以是以软件的形式嵌入到OS的指令重组层组件中。在整个OS框架中,本实施例涉及到的模块包括系统内核及驱动模块(1007)、指令重组层模块(1008)、图形API模块(1009)、显示缓冲区模块(1010)、屏幕显示模块(1011)。在指令重组层模块(1008)中,本实施例中修改了图形API拦截模块(1001),新增了摄像机运动模块(1002)、渲染帧缓存管理模块(1003)、相似度计算模块(1004)、决策模块(1005)以及帧复用使能模块(1006)。
以下将结合具体的流程图详细介绍本实施例中所修改以及新增的模块的工作原理。
可以参阅图8,图8为本申请实施例提供的一种图像帧的渲染方法800的流程示意图。如图8所示,该图像帧的渲染方法800包括以下的步骤801-808。
步骤801,截获并缓存图形指令数据流。
可以参阅图9,图9为本申请实施例提供的一种截获图形指令数据流的流程示意图。如图9所示,在游戏引擎通过下发图形指令来调用驱动层指令之前,图形API拦截模块(1001)判断下发图形指令的主体是否为目标优化游戏;如果下发驱动指令的主体不是目标优化游戏,则不执行本方法流程;如果下发驱动指令的主体是目标优化游戏,则拦截所有图形指令,并缓存成指令流。其中,该图形指令用于调用驱动层指令来触发GPU渲染图像帧。部分图形指令中包含了绘制过程中相机的位置信息,电子设备可以分析指令流,获取相机的位置信息,并缓存相机的位置信息,以备后续使用。
步骤802,缓存游戏中已渲染的图像帧。
可以参阅图10,图10为本申请实施例提供的一种拦截并缓存渲染帧的流程示意图。如图10所示,在系统内核及驱动(1007)下发渲染指令至GPU以完成渲染并生成渲染帧(例如上述的第一图像帧和第二图像帧)后,渲染帧缓存管理模块(1003)在渲染帧放到显示缓冲区前将其拦截并缩小拷贝保存一份,其中缩小渲染帧的目的是为了节省后续计算相似度的功耗。在渲染帧缓存管理模块中,有两个用于缓存渲染帧的缓冲区。每当有一个缓冲区保存完成渲染帧后,会更换下一次保存的缓冲区。这样,通过两个缓冲区交替保存渲染帧,以达到节省存储空间的目的。此外,在保存完成渲染帧后,将拦截的原始渲染帧交给显示缓冲区,即完成整个过程。
步骤803,缓存已渲染的图像帧的相机位置。
此外,对于缓存在缓冲区中的渲染帧,可以进一步缓存这些渲染帧对应的相机位置。
步骤804,相机位置是否变化较快?
可以参阅图11,图11为本申请实施例提供的一种计算相机位置距离的流程示意图。如图11所示,摄像机运动模块(1002)获取到上述步骤所缓存的相机位置,即已渲染得到的渲染帧对应的相机位置以及渲染指令中的相机位置,然后通过计算所缓存的相机位置之间的距离,并将该距离和第二阈值作比较,来判断相机位置是否变化较快。如果该距离大于第二阈值,则代表相机位置变化较快;如果该距离小于或等于第二阈值,则代表相机位置变化不快。
如果相机位置变化较快,则代表当前处于运动变化较快的场景,这些场景下相邻的两个图像帧之间的相似度一般不高,因此这些场景跳过相似度计算,以节省开销。即,如果相机位置变化较快,则转至执行步骤806,下发渲染指令,实现图像帧的渲染。
如果相机位置变化不快,则代表当前不处于运动变化较快的场景,因此可以进一步对图像帧进行相似度计算。
步骤805,GPU并行计算帧间ROI相似度。
可以参阅图12,图12为本申请实施例提供的一种计算图像帧相似度的流程示意图。如图12所示,计算图像帧相似度的过程在相似度计算模块(1004)中执行。首先,从缓冲区中取出已经缩小缓存的两个渲染帧,并且将两个缩小后的渲染帧穿入GPU中。然后,将两个缩小后的渲染帧均切分为2*3的图像块,构成六组图像块,每组图像块分别包括两个缩小后的渲染帧中的一个图像块。此外,将六组图像块的颜色空间从RGB颜色空间转换到 YUV颜色空间,以便于后续计算SSIM值。其次,计算这两个缩小后的渲染帧的SAT表,以用于后续加速SSIM的计算。最后,使用卷积核计算每组图像块的SSIM值,并且将每组图像块对应的所有卷积核计算得到的SSIM值取平均值,得到每组图像块对应的SSIM值,取每组图像块对应的SSIM值中最小的SSIM值作为两个渲染帧的相似度。在得到两个渲染帧的相似度之后,则将两个渲染帧的相似度发送给决策模块,以确定后续的渲染方式。
为便于理解,以下将结合附图解释用于加速SSIM计算的SAT表。
可以参阅图13,图13为本申请实施例提供的一种SAT表的示意图。图像实际上是由很多像素点组成,每一个像素点都可以抽象成一个值,在本方案中,每个像素点抽象得到的值就是YUV空间中的Y值。如图13所示,图13中上方的图表示了图像中各个像素点的值,图13中下方的图则为图像对应的SAT表。其中,SAT表就是从左上角的像素点开始,到当前像素点结束,组成的矩形中所有像素值累加起来形成的一张图。例如,SAT表中的101就是该位置左上角所有像素点的值累加得到的,即31+2+12+26+13+17=101。
SAT的好处在于:如果要计算图13中上方的图的方框内的15+16+14+28+27+11的值,只需要基于四个角的值进行计算即可得到相同的值,即15+16+14+28+27+11=101+450-254-186=111。如果图13中上方的图的方框再大一些,则可以节省较大的计算量。比如,卷积核是7x7的情况下,通常需要累加49次的值,此时就可以节省49-4=45次的计算开销。当然,实际使用中同样需要考虑生成SAT表的开销,但整体而言,生成SAT表还是能够节省相似度的计算量。
步骤806,下发渲染指令。
本步骤中,在决策模块(1005)确定不执行帧复用的情况下,则向GPU下发渲染指令,从而使得GPU基于渲染指令实现目标图像帧的渲染。
步骤807,判断帧间ROI相似度是否较高?
可以参阅图14,图14为本申请实施例提供的一种判断相似度的流程示意图。如图14所示,在步骤805中输出两个渲染帧的SSIM值后,决策模块(1005)需要判断该SSIM值是否大于相似度阈值。若SSIM值大于或等于相似度阈值,则表示两个渲染帧的相似度较高,则决策使能帧复用;若SSIM值小于相似度阈值,则表示两个渲染帧的相似度不高,则决策不使能帧复用。
在高帧率(例如帧率大于30FPS)的情况下,人的手工操作无法快速改变帧间相似度,因此帧间相似度在时域上具有连续性,如上述的图4所示。因此,在决策时需要根据这一特性确定相似度阈值。若上一次使能帧复用,相似度阈值应为基准阈值与0.005的差值,即本次更有可能使能帧复用。若上一次未使能帧复用,则本帧使能帧复用的可能性小,阈值为基准阈值不变,其中基准阈值可以为0.99。初始时的决策默认为不使能帧复用。
步骤808,使能帧复用。
可以参阅图15,图15为本申请实施例提供的一种使能帧复用的流程示意图。本步骤在帧复用使能模块(1006)中实施。首先,从显示缓冲区中拷贝一份当前的渲染帧,输送到下一个显示器从显示缓冲区中取值的位置,从而实现帧复用;然后挂起渲染线程,计算从上一帧结束后开始,到目前为止消耗的时间,用两帧图像帧的显示时长减掉计算得到的 消耗时间,得到需要挂起渲染线程的时间,进而延迟下一帧的渲染指令流下发。
由上述的方法可以看出,相较于现有的相关技术,本方案只需要通过两个连续的渲染帧就可以决策是否需要使能帧复用,不需要人工成本,节省了大量的人工成本。另外基于两帧图像帧相似度的决策,符合人类视觉感知习惯,场景识别率可以得到很好的保障。
其次,本方案只依赖于两个渲染帧,意味着不依赖于平台也不依赖于游戏,从而保证本方案可以移植到任意一个平台,任意一个游戏应用上,具有较高的可移植性和通用性。
此外,本方案可以根据不同场景下的用户体验,动态的调整阈值,从而扩大收益,增加帧复用使能的比例。同时选择帧复用这种画面不会出错的技术,解决了插帧技术方案易致眩晕的问题。
在图1至图15所对应的实施例的基础上,为了更好的实施本申请实施例的上述方案,下面还提供用于实施上述方案的相关设备。
具体可以参阅图16,图16为本申请实施例提供的一种装置1600的结构示意图,该数据处理装置1600包括:获取单元1601和处理单元1602;所述获取单元1601,用于获取第一渲染指令,所述第一渲染指令用于指示渲染第一目标图像帧;所述获取单元1601还用于根据所述第一渲染指令获取第一图像帧和第二图像帧,所述第一图像帧为所述第二图像帧的前一帧,所述第二图像帧为所述第一目标图像帧的前一帧;所述处理单元1602,用于基于所述第一图像帧和所述第二图像帧之间的相似度大于或等于第一阈值,根据所述第二图像帧得到所述第一目标图像帧,其中所述第一目标图像帧的内容与所述第二图像帧的内容相同。
在一种可能的实现方式中,所述第一渲染指令中包括待渲染的三维模型中的第一观察点位置;所述处理单元1602还用于:获取所述第二图像帧对应的第二观察点位置,所述第二图像帧是基于所述第二观察点位置对所述三维模型进行渲染得到的;基于所述第一观察点位置与所述第二观察点位置之间的距离小于或等于第二阈值,确定所述第一图像帧和所述第二图像帧之间的相似度。
In a possible implementation, the acquisition unit 1601 is further configured to acquire a second rendering instruction, where the second rendering instruction is used to instruct rendering of a second target image frame and includes a third observation point position in the three-dimensional model to be rendered. The acquisition unit 1601 is further configured to acquire, according to the second rendering instruction, a third image frame, a fourth image frame, and a fourth observation point position corresponding to the fourth image frame, where the third image frame is the frame preceding the fourth image frame, the fourth image frame is the frame preceding the second target image frame, and the fourth image frame is obtained by rendering the three-dimensional model based on the fourth observation point position. The processing unit 1602 is further configured to: execute the second rendering instruction to render the second target image frame based on the distance between the third observation point position and the fourth observation point position being greater than the second threshold; or execute the second rendering instruction to render the second target image frame based on the distance between the third observation point position and the fourth observation point position being less than or equal to the second threshold and the similarity between the third image frame and the fourth image frame being less than the first threshold.
In a possible implementation, the processing unit 1602 is specifically configured to: divide each of the first image frame and the second image frame into a plurality of image blocks, obtaining a plurality of first image blocks corresponding to the first image frame and a plurality of second image blocks corresponding to the second image frame, where the plurality of first image blocks and the plurality of second image blocks are in one-to-one correspondence; separately compute the similarity between corresponding image blocks among the plurality of first image blocks and the plurality of second image blocks, obtaining a plurality of similarity values; and determine a target similarity among the plurality of similarity values as the similarity between the first image frame and the second image frame, where the target similarity is the smallest of the plurality of similarity values.
In a possible implementation, the first threshold is determined based on a third threshold, where the third threshold is a preset fixed value. If a third target image frame was obtained by rendering, the first threshold is the same as the third threshold, where the third target image frame precedes the first target image frame and the rendering mode of the third target image frame was determined based on the similarity between image frames. If the third target image frame was obtained by reusing an image frame, the first threshold is the difference between the third threshold and a fourth threshold, where the fourth threshold is a preset fixed value.
In a possible implementation, the processing unit 1602 is further configured to: determine a target duration, where the target duration is the difference between the display duration of two image frames and the time taken to compute the first target image frame; and suspend the rendering thread, where the duration for which the rendering thread is suspended is the target duration, and the rendering thread is used to render image frames based on rendering instructions.
In a possible implementation, the processing unit 1602 is specifically configured to: downscale the first image frame and the second image frame to obtain a downscaled first image frame and a downscaled second image frame; and compute the similarity between the downscaled first image frame and the downscaled second image frame as the similarity between the first image frame and the second image frame.
An electronic device provided by an embodiment of the present application is described next. Refer to FIG. 17, which is a schematic structural diagram of the electronic device. The electronic device 1700 may specifically be a mobile phone, a tablet, a laptop computer, a smart wearable device, a server, or the like, which is not limited here. The apparatus 1600 described in the embodiment corresponding to FIG. 16 may be deployed on the electronic device 1700 to implement the rendering functions of the foregoing embodiments. Specifically, the electronic device 1700 includes a receiver 1701, a transmitter 1702, a processor 1703, and a memory 1704 (the electronic device 1700 may have one or more processors 1703; one processor is taken as an example in FIG. 17), where the processor 1703 may include an application processor 17031 and a communication processor 17032. In some embodiments of the present application, the receiver 1701, the transmitter 1702, the processor 1703, and the memory 1704 may be connected by a bus or in other ways.
The memory 1704 may include a read-only memory and a random access memory, and provides instructions and data to the processor 1703. A portion of the memory 1704 may further include a non-volatile random access memory (NVRAM). The memory 1704 stores processor-executable operation instructions, executable modules, or data structures, or subsets or extended sets thereof, where the operation instructions may include various operation instructions for implementing various operations.
The processor 1703 controls the operation of the electronic device. In a specific application, the components of the electronic device are coupled together through a bus system, where the bus system may include a power bus, a control bus, a status signal bus, and the like in addition to a data bus. For clarity of description, however, the various buses are all referred to as the bus system in the figures.
The methods disclosed in the above embodiments of the present application may be applied to, or implemented by, the processor 1703. The processor 1703 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above methods may be completed by integrated logic circuits of hardware in the processor 1703 or by instructions in the form of software. The processor 1703 may be a general-purpose processor, a digital signal processor (DSP), a microprocessor, or a microcontroller, and may further include an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The processor 1703 may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 1704, and the processor 1703 reads the information in the memory 1704 and completes the steps of the above methods in combination with its hardware.
The receiver 1701 may be configured to receive input digital or character information and generate signal inputs related to the settings and function control of the electronic device. The transmitter 1702 may be configured to output digital or character information through a first interface; the transmitter 1702 may further be configured to send instructions to a disk group through the first interface to modify data in the disk group; and the transmitter 1702 may further include a display device such as a display screen.
Refer to FIG. 18. The present application further provides a computer-readable storage medium. In some embodiments, the method disclosed in FIG. 3 above may be implemented as computer program instructions encoded in a machine-readable format on a computer-readable storage medium, or on another non-transitory medium or article of manufacture.
FIG. 18 schematically shows a conceptual partial view of an example computer-readable storage medium arranged according to at least some embodiments presented herein, where the example computer-readable storage medium includes a computer program for executing a computer process on a computing device.
In one embodiment, the computer-readable storage medium 1800 is provided using a signal bearing medium 1801. The signal bearing medium 1801 may include one or more program instructions 1802 which, when run by one or more processors, may provide the functions or part of the functions described above with respect to FIG. 5. In addition, the program instructions 1802 in FIG. 18 also describe example instructions.
In some examples, the signal bearing medium 1801 may include a computer-readable medium 1803, such as, but not limited to, a hard disk drive, a compact disc (CD), a digital video disc (DVD), a digital tape, a memory, a ROM, or a RAM.
In some implementations, the signal bearing medium 1801 may include a computer-recordable medium 1804, such as, but not limited to, a memory, a read/write (R/W) CD, or an R/W DVD. In some implementations, the signal bearing medium 1801 may include a communication medium 1805, such as, but not limited to, a digital and/or analog communication medium (for example, a fiber optic cable, a waveguide, a wired communication link, or a wireless communication link). Thus, for example, the signal bearing medium 1801 may be conveyed by a communication medium 1805 in wireless form (for example, a wireless communication medium conforming to the IEEE 802 standard or another transmission protocol).
The one or more program instructions 1802 may be, for example, computer-executable instructions or logic-implementing instructions. In some examples, the computing device may be configured to provide various operations, functions, or actions in response to the program instructions 1802 conveyed to the computing device by one or more of the computer-readable medium 1803, the computer-recordable medium 1804, and/or the communication medium 1805.
It should be understood that the arrangements described here are for example purposes only. Thus, those skilled in the art will appreciate that other arrangements and other elements (for example, machines, interfaces, functions, orders, and groups of functions) can be used instead, and that some elements may be omitted altogether depending on the desired result. In addition, many of the described elements are functional entities that may be implemented as discrete or distributed components, or in combination with other components in any suitable combination and location.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division into units is merely a logical functional division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.

Claims (17)

  1. A method for rendering an image frame, characterized by comprising:
    acquiring a first rendering instruction, wherein the first rendering instruction is used to instruct rendering of a first target image frame;
    acquiring a first image frame and a second image frame according to the first rendering instruction, wherein the first image frame is the frame preceding the second image frame, and the second image frame is the frame preceding the first target image frame; and
    obtaining the first target image frame from the second image frame based on a similarity between the first image frame and the second image frame being greater than or equal to a first threshold, wherein the content of the first target image frame is the same as the content of the second image frame.
  2. The method according to claim 1, characterized in that the first rendering instruction comprises a first observation point position in a three-dimensional model to be rendered;
    the method further comprises:
    acquiring a second observation point position corresponding to the second image frame, wherein the second image frame is obtained by rendering the three-dimensional model based on the second observation point position; and
    determining the similarity between the first image frame and the second image frame based on a distance between the first observation point position and the second observation point position being less than or equal to a second threshold.
  3. The method according to claim 1 or 2, characterized in that the method further comprises:
    acquiring a second rendering instruction, wherein the second rendering instruction is used to instruct rendering of a second target image frame and comprises a third observation point position in the three-dimensional model to be rendered;
    acquiring, according to the second rendering instruction, a third image frame, a fourth image frame, and a fourth observation point position corresponding to the fourth image frame, wherein the third image frame is the frame preceding the fourth image frame, the fourth image frame is the frame preceding the second target image frame, and the fourth image frame is obtained by rendering the three-dimensional model based on the fourth observation point position; and
    executing the second rendering instruction to render the second target image frame based on a distance between the third observation point position and the fourth observation point position being greater than a second threshold;
    or executing the second rendering instruction to render the second target image frame based on the distance between the third observation point position and the fourth observation point position being less than or equal to the second threshold and the similarity between the third image frame and the fourth image frame being less than the first threshold.
  4. The method according to any one of claims 1 to 3, characterized in that the method further comprises:
    dividing each of the first image frame and the second image frame into a plurality of image blocks, obtaining a plurality of first image blocks corresponding to the first image frame and a plurality of second image blocks corresponding to the second image frame, wherein the plurality of first image blocks and the plurality of second image blocks are in one-to-one correspondence;
    separately computing the similarity between corresponding image blocks among the plurality of first image blocks and the plurality of second image blocks, obtaining a plurality of similarity values; and
    determining a target similarity among the plurality of similarity values as the similarity between the first image frame and the second image frame, wherein the target similarity is the smallest of the plurality of similarity values.
  5. The method according to any one of claims 1 to 4, characterized in that the first threshold is determined based on a third threshold, the third threshold being a preset fixed value;
    wherein, if a third target image frame was obtained by rendering, the first threshold is the same as the third threshold, the third target image frame preceding the first target image frame and the rendering mode of the third target image frame having been determined based on the similarity between image frames; and
    if the third target image frame was obtained by reusing an image frame, the first threshold is the difference between the third threshold and a fourth threshold, the fourth threshold being a preset fixed value.
  6. The method according to any one of claims 1 to 5, characterized in that the method further comprises:
    determining a target duration, wherein the target duration is the difference between the display duration of two image frames and the time taken to compute the first target image frame; and
    suspending a rendering thread, wherein the duration for which the rendering thread is suspended is the target duration, and the rendering thread is used to render image frames based on rendering instructions.
  7. The method according to any one of claims 1 to 6, characterized in that the method further comprises:
    downscaling the first image frame and the second image frame to obtain a downscaled first image frame and a downscaled second image frame; and
    computing the similarity between the downscaled first image frame and the downscaled second image frame as the similarity between the first image frame and the second image frame.
  8. A rendering apparatus, characterized by comprising:
    an acquisition unit, configured to acquire a first rendering instruction, wherein the first rendering instruction is used to instruct rendering of a first target image frame;
    the acquisition unit being further configured to acquire a first image frame and a second image frame according to the first rendering instruction, wherein the first image frame is the frame preceding the second image frame, and the second image frame is the frame preceding the first target image frame; and
    a processing unit, configured to obtain the first target image frame from the second image frame based on a similarity between the first image frame and the second image frame being greater than or equal to a first threshold, wherein the content of the first target image frame is the same as the content of the second image frame.
  9. The apparatus according to claim 8, characterized in that the first rendering instruction comprises a first observation point position in a three-dimensional model to be rendered;
    the processing unit is further configured to:
    acquire a second observation point position corresponding to the second image frame, wherein the second image frame is obtained by rendering the three-dimensional model based on the second observation point position; and
    determine the similarity between the first image frame and the second image frame based on a distance between the first observation point position and the second observation point position being less than or equal to a second threshold.
  10. The apparatus according to claim 8 or 9, characterized in that:
    the acquisition unit is further configured to acquire a second rendering instruction, wherein the second rendering instruction is used to instruct rendering of a second target image frame and comprises a third observation point position in the three-dimensional model to be rendered;
    the acquisition unit is further configured to acquire, according to the second rendering instruction, a third image frame, a fourth image frame, and a fourth observation point position corresponding to the fourth image frame, wherein the third image frame is the frame preceding the fourth image frame, the fourth image frame is the frame preceding the second target image frame, and the fourth image frame is obtained by rendering the three-dimensional model based on the fourth observation point position; and
    the processing unit is further configured to: execute the second rendering instruction to render the second target image frame based on a distance between the third observation point position and the fourth observation point position being greater than a second threshold;
    or execute the second rendering instruction to render the second target image frame based on the distance between the third observation point position and the fourth observation point position being less than or equal to the second threshold and the similarity between the third image frame and the fourth image frame being less than the first threshold.
  11. The apparatus according to any one of claims 8 to 10, characterized in that the processing unit is specifically configured to:
    divide each of the first image frame and the second image frame into a plurality of image blocks, obtaining a plurality of first image blocks corresponding to the first image frame and a plurality of second image blocks corresponding to the second image frame, wherein the plurality of first image blocks and the plurality of second image blocks are in one-to-one correspondence;
    separately compute the similarity between corresponding image blocks among the plurality of first image blocks and the plurality of second image blocks, obtaining a plurality of similarity values; and
    determine a target similarity among the plurality of similarity values as the similarity between the first image frame and the second image frame, wherein the target similarity is the smallest of the plurality of similarity values.
  12. The apparatus according to any one of claims 8 to 11, characterized in that the first threshold is determined based on a third threshold, the third threshold being a preset fixed value;
    wherein, if a third target image frame was obtained by rendering, the first threshold is the same as the third threshold, the third target image frame preceding the first target image frame and the rendering mode of the third target image frame having been determined based on the similarity between image frames; and
    if the third target image frame was obtained by reusing an image frame, the first threshold is the difference between the third threshold and a fourth threshold, the fourth threshold being a preset fixed value.
  13. The apparatus according to any one of claims 8 to 12, characterized in that the processing unit is further configured to:
    determine a target duration, wherein the target duration is the difference between the display duration of two image frames and the time taken to compute the first target image frame; and
    suspend the rendering thread, wherein the duration for which the rendering thread is suspended is the target duration, and the rendering thread is used to render image frames based on rendering instructions.
  14. The apparatus according to any one of claims 8 to 13, characterized in that the processing unit is specifically configured to:
    downscale the first image frame and the second image frame to obtain a downscaled first image frame and a downscaled second image frame; and
    compute the similarity between the downscaled first image frame and the downscaled second image frame as the similarity between the first image frame and the second image frame.
  15. An electronic device, characterized by comprising a memory and a processor, wherein the memory stores code and the processor is configured to execute the code, and when the code is executed, the electronic device performs the method according to any one of claims 1 to 7.
  16. A computer-readable storage medium, characterized in that the computer-readable storage medium stores instructions which, when executed by a computer, cause the computer to implement the method according to any one of claims 1 to 7.
  17. A computer program product, characterized in that the computer program product stores instructions which, when executed by a computer, cause the computer to implement the method according to any one of claims 1 to 7.
PCT/CN2022/133959 2021-11-26 2022-11-24 Image frame rendering method and related apparatus WO2023093792A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111424170.9 2021-11-26
CN202111424170.9A CN116173496A (zh) 2021-11-26 Image frame rendering method and related apparatus

Publications (1)

Publication Number Publication Date
WO2023093792A1 true WO2023093792A1 (zh) 2023-06-01

Family

ID=86435049

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/133959 WO2023093792A1 (zh) 2021-11-26 2022-11-24 Image frame rendering method and related apparatus

Country Status (2)

Country Link
CN (1) CN116173496A (zh)
WO (1) WO2023093792A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117687495A * 2024-02-04 2024-03-12 Honor Device Co., Ltd. Data acquisition method, training method, and electronic device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170330496A1 (en) * 2016-05-16 2017-11-16 Unity IPR ApS System and method for rendering images in virtual reality and mixed reality devices
US10537799B1 (en) * 2018-03-23 2020-01-21 Electronic Arts Inc. User interface rendering and post processing during video game streaming
CN111724293A * 2019-03-22 2020-09-29 Huawei Technologies Co., Ltd. Image rendering method and apparatus, and electronic device
CN110062176A * 2019-04-12 2019-07-26 Beijing ByteDance Network Technology Co., Ltd. Method and apparatus for generating video, electronic device, and computer-readable storage medium
CN111476273A * 2020-03-11 2020-07-31 Xi'an Wanxiang Electronics Technology Co., Ltd. Image processing method and apparatus
CN112422873A * 2020-11-30 2021-02-26 OPPO (Chongqing) Intelligent Technology Co., Ltd. Frame interpolation method and apparatus, electronic device, and storage medium
CN113473181A * 2021-09-03 2021-10-01 Beijing SenseTime Technology Development Co., Ltd. Video processing method and apparatus, computer-readable storage medium, and computer device

Also Published As

Publication number Publication date
CN116173496A (zh) 2023-05-30

Similar Documents

Publication Publication Date Title
US11321906B2 (en) Asynchronous time and space warp with determination of region of interest
US20140176591A1 (en) Low-latency fusing of color image data
US10242710B2 (en) Automatic cinemagraph
KR102441514B1 2022-09-07 Hybrid streaming
US9161012B2 (en) Video compression using virtual skeleton
US11211034B2 (en) Display rendering
WO2023093792A1 (zh) 2023-06-01 Image frame rendering method and related apparatus
CN112884908A 2021-06-01 Augmented-reality-based display method, device, storage medium, and program product
US10237563B2 (en) System and method for controlling video encoding using content information
CN115176455A 2022-10-11 Power-efficient dynamic electronic image stabilization
WO2022218042A1 (zh) 2022-10-20 Video processing method and apparatus, video player, electronic device, and readable medium
CN114570020A 2022-06-03 Data processing method and system
US11810524B2 (en) Virtual reality display device and control method thereof
CN112565883A 2021-03-26 Video rendering processing system and computer device for virtual reality scenes
WO2023133082A1 (en) Resilient rendering for augmented-reality devices
WO2021249562A1 (zh) 2021-12-16 Information transmission method, related device, and system
TWM630947U 2022-08-21 Stereoscopic image playback device
CN110860084A 2020-03-06 Virtual picture processing method and apparatus
TWI817335B 2023-10-01 Stereoscopic image playback device and method for generating stereoscopic images thereof
US20230412724A1 (en) Controlling an Augmented Call Based on User Gaze
US20220326527A1 (en) Display System Optimization
CN117830497A 2024-04-05 Method and system for intelligently allocating 3D rendering power consumption resources
KR20240048207A 2024-04-15 Method and apparatus for video streaming on an extended reality device based on prediction of user context information
US20190114825A1 (en) Light fields as better backgrounds in rendering
Ikkala Scalable Parallel Path Tracing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22897875

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2022897875

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2022897875

Country of ref document: EP

Effective date: 20240422