CN115398907A - Image frame prediction method and electronic equipment

Info

Publication number
CN115398907A
Authority
China
Legal status
Pending
Application number
CN202180026284.XA
Other languages
Chinese (zh)
Inventor
李煜
赵铎
胡笑鸣
黄开兴
蒋铭辉
陈健
周越海
王亮
高三山
季美辰
朱欢欢
王军
沈勇武
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority claimed from CN202011493948.7A (published as CN114708289A)
Application filed by Huawei Technologies Co Ltd
Publication of CN115398907A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/503: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction

Abstract

In this method, the electronic device can, according to a drawing instruction of an Nth drawing frame, draw a first moving object in a first memory space and a first static object in a second memory space, and, in the same manner, obtain a second moving object and a second static object from a drawing instruction of a subsequent drawing frame. The electronic device then predicts the moving object of the (N+3)th predicted frame from the first moving object and the second moving object, and predicts the static object of the (N+3)th predicted frame from the first static object and the second static object. Finally, the electronic device synthesizes the (N+3)th predicted frame from the predicted moving object and static object. By implementing the technical solution provided in this application, the electronic device can predict image frames more accurately and use the predicted image frames to raise the frame rate of an application's video playback, thereby improving the fluency of the video interface.

Description

Image frame prediction method and electronic equipment
This application claims priority to: Chinese patent application No. 202011069443.8, entitled "A frame prediction method, electronic device and computer-readable storage medium", filed with the Chinese Patent Office on September 30, 2020; Chinese patent application No. 202011063375.4, entitled "Image frame generation method and electronic device", filed with the Chinese Patent Office on September 30, 2020; Chinese patent application No. 202011197968.X, entitled "A method for predicting image frames and electronic device", filed with the Chinese Patent Office on October 31, 2020; Chinese patent application No. 202011377449.1, entitled "A method for predicting image frames and an electronic device", filed with the Chinese Patent Office on November 30, 2020; Chinese patent application No. 202011377306.0, entitled "Image frame generation method and electronic device", filed with the Chinese Patent Office on November 30, 2020; Chinese patent application No. 202011493948.7, entitled "A method for predicting image frames and electronic device", filed with the Chinese Patent Office on December 16, 2020; and Chinese patent application No. 202011629171.2, entitled "A method for predicting image frames and electronic device", filed with the Chinese Patent Office on December 30, 2020. The entire contents of the above applications are incorporated herein by reference.
Technical Field
The present application relates to the field of electronic technologies and image processing, and in particular, to a method for image frame prediction and an electronic device.
Background
The video interface displayed by an electronic device (a video playing interface for a television show or a movie, a game picture, and the like) is in essence a sequence of consecutive pictures. Taking a game picture as an example, the higher the frame rate of the game picture, the smoother the picture displayed by the electronic device and the better the user's visual experience. For game pictures that must be rendered in real time, however, a higher frame rate means that the application (a video application, a game application, and the like) must draw and render more image frames (drawing frames for short), which increases the power consumption of the electronic device. How to improve the fluency of the video interface displayed by the electronic device while saving its power consumption is therefore an urgent problem to be solved.
Disclosure of Invention
This application provides an image frame prediction method and an electronic device that can improve the fluency of the video interface displayed by the electronic device while saving the electronic device's power consumption.
In a first aspect, this application provides an image frame prediction method, which may include: when drawing a first drawing frame, the electronic device determines, according to a drawing instruction of a first drawing object, that the spatial information of the first drawing object changes, determines, according to a drawing instruction of a second drawing object, that the spatial information of the second drawing object does not change, writes the color data of the first drawing object into a first color attachment, and writes the color data of the second drawing object into a second color attachment; when drawing a second drawing frame, the electronic device determines, according to a drawing instruction of a third drawing object, that the spatial information of the third drawing object changes, determines, according to a drawing instruction of a fourth drawing object, that the spatial information of the fourth drawing object does not change, writes the color data of the third drawing object into a third color attachment, and writes the color data of the fourth drawing object into a fourth color attachment; the electronic device generates a fifth color attachment of a first predicted frame according to the first color attachment and the third color attachment, and generates a sixth color attachment of the first predicted frame according to the second color attachment and the fourth color attachment; and the electronic device synthesizes the fifth color attachment and the sixth color attachment into the first predicted frame.
With the method provided in the first aspect, the electronic device can determine from a drawing object's drawing instruction whether the object's spatial information changes; that is, it can determine from the drawing instruction whether the drawing object is a moving object. If the drawing instruction indicates that the spatial information changes, the drawing object is a moving object; if it does not, the drawing object is a static object. The electronic device writes the color data of moving objects into the first color attachment and the color data of static objects into the second color attachment. In this way, the color data of the moving objects and of the static objects in the first drawing frame are stored separately, and likewise for the second drawing frame. The electronic device can then predict the moving objects of the first predicted frame from the moving objects of the first and second drawing frames, and the static objects of the first predicted frame from the static objects of the first and second drawing frames. Because the electronic device can display predicted frames derived from drawn frames, the frame rate can be raised, improving the fluency of the video interface displayed by the electronic device.
With reference to the first aspect, in one possible implementation manner, the electronic device generates the fifth color attachment of the first predicted frame according to the first color attachment and the third color attachment as follows: the electronic device determines a first motion vector of the third color attachment according to the first color attachment and the third color attachment; and the electronic device generates the fifth color attachment of the first predicted frame according to the third color attachment and the first motion vector.
In the above implementation, the electronic device calculates the motion vectors of moving objects using only the color data of moving objects, so these motion vectors can be calculated more accurately. In particular, the electronic device never mistakes the motion vector of a moving object for that of a static object, so the moving objects in the predicted frame are more accurate.
With reference to the first aspect, in a possible implementation manner, the electronic device determines the first motion vector of the third color attachment according to the first color attachment and the third color attachment as follows: the electronic device divides the third color attachment into Q pixel blocks and takes a first pixel block from the Q pixel blocks; the electronic device determines a second pixel block in the first color attachment that matches the first pixel block; the electronic device obtains the motion vector of the first pixel block from the displacement from the second pixel block to the first pixel block; and the electronic device determines the first motion vector of the third color attachment from the motion vectors of the pixel blocks. Following these steps, the electronic device can determine the motion vectors of all Q pixel blocks of the third color attachment. Each pixel block contains f × f (for example, 16 × 16) pixels.
In the above implementation, the electronic device calculates motion vectors for the moving-object color attachment block by block, without calculating a motion vector for every individual pixel. This reduces the amount of computation and thereby the power consumption of the electronic device.
With reference to the first aspect, in a possible implementation manner, determining the second pixel block in the first color attachment that matches the first pixel block specifically includes: the electronic device determines a plurality of candidate pixel blocks in the first color attachment from a first pixel point in the first pixel block, and calculates, for each candidate pixel block, the difference between its color values and those of the first pixel block; the electronic device then takes as the second pixel block the candidate pixel block whose color-value difference from the first pixel block is smallest. In this way, the electronic device can find the matching pixel block of each pixel block more accurately, and can therefore calculate the motion vector of each pixel block more accurately.
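As an illustration of this matching step, the following Python/NumPy sketch searches a small window around one f × f block of the moving-object attachment of the second drawing frame and selects the block of the first frame's attachment with the smallest sum of absolute color differences. The function name, the search radius, and the use of a sum of absolute differences as the color-value difference are illustrative assumptions, not the claimed implementation.

    import numpy as np

    def match_block(prev_att, cur_att, bx, by, f=16, search=8):
        """Return the motion vector (dx, dy) of the f x f block of cur_att whose
        top-left corner is (bx, by): the displacement from its best-matching
        block in prev_att to the block itself. Attachments: H x W x C floats."""
        h, w = cur_att.shape[:2]
        block = cur_att[by:by + f, bx:bx + f]
        best_sad, best_off = np.inf, (0, 0)
        for dy in range(-search, search + 1):          # candidate blocks around
            for dx in range(-search, search + 1):      # the block's own position
                x, y = bx + dx, by + dy
                if 0 <= x <= w - f and 0 <= y <= h - f:
                    cand = prev_att[y:y + f, x:x + f]
                    sad = np.abs(cand - block).sum()   # difference of color values
                    if sad < best_sad:
                        best_sad, best_off = sad, (dx, dy)
        # The matching block sat at offset best_off in the previous frame, so the
        # block moved by the opposite displacement to reach its current position.
        return -best_off[0], -best_off[1]

Repeating this for each of the Q blocks yields the first motion vector of the third color attachment.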
With reference to the first aspect, in a possible implementation manner, generating the fifth color attachment of the first predicted frame according to the third color attachment and the first motion vector specifically includes: the electronic device determines the motion vector of the fifth color attachment from the first motion vector, and generates the fifth color attachment from the third color attachment and the motion vector of the fifth color attachment. The motion vector of the fifth color attachment is K times the first motion vector, where K is greater than 0 and less than 1.
With reference to the first aspect, in one possible implementation manner, K is equal to 0.5. Objects then move at a constant speed from one image frame to the next, which simplifies the electronic device's calculations and gives the user a smoother viewing experience.
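To make the role of K concrete, the following sketch shifts each block of the most recent moving-object attachment by K times its motion vector to form the predicted attachment; with equally spaced frames and K = 0.5, the predicted frame lies halfway along each block's motion. The per-block shift and the handling of frame borders are illustrative assumptions.

    import numpy as np

    def extrapolate_attachment(cur_att, block_mvs, f=16, k=0.5):
        """Build a predicted color attachment by moving every f x f block of
        cur_att by k times its motion vector (block_mvs: (H//f) x (W//f) x 2)."""
        h, w = cur_att.shape[:2]
        out = np.zeros_like(cur_att)
        for by in range(0, h - f + 1, f):
            for bx in range(0, w - f + 1, f):
                dx, dy = block_mvs[by // f, bx // f]
                nx = int(round(bx + k * dx))           # scaled displacement
                ny = int(round(by + k * dy))
                nx = min(max(nx, 0), w - f)            # clamp to the frame
                ny = min(max(ny, 0), h - f)
                out[ny:ny + f, nx:nx + f] = cur_att[by:by + f, bx:bx + f]
        return out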
With reference to the first aspect, in one possible implementation manner, the electronic device generates the sixth color attachment of the first predicted frame according to the second color attachment and the fourth color attachment as follows: the electronic device determines a second motion vector of the fourth color attachment according to the second color attachment and the fourth color attachment; and the electronic device generates the sixth color attachment of the first predicted frame according to the fourth color attachment and the second motion vector.
In the above implementation, the electronic device calculates the motion vectors of static objects using only the color data of static objects, so these motion vectors can be calculated more accurately. In particular, the electronic device never mistakes the motion vector of a static object for that of a moving object, so the static objects in the predicted frame are more accurate.
With reference to the first aspect, in a possible implementation manner, the electronic device determines the second motion vector of the fourth color attachment according to the second color attachment and the fourth color attachment as follows: the electronic device divides the fourth color attachment into Q pixel blocks and takes a third pixel block from the fourth color attachment; the electronic device calculates a first position of the third pixel block in the second color attachment; the electronic device determines the motion vector of the third pixel block from the first position and the block's second position in the fourth color attachment; and the electronic device determines the second motion vector of the fourth color attachment from the motion vectors of the pixel blocks. Following these steps, the electronic device can determine a motion vector for each of the Q pixel blocks of the fourth color attachment.
In the above implementation, the electronic device calculates motion vectors for the static-object color attachment block by block, without calculating a motion vector for every individual pixel. This reduces the amount of computation and thereby the power consumption of the electronic device.
With reference to the first aspect, in a possible implementation manner, calculating the first position of the third pixel block in the second color attachment specifically includes: the electronic device obtains a first matrix from a drawing instruction of the first drawing frame and a second matrix from a drawing instruction of the second drawing frame, where the first matrix records the camera position and rotation angle information of the first drawing frame and the second matrix records the camera position and rotation angle information of the second drawing frame; the electronic device then calculates the first position of the third pixel block in the second color attachment from the first matrix, the second matrix, and the depth value of the third pixel block.
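One standard way to realize this calculation, sketched below under the assumption that the first and second matrices are the two frames' combined camera (view-projection) matrices with an OpenGL-style clip space, is to unproject the block center from the second frame's screen space to world space using the block's depth value, and then project that world point with the first frame's matrix; for a static object the resulting displacement is due to camera movement alone. The coordinate conventions (y direction, [0, 1] depth range) are assumptions.

    import numpy as np

    def reproject_block(x, y, depth, vp2, vp1, width, height):
        """Return the motion vector of a static block: its position in the second
        drawing frame minus its reprojected position in the first drawing frame.
        vp2, vp1: 4x4 view-projection matrices (clip = VP @ world, assumed)."""
        # Pixel coordinates and [0, 1] depth -> normalized device coordinates.
        ndc = np.array([2.0 * x / width - 1.0,
                        2.0 * y / height - 1.0,
                        2.0 * depth - 1.0,
                        1.0])
        world = np.linalg.inv(vp2) @ ndc               # unproject from frame 2
        world /= world[3]
        clip1 = vp1 @ world                            # project into frame 1
        ndc1 = clip1[:3] / clip1[3]
        x1 = (ndc1[0] + 1.0) * 0.5 * width             # frame-1 pixel position:
        y1 = (ndc1[1] + 1.0) * 0.5 * height            # the block's first position
        return np.array([x - x1, y - y1])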
With reference to the first aspect, in a possible implementation manner, generating the sixth color attachment of the first predicted frame according to the fourth color attachment and the second motion vector specifically includes: the electronic device determines the motion vector of the sixth color attachment from the second motion vector, and generates the sixth color attachment from the fourth color attachment and the motion vector of the sixth color attachment. The motion vector of the sixth color attachment is K times the second motion vector, where K is greater than 0 and less than 1.
With reference to the first aspect, in one possible implementation manner, K is equal to 0.5. Objects then move at a constant speed from one image frame to the next, which simplifies the electronic device's calculations and gives the user a smoother viewing experience.
With reference to the first aspect, in a possible implementation manner, the drawing instruction of the first drawing object includes a draw execution instruction of the first drawing object and a drawing state setting instruction of the first drawing object. The draw execution instruction of the first drawing object triggers the electronic device to draw and render the drawing state data of the first drawing object and generate a drawing result; the drawing state setting instruction of the first drawing object sets the drawing state data on which the draw execution instruction of the first drawing object depends; the drawing state data of the first drawing object includes the vertex information data, vertex index, texture information, and vertex information cache index of the first drawing object.
With reference to the first aspect, in a possible implementation manner, determining, according to the drawing instruction of the first drawing object, that the spatial information of the first drawing object changes includes: the electronic device determines that a transformation matrix parameter is present in the drawing instruction of the first drawing object and that it differs from the transformation matrix parameter previously recorded for the first drawing object; the transformation matrix describes the mapping from a drawing object's local coordinate system to the world coordinate system.
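A minimal sketch of this classification, assuming the interception layer can read the transformation-matrix parameter (if any) carried by each drawing instruction and keeps a cache of the last value seen per drawing object; the object identifier, the cache, and the tuple-of-floats representation of the matrix are all illustrative assumptions:

    # Last transformation-matrix parameter recorded per drawing object
    # (object IDs and the tuple-of-floats matrix representation are assumed).
    last_transform = {}

    def spatial_info_changed(object_id, transform_param):
        """Treat a drawing object as moving only if its drawing instruction
        carries a transformation-matrix parameter that differs from the one
        previously recorded for the same object."""
        if transform_param is None:
            return False                    # no parameter present: static
        previous = last_transform.get(object_id)
        last_transform[object_id] = transform_param
        # A changed local-to-world mapping means the spatial information changed.
        return previous is not None and transform_param != previous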
With reference to the first aspect, in a possible implementation manner, the drawing instruction of the second drawing object includes a draw execution instruction of the second drawing object and a drawing state setting instruction of the second drawing object. The draw execution instruction of the second drawing object triggers the electronic device to draw and render the drawing state data of the second drawing object and generate a drawing result; the drawing state setting instruction of the second drawing object sets the drawing state data on which the draw execution instruction of the second drawing object depends; the drawing state data of the second drawing object includes the vertex information data, vertex index, texture information, and vertex information cache index of the second drawing object.
With reference to the first aspect, in a possible implementation manner, determining, according to the drawing instruction of the second drawing object, that the spatial information of the second drawing object does not change includes: the electronic device determines that no transformation matrix parameter is present in the drawing instruction of the second drawing object, or that the transformation matrix parameter present in the drawing instruction is the same as the one previously recorded for the second drawing object; the transformation matrix describes the mapping from a drawing object's local coordinate system to the world coordinate system.
With reference to the first aspect, in a possible implementation manner, the drawing instruction of the third drawing object includes a draw execution instruction of the third drawing object and a drawing state setting instruction of the third drawing object. The draw execution instruction of the third drawing object triggers the electronic device to draw and render the drawing state data of the third drawing object and generate a drawing result; the drawing state setting instruction of the third drawing object sets the drawing state data on which the draw execution instruction of the third drawing object depends; the drawing state data of the third drawing object includes the vertex information data, vertex index, texture information, and vertex information cache index of the third drawing object.
With reference to the first aspect, in a possible implementation manner, determining, according to the drawing instruction of the third drawing object, that the spatial information of the third drawing object changes includes: the electronic device determines that a transformation matrix parameter is present in the drawing instruction of the third drawing object and that it differs from the transformation matrix parameter previously recorded for the third drawing object; the transformation matrix describes the mapping from a drawing object's local coordinate system to the world coordinate system.
With reference to the first aspect, in a possible implementation manner, the drawing instruction of the fourth drawing object includes a draw execution instruction of the fourth drawing object and a drawing state setting instruction of the fourth drawing object. The draw execution instruction of the fourth drawing object triggers the electronic device to draw and render the drawing state data of the fourth drawing object and generate a drawing result; the drawing state setting instruction of the fourth drawing object sets the drawing state data on which the draw execution instruction of the fourth drawing object depends; the drawing state data of the fourth drawing object includes the vertex information data, vertex index, texture information, and vertex information cache index of the fourth drawing object.
With reference to the first aspect, in a possible implementation manner, determining, according to the drawing instruction of the fourth drawing object, that the spatial information of the fourth drawing object does not change includes: the electronic device determines that no transformation matrix parameter is present in the drawing instruction of the fourth drawing object, or that the transformation matrix parameter present in the drawing instruction is the same as the one previously recorded for the fourth drawing object; the transformation matrix describes the mapping from a drawing object's local coordinate system to the world coordinate system.
With reference to the first aspect, in a possible implementation manner, before the electronic device, when drawing the first drawing frame, determines that the spatial information of the first drawing object changes, determines that the spatial information of the second drawing object does not change, writes the color data of the first drawing object into the first color attachment, and writes the color data of the second drawing object into the second color attachment, the method further includes: the electronic device creates a first memory space, a second memory space, a third memory space, a fourth memory space, a fifth memory space, a sixth memory space, and a seventh memory space, where the first through sixth memory spaces are used to store the first through sixth color attachments, respectively, and the seventh memory space is used to store the first predicted frame.
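As a sketch of this buffer layout (the NumPy representation and the RGBA format are assumptions; in a real renderer these would be GPU framebuffer attachments):

    import numpy as np

    def create_memory_spaces(width, height, channels=4):
        """Allocate the seven memory spaces as named buffers: spaces 1-4 hold the
        moving/static color attachments of the two drawing frames, spaces 5-6 the
        predicted frame's attachments, and space 7 the composited frame."""
        names = ["moving_frame1", "static_frame1",   # first, second attachments
                 "moving_frame2", "static_frame2",   # third, fourth attachments
                 "pred_moving", "pred_static",       # fifth, sixth attachments
                 "predicted_frame"]                  # seventh memory space
        return {n: np.zeros((height, width, channels), np.float32) for n in names}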
With reference to the first aspect, in a possible implementation manner, after the electronic device, when drawing the first drawing frame, determines that the spatial information of the first drawing object changes, determines that the spatial information of the second drawing object does not change, writes the color data of the first drawing object into the first color attachment, and writes the color data of the second drawing object into the second color attachment, the method further includes: the electronic device synthesizes the first color attachment and the second color attachment into the first drawing frame in the seventh memory space.
With reference to the first aspect, in a possible implementation manner, after the electronic device, when drawing the second drawing frame, determines that the spatial information of the third drawing object changes, determines that the spatial information of the fourth drawing object does not change, writes the color data of the third drawing object into the third color attachment, and writes the color data of the fourth drawing object into the fourth color attachment, the method further includes: the electronic device synthesizes the third color attachment and the fourth color attachment into the second drawing frame in the seventh memory space.
With reference to the first aspect, in a possible implementation manner, synthesizing the first color attachment and the second color attachment into the first drawing frame in the seventh memory space specifically includes: the electronic device synthesizes the first color attachment and the second color attachment into the first drawing frame in the seventh memory space according to a first depth attachment and a second depth attachment, where the first depth attachment is used for writing the depth data of the first drawing object and the second depth attachment is used for writing the depth data of the second drawing object.
With reference to the first aspect, in a possible implementation manner, synthesizing the third color attachment and the fourth color attachment into the second drawing frame in the seventh memory space specifically includes: the electronic device synthesizes the third color attachment and the fourth color attachment into the second drawing frame in the seventh memory space according to a third depth attachment and a fourth depth attachment, where the third depth attachment is used for writing the depth data of the third drawing object and the fourth depth attachment is used for writing the depth data of the fourth drawing object.
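The synthesis amounts to a per-pixel depth test between the two layers: wherever one layer's depth value is smaller (nearer the camera), its color is kept. A NumPy sketch under assumed array shapes:

    import numpy as np

    def composite(color_a, depth_a, color_b, depth_b):
        """Merge two color attachments (H x W x C) into one frame using their
        depth attachments (H x W): the nearer pixel wins, as in a depth test."""
        nearer_a = (depth_a <= depth_b)[..., None]   # broadcast over channels
        return np.where(nearer_a, color_a, color_b)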
In a second aspect, an electronic device is provided, which may include one or more processors and a memory. The memory is coupled to the one or more processors and is configured to store computer program code comprising computer instructions; the one or more processors invoke the computer instructions to cause the electronic device to perform the method of any possible implementation manner of the first aspect.
In a third aspect, an electronic device is provided, including one or more processors (CPU), a graphics processor (GPU), a memory, and a display screen. The memory is coupled to the one or more processors, and the CPU is coupled to the GPU; wherein:
the memory may be used to store computer program code comprising computer instructions. The CPU may be configured to: when the first drawing frame is drawn, determine, according to the drawing instruction of the first drawing object, that the spatial information of the first drawing object changes, determine, according to the drawing instruction of the second drawing object, that the spatial information of the second drawing object does not change, and instruct the GPU to write the color data of the first drawing object into the first color attachment and the color data of the second drawing object into the second color attachment, where the drawing instruction of the first drawing object indicates that the spatial information of the first drawing object changes and the drawing instruction of the second drawing object does not indicate that the spatial information of the second drawing object changes; and, when the second drawing frame is drawn, determine, according to the drawing instruction of the third drawing object, that the spatial information of the third drawing object changes, determine, according to the drawing instruction of the fourth drawing object, that the spatial information of the fourth drawing object does not change, and instruct the GPU to write the color data of the third drawing object into the third color attachment and the color data of the fourth drawing object into the fourth color attachment, where the drawing instruction of the third drawing object indicates that the spatial information of the third drawing object changes and the drawing instruction of the fourth drawing object does not indicate that the spatial information of the fourth drawing object changes.
The GPU may be configured to: write the color data of the first drawing object into the first color attachment and the color data of the second drawing object into the second color attachment; write the color data of the third drawing object into the third color attachment and the color data of the fourth drawing object into the fourth color attachment; generate the fifth color attachment of the first predicted frame according to the first color attachment and the third color attachment, and generate the sixth color attachment of the first predicted frame according to the second color attachment and the fourth color attachment; and synthesize the fifth color attachment and the sixth color attachment into the first predicted frame.
The display screen may be configured to display the first drawing frame, the second drawing frame, and the first predicted frame.
With the electronic device provided in the third aspect, the electronic device can determine from a drawing object's drawing instruction whether the object's spatial information changes; that is, whether the drawing object is a moving object. If the drawing instruction indicates that the spatial information changes, the drawing object is a moving object; if it does not, the drawing object is a static object. The electronic device writes the color data of moving objects into the first color attachment and the color data of static objects into the second color attachment. In this way, the color data of the moving objects and of the static objects in the first drawing frame are stored separately, and likewise for the second drawing frame. The electronic device can then predict the moving objects of the first predicted frame from the moving objects of the first and second drawing frames, and the static objects of the first predicted frame from the static objects of the first and second drawing frames. Because the electronic device can display predicted frames derived from drawn frames, the frame rate can be raised, improving the fluency of the video interface displayed by the electronic device.
With reference to the third aspect, in a possible implementation manner, the GPU may be further configured to: determine a first motion vector of the third color attachment according to the first color attachment and the third color attachment; and generate the fifth color attachment of the first predicted frame according to the third color attachment and the first motion vector.
In the above implementation, the GPU in the electronic device calculates the motion vectors of moving objects using only the color data of moving objects, so these motion vectors can be calculated more accurately. In particular, the GPU never mistakes the motion vector of a moving object for that of a static object, so the moving objects in the predicted frame are more accurate.
With reference to the third aspect, in a possible implementation manner, the GPU may be further configured to: divide the third color attachment into Q pixel blocks and take a first pixel block from the Q pixel blocks; determine a second pixel block in the first color attachment that matches the first pixel block; obtain the motion vector of the first pixel block from the displacement from the second pixel block to the first pixel block; and determine the first motion vector of the third color attachment from the motion vectors of the pixel blocks.
In the above implementation, the GPU calculates motion vectors for the moving-object color attachment block by block, without calculating a motion vector for every individual pixel. This reduces the amount of computation and thereby the power consumption of the GPU.
With reference to the third aspect, in a possible implementation manner, the GPU may be further configured to: determine a plurality of candidate pixel blocks in the first color attachment from a first pixel point in the first pixel block, and calculate, for each candidate pixel block, the difference between its color values and those of the first pixel block; and take as the second pixel block the candidate pixel block whose color-value difference from the first pixel block is smallest. In this way, the GPU of the electronic device can find the matching pixel block of each pixel block more accurately, and can therefore calculate the motion vector of each pixel block more accurately.
With reference to the third aspect, in a possible implementation manner, the GPU may be further configured to: determine a second motion vector of the fourth color attachment according to the second color attachment and the fourth color attachment; and generate the sixth color attachment of the first predicted frame according to the fourth color attachment and the second motion vector.
In the above implementation, the GPU of the electronic device calculates the motion vectors of static objects using only the color data of static objects, so these motion vectors can be calculated more accurately. In particular, the GPU never mistakes the motion vector of a static object for that of a moving object, so the static objects in the predicted frame are more accurate.
With reference to the third aspect, in a possible implementation manner, the GPU may be further configured to: divide the fourth color attachment into Q pixel blocks and take a third pixel block from the fourth color attachment; calculate a first position of the third pixel block in the second color attachment; determine the motion vector of the third pixel block from the first position and the block's second position in the fourth color attachment; and determine the second motion vector of the fourth color attachment from the motion vectors of the pixel blocks.
In the above implementation, the GPU calculates motion vectors for the static-object color attachment block by block, without calculating a motion vector for every individual pixel. This reduces the amount of computation and thereby the power consumption of the GPU.
With reference to the third aspect, in a possible implementation manner, the electronic device may be further configured to: obtain a first matrix from a drawing instruction of the first drawing frame and a second matrix from a drawing instruction of the second drawing frame, where the first matrix records the camera position and rotation angle information of the first drawing frame and the second matrix records the camera position and rotation angle information of the second drawing frame; and calculate the first position of the third pixel block in the second color attachment from the first matrix, the second matrix, and the depth value of the third pixel block.
With reference to the third aspect, in a possible implementation manner, the GPU may be further configured to: create a first memory space, a second memory space, a third memory space, a fourth memory space, a fifth memory space, a sixth memory space, and a seventh memory space, where the first through sixth memory spaces are used to store the first through sixth color attachments, respectively, and the seventh memory space is used to store the first predicted frame.
With reference to the third aspect, in a possible implementation manner, the GPU may be further configured to synthesize the first color attachment and the second color attachment into the first drawing frame in the seventh memory space.
With reference to the third aspect, in a possible implementation manner, the GPU may be further configured to synthesize the first color attachment and the second color attachment into the first drawing frame in the seventh memory space according to the first depth attachment and the second depth attachment, where the first depth attachment is used for writing the depth data of the first drawing object and the second depth attachment is used for writing the depth data of the second drawing object.
With reference to the third aspect, in a possible implementation manner, the GPU may be further configured to synthesize the third color attachment and the fourth color attachment into the second drawing frame in the seventh memory space.
With reference to the third aspect, in a possible implementation manner, the GPU may be further configured to synthesize the third color attachment and the fourth color attachment into the second drawing frame in the seventh memory space according to the third depth attachment and the fourth depth attachment, where the third depth attachment is used for writing the depth data of the third drawing object and the fourth depth attachment is used for writing the depth data of the fourth drawing object.
With reference to the third aspect, in a possible implementation manner, the drawing instruction of the first drawing object includes a draw execution instruction of the first drawing object and a drawing state setting instruction of the first drawing object. The draw execution instruction of the first drawing object triggers the electronic device to draw and render the drawing state data of the first drawing object and generate a drawing result; the drawing state setting instruction of the first drawing object sets the drawing state data on which the draw execution instruction of the first drawing object depends; the drawing state data of the first drawing object includes the vertex information data, vertex index, texture information, and vertex information cache index of the first drawing object.
With reference to the third aspect, in a possible implementation manner, in determining, according to the drawing instruction of the first drawing object, that the spatial information of the first drawing object changes, the CPU may be specifically configured to: determine that a transformation matrix parameter is present in the drawing instruction of the first drawing object and that it differs from the transformation matrix parameter previously recorded for the first drawing object; the transformation matrix describes the mapping from a drawing object's local coordinate system to the world coordinate system.
With reference to the third aspect, in a possible implementation manner, the drawing instruction of the second drawing object includes a draw execution instruction of the second drawing object and a drawing state setting instruction of the second drawing object. The draw execution instruction of the second drawing object triggers the electronic device to draw and render the drawing state data of the second drawing object and generate a drawing result; the drawing state setting instruction of the second drawing object sets the drawing state data on which the draw execution instruction of the second drawing object depends; the drawing state data of the second drawing object includes the vertex information data, vertex index, texture information, and vertex information cache index of the second drawing object.
With reference to the third aspect, in a possible implementation manner, in determining, according to the drawing instruction of the second drawing object, that the spatial information of the second drawing object does not change, the CPU may be specifically configured to: determine that no transformation matrix parameter is present in the drawing instruction of the second drawing object, or that the transformation matrix parameter present in the drawing instruction is the same as the one previously recorded for the second drawing object; the transformation matrix describes the mapping from a drawing object's local coordinate system to the world coordinate system.
With reference to the third aspect, in a possible implementation manner, the drawing instruction of the third drawing object includes a draw execution instruction of the third drawing object and a drawing state setting instruction of the third drawing object. The draw execution instruction of the third drawing object triggers the electronic device to draw and render the drawing state data of the third drawing object and generate a drawing result; the drawing state setting instruction of the third drawing object sets the drawing state data on which the draw execution instruction of the third drawing object depends; the drawing state data of the third drawing object includes the vertex information data, vertex index, texture information, and vertex information cache index of the third drawing object.
With reference to the third aspect, in a possible implementation manner, in determining, according to the drawing instruction of the third drawing object, that the spatial information of the third drawing object changes, the CPU may be specifically configured to: determine that a transformation matrix parameter is present in the drawing instruction of the third drawing object and that it differs from the transformation matrix parameter previously recorded for the third drawing object; the transformation matrix describes the mapping from a drawing object's local coordinate system to the world coordinate system.
With reference to the third aspect, in a possible implementation manner, the drawing instruction of the fourth drawing object includes a draw execution instruction of the fourth drawing object and a drawing state setting instruction of the fourth drawing object. The draw execution instruction of the fourth drawing object triggers the electronic device to draw and render the drawing state data of the fourth drawing object and generate a drawing result; the drawing state setting instruction of the fourth drawing object sets the drawing state data on which the draw execution instruction of the fourth drawing object depends; the drawing state data of the fourth drawing object includes the vertex information data, vertex index, texture information, and vertex information cache index of the fourth drawing object.
With reference to the third aspect, in a possible implementation manner, in determining, according to the drawing instruction of the fourth drawing object, that the spatial information of the fourth drawing object does not change, the CPU may be specifically configured to: determine that no transformation matrix parameter is present in the drawing instruction of the fourth drawing object, or that the transformation matrix parameter present in the drawing instruction is the same as the one previously recorded for the fourth drawing object; the transformation matrix describes the mapping from a drawing object's local coordinate system to the world coordinate system.
In a fourth aspect, an image frame prediction apparatus is provided, which may include a drawing unit, a prediction unit, and a synthesis unit; wherein:
the drawing unit may be configured to: when the first drawing frame is drawn, determine, according to the drawing instruction of the first drawing object, that the spatial information of the first drawing object changes, determine, according to the drawing instruction of the second drawing object, that the spatial information of the second drawing object does not change, write the color data of the first drawing object into the first color attachment, and write the color data of the second drawing object into the second color attachment; and, when the second drawing frame is drawn, determine, according to the drawing instruction of the third drawing object, that the spatial information of the third drawing object changes, determine, according to the drawing instruction of the fourth drawing object, that the spatial information of the fourth drawing object does not change, write the color data of the third drawing object into the third color attachment, and write the color data of the fourth drawing object into the fourth color attachment.
The prediction unit may be configured to generate a fifth color attachment for the first predicted frame based on the first color attachment and the third color attachment, and generate a sixth color attachment for the first predicted frame based on the second color attachment and the fourth color attachment.
The synthesis unit may be configured to synthesize the fifth color attachment and the sixth color attachment into the first predicted frame.
With the image frame prediction apparatus provided in the fourth aspect, the apparatus can determine from a drawing object's drawing instruction whether the object's spatial information changes; that is, whether the drawing object is a moving object. If the drawing instruction indicates that the spatial information changes, the drawing object is a moving object; if it does not, the drawing object is a static object. The apparatus writes the color data of moving objects into the first color attachment and the color data of static objects into the second color attachment. In this way, the color data of the moving objects and of the static objects in the first drawing frame are stored separately, and likewise for the second drawing frame. The apparatus can then predict the moving objects of the first predicted frame from the moving objects of the first and second drawing frames, and the static objects of the first predicted frame from the static objects of the first and second drawing frames. Because the apparatus can use predicted frames derived from drawn frames, the frame rate can be raised, improving the fluency of the displayed video interface.
With reference to the fourth aspect, in a possible implementation manner, the prediction unit may be further configured to: determine a first motion vector of the third color attachment according to the first color attachment and the third color attachment; and generate the fifth color attachment of the first predicted frame according to the third color attachment and the first motion vector.
In the above implementation, the image frame prediction apparatus calculates the motion vectors of moving objects using only the color data of moving objects, so these motion vectors can be calculated more accurately. In particular, the apparatus never mistakes the motion vector of a moving object for that of a static object, so the moving objects in the predicted frame are more accurate.
With reference to the fourth aspect, in a possible implementation manner, the prediction unit may be further configured to: divide the third color attachment into Q pixel blocks and take a first pixel block from the Q pixel blocks; determine a second pixel block in the first color attachment that matches the first pixel block; obtain the motion vector of the first pixel block from the displacement from the second pixel block to the first pixel block; and determine the first motion vector of the third color attachment from the motion vectors of the pixel blocks. Following these steps, the prediction unit can determine the motion vectors of all Q pixel blocks of the third color attachment. Each pixel block contains f × f (for example, 16 × 16) pixels.
In the above implementation, the prediction unit calculates motion vectors for the moving-object color attachment block by block, without calculating a motion vector for every individual pixel. This reduces the amount of computation and thereby the power consumption of the image frame prediction apparatus.
With reference to the fourth aspect, in a possible implementation manner, the prediction unit may be further configured to: determine a plurality of candidate pixel blocks in the first color attachment from a first pixel point in the first pixel block, and calculate, for each candidate pixel block, the difference between its color values and those of the first pixel block; and take as the second pixel block the candidate pixel block whose color-value difference from the first pixel block is smallest. In this way, the prediction unit can find the matching pixel block of each pixel block more accurately, and can therefore calculate the motion vector of each pixel block more accurately.
With reference to the fourth aspect, in a possible implementation manner, the prediction unit may be further configured to: a motion vector of a fifth color accessory is determined from the first motion vector, and a fifth color accessory is generated from the motion vectors of the second color accessory and the fifth color accessory. The motion vector of the fifth color accessory is K times the first motion vector, K is greater than 0 and less than 1.
With reference to the fourth aspect, in one possible implementation manner, K is equal to 0.5. Thus, the object in each image frame moves at a constant speed.
With reference to the fourth aspect, in a possible implementation manner, the prediction unit may be further configured to: determining a second motion vector of a fourth color accessory according to the second color accessory and the fourth color accessory; a sixth color attachment for the first predicted frame is generated based on the fourth color attachment and the second motion vector.
In the above implementation, the prediction unit calculates the motion vector of the static object using only the color data of the static object, so that the motion vector of the static object can be calculated more accurately. And, the prediction unit calculates the motion vector of the static object based only on the color data of the static object. The prediction unit does not miscalculate the motion vector of a static object into the motion vector of a moving object. Thus, the static object in the predicted frame predicted by the prediction unit is more accurate.
With reference to the fourth aspect, in a possible implementation manner, the prediction unit may further be configured to: dividing the fourth color attachment into Q pixel blocks, and taking out a third pixel block from the fourth color attachment; calculating a first position of the third pixel block in the second color accessory; determining a motion vector of the third pixel block according to the first position and a second position in a fourth color accessory of the third pixel block; and determining a second motion vector of the fourth color accessory according to the motion vector of the third pixel block. According to steps in this implementation, the prediction unit may determine a motion vector for each of the Q pixel blocks of the fourth color accessory.
In the above implementation manner, the prediction unit divides the color attachment of the static object into blocks to calculate motion vectors, without needing to calculate a motion vector for each pixel point of the static object. This reduces the amount of calculation, thereby reducing the power consumption of the image frame prediction apparatus.
With reference to the fourth aspect, in a possible implementation manner, the prediction unit may be further configured to: acquire a first matrix from a drawing instruction of the first drawing frame and a second matrix from a drawing instruction of the second drawing frame, where the first matrix records rotation information of the camera position of the first drawing frame and the second matrix records rotation information of the camera position of the second drawing frame; and calculate the first position of the third pixel block in the second color attachment based on the first matrix, the second matrix, and a depth value of the third pixel block.
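The reprojection step can be sketched as follows, assuming the first and second matrices are 4x4 view-projection matrices (world space to clip space) and OpenGL-style depth values in [0, 1]; the patent only states that the matrices record camera-position rotation information, so this is one plausible realization rather than the claimed method:

```python
import numpy as np

def reproject(px, py, depth, vp_src, vp_dst, width, height):
    """Reproject pixel (px, py) with the given depth from the frame
    rendered with vp_src into the frame rendered with vp_dst."""
    # screen coordinates -> normalized device coordinates
    ndc = np.array([2.0 * px / width - 1.0,
                    2.0 * py / height - 1.0,
                    2.0 * depth - 1.0,
                    1.0])
    world = np.linalg.inv(vp_src) @ ndc   # back to world space
    world /= world[3]
    clip = vp_dst @ world                 # into the other frame
    ndc2 = clip[:3] / clip[3]
    return ((ndc2[0] + 1.0) * 0.5 * width,    # back to screen space
            (ndc2[1] + 1.0) * 0.5 * height)
```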
With reference to the fourth aspect, in a possible implementation manner, the prediction unit may be further configured to: determine a motion vector of a sixth color attachment from the second motion vector, and generate the sixth color attachment from the fourth color attachment and the motion vector of the sixth color attachment. The motion vector of the sixth color attachment is K times the second motion vector, where K is greater than 0 and less than 1.
With reference to the fourth aspect, in a possible implementation manner, the drawing instruction of the first drawing object includes an execution drawing instruction of the first drawing object and a drawing state setting instruction of the first drawing object, where the execution drawing instruction of the first drawing object is used to trigger the electronic device to render the drawing state data of the first drawing object and generate a drawing result; the drawing state setting instruction of the first drawing object is used to set the drawing state data on which the execution drawing instruction of the first drawing object depends; and the drawing state data of the first drawing object includes vertex information data, a vertex index, texture information, and a vertex information cache index of the first drawing object.

With reference to the fourth aspect, in a possible implementation manner, the drawing instruction of the second drawing object includes an execution drawing instruction of the second drawing object and a drawing state setting instruction of the second drawing object, where the execution drawing instruction of the second drawing object is used to trigger the electronic device to render the drawing state data of the second drawing object and generate a drawing result; the drawing state setting instruction of the second drawing object is used to set the drawing state data on which the execution drawing instruction of the second drawing object depends; and the drawing state data of the second drawing object includes vertex information data, a vertex index, texture information, and a vertex information cache index of the second drawing object.

With reference to the fourth aspect, in a possible implementation manner, the drawing instruction of the third drawing object includes an execution drawing instruction of the third drawing object and a drawing state setting instruction of the third drawing object, where the execution drawing instruction of the third drawing object is used to trigger the electronic device to render the drawing state data of the third drawing object and generate a drawing result; the drawing state setting instruction of the third drawing object is used to set the drawing state data on which the execution drawing instruction of the third drawing object depends; and the drawing state data of the third drawing object includes vertex information data, a vertex index, texture information, and a vertex information cache index of the third drawing object.

With reference to the fourth aspect, in a possible implementation manner, the drawing instruction of the fourth drawing object includes an execution drawing instruction of the fourth drawing object and a drawing state setting instruction of the fourth drawing object, where the execution drawing instruction of the fourth drawing object is used to trigger the electronic device to render the drawing state data of the fourth drawing object and generate a drawing result; the drawing state setting instruction of the fourth drawing object is used to set the drawing state data on which the execution drawing instruction of the fourth drawing object depends; and the drawing state data of the fourth drawing object includes vertex information data, a vertex index, texture information, and a vertex information cache index of the fourth drawing object.
With reference to the fourth aspect, in a possible implementation manner, the drawing unit may be further configured to: determine that a transfer matrix parameter exists in the drawing instruction of the first drawing object, and that this transfer matrix parameter is different from the transfer matrix parameter of the corresponding first drawing object; the transfer matrix is used to describe the mapping from the local coordinate system of a drawing object to the world coordinate system.
With reference to the fourth aspect, in a possible implementation manner, the drawing unit may be further configured to: determine that no transfer matrix parameter exists in the drawing instruction of the second drawing object, or that the transfer matrix parameter in the drawing instruction of the second drawing object is the same as the transfer matrix parameter of the corresponding second drawing object; the transfer matrix is used to describe the mapping from the local coordinate system of a drawing object to the world coordinate system.

With reference to the fourth aspect, in a possible implementation manner, the drawing unit may be further configured to: determine that a transfer matrix parameter exists in the drawing instruction of the third drawing object, and that this transfer matrix parameter is different from the transfer matrix parameter of the corresponding third drawing object; the transfer matrix is used to describe the mapping from the local coordinate system of a drawing object to the world coordinate system.

With reference to the fourth aspect, in a possible implementation manner, the drawing unit may be further configured to: determine that no transfer matrix parameter exists in the drawing instruction of the fourth drawing object, or that the transfer matrix parameter in the drawing instruction of the fourth drawing object is the same as the transfer matrix parameter of the corresponding fourth drawing object; the transfer matrix is used to describe the mapping from the local coordinate system of a drawing object to the world coordinate system.
With reference to the fourth aspect, in a possible implementation manner, the drawing unit may be further configured to: create a first memory space, a second memory space, a third memory space, a fourth memory space, a fifth memory space, a sixth memory space, and a seventh memory space; the first memory space is used for storing the first color attachment, the second memory space is used for storing the second color attachment, the third memory space is used for storing the third color attachment, the fourth memory space is used for storing the fourth color attachment, the fifth memory space is used for storing the fifth color attachment, the sixth memory space is used for storing the sixth color attachment, and the seventh memory space is used for storing the first predicted frame.
With reference to the fourth aspect, in a possible implementation manner, the synthesis unit may further be configured to: and synthesizing the first color attachment and the second color attachment into a first drawing frame in a seventh memory space.
With reference to the fourth aspect, in a possible implementation manner, the synthesis unit may be further configured to: synthesize the first color attachment and the second color attachment into the first drawing frame in the seventh memory space according to the first depth attachment and the second depth attachment; the first depth attachment is used for writing depth data of the first drawing object, and the second depth attachment is used for writing depth data of the second drawing object.
With reference to the fourth aspect, in a possible implementation manner, the synthesis unit may further be configured to: and synthesizing the third color attachment and the fourth color attachment into a second drawing frame in a seventh memory space.
With reference to the fourth aspect, in a possible implementation manner, the synthesis unit may further be configured to: synthesizing the third color attachment and the fourth color attachment into a second drawing frame in a seventh memory space according to the third depth attachment and the fourth depth attachment; the third depth attachment is for writing depth data of a third drawing object and the fourth depth attachment is for writing depth data of a fourth drawing object.
In a fifth aspect, the present application provides a method for image frame prediction, which may include: the electronic device determines a tenth moving object in a tenth drawing frame according to a tenth drawing instruction, and determines an eleventh moving object in an eleventh drawing frame according to an eleventh drawing instruction; the electronic device determines that the tenth moving object matches the eleventh moving object according to the attribute of the tenth moving object and the attribute of the eleventh moving object; and the electronic device determines a drawing result of a twelfth moving object in a tenth prediction frame according to the tenth moving object and the eleventh moving object.
By the method provided in this embodiment of the application, the electronic device can obtain a predicted frame from two drawn frames of the application program. This can improve the frame rate of the application's video interface, and thus the fluency with which the electronic device displays that video interface.
With reference to the fifth aspect, in one possible implementation manner, the tenth drawing frame is displayed on the display screen of the electronic device before the eleventh drawing frame, and the tenth prediction frame is displayed on the display screen after the eleventh drawing frame. That is, the electronic device can predict a subsequent frame from the two previously drawn frames, thereby increasing the frame rate.
With reference to the fifth aspect, in a possible implementation manner, one image frame is spaced between the tenth drawing frame and the eleventh drawing frame, and the tenth prediction frame is the next image frame adjacent to the eleventh drawing frame. For example, the electronic device predicts the 5th frame (a predicted frame) using the 2nd and 4th drawn frames. In this way, the electronic device has enough time to calculate the motion vectors and draw the tenth prediction frame, avoiding the stuttering of the video interface that would occur if the 5th frame were not yet drawn when the electronic device finishes displaying the 4th drawn frame.
With reference to the fifth aspect, in a possible implementation manner, the determining, by the electronic device, that the tenth moving object matches the eleventh moving object according to the first attribute of the tenth moving object and the second attribute of the eleventh moving object specifically includes: the electronic device establishes a first index table, where the first index table is used for storing the moving objects in the tenth drawing frame and their attributes, and includes the tenth moving object and the attribute of the tenth moving object; the electronic device establishes a second index table, where the second index table is used for storing the moving objects in the eleventh drawing frame and their attributes, and includes the eleventh moving object and the attribute of the eleventh moving object; and the electronic device takes the eleventh moving object out of the second index table and determines, from the first index table, the tenth moving object that matches the eleventh moving object. In this way, the electronic device can quickly find, through the index tables, the tenth moving object matching the eleventh moving object.
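A minimal sketch of the two index tables, assuming each moving object is described by a hashable attribute tuple (the (vertex_count, texture_id) key below is hypothetical, as is the object layout); matching then reduces to a dictionary lookup:

```python
def build_index(objects):
    """Index the moving objects of one drawn frame by their attributes."""
    return {obj["attributes"]: obj for obj in objects}

# hypothetical attribute tuples: (vertex_count, texture_id)
frame10 = [{"attributes": (120, 7), "name": "car"}]
frame11 = [{"attributes": (120, 7), "name": "car"}]

index10, index11 = build_index(frame10), build_index(frame11)
for attrs, obj11 in index11.items():
    obj10 = index10.get(attrs)  # matched when the attributes are identical
    if obj10 is not None:
        print("matched:", obj10["name"], "->", obj11["name"])
```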
With reference to the fifth aspect, in one possible implementation manner, the matching of the tenth moving object and the eleventh moving object includes: the first attribute of the tenth moving object is the same as the second attribute of the eleventh moving object.
With reference to the fifth aspect, in a possible implementation manner, the determining, by the electronic device, of the drawing result of the twelfth moving object in the tenth prediction frame according to the tenth moving object and the eleventh moving object specifically includes: the electronic device determines a first coordinate of a first point of the tenth moving object, and determines a second coordinate of a second point of the eleventh moving object; the electronic device determines a tenth motion vector from the tenth moving object to the eleventh moving object according to the displacement from the first coordinate to the second coordinate; and the electronic device determines the drawing result of the twelfth moving object in the tenth prediction frame according to the tenth motion vector and the eleventh moving object. In this way, the electronic device can represent the motion vector of the entire object with the motion vector of one point of the object, without separately determining a motion vector for each block of the object. Thus, the amount of computation of the electronic device can be reduced, and its power consumption saved.
With reference to the fifth aspect, in a possible implementation manner, the determining, by the electronic device, the first coordinate of the first point of the tenth moving object specifically includes: the electronic equipment determines a first coordinate of the first point according to the coordinates of all pixel points of the tenth moving object; the determining, by the electronic device, the second coordinate of the second point of the eleventh moving object specifically includes: and the electronic equipment determines the second coordinate of the second point according to the coordinates of all the pixel points of the eleventh moving object.
Furthermore, the electronic device may obtain, through a shader in the GPU, the coordinates of each pixel point of the tenth moving object from a template image of the tenth moving object; a template image of the tenth moving object is drawn in the GPU. Likewise, the electronic device may obtain, through a shader in the GPU, the coordinates of each pixel point of the eleventh moving object from a template image of the eleventh moving object; a template image of the eleventh moving object is drawn in the GPU.
With reference to the fifth aspect, in a possible implementation manner, the first point is the geometric center point of the tenth moving object, and the second point is the geometric center point of the eleventh moving object. The center point of a moving object is easier to compute than other points of the object, which reduces the amount of calculation and the power consumption of the electronic device.
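A sketch of the geometric-center computation and the single-motion-vector prediction, assuming the pixel coordinates of each object are available as an N x 2 numpy array (for instance, read back from the template images mentioned above):

```python
import numpy as np

def centroid(coords):
    """Geometric center of an object's pixel coordinates (N x 2 array)."""
    return coords.mean(axis=0)

def predict_object_pixels(coords10, coords11, k=0.5):
    """One motion vector for the whole object: displacement of the
    geometric center from the tenth to the eleventh drawn frame,
    scaled by k and applied to every pixel of the eleventh object."""
    mv = centroid(coords11) - centroid(coords10)   # tenth motion vector
    return coords11 + k * mv                       # eleventh motion vector applied

obj10 = np.array([[10, 10], [10, 11], [11, 10], [11, 11]], float)
obj11 = obj10 + np.array([4.0, 0.0])   # object moved 4 pixels
print(predict_object_pixels(obj10, obj11))   # moved 2 pixels further
```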
With reference to the fifth aspect, in a possible implementation manner, the determining, by the electronic device, a drawing result of a twelfth moving object in a tenth prediction frame according to the tenth motion vector and the eleventh moving object specifically includes: and the electronic equipment determines a second pixel point of a twelfth moving object in the tenth prediction frame according to the tenth motion vector and the first pixel point of the eleventh moving object.
With reference to the fifth aspect, in a possible implementation manner, the determining, by the electronic device, of the second pixel point of the twelfth moving object in the tenth prediction frame according to the tenth motion vector and the first pixel point of the eleventh moving object specifically includes: the electronic device determines an eleventh motion vector from the eleventh moving object to the twelfth moving object according to the tenth motion vector; and the electronic device determines the second pixel point of the twelfth moving object in the tenth prediction frame according to the eleventh motion vector and the first pixel point of the eleventh moving object, where the second pixel point is the pixel point to which the first pixel point moves, according to the eleventh motion vector, from the eleventh drawing frame to the tenth prediction frame.
With reference to the fifth aspect, in a possible implementation manner, the eleventh motion vector is K times the tenth motion vector, and K is greater than 0 and smaller than 1.
With reference to the fifth aspect, in one possible implementation manner, K is equal to 0.5. With K equal to 0.5, objects move at a constant speed across image frames, which simplifies the calculation for the electronic device and makes video playback smoother for the user.
In a sixth aspect, an electronic device is provided, including: one or more processors (CPU), a graphics processing unit (GPU), a memory, and a display screen; the memory is coupled to the one or more processors; the CPU is coupled to the GPU; wherein:
The memory is for storing computer program code, the computer program code comprising computer instructions;
the CPU is used for determining a tenth moving object in the tenth drawing frame according to a tenth drawing instruction, and determining an eleventh moving object in the eleventh drawing frame according to an eleventh drawing instruction;
the GPU is used for determining that the tenth moving object is matched with the eleventh moving object according to the attribute of the tenth moving object and the attribute of the eleventh moving object; determining a drawing result of a twelfth moving object in a tenth prediction frame according to the tenth moving object and the eleventh moving object;
the display screen is used for displaying the tenth drawing frame, the eleventh drawing frame and the tenth prediction frame.
The electronic device provided by the embodiment of the application can obtain a predicted frame from two drawn frames of the application program. This can increase the frame rate of the application's video interface, and thus the fluency with which the electronic device displays that video interface.
With reference to the sixth aspect, in a possible implementation manner, the tenth drawing frame is displayed on the display screen before the eleventh drawing frame, and the tenth prediction frame is displayed on the display screen after the eleventh drawing frame. That is, the electronic device can predict a subsequent frame from the two previously drawn frames, thereby increasing the frame rate.
With reference to the sixth aspect, in a possible implementation manner, one image frame is spaced between the tenth drawing frame and the eleventh drawing frame, and the tenth prediction frame is the next image frame adjacent to the eleventh drawing frame. For example, the electronic device predicts the 5th frame using the 2nd and 4th drawn frames. In this way, the electronic device has enough time to calculate the motion vectors and draw the tenth prediction frame, avoiding the stuttering of the video interface that would occur if the 5th frame were not yet drawn when the electronic device finishes displaying the 4th drawn frame.
With reference to the sixth aspect, in a possible implementation manner, the CPU is specifically configured to: establishing a first index table, wherein the first index table is used for storing the moving object and the attribute of the moving object in the tenth drawing frame, and the first index table comprises the tenth moving object and the attribute of the tenth moving object; and establishing a second index table, wherein the second index table is used for storing the moving object and the attribute of the moving object in the eleventh drawing frame, and the second index table comprises the eleventh moving object and the attribute of the eleventh moving object. The GPU is specifically configured to: and taking the eleventh moving object out of a second index table, and determining the tenth moving object matched with the eleventh moving object from the first index table. In this way, the electronic device can quickly find the tenth moving object matching the eleventh moving object from the index table.
With reference to the sixth aspect, in a possible implementation manner, the matching of the tenth moving object and the eleventh moving object includes: the attribute of the tenth moving object is the same as that of the eleventh moving object.
With reference to the sixth aspect, in a possible implementation manner, the GPU is specifically configured to: determining a first coordinate of a first point of a tenth moving object, and determining a second coordinate of a second point of an eleventh moving object; determining a tenth motion vector from the tenth moving object to the eleventh moving object according to the displacement from the first coordinate to the second coordinate; and determining a drawing result of the twelfth moving object in the tenth predicted frame according to the tenth motion vector and the eleventh moving object. In this way, the electronic device can represent the motion vector of the entire object with the motion vector of one point of the object, without separately determining the motion vector of each block of the object. Thus, the calculation amount of the electronic device can be reduced, and the power consumption of the electronic device can be saved.
With reference to the sixth aspect, in a possible implementation manner, the GPU is specifically configured to: determining a first coordinate of the first point according to the coordinates of all pixel points of the tenth moving object; and determining the second coordinates of the second points according to the coordinates of all the pixel points of the eleventh moving object.
With reference to the sixth aspect, in a possible implementation manner, the first point is a geometric center point of the tenth moving object, and the second point is a geometric center point of the eleventh moving object.
With reference to the sixth aspect, in a possible implementation manner, the GPU is specifically configured to: determine a second pixel point of the twelfth moving object in the tenth prediction frame according to the tenth motion vector and the first pixel point of the eleventh moving object.

With reference to the sixth aspect, in a possible implementation manner, the GPU is specifically configured to: determine an eleventh motion vector from the eleventh moving object to the twelfth moving object according to the tenth motion vector; and determine the second pixel point of the twelfth moving object in the tenth prediction frame according to the eleventh motion vector and the first pixel point of the eleventh moving object, where the second pixel point is the pixel point to which the first pixel point moves, according to the eleventh motion vector, from the eleventh drawing frame to the tenth prediction frame.
With reference to the sixth aspect, in a possible implementation manner, the eleventh motion vector is K times the tenth motion vector, and K is greater than 0 and smaller than 1.
With reference to the sixth aspect, in one possible implementation manner, K is equal to 0.5.
In a seventh aspect, an embodiment of the present application provides a frame prediction method, where the method includes:
the electronic device determines a predicted motion vector of a first vertex from a target reference frame to a predicted frame according to the predicted motion vectors of blocks around the first vertex from the target reference frame to the predicted frame, where the first vertex is a vertex of a first block, the first block is a block in the target reference frame, the target reference frame is a frame determined from the first reference frame or the second reference frame according to the position of the predicted frame relative to the first reference frame or the second reference frame, and the first reference frame and the second reference frame are two adjacent frames in a video stream; determines the coordinates of the first vertex in the predicted frame according to the coordinates of the first vertex in the target reference frame and the predicted motion vector of the first vertex; determines a pixel block of the first block in the predicted frame according to the coordinates of the vertices of the first block in the predicted frame and their coordinates in the target reference frame; and displays the predicted frame including the pixel block.
With the frame prediction method provided in the seventh aspect of the embodiment of the present application, for each vertex of a block, the electronic device determines the position of the vertex in the predicted frame according to the predicted motion vectors, from the target reference frame to the predicted frame, of the blocks around that vertex; then, for each block, it calculates the position of each pixel of the block in the predicted frame from the correspondence between the four vertices of the block in the reference frame and in the predicted frame, finally obtaining the entire predicted frame.
In the above frame prediction method, the first block (a block in the reference frame) is stretched by moving its vertices according to the predicted motion vectors of the surrounding blocks. Because adjacent blocks share vertices, their vertices in the predicted frame coincide, so the blocks remain continuous and no holes appear between them.
With reference to the seventh aspect, in a possible implementation manner, the electronic device obtains a correspondence between coordinates of a pixel of the first block in the predicted frame and coordinates of the pixel of the first block in the target reference frame according to coordinates of a vertex of the first block in the predicted frame and coordinates of the vertex of the first block in the target reference frame, and further determines coordinates of the pixel of the first block in the predicted frame according to the coordinates of the pixel of the first block in the target reference frame and the correspondence.
With reference to the seventh aspect, in a possible implementation manner, the electronic device inputs coordinates of four vertices of the first block in the prediction frame and coordinates of the four vertices in the target reference frame into the homography transformation formula, respectively, to obtain a homography equation set, where the homography equation set includes four equations. The electronic equipment obtains a homography transformation matrix corresponding to the first block by solving the homography equation set, and further obtains a homography transformation formula corresponding to the first block according to the homography transformation matrix corresponding to the first block, wherein the homography transformation formula corresponding to the first block is used for expressing the corresponding relation between the coordinates of the pixels of the first block in the prediction frame and the coordinates of the pixels of the first block in the target reference frame.
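A sketch of solving the homography from the four vertex correspondences: fixing the last matrix entry to 1 leaves eight unknowns, and each vertex correspondence contributes one vector equation (two scalar rows), giving the 8x8 linear system below. The numpy-based direct linear transform is illustrative, not code from the application:

```python
import numpy as np

def solve_homography(src, dst):
    """Solve for H (3x3, bottom-right entry fixed to 1) such that
    dst[i] ~ H @ src[i] for four (x, y) vertex correspondences."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, x, y):
    """Map a reference-frame pixel coordinate into the predicted frame."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# four vertices of a block in the reference frame and predicted frame
src = [(0, 0), (16, 0), (16, 16), (0, 16)]
dst = [(2, 1), (19, 2), (18, 18), (1, 17)]
H = solve_homography(src, dst)
print(apply_homography(H, 8, 8))   # an interior pixel, mapped across
```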
With reference to the seventh aspect, in a possible implementation manner, the electronic device inputs coordinates of the first pixel in the target reference frame into a homography transformation formula corresponding to the first block to obtain coordinates of the first pixel in the prediction frame, where the first pixel is a pixel of the first block.
With reference to the seventh aspect, in one possible implementation manner, the pixel block region, i.e., the region determined by the vertices of the first block in the predicted frame, contains more coordinates than the pixels of the first block mapped into the predicted frame. In this case, the electronic device removes the coordinates of the mapped pixels of the first block from the coordinates of the pixel block region to obtain first coordinates, and then inputs the first coordinates into the homography transformation formula corresponding to the first block to obtain, for each first coordinate, the corresponding pixel in the target reference frame.
With reference to the seventh aspect, in a possible implementation manner, before determining the predicted motion vector of the first vertex from the target reference frame to the predicted frame according to the predicted motion vectors of the blocks around the first vertex, the electronic device obtains the first reference frame and the second reference frame, and determines the target reference frame from them according to the position of the frame to be predicted. The target reference frame is divided into square blocks of a first size, and the motion vector of each block from the target reference frame to the predicted frame is calculated.
With reference to the seventh aspect, in a possible implementation manner, when the target reference frame is the first reference frame, the electronic device obtains a motion vector of the first block from the first reference frame to the second reference frame, and then determines half of the motion vector of the first block from the first reference frame to the second reference frame as a predicted motion vector of the first block from the target reference frame to the predicted frame.
With reference to the seventh aspect, in a possible implementation manner, when the target reference frame is the first reference frame, the electronic device obtains a motion vector of the first block from the second reference frame to the first reference frame, and then determines a half of a negative value of the motion vector of the first block from the second reference frame to the first reference frame as a predicted motion vector of the first block from the target reference frame to the predicted frame.
With reference to the seventh aspect, in one possible implementation manner, the electronic device determines an average value of predicted motion vectors of blocks around the first vertex from the target reference frame to the predicted frame as the predicted motion vector of the first vertex from the target reference frame to the predicted frame.
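The per-block and per-vertex predicted motion vectors described above can be sketched as follows, assuming the block motion vectors are stored in a (rows, cols, 2) numpy array and grid vertices are indexed by block-grid corners; the halving mirrors the preceding implementations:

```python
import numpy as np

def block_predicted_mvs(mv_first_to_second):
    """Predicted per-block motion vectors from the target reference
    frame to the predicted frame: half of the measured vectors."""
    return 0.5 * mv_first_to_second

def vertex_predicted_mv(block_mvs, vy, vx):
    """Average of the (up to four) surrounding blocks' predicted
    vectors; vertices live on the (rows + 1) x (cols + 1) corners
    of the block grid."""
    rows, cols = block_mvs.shape[:2]
    around = [(vy - 1, vx - 1), (vy - 1, vx), (vy, vx - 1), (vy, vx)]
    vecs = [block_mvs[r, c] for r, c in around
            if 0 <= r < rows and 0 <= c < cols]
    return np.mean(vecs, axis=0)
```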
With reference to the seventh aspect, in a possible implementation manner, the electronic device adds the coordinates of the first vertex in the target reference frame and the predicted motion vector of the first vertex to obtain the coordinates of the first vertex in the predicted frame.
In an eighth aspect, an embodiment of the present application provides an electronic device, where the electronic device includes: one or more processors, memory, and a display screen; a memory coupled to the one or more processors, the memory for storing computer program code, the computer program code including computer instructions, the one or more processors for invoking the computer instructions to cause the electronic device to perform: determining a predicted motion vector of a first vertex from a target reference frame to a predicted frame according to predicted motion vectors of blocks around the first vertex from the target reference frame to the predicted frame, wherein the first vertex is one vertex in the first block, the first block is one block in the target reference frame, the target reference frame is one frame determined from the first reference frame or a second reference frame according to the position of the predicted frame relative to the first reference frame or the second reference frame, and the first reference frame and the second reference frame are two adjacent frames in a video stream; determining the coordinate of the first vertex in the prediction frame according to the coordinate of the first vertex in the target reference frame and the prediction motion vector of the first vertex; determining a pixel block of the first block in the predicted frame according to the coordinates of the vertex of the first block in the predicted frame and the coordinates in the target reference frame; the predicted frame, including the pixel blocks, is displayed via a display screen.
With reference to the eighth aspect, in a possible implementation manner, the processor is specifically configured to: and determining the coordinates of the pixels in the first block in the predicted frame according to the coordinates of the vertexes of the first block in the predicted frame and the coordinates of the pixels in the target reference frame.
With reference to the eighth aspect, in a possible implementation manner, the processor is specifically configured to: respectively inputting coordinates of four vertexes of the first block in a prediction frame and coordinates of the four vertexes of the first block in a target reference frame into a homography transformation formula to obtain a homography equation set, wherein the homography equation set comprises four equations; solving a homography equation set to obtain a homography transformation matrix corresponding to the first block; and obtaining a homography transformation formula corresponding to the first block according to the homography transformation matrix corresponding to the first block, wherein the homography transformation formula corresponding to the first block is used for expressing the corresponding relation between the coordinates of the pixels of the first block in the prediction frame and the coordinates in the target reference frame.
With reference to the eighth aspect, in a possible implementation manner, the processor is specifically configured to: and inputting the coordinate of the first pixel in the target reference frame into a homography transformation formula corresponding to the first block to obtain the coordinate of the first pixel in the prediction frame, wherein the first pixel is a pixel of the first block.
With reference to the eighth aspect, in a possible implementation manner, the processor is specifically configured to: removing the coordinates of the pixels of the first block in the prediction frame from the coordinates in the pixel block area to obtain first coordinates; and inputting the first coordinate into a homography transformation formula corresponding to the first block to obtain a pixel of the first coordinate in the target reference frame.
With reference to the eighth aspect, in a possible implementation manner, the processor is specifically configured to: acquiring a first reference frame and a second reference frame; determining a target reference frame from the first reference frame and the second reference frame according to the position of the frame to be predicted; dividing the target reference frame into blocks according to square blocks of a first size; motion vectors for blocks from the target reference frame to the predicted frame are calculated.
With reference to the eighth aspect, in a possible implementation manner, the processor is specifically configured to: acquiring a motion vector of the first block from a first reference frame to a second reference frame; half of the motion vector of the first block from the first reference frame to the second reference frame is determined as the predicted motion vector of the first block from the target reference frame to the predicted frame.
With reference to the eighth aspect, in a possible implementation manner, the processor is specifically configured to: acquiring a motion vector of the first block from the second reference frame to the first reference frame; half of the negative value of the motion vector of the first block from the second reference frame to the first reference frame is determined as the predicted motion vector of the first block from the target reference frame to the predicted frame.
With reference to the eighth aspect, in a possible implementation manner, the processor is specifically configured to: the average of the predicted motion vectors of the blocks around the first vertex from the target reference frame to the predicted frame is determined as the predicted motion vector of the first vertex from the target reference frame to the predicted frame.
With reference to the eighth aspect, in a possible implementation manner, the processor is specifically configured to: and adding the coordinates of the first vertex in the target reference frame and the predicted motion vector of the first vertex to obtain the coordinates of the first vertex in the predicted frame.
In a ninth aspect, the present application provides an image frame generating method, which may include: determining a tenth position coordinate of an eleventh vertex of a prediction block in a predicted image frame according to the depth values of a tenth block and a twelfth block and the position coordinates of the tenth block and the twelfth block in their image frames; where the tenth block is a block in a first image frame, and the twelfth block is a block in a second image frame that is determined according to a matching algorithm and matches the tenth block; generating the prediction block according to color data of a reference block and the tenth position coordinate, the reference block being one of the tenth block and the twelfth block; and generating the predicted image frame, the predicted image frame including the prediction block.
Wherein the color data may be an RGB value of each pixel included in the block.
As can be seen, in the embodiment of the present application, blocks are predicted in combination with depth values, so the generated prediction block can be scaled according to the depth value; the picture displayed in the predicted image frame is thus more consistent with the scene the electronic device would draw from actual data. When predicted image frames are inserted among the original image frames drawn from actual data, the transition between original and predicted image frames is more natural and smooth, improving the user experience.
With reference to the ninth aspect, in a possible implementation manner, the tenth position coordinate of the eleventh vertex of the prediction block in the predicted image frame is determined as follows: calculating a thirteenth position coordinate in a second coordinate system according to the first depth value of the tenth block and a twelfth position coordinate of the tenth block in the first coordinate system, the second coordinate system being a three-dimensional coordinate system; calculating a fifteenth position coordinate in the second coordinate system according to the second depth value of the twelfth block and a fourteenth position coordinate of the twelfth block in the first coordinate system; calculating a sixteenth position coordinate in the second coordinate system according to the thirteenth position coordinate and the fifteenth position coordinate; and calculating the tenth position coordinate in the first coordinate system according to the sixteenth position coordinate.
Wherein the first coordinate system may be a screen coordinate system; the second coordinate system may be a camera coordinate system. Specifically, the position coordinates in the image frame in the embodiment of the present application are coordinates in a screen coordinate system.
Therefore, in the embodiment of the application, the electronic device may convert position coordinates from the two-dimensional coordinate system into three-dimensional space by combining the depth value, predict the position coordinate using changes in all three dimensions, and then convert the predicted position back into the two-dimensional coordinate system. Because the prediction combines three-dimensional data, the prediction block can be generated by enlarging or reducing the reference block in a certain proportion, so that the finally generated predicted image frame is closer to the result the electronic device would draw from the application program's data.
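A minimal sketch of this lift-predict-project sequence, assuming a simple pinhole camera model with hypothetical intrinsics (fx, fy, cx, cy); the patent does not specify the camera model, so this is only one way to realize the two coordinate-system conversions:

```python
import numpy as np

def to_camera_space(x, y, depth, fx, fy, cx, cy):
    """Lift a screen coordinate and its depth into 3-D camera space."""
    return np.array([(x - cx) * depth / fx, (y - cy) * depth / fy, depth])

def to_screen(p, fx, fy, cx, cy):
    """Project a camera-space point back onto the 2-D screen."""
    return (p[0] * fx / p[2] + cx, p[1] * fy / p[2] + cy)

def predict_vertex(b10, b12, ratio=0.5):
    """b10/b12: matched blocks with screen 'xy' and 'depth'; returns
    the predicted screen coordinate, extrapolated in 3-D space."""
    intr = (800.0, 800.0, 640.0, 360.0)          # assumed intrinsics
    p10 = to_camera_space(*b10["xy"], b10["depth"], *intr)
    p12 = to_camera_space(*b12["xy"], b12["depth"], *intr)
    p16 = p12 + ratio * (p12 - p10)              # displacement, scaled
    return to_screen(p16, *intr)

b10 = {"xy": (640.0, 360.0), "depth": 10.0}
b12 = {"xy": (656.0, 360.0), "depth": 8.0}       # moved right, came closer
print(predict_vertex(b10, b12))                  # scaled as depth shrinks
```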
With reference to the ninth aspect, in a possible implementation manner, before the generating of the prediction block, the method further includes: determining a seventeenth position coordinate of a twelfth vertex of the prediction block in the predicted image frame according to the depth values of a thirteenth block and a fourteenth block and the position coordinates of the thirteenth block and the fourteenth block in their image frames, where the thirteenth block is a block adjacent to the tenth block, the fourteenth block is a block adjacent to the twelfth block, and the thirteenth block and the fourteenth block are mutually matched blocks determined according to the matching algorithm; determining an eighteenth position coordinate of a thirteenth vertex of the prediction block in the predicted image frame according to the depth values of a fifteenth block and a sixteenth block and the position coordinates of the fifteenth block and the sixteenth block in their image frames, where the fifteenth block is a block adjacent to the thirteenth block, the sixteenth block is a block adjacent to the fourteenth block, and the fifteenth block and the sixteenth block are mutually matched blocks determined according to the matching algorithm; and determining a nineteenth position coordinate of a fourteenth vertex of the prediction block in the predicted image frame according to the depth values of a seventeenth block and an eighteenth block and the position coordinates of the seventeenth block and the eighteenth block in their image frames, where the seventeenth block is a block adjacent to both the tenth block and the fifteenth block, the eighteenth block is a block adjacent to both the twelfth block and the sixteenth block, and the seventeenth block and the eighteenth block are mutually matched blocks determined according to the matching algorithm.
Therefore, the electronic device can determine the position coordinates of the rest vertexes of the prediction block according to the adjacent blocks, so that the finally generated prediction image frame does not have the condition of overlapping or gaps, and the image display effect of the image frame is favorably improved.
With reference to the ninth aspect, in a possible implementation manner, the determining of the seventeenth position coordinate of the twelfth vertex of the prediction block in the predicted image frame according to the depth values of the thirteenth block and the fourteenth block and their position coordinates in the image frames includes: calculating a twenty-first position coordinate in the second coordinate system according to a third depth value of the thirteenth block and a twentieth position coordinate of the thirteenth block in the first coordinate system, the second coordinate system being a three-dimensional coordinate system; calculating a twenty-third position coordinate in the second coordinate system according to a fourth depth value of the fourteenth block and a twenty-second position coordinate of the fourteenth block in the first coordinate system; calculating a twenty-fourth position coordinate in the second coordinate system according to the twenty-first position coordinate and the twenty-third position coordinate; and calculating the seventeenth position coordinate in the first coordinate system according to the twenty-fourth position coordinate.
With reference to the ninth aspect, in a possible implementation manner, the generating of the prediction block according to the color data of the reference block and the tenth position coordinate, the reference block being one of the tenth block and the twelfth block, includes: determining the correspondence between the tenth, seventeenth, eighteenth, and nineteenth position coordinates and the position coordinates of the four vertices of the reference block; and generating the prediction block according to the correspondence and the color data of the reference block.
With reference to the ninth aspect, in a possible implementation manner, the color data includes the color data of each pixel in the reference block, and the generating of the prediction block according to the correspondence and the color data of the reference block includes: generating a homography transformation formula according to the correspondence; determining the position coordinates of each pixel in the predicted image frame according to the homography transformation formula and the position coordinates of each pixel in the reference image frame; and generating the prediction block according to the color data of each pixel and the position coordinates of each pixel in the predicted image frame.
With reference to the ninth aspect, in a possible implementation manner, the determining of the correspondence between the tenth, seventeenth, eighteenth, and nineteenth position coordinates and the position coordinates of the four vertices of the reference block includes: determining the tenth position coordinate and the position coordinate of a fifteenth vertex as a group of corresponding position coordinates; determining the seventeenth position coordinate and the position coordinate of a sixteenth vertex as a group of corresponding position coordinates; determining the eighteenth position coordinate and the position coordinate of a seventeenth vertex as a group of corresponding position coordinates; and determining the nineteenth position coordinate and the position coordinate of an eighteenth vertex as a group of corresponding position coordinates; where the reference block is a quadrilateral, the position of the fifteenth vertex relative to the center point of the reference block is the same as the position of the tenth block relative to the fifteenth block, the position of the sixteenth vertex relative to the center point of the reference block is the same as the position of the thirteenth block relative to the seventeenth block, the position of the seventeenth vertex relative to the center point of the reference block is the same as the position of the fifteenth block relative to the tenth block, and the position of the eighteenth vertex relative to the center point of the reference block is the same as the position of the seventeenth block relative to the thirteenth block.
For example, the reference block is a square: the fifteenth vertex is the vertex in the upper left corner of the reference block, the sixteenth vertex is the vertex in the upper right corner, the seventeenth vertex is the vertex in the lower left corner, and the eighteenth vertex is the vertex in the lower right corner; the thirteenth block is the block adjacent to the right of the tenth block, the seventeenth block is the block adjacent below the tenth block, the fifteenth block is the block adjacent to both the thirteenth block and the seventeenth block, and the vertex at the upper left corner of the fifteenth block coincides with the vertex at the lower right corner of the tenth block.
With reference to the ninth aspect, in a possible implementation manner, the calculating of the sixteenth position coordinate in the second coordinate system according to the thirteenth position coordinate and the fifteenth position coordinate includes: calculating the displacement vector from the thirteenth position coordinate to the fifteenth position coordinate to obtain a first displacement vector; and calculating the sixteenth position coordinate from the thirteenth position coordinate or the fifteenth position coordinate and a first proportion of the first displacement vector.
With reference to the ninth aspect, in one possible implementation manner, the first ratio is equal to a preset numerical value.
It can be seen that, in the case that the first ratio is equal to the preset value, the electronic device can quickly determine the sixteenth position coordinate.
With reference to the ninth aspect, in a possible implementation manner, an image frame in which the reference block is located is a reference image frame, the first image frame is located before the second image frame in a data stream, and before the sixteenth position coordinate is obtained through the calculation, the method further includes: determining that the first ratio is equal to a ratio of an absolute value of the first time difference to an absolute value of the second time difference; the first time difference value is equal to a difference between a point in time of the predicted image frame in a data stream and a point in time of a reference frame in the data stream; the second time difference value is equal to a difference between a point in time of the first image frame in the data stream and a point in time of the second image frame in the data stream.
Therefore, the electronic device can determine the first proportion according to the time difference between the predicted image frame and the reference image frame, so that the scene displayed by the predicted image frame (the position and scaling of the block relative to the reference image frame) better matches the scene the electronic device would construct from actual running data at the moment the predicted image frame is displayed.
With reference to the ninth aspect, in a possible implementation manner, the image frame in which the reference block is located is the reference image frame, the first image frame is located before the second image frame in the data stream, and before the sixteenth position coordinate is calculated, the method further includes: determining that the first proportion is equal to the ratio of a first value to a second value; the first value is equal to a first number plus one; the second value is equal to a second number plus one; the first number is equal to the number of image frames spaced between the reference image frame and the predicted image frame in the data stream; and the second number is equal to the number of image frames spaced between the first image frame and the second image frame in the data stream.
Therefore, the electronic device can determine the first proportion according to the number of image frames spaced between the predicted image frame and the reference image frame, so that, while the electronic device displays image frames at a constant rate, the scene displayed by the predicted image frame (the position and scaling of the block relative to the reference image frame) better matches the scene the electronic device would construct from actual running data at the moment the predicted image frame is displayed.
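A small sketch of the frame-count variant of the first proportion, reading the second count as the spacing between the two drawn image frames (mirroring the time-difference variant above); the example values are illustrative:

```python
def first_proportion(frames_ref_to_pred, frames_first_to_second):
    """First proportion from frame counts: (frames between the reference
    and predicted image frames + 1) / (frames between the first and
    second image frames + 1)."""
    return (frames_ref_to_pred + 1) / (frames_first_to_second + 1)

# reference = 2nd drawn frame, prediction immediately follows it (0 frames
# between), one frame between the two drawn frames: (0+1)/(1+1) = 0.5
print(first_proportion(0, 1))
```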
With reference to the ninth aspect, in a possible implementation manner, the calculating of the sixteenth position coordinate according to the thirteenth position coordinate or the fifteenth position coordinate and the first proportion of the first displacement vector includes: if the reference image frame is the first image frame, calculating the sixteenth position coordinate according to the thirteenth position coordinate and the first proportion of the first displacement vector; and if the reference image frame is the second image frame, calculating the sixteenth position coordinate according to the fifteenth position coordinate and the first proportion of the first displacement vector.
With reference to the ninth aspect, in a possible implementation manner, the image frame in which the reference block is located is the reference image frame, and the first image frame is located before the second image frame in the data stream. Before the tenth position coordinate of the eleventh vertex of the prediction block in the predicted image frame is determined, the method further includes: determining one of the first image frame and the second image frame as the reference image frame; determining the other of the first image frame and the second image frame as the matching image frame; dividing the reference image frame into blocks to obtain the reference block; and determining, according to the matching algorithm, the block in the matching image frame that matches the reference block as the matching block; where, if the reference block is the tenth block, the matching block is the twelfth block, and if the reference block is the twelfth block, the matching block is the tenth block.
The electronic device may designate one of the first image frame and the second image frame as the reference image frame; for example, it may designate the image frame that comes earlier in the data stream, i.e., the first image frame, as the reference image frame. Optionally, the electronic device may determine the reference image frame based on the position of the predicted image frame relative to the first image frame and the second image frame: specifically, if the predicted image frame is located between the first image frame and the second image frame in the data stream, the first image frame is determined as the reference image frame; if the predicted image frame is located after the first image frame and the second image frame in the data stream, the second image frame is determined as the reference image frame.
Therefore, the electronic device may perform block division on the reference image frame, search for a block in a matched image frame by using the block in the reference image frame as a unit, and further determine a mutually matched block in the first image frame and the second image frame.
With reference to the ninth aspect, in a possible implementation manner, the determining, according to the matching algorithm, that a block in the matched image frame that matches the reference block is a matched block includes: determining a block with the highest similarity to the reference block in a first region of the matched image frame as the matched block; the first region is a region of a preset shape of a first area centered on a reference position coordinate in the matching image frame, the reference position coordinate being a position coordinate of the reference block in the reference image frame.
Therefore, the electronic device can determine the first region in the matching image frame according to the position coordinates of the reference block, which narrows the search range to a certain extent and improves the efficiency and accuracy of block matching.
With reference to the ninth aspect, in a possible implementation manner, before determining that a block in the first area of the matched image frame that has the highest similarity with the reference block is the matched block, the method further includes: and calculating the first area according to the reference depth value of the reference block, wherein the larger the reference depth value is, the smaller the calculated first area is.
Therefore, the electronic device can adjust the size of the first area according to the size of the depth value, and the efficiency of determining the matching block is improved.
With reference to the ninth aspect, in a possible implementation manner, before the determining that a block with the highest similarity to the reference block in the first area of the matched image frame is the matched block, the method further includes: if the reference depth value of the reference block is larger than or equal to a first threshold value, determining that the first area is equal to a first preset area; if the reference depth value is larger than a second threshold value and smaller than the first threshold value, calculating to obtain the first area according to the reference depth value, wherein the larger the reference depth value is, the smaller the first area is; if the reference depth value is smaller than or equal to the second threshold value, determining that the first area is equal to a second preset area; the first threshold is larger than the second threshold, and the second preset area is larger than the first preset area.
For example, the depth value may be greater than or equal to 0 and less than or equal to 1, the first threshold is equal to 0.8, the second threshold is equal to 0.2, and the first preset area may be twice the area of the reference block; the second preset area may be equal to an area of the reference image frame.
Therefore, when the depth value is larger, the electronic equipment searches a matching block of the reference block in a small range; when the depth value is small, the electronic device may search for a matching block of the reference block within the entire image frame. The efficiency of determining the matching blocks is improved.
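As a concrete reading of the thresholding above (using the example values 0.8 and 0.2), the sketch below picks the first area from the reference depth value. The linear interpolation between the two preset areas is an assumption, since the aspect only requires that a larger depth value yield a smaller area.

```python
def first_area(ref_depth, block_area, frame_area,
               first_threshold=0.8, second_threshold=0.2):
    first_preset = 2 * block_area    # e.g. twice the reference block's area
    second_preset = frame_area       # e.g. the area of the reference image frame
    if ref_depth >= first_threshold:
        return first_preset          # large depth: search a small region
    if ref_depth <= second_threshold:
        return second_preset         # small depth: search the whole frame
    # between the thresholds: larger depth -> smaller area (assumed linear)
    t = (first_threshold - ref_depth) / (first_threshold - second_threshold)
    return first_preset + t * (second_preset - first_preset)
```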
In a tenth aspect, the present application provides an electronic device comprising: one or more processors and memory; the memory coupled with the one or more processors, the memory to store computer program code, the computer program code including computer instructions, the one or more processors to invoke the computer instructions to cause the electronic device to perform: determining a tenth position coordinate of an eleventh vertex of the prediction block in the prediction image frame according to the depth values of the tenth block and the twelfth block and the position coordinates of the tenth block and the twelfth block in the image frame; wherein the tenth block is a block in the first image frame; the twelfth block is a block which is determined in the second image frame according to a matching algorithm and is matched with the tenth block; generating a prediction block according to the color data of a reference block and the tenth position coordinate, wherein the reference block is one of the tenth block or the twelfth block; and generating the prediction image frame, the prediction image frame including the prediction block.
With reference to the tenth aspect, in a possible implementation manner, the position coordinates in the image frame are position coordinates in a first coordinate system, the first coordinate system is a two-dimensional coordinate system, and in terms of determining a tenth position coordinate of an eleventh vertex of the prediction block in the prediction image frame, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: calculating a thirteenth position coordinate in a second coordinate system according to the first depth value of the tenth block and the twelfth position coordinate of the tenth block in the first coordinate system, the second coordinate system being a three-dimensional coordinate system; calculating a fifteenth position coordinate in the second coordinate system according to the second depth value of the twelfth block and a fourteenth position coordinate of the twelfth block in the first coordinate system; calculating a sixteenth position coordinate in the second coordinate system according to the thirteenth position coordinate and the fifteenth position coordinate; and calculating the tenth position coordinate in the first coordinate system according to the sixteenth position coordinate.
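The 2D-to-3D lifting and the final projection back to the first coordinate system can be pictured with the following numpy sketch; treating the block's depth value as the clip-space z component and using an inverse projection matrix is an assumed convention for illustration only.

```python
import numpy as np

def to_second_coordinate_system(xy, depth, inv_proj):
    """Lift a 2D position (first coordinate system) plus its depth value
    into the three-dimensional second coordinate system."""
    clip = np.array([xy[0], xy[1], depth, 1.0])
    p = inv_proj @ clip
    return p[:3] / p[3]

def to_first_coordinate_system(point3d, proj):
    """Project a 3D point (e.g. the sixteenth position coordinate) back to
    the 2D first coordinate system (e.g. the tenth position coordinate)."""
    p = proj @ np.append(point3d, 1.0)
    return p[:2] / p[3]
```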
With reference to the tenth aspect, in a possible implementation manner, before the generating a prediction block, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: determining a seventeenth position coordinate of a twelfth vertex of the prediction block in the prediction image frame according to the depth values of the thirteenth and fourteenth blocks, and the position coordinates of the thirteenth and fourteenth blocks in the image frame; wherein the thirteenth block is a block adjacent to the tenth block, the fourteenth block is a block adjacent to the twelfth block, and the thirteenth block and the fourteenth block are mutually matched blocks determined according to the matching algorithm; determining an eighteenth position coordinate of a thirteenth vertex of the prediction block in the prediction image frame according to the depth values of the fifteenth block and the sixteenth block and the position coordinates of the fifteenth block and the sixteenth block in the image frame; wherein the fifteenth block is a block adjacent to the thirteenth block, the sixteenth block is a block adjacent to the fourteenth block, and the fifteenth block and the sixteenth block are mutually matched blocks determined according to the matching algorithm; determining a nineteenth position coordinate of a fourteenth vertex of the prediction block in the prediction image frame according to the depth values of the seventeenth block and the eighteenth block and the position coordinates of the seventeenth block and the eighteenth block in the image frame; wherein the seventeenth block is a block adjacent to both the tenth block and the fifteenth block, the eighteenth block is a block adjacent to both the twelfth block and the sixteenth block, and the seventeenth block and the eighteenth block are mutually matched blocks determined according to the matching algorithm.
With reference to the tenth aspect, in one possible implementation manner, the position coordinates in the image frame are position coordinates in the first coordinate system, and in terms of determining a seventeenth position coordinate of a twelfth vertex of the prediction block in the prediction image frame according to the depth values of the thirteenth and fourteenth blocks and the position coordinates of the thirteenth and fourteenth blocks in the image frame, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: calculating a twenty-first position coordinate in a second coordinate system according to a third depth value of the thirteenth block and a twentieth position coordinate of the thirteenth block in the first coordinate system, the second coordinate system being a three-dimensional coordinate system; calculating a twenty-third position coordinate in the second coordinate system according to the fourth depth value of the fourteenth block and a twenty-second position coordinate of the fourteenth block in the first coordinate system; calculating a twenty-fourth position coordinate in the second coordinate system according to the twenty-first position coordinate and the twenty-third position coordinate; and calculating the seventeenth position coordinate in the first coordinate system according to the twenty-fourth position coordinate.
With reference to the tenth aspect, in a possible implementation manner, in the generating a prediction block according to the color data of the reference block and the tenth position coordinate, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: determining the corresponding relation between the tenth position coordinate, the seventeenth position coordinate, the eighteenth position coordinate and the nineteenth position coordinate and the position coordinates of four vertexes in a reference block respectively; generating the prediction block according to the correspondence and the color data of the reference block.
With reference to the tenth aspect, in a possible implementation manner, the color data includes color data of each pixel in the reference block, and in the aspect of generating the prediction block according to the correspondence and the color data of the reference block, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: generating a homography transformation formula according to the correspondence; determining the position coordinates of each pixel in the prediction image frame according to the homography transformation formula and the position coordinates of each pixel in the reference image frame; and generating the prediction block according to the color data of each pixel and the position coordinates of each pixel in the prediction image frame.
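The homography transformation formula can be obtained from the four coordinate correspondences by the standard direct linear construction; the sketch below (plain numpy, illustrative names) solves for the 3x3 matrix with h33 fixed to 1 and maps a pixel of the reference block into the prediction image frame.

```python
import numpy as np

def homography_from_correspondences(src_pts, dst_pts):
    """src_pts: the four vertex coordinates of the reference block;
    dst_pts: the tenth/seventeenth/eighteenth/nineteenth position coordinates."""
    A, b = [], []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)  # h33 fixed to 1

def map_pixel(H, xy):
    """Position of one reference-block pixel in the prediction image frame."""
    p = H @ np.array([xy[0], xy[1], 1.0])
    return p[:2] / p[2]
```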
With reference to the tenth aspect, in one possible implementation manner, the reference block includes: a fifteenth vertex, a sixteenth vertex, a seventeenth vertex, and an eighteenth vertex, and in the aspect of determining the correspondence of the tenth, seventeenth, eighteenth, and nineteenth position coordinates, respectively, to the position coordinates of four vertices in the reference block, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: determining the tenth position coordinate and the position coordinate of the fifteenth vertex as a set of corresponding position coordinates; determining the seventeenth position coordinate and the position coordinate of the sixteenth vertex as a set of corresponding position coordinates; determining the eighteenth position coordinate and the position coordinate of the seventeenth vertex as a set of corresponding position coordinates; determining the nineteenth position coordinate and the position coordinate of the eighteenth vertex as a set of corresponding position coordinates; wherein the reference block is a quadrangle, and the position direction of the fifteenth vertex relative to the center point of the reference block is the same as the position direction of the tenth block relative to the fifteenth block; the position direction of the sixteenth vertex relative to the center point of the reference block is the same as the position direction of the thirteenth block relative to the seventeenth block; the position direction of the seventeenth vertex relative to the center point of the reference block is the same as the position direction of the fifteenth block relative to the tenth block; and the position direction of the eighteenth vertex relative to the center point of the reference block is the same as the position direction of the seventeenth block relative to the thirteenth block.
With reference to the tenth aspect, in a possible implementation manner, in terms of the sixteenth position coordinate in the second coordinate system calculated according to the thirteenth position coordinate and the fifteenth position coordinate, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: calculating a displacement vector from the thirteenth position coordinate to the fifteenth position coordinate to obtain a first displacement vector; and calculating the sixteenth position coordinate according to the thirteenth position coordinate or the fifteenth position coordinate and the first displacement vector with a first proportion.
With reference to the tenth aspect, in one possible implementation manner, the first ratio is equal to a preset numerical value.
With reference to the tenth aspect, in a possible implementation manner, an image frame in which the reference block is located is a reference image frame, the first image frame is located before the second image frame in a data stream, and before the sixteenth position coordinate is obtained through the calculation, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: determining that the first ratio is equal to a ratio of an absolute value of the first time difference to an absolute value of the second time difference; the first time difference is equal to the difference between the time point of the predicted image frame in the data stream and the time point of the reference image frame in the data stream; and the second time difference is equal to the difference between the time point of the first image frame in the data stream and the time point of the second image frame in the data stream.
With reference to the tenth aspect, in a possible implementation manner, an image frame in which the reference block is located is a reference image frame, the first image frame is located before the second image frame in a data stream, and before the sixteenth position coordinate is obtained through the calculation, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: determining that the first ratio is equal to a ratio of the first value to the second value; the first value is equal to the first number plus one; the second value is equal to the second number plus one; the first number is equal to the number of image frames spaced between the reference image frame and the predicted image frame in the data stream; and the second number is equal to the number of image frames spaced between the first image frame and the second image frame in the data stream.
With reference to the tenth aspect, in a possible implementation manner, in terms of the calculating the sixteenth position coordinate according to the thirteenth position coordinate or the fifteenth position coordinate and the first displacement vector of the first proportion, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: if the reference image frame is the first image frame, calculating the sixteenth position coordinate according to the thirteenth position coordinate and the first displacement vector of the first proportion; and if the reference image frame is the second image frame, calculating the sixteenth position coordinate according to the fifteenth position coordinate and the first displacement vector of the first proportion.
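Putting the displacement-vector steps together, a hedged sketch follows: the first displacement vector runs from the thirteenth to the fifteenth position coordinate, the first ratio here follows the frame-count formula above, and the starting point depends on which frame is the reference image frame. All names and parameters are illustrative.

```python
import numpy as np

def sixteenth_position(pos13, pos15, ref_is_first_frame,
                       frames_between_ref_and_pred, frames_between_first_and_second):
    """pos13 / pos15: 3D coordinates of the matched blocks in the first and
    second image frames, as numpy arrays."""
    v = pos15 - pos13  # first displacement vector (3D)
    first_ratio = (frames_between_ref_and_pred + 1) / (frames_between_first_and_second + 1)
    if ref_is_first_frame:
        return pos13 + first_ratio * v  # offset from the first image frame
    return pos15 + first_ratio * v      # offset from the second image frame
```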
With reference to the tenth aspect, in a possible implementation manner, an image frame in which the reference block is located is a reference image frame, the first image frame is located before the second image frame in a data stream, and before the determining a tenth position coordinate of an eleventh vertex of the prediction block in the prediction image frame, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: determining one of the first image frame and the second image frame as the reference image frame; determining the other one of the first image frame and the second image frame as a matching image frame; dividing the reference image frame into blocks to obtain the reference block; determining, according to the matching algorithm, a block in the matching image frame that matches the reference block as a matching block; wherein if the reference block is the tenth block, the matching block is the twelfth block; and if the reference block is the twelfth block, the matching block is the tenth block.
With reference to the tenth aspect, in a possible implementation manner, in the determining, according to the matching algorithm, that a block in the matched image frame that matches the reference block is a matched block, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: determining a block with the highest similarity to the reference block in a first region of the matched image frame as the matched block; the first region is a region of a preset shape with a first area, centered on a reference position coordinate in the matching image frame, where the reference position coordinate is the position coordinate of the reference block in the reference image frame.
With reference to the tenth aspect, in a possible implementation manner, before the determining that a block with the highest similarity to the reference block in the first area of the matching image frame is the matching block, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: and calculating the first area according to the reference depth value of the reference block, wherein the larger the reference depth value is, the smaller the calculated first area is.
With reference to the tenth aspect, in a possible implementation manner, before the determining that the block with the highest similarity to the reference block in the first region of the matching image frame is the matching block, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: if the reference depth value of the reference block is larger than or equal to a first threshold value, determining that the first area is equal to a first preset area; if the reference depth value is larger than a second threshold value and smaller than the first threshold value, calculating to obtain the first area according to the reference depth value, wherein the larger the reference depth value is, the smaller the first area is; if the reference depth value is smaller than or equal to the second threshold value, determining that the first area is equal to a second preset area; the first threshold is larger than the second threshold, and the second preset area is larger than the first preset area.
In an eleventh aspect, a method for generating an image frame is provided, which may include: in a first drawing cycle, according to a drawing instruction of a target application program, when the drawing instruction satisfies a first condition, storing the drawing result into a seventh storage space as a first drawing result, and when the drawing instruction satisfies a second condition, storing the drawing result into an eighth storage space as a second drawing result; generating a seventh image frame according to the first drawing result and the second drawing result; in a second drawing cycle, according to a drawing instruction of the target application program, when the drawing instruction satisfies the first condition, storing the drawing result into the seventh storage space as a third drawing result, and when the drawing instruction satisfies the second condition, not drawing; and generating an eighth image frame according to the third drawing result and the second drawing result. The drawing result generated when the drawing instruction satisfies the first condition may be presented on the display screen as a dynamic interface, and the drawing result generated when the drawing instruction satisfies the second condition may be presented as a control. Thus, when the method of the eleventh aspect is implemented, the electronic device may store the drawing results obtained under different conditions of the drawing instruction of the target application program into different storage spaces, so that one drawing result can be shared across two drawing cycles; this shortens the drawing time spent when the drawing instruction satisfies the second condition and reduces the power consumption of the electronic device.
With reference to the eleventh aspect, in one possible implementation manner, the method further includes: in a third drawing cycle, according to a drawing instruction of the target application program, when the drawing instruction satisfies the first condition, storing the drawing result into the seventh storage space as a fourth drawing result, and when the drawing instruction satisfies the second condition, storing the drawing result into the eighth storage space as a fifth drawing result, where the third drawing cycle is located after the first drawing cycle; generating a ninth image frame according to the fourth drawing result and the fifth drawing result; in a fourth drawing cycle, generating a sixth drawing result according to the first drawing result and the fourth drawing result; generating a tenth image frame according to the sixth drawing result and a seventh drawing result; the seventh drawing result is a drawing result obtained when the drawing instruction of the target application program satisfies the second condition in a fifth drawing cycle, and the fifth drawing cycle is before the fourth drawing cycle. This example may be used in a frame interpolation scenario: the electronic device can generate a predicted sixth drawing result from the two drawing results obtained when the drawing instruction satisfied the first condition, and combine it with the seventh drawing result obtained when the drawing instruction satisfied the second condition in an earlier drawing cycle to generate a predicted interpolated image frame, as sketched below. Since the electronic device can reuse the seventh drawing result when generating the predicted interpolated image frame, the power consumption of generating the interpolated image frame is reduced; and because the effect of the seventh drawing result in the image frame does not need to be predicted, the electronic device performs prediction calculation only on the drawing results obtained when the drawing instruction satisfied the first condition. Compared with a general frame interpolation method, the electronic device does not need to compute over the data of a whole image frame, which further reduces its power consumption.
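A hedged sketch of this interpolation step: the sixth (predicted dynamic) drawing result is derived from the two first-condition drawing results, where a simple midpoint blend stands in for the actual motion-based prediction, and the reused seventh (static) drawing result is composited on top. The alpha handling and all names are assumptions.

```python
import numpy as np

def tenth_image_frame(first_result, fourth_result, seventh_result, seventh_alpha):
    """first_result / fourth_result: dynamic layers from two drawing cycles;
    seventh_result: reused static layer; seventh_alpha: its (H, W) coverage mask."""
    sixth = (first_result.astype(np.float32) + fourth_result) / 2.0  # stand-in prediction
    a = seventh_alpha[..., None].astype(np.float32) / 255.0
    out = seventh_result * a + sixth * (1.0 - a)  # static controls over dynamic scene
    return out.astype(np.uint8)
```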
With reference to the eleventh aspect, in a possible implementation manner, in an aspect that the electronic device generates a seventh image frame according to the first drawing result and the second drawing result, the method includes: storing the first drawing result and the second drawing result into a ninth storage space; generating the seventh image frame according to the first drawing result and the second drawing result in the ninth storage space.
With reference to the eleventh aspect, in a possible implementation manner, in the generating a ninth image frame according to the fourth drawing result and the fifth drawing result, the method includes: storing the fourth drawing result and the fifth drawing result in the ninth storage space; generating the ninth image frame according to the fourth drawing result and the fifth drawing result in the ninth storage space.
With reference to the eleventh aspect, in a possible implementation manner, in the aspect of generating the tenth image frame according to the sixth drawing result and the seventh drawing result, the method includes: storing the sixth drawing result and the seventh drawing result in the ninth storage space; generating the tenth image frame according to the sixth drawing result and the seventh drawing result in the ninth storage space.
With reference to the eleventh aspect, in a possible implementation manner, the seventh storage space is composed of a tenth storage space and an eleventh storage space, the first rendering result is stored in the tenth storage space, the fourth rendering result is stored in the eleventh storage space, and in the aspect of generating a sixth rendering result according to the first rendering result and the fourth rendering result, the method includes: storing the first drawing result into the eleventh storage space; generating the sixth rendering result from the first rendering result and the fourth rendering result in the eleventh storage space.
With reference to the eleventh aspect, in a possible implementation manner, the seventh storage space is composed of at least three storage spaces, and the at least three storage spaces include a tenth storage space, an eleventh storage space, and a twelfth storage space; the first rendering result is stored in a tenth storage space, the fourth rendering result is stored in an eleventh storage space, and in the generating a sixth rendering result from the first rendering result and the fourth rendering result, the method includes: storing the first drawing result and the fourth drawing result into a twelfth storage space; and generating the sixth drawing result according to the first drawing result and the fourth drawing result in the twelfth storage space.
With reference to the eleventh aspect, in one possible implementation manner, the third drawing period is adjacent to the fourth drawing period; the fifth drawing cycle and the third drawing cycle are the same drawing cycle, and the fifth drawing result and the seventh drawing result are the same drawing result. When an insertion image frame is generated, the electronic device can reuse a drawing result used for generating an image of a previous frame, and power consumption of the electronic device is reduced.
With reference to the eleventh aspect, in a possible implementation manner, in the generating a seventh image frame according to the first drawing result and the second drawing result, the method includes: storing the second drawing result into the seventh storage space; generating the seventh image frame according to the first drawing result and the second drawing result in the seventh storage space.
With reference to the eleventh aspect, in a possible implementation manner, an electronic device applying the method includes a counting unit; the initialized value of the counting unit is a first value, and the value of the counting unit switches once between the first value and a second value each time it is updated; the counting unit is updated at the moment a drawing cycle starts; in a drawing cycle, if the updated value of the counting unit is the second value, the eighth storage space is emptied at the start of the drawing cycle, and when the drawing instruction satisfies the second condition, the drawing result is stored into the eighth storage space; if the updated value of the counting unit is the first value, the eighth storage space is not emptied at the start of the drawing cycle, and when the drawing instruction satisfies the second condition, drawing is not executed. Therefore, by providing the counting unit, every two adjacent drawing cycles can share one drawing result obtained when the drawing instruction satisfies the second condition, which reduces the data processing amount of the electronic device during drawing and reduces its power consumption. A sketch of this counting logic follows.
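The two-value counting logic can be pictured as follows (illustrative names; the patent does not prescribe an implementation): the counter toggles at the start of each drawing cycle, and second-condition drawing is executed only when it holds the second value.

```python
FIRST_VALUE, SECOND_VALUE = 0, 1

class CountingUnit:
    def __init__(self):
        self.value = FIRST_VALUE       # initialized to the first value

    def update(self):                  # called at the moment a drawing cycle starts
        self.value = SECOND_VALUE if self.value == FIRST_VALUE else FIRST_VALUE
        return self.value

def on_drawing_cycle_start(counter, eighth_storage):
    if counter.update() == SECOND_VALUE:
        eighth_storage.clear()         # empty the eighth storage space
        return True                    # execute second-condition drawing this cycle
    return False                       # reuse the previous cycle's result instead
```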
With reference to the eleventh aspect, in one possible implementation manner, the method further includes: in a third drawing period, generating a fourth drawing result according to the first drawing result and the third drawing result; and generating a ninth image frame according to the second drawing result and the fourth drawing result.
With reference to the eleventh aspect, in a possible implementation manner, the electronic device applying the method includes a counting unit, where the initialized value of the counting unit is a first value, the counting unit repeatedly cycles through three values in the order of the first value, a second value, and a third value, and the counting unit is updated at the moment a drawing cycle starts; in one drawing cycle, if the updated value of the counting unit is the second value, the eighth storage space is emptied at the start of the drawing cycle, and when the drawing instruction satisfies the second condition, the drawing result is stored into the eighth storage space; if the updated value of the counting unit is the first value or the third value, the eighth storage space is not emptied at the start of the drawing cycle, and when the drawing instruction satisfies the second condition, drawing is not executed. Therefore, by providing the counting unit, every three drawing cycles can share one drawing result obtained when the drawing instruction satisfies the second condition, which reduces the data processing amount of the electronic device during drawing and reduces its power consumption.
With reference to the eleventh aspect, in a possible implementation manner, the ninth storage space is composed of at least two storage spaces, and the at least two storage spaces rotate between the first state and the second state according to the first instruction of the target application program when the target application program runs; at the same time point, only one of the at least two storage spaces is in the first state, and the rest of the storage spaces are in the second state; when the at least two storage spaces are in the first state, the image frames in the at least two storage spaces can be transmitted to a display device for displaying; when the at least two storage spaces are in the second state, the electronic device can draw in the at least two storage spaces; two frame image frames generated by two adjacent drawing periods are respectively stored in two different storage spaces in the ninth storage space. Therefore, the electronic equipment can alternate the storage space in which the image frames are stored, and draw the next image frame in the process of displaying the previous image frame, so that the efficiency of generating and displaying the image frames by the electronic equipment is improved.
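The rotation of the ninth storage space behaves like a classic swap chain; the minimal sketch below (all names assumed) keeps exactly one buffer in the displayable first state while the others remain drawable.

```python
class NinthStorageSpace:
    def __init__(self, n=2):
        self.buffers = [None] * n      # at least two storage spaces
        self.display_index = 0         # the single buffer in the first state

    def drawable_buffer_index(self):
        # any buffer not being displayed is in the second (drawable) state
        return (self.display_index + 1) % len(self.buffers)

    def rotate(self):
        # called when the target application issues the first instruction
        self.display_index = (self.display_index + 1) % len(self.buffers)
```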
With reference to the eleventh aspect, in one possible implementation manner, the first condition is that the drawing instruction includes an instruction for enabling the depth test, and the second condition is that the drawing instruction includes an instruction for disabling the depth test. Dividing the drawing results according to whether the depth test is enabled or disabled achieves the purpose of multiplexing the layers' storage space, which reduces the data processing amount in drawing image frames with little or no impact on the display effect, and reduces the power consumption of the electronic device.
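Classifying draw calls by the depth-test state could look like the sketch below; GL_DEPTH_TEST follows standard OpenGL naming, while the interception hooks and storage names are assumptions for illustration.

```python
GL_DEPTH_TEST = 0x0B71  # standard OpenGL enum value

class DrawCallClassifier:
    def __init__(self):
        self.depth_test_enabled = False

    def on_enable(self, capability):
        if capability == GL_DEPTH_TEST:
            self.depth_test_enabled = True    # first condition: dynamic interface

    def on_disable(self, capability):
        if capability == GL_DEPTH_TEST:
            self.depth_test_enabled = False   # second condition: controls

    def target_storage_space(self):
        return "seventh" if self.depth_test_enabled else "eighth"
```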
With reference to the eleventh aspect, in a possible implementation manner, the boundary between any two adjacent drawing cycles is the time point at which the target application program calls the second instruction. The first instruction and the second instruction may be the same instruction. That is, each rotation of the storage space storing the image frame may be regarded as the end of the previous drawing cycle and the start of the next drawing cycle.
In a twelfth aspect, an electronic device is provided, which includes: one or more processors and memory; the memory coupled with the one or more processors, the memory to store computer program code, the computer program code including computer instructions, the one or more processors to invoke the computer instructions to cause the electronic device to perform: in a first drawing cycle, according to a drawing instruction of a target application program, when the drawing instruction meets a first condition, storing a drawing result into a seventh storage space as a first drawing result, and when the drawing instruction meets a second condition, storing the drawing result into an eighth storage space as a second drawing result; generating a seventh image frame according to the first drawing result and the second drawing result; in a second drawing cycle, according to a drawing instruction of the target application program, when the drawing instruction meets the first condition, storing a drawing result into the seventh storage space as a third drawing result, and when the drawing instruction meets the second condition, not drawing; and generating an eighth image frame according to the third drawing result and the second drawing result.
With reference to the twelfth aspect, in a possible implementation manner, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: in a third drawing period, according to a drawing instruction of a target application program, when the drawing instruction meets a first condition, storing a drawing result into the seventh storage space to serve as a fourth drawing result, and when the drawing instruction meets a second condition, storing the drawing result into the eighth storage space to serve as a fifth drawing result, wherein the third drawing period is positioned after the first drawing period; generating a ninth image frame according to the fourth drawing result and the fifth drawing result; in a fourth drawing period, generating a sixth drawing result according to the first drawing result and the fourth drawing result; generating a tenth image frame according to a sixth drawing result and the seventh drawing result; the seventh drawing result is a drawing result obtained when the drawing instruction of the target application program satisfies the second condition in a fifth drawing cycle, and the fifth drawing cycle is before the fourth drawing cycle.
With reference to the twelfth aspect, in a possible implementation manner, in the aspect of generating a seventh image frame according to the first drawing result and the second drawing result, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: storing the first drawing result and the second drawing result into a ninth storage space; generating the seventh image frame according to the first drawing result and the second drawing result in the ninth storage space.
With reference to the twelfth aspect, in a possible implementation manner, in the generating a ninth image frame according to the fourth drawing result and the fifth drawing result, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: storing the fourth drawing result and the fifth drawing result in the ninth storage space; generating the ninth image frame according to the fourth drawing result and the fifth drawing result in the ninth storage space.
With reference to the twelfth aspect, in a possible implementation manner, in the generating a tenth image frame according to the sixth rendering result and the seventh rendering result, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: storing the sixth drawing result and the seventh drawing result in the ninth storage space; generating the tenth image frame from the sixth drawing result and the seventh drawing result in the ninth storage space.
With reference to the twelfth aspect, in a possible implementation manner, the seventh storage space is composed of a tenth storage space and an eleventh storage space, the first drawing result is stored in the tenth storage space, the fourth drawing result is stored in the eleventh storage space, and in the aspect of generating a sixth drawing result according to the first drawing result and the fourth drawing result, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: storing the first drawing result into the eleventh storage space; and generating the sixth drawing result according to the first drawing result and the fourth drawing result in the eleventh storage space.
With reference to the twelfth aspect, in a possible implementation manner, the seventh storage space is composed of at least three storage spaces, and the at least three storage spaces include a tenth storage space, an eleventh storage space, and a twelfth storage space; the first rendering result is stored in a tenth storage space, the fourth rendering result is stored in an eleventh storage space, and in terms of generating a sixth rendering result according to the first rendering result and the fourth rendering result, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: storing the first drawing result and the fourth drawing result into a twelfth storage space; and generating the sixth drawing result according to the first drawing result and the fourth drawing result in the twelfth storage space.
With reference to the twelfth aspect, in a possible implementation manner, the third drawing period is adjacent to the fourth drawing period; the fifth drawing cycle and the third drawing cycle are the same drawing cycle, and the fifth drawing result and the seventh drawing result are the same drawing result.
With reference to the twelfth aspect, in a possible implementation manner, in the aspect of generating the seventh image frame according to the first drawing result and the second drawing result, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: storing the second drawing result into the seventh storage space; generating the seventh image frame according to the first drawing result and the second drawing result in the seventh storage space.
With reference to the twelfth aspect, in a possible implementation manner, the electronic device includes a counting unit; the initialized value of the counting unit is a first value, and the value of the counting unit switches once between the first value and a second value each time it is updated; the counting unit is updated at the moment a drawing cycle starts; in a drawing cycle, if the updated value of the counting unit is the second value, the eighth storage space is emptied at the start of the drawing cycle, and when the drawing instruction satisfies the second condition, the drawing result is stored into the eighth storage space; if the updated value of the counting unit is the first value, the eighth storage space is not emptied at the start of the drawing cycle, and when the drawing instruction satisfies the second condition, drawing is not executed.
With reference to the twelfth aspect, in a possible implementation manner, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: in a third drawing period, generating a fourth drawing result according to the first drawing result and the third drawing result; and generating a ninth image frame according to the second drawing result and the fourth drawing result.
With reference to the twelfth aspect, in a possible implementation manner, the electronic device includes a counting unit, where the initialized value of the counting unit is a first value, the counting unit repeatedly cycles through three values in the order of the first value, a second value, and a third value, and the counting unit is updated at the moment a drawing cycle starts; in one drawing cycle, if the updated value of the counting unit is the second value, the eighth storage space is emptied at the start of the drawing cycle, and when the drawing instruction satisfies the second condition, the drawing result is stored into the eighth storage space; if the updated value of the counting unit is the first value or the third value, the eighth storage space is not emptied at the start of the drawing cycle, and when the drawing instruction satisfies the second condition, drawing is not executed.
With reference to the twelfth aspect, in a possible implementation manner, the ninth storage space is composed of at least two storage spaces, and the at least two storage spaces rotate between the first state and the second state according to the first instruction of the target application program when the target application program runs; at the same time point, only one of the at least two storage spaces is in the first state, and the rest of the storage spaces are in the second state; when the at least two storage spaces are in the first state, the image frames in the at least two storage spaces can be transmitted to a display device for displaying; when the at least two storage spaces are in the second state, the electronic device can draw in the at least two storage spaces; two frame image frames generated by two adjacent drawing cycles are respectively stored in two different storage spaces in the ninth storage space.
With reference to the twelfth aspect, in a possible implementation manner, the first condition is that the drawing instruction includes an instruction for enabling the depth test, and the second condition is that the drawing instruction includes an instruction for disabling the depth test.
With reference to the twelfth aspect, in a possible implementation manner, the boundary between any two adjacent drawing cycles is the time point at which the target application program calls a second instruction.
In a thirteenth aspect, an embodiment of the present application provides a method for image frame prediction, where the method may include: when drawing a twenty-first drawing frame of the first application, the electronic device draws the drawing instruction of the twenty-first drawing frame according to a first drawing range to obtain a twenty-first drawing result, where the size of the first drawing range is larger than that of the twenty-first drawing frame of the first application; when drawing a twenty-second drawing frame of the first application, the electronic device draws the drawing instruction of the twenty-second drawing frame according to a second drawing range to obtain a twenty-second drawing result, where the size of the second drawing range is larger than that of the twenty-second drawing frame, and the size of the twenty-first drawing frame is the same as that of the twenty-second drawing frame; and the electronic device predicts and generates a twenty-third predicted frame of the first application according to the twenty-first drawing result and the twenty-second drawing result, where the size of the twenty-third predicted frame is the same as that of the twenty-first drawing frame.
In this way, the electronic device may obtain a predicted frame. The frame rate of the electronic device can be increased without increasing the number of drawn frames, so the fluency of the video interface displayed by the electronic device can be improved while saving power. Further, the predicted frame may contain drawing content that is not present in the twenty-first and twenty-second drawing frames displayed by the electronic device, so the drawing content in the predicted frame is closer to the content within the camera's field of view, and the image frames predicted by the electronic device may be more accurate.
With reference to the thirteenth aspect, in a possible implementation manner, the electronic device draws the drawing instruction of the twenty-first drawing frame according to the first drawing range to obtain a twenty-first drawing result, and specifically includes: the electronic equipment modifies a first parameter in a first drawing instruction of a twenty-first drawing frame issued by a first application into a first drawing range; the first parameter is used for setting the drawing range size of a twenty-first drawing frame; and the electronic equipment draws the drawing instruction of the modified twenty-first drawing frame according to the first drawing range to obtain a twenty-first drawing result.
With reference to the thirteenth aspect, in a possible implementation manner, the size of the first rendering range is larger than the size of the twenty-first rendering frame of the first application, and specifically includes: the width of the first drawing range is K3 times the width of the twenty-first drawing frame, the height of the first drawing range is K4 times the height of the twenty-first drawing frame, and K3 and K4 are greater than 1.
With reference to the thirteenth aspect, in a possible implementation manner, K3 and K4 are determined by a fixed value configured by a system of the electronic device, or by the electronic device according to a drawing parameter included in a drawing instruction of the twenty-first drawing frame.
With reference to the thirteenth aspect, in a possible implementation manner, the electronic device draws the drawing instruction of the modified twenty-first drawing frame according to the first drawing range to obtain a twenty-first drawing result, and specifically includes: and the electronic equipment generates a first conversion matrix according to K3 and K4, and the electronic equipment adjusts the size of the drawing content in the drawing instruction of the modified twenty-first drawing frame according to the first conversion matrix and draws the drawing content in the first drawing range to obtain a twenty-first drawing result.
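One plausible form of the first conversion matrix is a homogeneous scale that shrinks the draw content by 1/K3 horizontally and 1/K4 vertically, so the original scene plus a margin fits the enlarged first drawing range; the exact matrix layout is not specified in the aspect, so this is an assumed convention.

```python
import numpy as np

def first_conversion_matrix(k3, k4):
    # scale x by 1/K3 and y by 1/K4 in homogeneous clip coordinates
    return np.diag([1.0 / k3, 1.0 / k4, 1.0, 1.0])

# Applying it to one homogeneous vertex of the original drawing content:
vertex = np.array([0.5, -0.25, 0.1, 1.0])
adjusted = first_conversion_matrix(1.2, 1.2) @ vertex
```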
With reference to the thirteenth aspect, in a possible implementation manner, the electronic device draws the drawing instruction of the twenty-second drawing frame according to the second drawing range to obtain a twenty-second drawing result, which specifically includes: the electronic device modifies a second parameter in a second drawing instruction of the twenty-second drawing frame issued by the first application into the second drawing range; the second parameter is used for setting the drawing range size of the twenty-second drawing frame; and the electronic device draws the modified drawing instruction of the twenty-second drawing frame according to the second drawing range to obtain the twenty-second drawing result.
With reference to the thirteenth aspect, in a possible implementation manner, the size of the second drawing range is larger than the size of the twenty-second drawing frame of the first application, which specifically includes: the width of the second drawing range is K5 times the width of the twenty-second drawing frame, the height of the second drawing range is K6 times the height of the twenty-second drawing frame, and K5 and K6 are greater than 1.
With reference to the thirteenth aspect, in a possible implementation manner, K5 and K6 are determined by a fixed value configured by a system of the electronic device, or by the electronic device according to a drawing parameter included in the drawing instruction of the twenty-second drawing frame.
With reference to the thirteenth aspect, in a possible implementation manner, the electronic device draws the modified drawing instruction of the twenty-second drawing frame according to the second drawing range to obtain a twenty-second drawing result, which specifically includes: the electronic device generates a second conversion matrix according to K5 and K6, adjusts the size of the drawing content in the modified drawing instruction of the twenty-second drawing frame according to the second conversion matrix, and draws the drawing content in the second drawing range to obtain the twenty-second drawing result.
With reference to the thirteenth aspect, in a possible implementation manner, the predicting, by the electronic device, a twenty-third predicted frame of the first application according to the twenty-first drawing result and the twenty-second drawing result includes: the electronic device predicts and generates a twenty-third drawing result of the twenty-third predicted frame according to the twenty-first drawing result and the twenty-second drawing result; and the electronic device clips the twenty-third drawing result into the twenty-third predicted frame.
With reference to the thirteenth aspect, in a possible implementation manner, the predicting, by the electronic device, a twenty-third drawing result of the twenty-third predicted frame according to the twenty-first drawing result and the twenty-second drawing result includes: the electronic equipment determines a first motion vector of a twenty-second drawing result according to the twenty-first drawing result and the twenty-second drawing result; the electronic device predicts a twenty-third rendering result of a twenty-third predicted frame based on the twenty-second rendering result and the first motion vector.
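A minimal sketch of this prediction step: each pixel block of the twenty-second drawing result is shifted along its motion vector into a canvas the size of the enlarged drawing range, so content drawn outside the visible frame can move into view. The block size and the dict layout of the motion vectors are illustrative assumptions.

```python
import numpy as np

def predict_twenty_third_result(result22, motion_vectors, block=16):
    """result22: (H, W, 3) enlarged drawing result;
    motion_vectors: {(bx, by): (dx, dy)} displacement per pixel block."""
    H, W = result22.shape[:2]
    out = np.zeros_like(result22)
    for (bx, by), (dx, dy) in motion_vectors.items():
        y, x = by * block, bx * block
        ny, nx = y + dy, x + dx
        if 0 <= ny <= H - block and 0 <= nx <= W - block:
            out[ny:ny + block, nx:nx + block] = result22[y:y + block, x:x + block]
    return out  # clip the central region afterwards to obtain the predicted frame
```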
With reference to the thirteenth aspect, in a possible implementation manner, the determining, by the electronic device, the first motion vector of the twenty-second drawing result according to the twenty-first drawing result and the twenty-second drawing result specifically includes: the electronic equipment divides the twenty-second drawing result into Q pixel blocks, and the electronic equipment takes out the first pixel block from the Q pixel blocks of the twenty-second drawing result; the electronic equipment determines a second pixel block matched with the first pixel block in the twenty-first drawing result; the electronic equipment obtains a motion vector of the first pixel block according to the displacement from the second pixel block to the first pixel block; the electronic device determines a first motion vector of a twenty-second rendering result from the motion vector of the first pixel block.
With reference to the thirteenth aspect, in a possible implementation manner, the determining, by the electronic device, a second pixel block matched with the first pixel block in the twenty-first drawing result specifically includes: the electronic device determines a plurality of candidate pixel blocks in the twenty-first drawing result through a first pixel point in the first pixel block; the electronic device separately calculates the difference between the color values of each candidate pixel block and the first pixel block; and the electronic device determines, according to these differences, the second pixel block matched with the first pixel block, where the second pixel block is the candidate pixel block whose color-value difference from the first pixel block is the smallest among the candidate pixel blocks.
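The per-block motion-vector estimation above could be realized as in the following sketch: the twenty-second drawing result is split into Q pixel blocks, each block's candidates in the twenty-first drawing result are scored by color-value difference, and the displacement from the best match is recorded. The candidate window (plus or minus `search` pixels) and all names are assumptions.

```python
import numpy as np

def first_motion_vector_field(result21, result22, block=16, search=8):
    H, W = result22.shape[:2]
    vectors = {}
    for by in range(H // block):
        for bx in range(W // block):
            y, x = by * block, bx * block
            cur = result22[y:y + block, x:x + block].astype(np.int32)
            best, best_diff = (0, 0), float("inf")
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    sy, sx = y + dy, x + dx
                    if not (0 <= sy <= H - block and 0 <= sx <= W - block):
                        continue
                    cand = result21[sy:sy + block, sx:sx + block].astype(np.int32)
                    diff = np.abs(cand - cur).sum()  # color-value difference
                    if diff < best_diff:             # minimum difference wins
                        best_diff, best = diff, (x - sx, y - sy)
            vectors[(bx, by)] = best  # displacement from matched block to current
    return vectors
```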
With reference to the thirteenth aspect, in a possible implementation manner, when drawing a twenty-first drawing frame of the first application, the electronic device draws a drawing instruction of the twenty-first drawing frame according to the first drawing range to obtain a twenty-first drawing result, which specifically includes: when drawing a twenty-first drawing frame of the first application, the electronic device draws a drawing instruction of the twenty-first drawing frame in a twenty-first memory space according to a first drawing range to obtain a twenty-first drawing result, wherein the size of the twenty-first memory space is larger than or equal to that of the first drawing range.
With reference to the thirteenth aspect, in a possible implementation manner, when drawing the twenty-second drawing frame of the first application, the electronic device draws the drawing instruction of the twenty-second drawing frame according to the second drawing range to obtain a twenty-second drawing result, which specifically includes: when the twenty-second drawing frame of the first application is drawn, the electronic device draws the drawing instruction of the twenty-second drawing frame in a twenty-second memory space according to a second drawing range to obtain a twenty-second drawing result, wherein the size of the twenty-second memory space is greater than or equal to the size of the second drawing range.
With reference to the thirteenth aspect, in a possible implementation manner, the predicting, by the electronic device, a twenty-third rendering result of a twenty-third predicted frame according to the twenty-second rendering result and the first motion vector includes: the electronic equipment predicts and generates a twenty-third drawing result according to a twenty-second drawing result and the first motion vector and a third drawing range; wherein the size of the third rendering range is larger than the size of the twenty-third predicted frame.
With reference to the thirteenth aspect, in a possible implementation manner, when drawing the twenty-first drawing frame of the first application, the electronic device draws the drawing instruction of the twenty-first drawing frame according to the first drawing range, and after obtaining a twenty-first drawing result, the method further includes: and the electronic equipment cuts the twenty-first drawing result into a twenty-first drawing frame.
With reference to the thirteenth aspect, in a possible implementation manner, when the twenty-second drawing frame of the first application is drawn, the electronic device draws the drawing instruction of the twenty-second drawing frame according to the second drawing range, and after obtaining a twenty-second drawing result, the method further includes: the electronic device clips the twenty-second drawing result into a twenty-second drawing frame.
In a fourteenth aspect, the present application provides a method of image frame prediction, which may include: when drawing a twenty-first drawing frame, the electronic equipment draws the drawing content of the drawing instruction of the twenty-first drawing frame into a twenty-first memory space to obtain a twenty-first drawing result, wherein the size of the twenty-first memory space is larger than that of a default memory space, and the default memory space is a memory space provided by an electronic equipment system and used for storing image frames for display; when drawing a twenty-second drawing frame, the electronic device draws the drawing content of the drawing instruction of the twenty-second drawing frame to a twenty-second memory space to obtain a twenty-second drawing result, wherein the size of the twenty-second memory space is larger than that of the default memory space; the electronic equipment generates a twenty-third drawing result in a twenty-third memory space according to the twenty-first drawing result and the twenty-second drawing result, wherein the size of the twenty-third memory space is larger than that of the default memory space; and the electronic equipment cuts the twenty-third drawing result into a drawing frame with the size same as that of the default memory space, so as to obtain a twenty-third prediction frame.
In this way, the electronic device may obtain a predicted frame. The frame rate of the electronic device can be increased without increasing the number of drawn frames, so the fluency of the video interface displayed by the electronic device can be improved while saving power. Further, the predicted frame may contain drawing content that is not present in the twenty-first and twenty-second drawing frames displayed by the electronic device, so the drawing content in the predicted frame is closer to the content within the camera's field of view, and the image frames predicted by the electronic device may be more accurate.
With reference to the fourteenth aspect, in a possible implementation manner, the size of the twenty-first memory space is greater than the size of the default memory space, which specifically includes: the first size of the twenty-first memory space is K1 times the third size of the default memory space, the second size of the twenty-first memory space is K2 times the fourth size of the default memory space, and K1 and K2 are greater than 1.
The size of the twenty-second memory space is greater than the size of the default memory space, which specifically includes: the fifth size of the twenty-second memory space is K1 times the third size of the default memory space, and the sixth size of the twenty-second memory space is K2 times the fourth size of the default memory space.
The size of the twenty-third memory space is greater than the size of the default memory space, which specifically includes: a seventh size of the twenty-third memory space is K1 times the third size of the default memory space, and an eighth size of the twenty-third memory space is K2 times the fourth size of the default memory space.
Here, the first size of the twenty-first memory space may be a width of the twenty-first memory space, and the second size of the twenty-first memory space may be a height of the twenty-first memory space. The third size of the default memory space may be a width of the default memory space and the fourth size of the default memory space may be a height of the default memory space. The fifth size of the twenty-second memory space may be a width of the twenty-second memory space, and the sixth size of the twenty-second memory space may be a height of the twenty-second memory space. The seventh size of the twenty-third memory space may be a width of the twenty-third memory space, and the eighth size of the twenty-third memory space may be a height of the twenty-third memory space. In this way, the electronic device may enlarge the width and height of the twenty-first memory space by different sizes. The electronic device may enlarge the width and height of the twenty-second memory space by different sizes. The electronic device may enlarge the width and height of the twenty-third memory space by different sizes.
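As an illustrative example only: with a default memory space of 1920 × 1080 pixels, K1 = 1.25 and K2 = 1.2, each of the three memory spaces would be 2400 pixels wide and 1296 pixels high, leaving a margin of 240 pixels on the left and right and 108 pixels at the top and bottom in which drawing content just outside the default frame can be retained.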
With reference to the fourteenth aspect, in a possible implementation manner, when drawing the twenty-first drawing frame, the electronic device draws the drawing content of the drawing instruction of the twenty-first drawing frame into the twenty-first memory space to obtain a twenty-first drawing result, which specifically includes: when drawing the twenty-first drawing frame, the electronic equipment draws the drawing content of the drawing instruction of the twenty-first drawing frame into a first drawing range of a twenty-first memory space to obtain a twenty-first drawing result; the size of the first rendering range is smaller than or equal to the size of a twenty-first memory space, and the size of the first rendering range is larger than the size of a default memory space.
With reference to the fourteenth aspect, in a possible implementation manner, the size of the first drawing range is smaller than or equal to the size of the twenty-first memory space, and the size of the first drawing range is larger than the size of the default memory space, specifically including: the ninth size of the first rendering range is K3 times the third size of the default memory space, the tenth size of the first rendering range is K4 times the fourth size of the default memory space, K3 is greater than 1 and less than or equal to K1, and K4 is greater than 1 and less than or equal to K2.
The ninth size of the first drawing range may be a width of the first drawing range, and the tenth size of the first drawing range may be a height of the first drawing range.
With reference to the fourteenth aspect, in one possible implementation manner, K3 is equal to K1, K4 is equal to K2, and K1, K2, K3, and K4 are fixed values configured by the system of the electronic device. The electronic device may configure K1, K2, K3, and K4 based on empirical values. The electronic device directly configures the fixed values to reduce the amount of calculation.
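For instance (an illustrative assumption, not a value taken from the claims), a system could fix K1 = K2 = K3 = K4 = 1.25 at configuration time, so that every drawing frame is rendered into a drawing range 25% larger than the default frame in each dimension without any per-frame computation.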
With reference to the fourteenth aspect, in one possible implementation manner, K3 and K4 are determined by the electronic device according to drawing parameters included in the drawing instruction of the twenty-first drawing frame. In this way, K3 and K4 set by the electronic device can be decided according to the drawing parameters included in the drawing instruction of the twenty-first drawing frame. Thus, the magnification of the drawing range of different drawing frames may be different. In this way, the magnification of the drawing range by the electronic device is more consistent with the drawing contents in the drawing instruction of the drawing frame.
With reference to the fourteenth aspect, in a possible implementation manner, when drawing the twenty-second drawing frame, the electronic device draws the drawing content of the drawing instruction of the twenty-second drawing frame to the twenty-second memory space to obtain a twenty-second drawing result, which specifically includes: when drawing the twenty-second drawing frame, the electronic device draws the drawing content of the drawing instruction of the twenty-second drawing frame into a second drawing range of a twenty-second memory space to obtain a twenty-second drawing result; the size of the second rendering range is smaller than or equal to the size of the twenty-second memory space, and the size of the second rendering range is larger than the size of the default memory space.
With reference to the fourteenth aspect, in a possible implementation manner, the size of the second rendering range is smaller than or equal to the size of the twenty-second memory space, and the size of the second rendering range is larger than the size of the default memory space, which specifically includes: an eleventh size of the second rendering range is K5 times the third size of the default memory space, a twelfth size of the second rendering range is K6 times the fourth size of the default memory space, K5 is greater than 1 and less than or equal to K1, and K6 is greater than 1 and less than or equal to K2.
The eleventh size of the second drawing range may be a width of the second drawing range, and the twelfth size of the second drawing range may be a height of the second drawing range.
With reference to the fourteenth aspect, in one possible implementation manner, K5 and K6 are fixed values of a system configuration of the electronic device. The electronic device directly configures the fixed value to reduce the amount of calculation.
With reference to the fourteenth aspect, in one possible implementation manner, K5 and K6 are determined by the electronic device according to drawing parameters included in the drawing instruction of the twenty-second drawing frame. In this way, K5 and K6 set by the electronic device can be decided according to the drawing parameters contained in the drawing instruction of the twenty-second drawing frame. Thus, the magnification of the drawing range of different drawing frames may be different. In this way, the magnification of the drawing range by the electronic device is more consistent with the drawing contents in the drawing instruction of the drawing frame.
With reference to the fourteenth aspect, in a possible implementation manner, the electronic device generates a twenty-third drawing result in a twenty-third memory space according to the twenty-first drawing result and the twenty-second drawing result, where a size of the twenty-third memory space is greater than a size of a default memory space, and the method specifically includes: the electronic equipment determines a first motion vector of a twenty-second drawing result according to the twenty-first drawing result and the twenty-second drawing result; the electronic device generates a twenty-third rendering result in a twenty-third memory space according to the twenty-second rendering result and the first motion vector. In this way, the electronic device can predict a twenty-third rendering result of the twenty-third predicted frame from the twenty-first rendering frame and the twenty-second rendering frame.
With reference to the fourteenth aspect, in a possible implementation manner, the determining, by the electronic device, the first motion vector of the twenty-second drawing result according to the twenty-first drawing result and the twenty-second drawing result specifically includes: the electronic equipment divides the twenty-second drawing result into Q pixel blocks, and the electronic equipment takes out the first pixel block from the Q pixel blocks of the twenty-second drawing result; the electronic equipment determines a second pixel block matched with the first pixel block in the twenty-first drawing result; the electronic equipment obtains a motion vector of the first pixel block according to the displacement from the second pixel block to the first pixel block; the electronic device determines a first motion vector of a twenty-second rendering result from the motion vector of the first pixel block. Following the steps in this implementation, the electronic device can determine motion vectors for all of the Q pixel blocks of the twenty-second rendering result. Each pixel block includes f × f (e.g., 16 × 16) pixels.
In the above implementation manner, the electronic device divides the twenty-second drawing result into blocks to calculate the motion vector, and does not need to calculate the motion vector of each pixel point in the twenty-second drawing result. This can reduce the amount of computation, thereby reducing the power consumption of the electronic device.
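To illustrate the saving, using the illustrative sizes above: for a 2400 × 1296 drawing result and f = 16, Q = (2400 / 16) × (1296 / 16) = 150 × 81 = 12,150 block motion vectors, compared with 2400 × 1296 ≈ 3.1 million per-pixel motion vectors, i.e., a factor of f² = 256 fewer vectors to compute.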
With reference to the fourteenth aspect, in a possible implementation manner, the determining, by the electronic device, a second pixel block matched with the first pixel block in the twenty-first drawing result specifically includes: the electronic device determines a plurality of candidate pixel blocks in the twenty-first drawing result through a first pixel point in the first pixel block; the electronic device respectively calculates the difference between the color values of each candidate pixel block and the first pixel block; the electronic device determines, according to these differences, the second pixel block matched with the first pixel block, where the second pixel block is the candidate pixel block whose color-value difference from the first pixel block is the smallest.
In this way, the electronic device can more accurately find a matching pixel block for each pixel block, thereby being able to more accurately calculate a motion vector for each pixel block.
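The matching procedure above can be sketched as follows in Python with NumPy. Using the sum of absolute differences (SAD) as the color-value difference and an exhaustive square search window are assumptions made for illustration; an embodiment may instead use, for example, the diamond search shown in Figs. 9A-9C.

```python
import numpy as np

F = 16  # pixel block size f (each block is f x f pixels)

def color_difference(block_a, block_b):
    # Difference of the color values of two pixel blocks (SAD).
    return int(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

def find_motion_vector(result21, result22, by, bx, search=8):
    """For the first pixel block of the 22nd drawing result (top-left pixel at
    row by, column bx), find the matching second pixel block in the 21st
    drawing result and return the motion vector (dy, dx) of the first block,
    i.e., the displacement from the second block to the first block."""
    h, w = result22.shape[:2]
    first_block = result22[by:by + F, bx:bx + F]
    best_diff, best_pos = None, (by, bx)
    # Candidate pixel blocks: every position in a square window around (by, bx).
    for cy in range(max(0, by - search), min(h - F, by + search) + 1):
        for cx in range(max(0, bx - search), min(w - F, bx + search) + 1):
            diff = color_difference(result21[cy:cy + F, cx:cx + F], first_block)
            if best_diff is None or diff < best_diff:
                best_diff, best_pos = diff, (cy, cx)
    # The candidate with the smallest color difference is the second block;
    # the motion vector points from it to the first block.
    return by - best_pos[0], bx - best_pos[1]
```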
With reference to the fourteenth aspect, in a possible implementation manner, the generating, by the electronic device, a twenty-third drawing result in a twenty-third memory space according to the twenty-second drawing result and the first motion vector specifically includes: the electronic device determines a motion vector of a twenty-third rendering result according to the first motion vector, and generates a twenty-third rendering result according to the twenty-second rendering result and the motion vector of the twenty-third rendering result. The motion vector of the twenty-third rendering result is G times the first motion vector, and G is greater than 0 and smaller than 1.
In combination with the fourteenth aspect, in one possible implementation manner, G is equal to 0.5. In this case the object is assumed to move at a constant speed across image frames, which simplifies the calculation for the electronic device and gives the user a smoother experience when watching the video.
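Continuing the sketch (and reusing F and find_motion_vector from the block above), the twenty-third drawing result can be synthesized by displacing each block of the twenty-second result by G times its motion vector, with G = 0.5 reflecting the uniform-motion assumption just described. The copy-based warp and all names are illustrative assumptions.

```python
def predict_buffer(result21, result22, g=0.5):
    """Generate the 23rd drawing result: each F x F block of the 22nd result
    moves on by g times the motion it showed between the 21st and 22nd
    results (F and find_motion_vector are from the sketch above)."""
    h, w = result22.shape[:2]
    out = result22.copy()  # fallback for pixels no displaced block lands on
    for by in range(0, h - F + 1, F):
        for bx in range(0, w - F + 1, F):
            dy, dx = find_motion_vector(result21, result22, by, bx)
            # Motion vector of the predicted block is G times the first one.
            ty, tx = by + round(g * dy), bx + round(g * dx)
            if 0 <= ty <= h - F and 0 <= tx <= w - F:
                out[ty:ty + F, tx:tx + F] = result22[by:by + F, bx:bx + F]
    return out
```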
With reference to the fourteenth aspect, in a possible implementation manner, the generating, by the electronic device, a twenty-third drawing result in a twenty-third memory space according to the twenty-second drawing result and the first motion vector specifically includes: the electronic device generates a twenty-third drawing result in a third drawing range of a twenty-third memory space according to the twenty-second drawing result and the first motion vector; the size of the third rendering range is smaller than or equal to the size of the twenty-third memory space, and the size of the third rendering range is larger than the size of the default memory space.
With reference to the fourteenth aspect, in a possible implementation manner, the size of the third drawing range is smaller than or equal to the size of the twenty-third memory space, and the size of the third drawing range is larger than the size of the default memory space, which specifically includes: the thirteenth size of the third drawing range is K7 times the third size of the default memory space, the fourteenth size of the third drawing range is K8 times the fourth size of the default memory space, K7 is greater than 1 and less than or equal to K1, and K8 is greater than 1 and less than or equal to K2.
The thirteenth size of the third drawing range may be the width of the third drawing range, and the fourteenth size of the third drawing range may be the height of the third drawing range.
With reference to the fourteenth aspect, in a possible implementation manner, when the twenty-first drawing frame is drawn, the method may further include: the electronic device creates a twenty-first memory space, a twenty-second memory space, and a twenty-third memory space, where the twenty-first memory space may be used to store the twenty-first drawing result of the twenty-first drawing frame, the twenty-second memory space may be used to store the twenty-second drawing result of the twenty-second drawing frame, and the twenty-third memory space may be used to store the twenty-third drawing result of the twenty-third predicted frame.
With reference to the fourteenth aspect, in a possible implementation manner, when the twenty-first drawing frame is drawn, the electronic device draws the drawing content of the drawing instruction of the twenty-first drawing frame into the twenty-first memory space, and after the twenty-first drawing result is obtained, the method further includes: the electronic device crops the twenty-first drawing result to a drawing frame of the same size as the default memory space to obtain the twenty-first drawing frame.
With reference to the fourteenth aspect, in a possible implementation manner, when the twenty-second drawing frame is drawn, the electronic device draws the drawing content of the drawing instruction of the twenty-second drawing frame into the twenty-second memory space, and after the twenty-second drawing result is obtained, the method may further include: the electronic device crops the twenty-second drawing result to a drawing frame of the same size as the default memory space to obtain the twenty-second drawing frame.
In a fifteenth aspect, an electronic device is provided, comprising: one or more processors (CPU), a graphics processor (GPU), a memory and a display screen; the memory is coupled to the one or more processors; the CPU is coupled to the GPU; wherein:
the memory may be used to store computer program code, the computer program code comprising computer instructions; the CPU may be configured to instruct the GPU to perform drawing when the twenty-first drawing frame is drawn, and to instruct the GPU to perform drawing when the twenty-second drawing frame is drawn;
in this way, the electronic device may obtain a predicted frame. Under the condition of not increasing the drawing frame, the frame rate of the electronic equipment can be improved. Therefore, the fluency of the video interface displayed by the electronic equipment can be improved under the condition of saving the power consumption of the electronic equipment. Further, drawing contents that are not present in the twenty-first drawing frame and the twenty-second drawing frame displayed by the electronic device may be present in the predicted frame predicted by the electronic device. Thus, the drawing content in the predicted frame predicted by the electronic device is closer to the photographic content in the photographic field of view of the camera. Thus, the image frames predicted by the electronic device may be more accurate.
With reference to the fifteenth aspect, in a possible implementation manner, the GPU may be configured to: when drawing a twenty-first drawing frame, draw the drawing content of the drawing instruction of the twenty-first drawing frame into a twenty-first memory space to obtain a twenty-first drawing result, where the size of the twenty-first memory space is larger than the size of a default memory space, and the default memory space is a memory space provided by the system of the electronic device for storing image frames for display; when drawing a twenty-second drawing frame, draw the drawing content of the drawing instruction of the twenty-second drawing frame into a twenty-second memory space to obtain a twenty-second drawing result, where the size of the twenty-second memory space is larger than the size of the default memory space; generate a twenty-third drawing result in a twenty-third memory space according to the twenty-first drawing result and the twenty-second drawing result, where the size of the twenty-third memory space is larger than the size of the default memory space; and crop the twenty-third drawing result to a drawing frame of the same size as the default memory space to obtain a twenty-third predicted frame.
With reference to the fifteenth aspect, in a possible implementation manner, the size of the twenty-first memory space is greater than the size of the default memory space, which specifically includes: the first size of the twenty-first memory space is K1 times the third size of the default memory space, the second size of the twenty-first memory space is K2 times the fourth size of the default memory space, and K1 and K2 are greater than 1.
The size of the twenty-second memory space is greater than the size of the default memory space, which specifically includes: the fifth size of the twenty-second memory space is K1 times the third size of the default memory space, and the sixth size of the twenty-second memory space is K2 times the fourth size of the default memory space.
The size of the twenty-third memory space is greater than the size of the default memory space, which specifically includes: a seventh size of the twenty-third memory space is K1 times the third size of the default memory space, and an eighth size of the twenty-third memory space is K2 times the fourth size of the default memory space.
Here, the first size of the twenty-first memory space may be a width of the twenty-first memory space, and the second size of the twenty-first memory space may be a height of the twenty-first memory space. The third size of the default memory space may be a width of the default memory space and the fourth size of the default memory space may be a height of the default memory space. The fifth size of the twenty-second memory space may be a width of the twenty-second memory space, and the sixth size of the twenty-second memory space may be a height of the twenty-second memory space. The seventh size of the twenty-third memory space may be a width of the twenty-third memory space, and the eighth size of the twenty-third memory space may be a height of the twenty-third memory space. In this way, the electronic device may enlarge the width and height of the twenty-first memory space by different sizes. The electronic device may enlarge the width and height of the twenty-second memory space by different sizes. The electronic device may enlarge the width and height of the twenty-third memory space by different sizes.
With reference to the fifteenth aspect, in a possible implementation manner, the GPU may further be configured to: when drawing the twenty-first drawing frame, drawing the drawing content of the drawing instruction of the twenty-first drawing frame into a first drawing range of a twenty-first memory space to obtain a twenty-first drawing result; the size of the first rendering range is smaller than or equal to the size of the twenty-first memory space, and the size of the first rendering range is larger than the size of the default memory space.
With reference to the fifteenth aspect, in a possible implementation manner, the size of the first drawing range is smaller than or equal to the size of the twenty-first memory space, and the size of the first drawing range is larger than the size of the default memory space, specifically including: the ninth size of the first rendering range is K3 times the third size of the default memory space, the tenth size of the first rendering range is K4 times the fourth size of the default memory space, K3 is greater than 1 and less than or equal to K1, and K4 is greater than 1 and less than or equal to K2.
The ninth size of the first drawing range may be a width of the first drawing range, and the tenth size of the first drawing range may be a height of the first drawing range.
With reference to the fifteenth aspect, in a possible implementation manner, K3 is equal to K1, K4 is equal to K2, and K1, K2, K3, and K4 are fixed values configured by the system of the electronic device. The electronic device may configure K1, K2, K3, and K4 based on empirical values. The electronic device directly configures the fixed values to reduce the amount of calculation.
With reference to the fifteenth aspect, in a possible implementation manner, K3 and K4 are determined by the electronic device according to drawing parameters included in the drawing instruction of the twenty-first drawing frame. In this way, K3 and K4 set by the electronic device can be decided according to the drawing parameters included in the drawing instruction of the twenty-first drawing frame. Thus, the magnification of the drawing range of different drawing frames may be different. In this way, the magnification of the drawing range by the electronic device is more consistent with the drawing contents in the drawing instruction of the drawing frame.
With reference to the fifteenth aspect, in a possible implementation manner, the GPU may further be configured to: when drawing a twenty-second drawing frame, drawing the drawing content of the drawing instruction of the twenty-second drawing frame into a second drawing range of a twenty-second memory space to obtain a twenty-second drawing result; the size of the second rendering range is smaller than or equal to the size of the twenty-second memory space, and the size of the second rendering range is larger than the size of the default memory space.
With reference to the fifteenth aspect, in a possible implementation manner, the size of the second rendering range is smaller than or equal to the size of the twenty-second memory space, and the size of the second rendering range is larger than the size of the default memory space, specifically including: an eleventh size of the second rendering range is K5 times the third size of the default memory space, a twelfth size of the second rendering range is K6 times the fourth size of the default memory space, K5 is greater than 1 and less than or equal to K1, and K6 is greater than 1 and less than or equal to K2.
The eleventh size of the second drawing range may be a width of the second drawing range, and the twelfth size of the second drawing range may be a height of the second drawing range.
With reference to the fifteenth aspect, in one possible implementation manner, K5 and K6 are fixed values of a system configuration of the electronic device. The electronic device directly configures the fixed value to reduce the amount of calculation.
With reference to the fifteenth aspect, in one possible implementation manner, K5 and K6 are determined by the electronic device according to the drawing parameters included in the drawing instruction of the twenty-second drawing frame. In this way, K5 and K6 set by the electronic device can be decided according to the drawing parameters contained in the drawing instruction of the twenty-second drawing frame. Thus, the magnification of the drawing range of different drawing frames may be different. In this way, the magnification of the drawing range by the electronic device is more consistent with the drawing contents in the drawing instruction of the drawing frame.
With reference to the fifteenth aspect, in one possible implementation manner, the GPU may be configured to: determining a first motion vector of a twenty-second drawing result according to the twenty-first drawing result and the twenty-second drawing result; and generating a twenty-third drawing result in a twenty-third memory space according to the twenty-second drawing result and the first motion vector. In this way, the GPU may predict a twenty-third rendering result for the twenty-third predicted frame from the twenty-first rendering frame and the twenty-second rendering frame.
With reference to the fifteenth aspect, in one possible implementation manner, the GPU may be configured to: divide the twenty-second drawing result into Q pixel blocks, take out a first pixel block from the Q pixel blocks of the twenty-second drawing result, and determine a second pixel block matching the first pixel block in the twenty-first drawing result; obtain a motion vector of the first pixel block according to the displacement from the second pixel block to the first pixel block; and determine a first motion vector of the twenty-second drawing result based on the motion vector of the first pixel block. Following the steps in this implementation, the GPU may determine the motion vectors of all of the Q pixel blocks of the twenty-second drawing result. Each pixel block includes f × f (e.g., 16 × 16) pixels.
In the above implementation manner, the GPU calculates the motion vector by blocking the twenty-second rendering result, without calculating the motion vector of each pixel point in the twenty-second rendering result. This may reduce the amount of computations and thus reduce the power consumption of the GPU in the electronic device.
With reference to the fifteenth aspect, in one possible implementation manner, the GPU may be configured to: determine a plurality of candidate pixel blocks in the twenty-first drawing result through a first pixel point in the first pixel block; respectively calculate the difference between the color values of each candidate pixel block and the first pixel block; and determine, according to these differences, the second pixel block matched with the first pixel block, where the second pixel block is the candidate pixel block whose color-value difference from the first pixel block is the smallest.
In this way, the GPU in the electronic device can more accurately find the matching pixel block of each pixel block, thereby being able to more accurately calculate the motion vector of each pixel block.
With reference to the fifteenth aspect, in one possible implementation manner, the GPU may be configured to: determine a motion vector of the twenty-third drawing result according to the first motion vector, and generate the twenty-third drawing result according to the twenty-second drawing result and the motion vector of the twenty-third drawing result. The motion vector of the twenty-third drawing result is G times the first motion vector, and G is greater than 0 and smaller than 1.
With reference to the fifteenth aspect, in one possible implementation, G is equal to 0.5. In this case the object is assumed to move at a constant speed across image frames, which simplifies the calculation for the GPU in the electronic device and gives the user a smoother experience when watching videos.
With reference to the fifteenth aspect, in a possible implementation manner, the GPU may be further configured to: generating a twenty-third drawing result in a third drawing range of a twenty-third memory space according to the twenty-second drawing result and the first motion vector; the size of the third rendering range is smaller than or equal to the size of the twenty-third memory space, and the size of the third rendering range is larger than the size of the default memory space.
With reference to the fifteenth aspect, in a possible implementation manner, the size of the third drawing range is smaller than or equal to the size of the twenty-third memory space, and the size of the third drawing range is larger than the size of the default memory space, which specifically includes: the thirteenth size of the third drawing range is K7 times the third size of the default memory space, the fourteenth size of the third drawing range is K8 times the fourth size of the default memory space, K7 is greater than 1 and less than or equal to K1, and K8 is greater than 1 and less than or equal to K2.
The thirteenth size of the third drawing range may be the width of the third drawing range, and the fourteenth size of the third drawing range may be the height of the third drawing range.
With reference to the fifteenth aspect, in a possible implementation manner, the GPU may be configured to: creating a twenty-first memory space, a twenty-second memory space, and a twenty-third memory space, where the twenty-first memory space may be used to store a twenty-first drawing result of a twenty-first drawing frame, the twenty-second memory space may be used to store a twenty-second drawing result of a twenty-second drawing frame, and the twenty-third memory space may be used to store a twenty-third drawing result of a twenty-third prediction frame.
With reference to the fifteenth aspect, in a possible implementation manner, the GPU may be further configured to: crop the twenty-first drawing result to a drawing frame of the same size as the default memory space to obtain the twenty-first drawing frame.
With reference to the fifteenth aspect, in a possible implementation manner, the GPU may be further configured to: crop the twenty-second drawing result to a drawing frame of the same size as the default memory space to obtain the twenty-second drawing frame.
A sixteenth aspect provides an image frame prediction apparatus, which may include a first drawing unit, a second drawing unit, a generation unit; wherein:
the first drawing unit may be configured to, when drawing a twenty-first drawing frame of a first application, draw a drawing instruction of the twenty-first drawing frame according to a first drawing range to obtain a twenty-first drawing result, where a size of the first drawing range is greater than a size of the twenty-first drawing frame of the first application;
the second drawing unit may be configured to, when the twenty-second drawing frame of the first application is drawn, draw the drawing instruction of the twenty-second drawing frame according to a second drawing range to obtain a twenty-second drawing result, where the size of the second drawing range is greater than the size of the twenty-second drawing frame, and the size of the twenty-first drawing frame is the same as the size of the twenty-second drawing frame;
The generation unit may be configured to generate a twenty-third prediction frame of the first application by prediction based on the twenty-first drawing result and the twenty-second drawing result, where a size of the twenty-third prediction frame is the same as a size of the twenty-first drawing frame.
With reference to the sixteenth aspect, in a possible implementation manner, the first drawing unit may be further configured to, when drawing the twenty-first drawing frame, draw the drawing content of the drawing instruction of the twenty-first drawing frame into a twenty-first memory space to obtain a twenty-first drawing result, where the size of the twenty-first memory space is greater than the size of a default memory space, and the default memory space is a memory space provided by the system of the electronic device for storing image frames for display.
With reference to the sixteenth aspect, in a possible implementation manner, the second drawing unit may be further configured to, when drawing the twenty-second drawing frame, draw the drawing content of the drawing instruction of the twenty-second drawing frame into a twenty-second memory space to obtain a twenty-second drawing result, where the size of the twenty-second memory space is greater than the size of the default memory space.
With reference to the sixteenth aspect, in a possible implementation manner, the generating unit may be further configured to generate a twenty-third drawing result in a twenty-third memory space according to the twenty-first drawing result and the twenty-second drawing result, where a size of the twenty-third memory space is greater than a size of the default memory space.
With reference to the sixteenth aspect, in a possible implementation manner, the image frame prediction apparatus may further include a clipping unit, where the clipping unit may be configured to crop the twenty-third drawing result to the same size as the default memory space to obtain the twenty-third predicted frame.
In this way, the image frame prediction apparatus can obtain a predicted frame. The frame rate of the image frame prediction apparatus can be increased without increasing the number of drawing frames. Therefore, under the condition of saving the power consumption of the image frame prediction apparatus, the fluency of a video interface displayed by the image frame prediction apparatus can be improved. Further, the predicted frame predicted by the image frame prediction apparatus may contain drawing content that is not present in the twenty-first drawing frame and the twenty-second drawing frame displayed by the apparatus. Thus, the drawing content in the predicted frame is closer to the shooting content in the shooting field of view of the camera, and the image frame predicted by the image frame prediction apparatus can be more accurate.
With reference to the sixteenth aspect, in a possible implementation manner, the size of the twenty-first memory space is greater than the size of the default memory space, and the method specifically includes: the first size of the twenty-first memory space is K1 times the third size of the default memory space, the second size of the twenty-first memory space is K2 times the fourth size of the default memory space, and K1 and K2 are greater than 1.
The size of the twenty-second memory space is greater than the size of the default memory space, which specifically includes: the fifth size of the twenty-second memory space is K1 times the third size of the default memory space, and the sixth size of the twenty-second memory space is K2 times the fourth size of the default memory space.
The size of the twenty-third memory space is greater than the size of the default memory space, which specifically includes: a seventh size of the twenty-third memory space is K1 times the third size of the default memory space, and an eighth size of the twenty-third memory space is K2 times the fourth size of the default memory space.
Here, the first size of the twenty-first memory space may be a width of the twenty-first memory space, and the second size of the twenty-first memory space may be a height of the twenty-first memory space. The third size of the default memory space may be a width of the default memory space and the fourth size of the default memory space may be a height of the default memory space. The fifth size of the twenty-second memory space may be a width of the twenty-second memory space, and the sixth size of the twenty-second memory space may be a height of the twenty-second memory space. The seventh size of the twenty-third memory space may be a width of the twenty-third memory space, and the eighth size of the twenty-third memory space may be a height of the twenty-third memory space. Thus, the image frame prediction apparatus may enlarge the width and height of the twenty-first memory space by different sizes. The image frame prediction apparatus may enlarge the width and height of the twenty-second memory space by different sizes. The image frame prediction apparatus may enlarge the width and height of the twenty-third memory space by different sizes.
With reference to the sixteenth aspect, in a possible implementation manner, the first drawing unit may be further configured to: when drawing a twenty-first drawing frame, drawing the drawing content of the drawing instruction of the twenty-first drawing frame into a first drawing range of a twenty-first memory space to obtain a twenty-first drawing result; the size of the first rendering range is smaller than or equal to the size of a twenty-first memory space, and the size of the first rendering range is larger than the size of a default memory space.
With reference to the sixteenth aspect, in a possible implementation manner, the size of the first drawing range is smaller than or equal to the size of the twenty-first memory space, and the size of the first drawing range is larger than the size of the default memory space, specifically including: the ninth size of the first rendering range is K3 times the third size of the default memory space, the tenth size of the first rendering range is K4 times the fourth size of the default memory space, K3 is greater than 1 and less than or equal to K1, and K4 is greater than 1 and less than or equal to K2.
The ninth size of the first drawing range may be a width of the first drawing range, and the tenth size of the first drawing range may be a height of the first drawing range.
With reference to the sixteenth aspect, in a possible implementation manner, K3 is equal to K1, K4 is equal to K2, and K1, K2, K3, and K4 are fixed values configured by the system of the image frame prediction apparatus. The image frame prediction apparatus may configure K1, K2, K3, and K4 according to empirical values. The image frame prediction apparatus directly configures the fixed values to reduce the amount of calculation.
With reference to the sixteenth aspect, in a possible implementation manner, K3 and K4 are determined by the image frame prediction apparatus according to the drawing parameters included in the drawing instruction of the twenty-first drawing frame. In this way, K3 and K4 set by the image frame prediction apparatus can be decided according to the drawing parameters contained in the drawing instruction of the twenty-first drawing frame. Thus, the magnification of the drawing range of different drawing frames may be different. In this way, the magnification of the drawing range by the image frame prediction apparatus is more consistent with the drawing contents in the drawing instruction of the drawing frame.
With reference to the sixteenth aspect, in a possible implementation manner, the second drawing unit may be further configured to: when drawing a twenty-second drawing frame, drawing the drawing content of the drawing instruction of the twenty-second drawing frame into a second drawing range of a twenty-second memory space to obtain a twenty-second drawing result; the size of the second rendering range is smaller than or equal to the size of a twenty-second memory space, and the size of the second rendering range is larger than the size of the default memory space.
With reference to the sixteenth aspect, in a possible implementation manner, the size of the second rendering range is smaller than or equal to the size of the twenty-second memory space, and the size of the second rendering range is larger than the size of the default memory space, specifically including: an eleventh size of the second rendering range is K5 times the third size of the default memory space, a twelfth size of the second rendering range is K6 times the fourth size of the default memory space, K5 is greater than 1 and less than or equal to K1, and K6 is greater than 1 and less than or equal to K2.
The eleventh size of the second drawing range may be a width of the second drawing range, and the twelfth size of the second drawing range may be a height of the second drawing range.
With reference to the sixteenth aspect, in one possible implementation manner, K5 and K6 are fixed values of a system configuration of the image frame prediction apparatus. The image frame prediction apparatus directly configures the fixed value to reduce the amount of calculation.
With reference to the sixteenth aspect, in one possible implementation manner, K5 and K6 are determined by the image frame prediction apparatus according to the drawing parameters included in the drawing instruction of the twenty-second drawing frame. In this way, K5 and K6 set by the image frame prediction apparatus can be decided according to the drawing parameters contained in the drawing instruction of the twenty-second drawing frame. Thus, the magnification of the drawing range of different drawing frames may be different. In this way, the magnification of the drawing range by the image frame prediction apparatus is more consistent with the drawing contents in the drawing instruction of the drawing frame.
With reference to the sixteenth aspect, in a possible implementation manner, the generating unit may be further configured to: determining a first motion vector of a twenty-second drawing result according to the twenty-first drawing result and the twenty-second drawing result; and generating a twenty-third rendering result in a twenty-third memory space according to the twenty-second rendering result and the first motion vector. In this way, the generation unit in the image frame prediction apparatus can predict the twenty-third drawing result of the twenty-third prediction frame from the twenty-first drawing frame and the twenty-second drawing frame.
With reference to the sixteenth aspect, in a possible implementation manner, the generating unit may be further configured to: divide the twenty-second drawing result into Q pixel blocks, and take out a first pixel block from the Q pixel blocks of the twenty-second drawing result; determine a second pixel block matched with the first pixel block in the twenty-first drawing result; obtain a motion vector of the first pixel block according to the displacement from the second pixel block to the first pixel block; and determine a first motion vector of the twenty-second drawing result from the motion vector of the first pixel block. Following the steps in this implementation, the image frame prediction apparatus may determine the motion vectors of all of the Q pixel blocks of the twenty-second drawing result. Each pixel block includes f × f (e.g., 16 × 16) pixels.
In the above implementation manner, the image frame prediction apparatus divides the twenty-second rendering result into blocks to calculate the motion vector, without calculating the motion vector of each pixel point in the twenty-second rendering result. This can reduce the amount of computation, thereby reducing the power consumption of the electronic device.
With reference to the sixteenth aspect, in a possible implementation manner, the generating unit may be further configured to: determine a plurality of candidate pixel blocks in the twenty-first drawing result through a first pixel point in the first pixel block; respectively calculate the difference between the color values of each candidate pixel block and the first pixel block; and determine, according to these differences, the second pixel block matched with the first pixel block, where the second pixel block is the candidate pixel block whose color-value difference from the first pixel block is the smallest.
Thus, the image frame prediction apparatus can more accurately find a matching pixel block of each pixel block, thereby more accurately calculating a motion vector of each pixel block.
With reference to the sixteenth aspect, in a possible implementation manner, the generating unit may be further configured to: determine a motion vector of the twenty-third drawing result according to the first motion vector, and generate the twenty-third drawing result according to the twenty-second drawing result and the motion vector of the twenty-third drawing result. The motion vector of the twenty-third drawing result is G times the first motion vector, and G is greater than 0 and smaller than 1.
In combination with the sixteenth aspect, in one possible implementation, G is equal to 0.5. In this case the object is assumed to move at a constant speed across image frames, which simplifies the calculation for the image frame prediction apparatus and gives the user a smoother experience when watching the video.
With reference to the sixteenth aspect, in a possible implementation manner, the generating unit may be further configured to: generate a twenty-third drawing result in a third drawing range of a twenty-third memory space according to the twenty-second drawing result and the first motion vector; the size of the third drawing range is smaller than or equal to the size of the twenty-third memory space, and the size of the third drawing range is larger than the size of the default memory space.
With reference to the sixteenth aspect, in a possible implementation manner, the size of the third drawing range is smaller than or equal to the size of the twenty-third memory space, and the size of the third drawing range is larger than the size of the default memory space, specifically including: the thirteenth size of the third drawing range is K7 times the third size of the default memory space, the fourteenth size of the third drawing range is K8 times the fourth size of the default memory space, K7 is greater than 1 and less than or equal to K1, and K8 is greater than 1 and less than or equal to K2.
The thirteenth size of the third drawing range may be the width of the third drawing range, and the fourteenth size of the third drawing range may be the height of the third drawing range.
With reference to the sixteenth aspect, in a possible implementation manner, the image frame prediction apparatus may further include a creating unit, where the creating unit may be configured to: creating a twenty-first memory space, a twenty-second memory space, and a twenty-third memory space, where the twenty-first memory space may be used to store a twenty-first drawing result of a twenty-first drawing frame, the twenty-second memory space may be used to store a twenty-second drawing result of a twenty-second drawing frame, and the twenty-third memory space may be used to store a twenty-third drawing result of a twenty-third prediction frame.
With reference to the sixteenth aspect, in a possible implementation manner, the clipping unit may further be configured to: crop the twenty-first drawing result to a drawing frame of the same size as the default memory space to obtain the twenty-first drawing frame.
With reference to the sixteenth aspect, in a possible implementation manner, the clipping unit may further be configured to: crop the twenty-second drawing result to a drawing frame of the same size as the default memory space to obtain the twenty-second drawing frame.
In a seventeenth aspect, an electronic device is provided, comprising: one or more processors; one or more memories; the one or more memories store one or more computer programs, the one or more computer programs comprising instructions, which, when executed by the one or more processors, cause the electronic device to perform any one of the possible implementation manners of the first aspect, the fifth aspect, the seventh aspect, the ninth aspect, the eleventh aspect, the thirteenth aspect and the fourteenth aspect.
In an eighteenth aspect, embodiments of the present application provide a chip applied to an electronic device, where the chip includes one or more processors, and the processor is configured to invoke computer instructions to cause the electronic device to execute any one of the possible implementation manners as in the first aspect, the fifth aspect, the seventh aspect, the ninth aspect, the eleventh aspect, the thirteenth aspect, and the fourteenth aspect.
A nineteenth aspect provides a computer program product which, when run on a computer, causes the computer to perform any one of the possible implementations of the first, fifth, seventh, ninth, eleventh, thirteenth and fourteenth aspects.
A twentieth aspect provides a computer-readable storage medium comprising instructions that, when executed on an electronic device, cause the electronic device to perform any possible implementation manner as in any one of the first, fifth, seventh, ninth, eleventh, thirteenth and fourteenth aspects.
It is understood that the electronic device provided by the seventeenth aspect, the chip provided by the eighteenth aspect, the computer program product provided by the nineteenth aspect, and the computer-readable storage medium provided by the twentieth aspect are all used to execute the methods provided by the embodiments of the present application.
Drawings
Fig. 1 is a schematic diagram of a user interface 100 of a tablet computer 10 provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of a static object of the Nth drawing frame in the user interface 100 according to the embodiment of the present application;
fig. 3 is a schematic diagram of a dynamic object of the Nth drawing frame in the user interface 100 according to an embodiment of the present application;
Fig. 4A is a schematic diagram of a rendering frame 300A provided by an embodiment of the present application;
FIG. 4B is a diagram of a predicted frame 300B provided by an embodiment of the present application;
fig. 5A is a schematic diagram of a drawing frame 500 provided by an embodiment of the present application;
fig. 5B is a schematic diagram of a part of pixel blocks of a rendering frame 500 according to an embodiment of the present application;
FIG. 5C is a schematic diagram of a pixel block in a predicted frame predicted from the pixel block in FIG. 5B according to an embodiment of the present application;
fig. 6A is a logic block diagram of a method for image frame prediction according to an embodiment of the present application;
fig. 6B is a flowchart of a method for image frame prediction according to an embodiment of the present application;
fig. 7A is a schematic diagram of the Nth drawing frame according to an embodiment of the present application;
fig. 7B is a schematic diagram illustrating the depth attachment and color attachment of a dynamic object in the Nth drawing frame according to an embodiment of the present application;
FIG. 7C is a schematic diagram illustrating the depth attachment and color attachment of a static object in the Nth drawing frame according to an embodiment of the present application;
fig. 8A is a schematic diagram of the (N+2)th drawing frame according to an embodiment of the present application;
fig. 8B is a schematic diagram of the depth attachment and color attachment of a dynamic object in the (N+2)th drawing frame according to an embodiment of the present application;
FIG. 8C is a schematic diagram illustrating the depth attachment and color attachment of a static object in the (N+2)th drawing frame according to an embodiment of the present application;
Figs. 9A-9C are schematic diagrams of a process for calculating a motion vector of a pixel block 902 in the (N+2)th frame by a diamond search according to an embodiment of the present application;
FIG. 10A is a schematic diagram illustrating the color attachment of a dynamic object in the predicted (N+3)th frame according to an embodiment of the present application;
FIG. 10B is a schematic diagram illustrating the color attachment of a static object in the predicted (N+3)th frame according to an embodiment of the present application;
FIG. 10C is a schematic diagram of the predicted (N+3)th frame according to an embodiment of the present application;
FIG. 11 is a block diagram of 90 fps frame insertion logic provided in an embodiment of the present application;
fig. 12 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
FIG. 13 is a system framework diagram of an electronic device provided by an embodiment of the application;
fig. 14 is a schematic view of a user interface of the tablet pc 10 according to an embodiment of the present application;
fig. 15 is a schematic diagram of the Wth drawing frame provided in the embodiment of the present application;
fig. 16 is a schematic diagram of the (W+2)th drawing frame provided in the embodiment of the present application;
fig. 17 is a flowchart illustrating a method for image frame prediction according to an embodiment of the present application;
FIG. 18 is a schematic diagram of an electronic device acquiring an object attribute provided in an embodiment of the present application;
fig. 19A is a schematic diagram of a moving object transformed in different coordinate systems according to an embodiment of the present application;
FIG. 19B is a schematic diagram of a static object in a different coordinate system according to an embodiment of the present application;
Figs. 20A-20C are diagrams of a GPU rendering a template image of a Wth render frame object according to embodiments of the present application;
Figs. 21A-21C are diagrams illustrating a GPU rendering a template image of a (W+2)th render frame object according to an embodiment of the present application;
Figs. 22A-22B are schematic diagrams of a process for calculating a motion vector of a moving object according to an embodiment of the present application;
FIG. 23 is a schematic diagram of a correlation module for image frame prediction provided by an embodiment of the present application;
FIG. 24 is a flow chart of a frame prediction method disclosed in an embodiment of the present application;
FIG. 25 is a diagram illustrating a method for obtaining a reference frame according to an embodiment of the present disclosure;
FIG. 26 is a diagram illustrating another example of obtaining a reference frame disclosed in an embodiment of the present application;
FIG. 27 is a diagram illustrating a method for determining a target reference frame according to an embodiment of the present disclosure;
FIG. 28 is a diagram illustrating a partitioned target reference frame according to an embodiment of the present disclosure;
FIG. 29 is a flowchart illustrating a process of calculating a predicted motion vector for a block according to an embodiment of the disclosure;
FIG. 30 is a diagram illustrating an example of obtaining blocks from a target reference frame to a matching frame according to an embodiment of the present disclosure;
FIG. 31A is a diagram illustrating a method for determining a predicted motion vector for a block according to an embodiment of the disclosure;
FIG. 31B is a diagram illustrating another example of determining a predicted motion vector for a block according to an embodiment of the present disclosure;
FIG. 32A is a block diagram illustrating a method for determining a vertex bounding box according to an embodiment of the present disclosure;
FIG. 32B is a diagram illustrating a method for determining predicted motion vectors for vertices according to an embodiment of the present disclosure;
FIG. 33 is a schematic diagram illustrating a method for determining vertex coordinates according to an embodiment of the present disclosure;
FIG. 34 is a flow chart of a method for determining coordinates of pixels in a block in a predicted frame as disclosed in an embodiment of the present application;
fig. 35A is a schematic diagram of a homography transformation formula corresponding to an acquisition block disclosed in an embodiment of the present application;
FIG. 35B is a schematic diagram illustrating a method for determining coordinates of a pixel in a predicted frame according to an embodiment of the present disclosure;
FIG. 35C is a diagram of a predicted frame according to an embodiment of the present disclosure;
FIG. 36 is a diagram of another predicted frame disclosed in an embodiment of the present application;
fig. 37 is a schematic flowchart of an image frame generating method according to an embodiment of the present application;
FIG. 38 is a schematic illustration of a positional relationship provided by an embodiment of the present application;
FIG. 39A is a schematic diagram of an image frame provided by an embodiment of the present application;
fig. 39B is a schematic diagram of a divided image frame provided by an embodiment of the present application;
Figs. 40A-40C are diagrams of a set of exact match blocks provided by embodiments of the present application;
FIG. 41A is a flowchart illustrating a method for determining position coordinates in a predicted image frame according to an embodiment of the present disclosure;
41B-41C are schematic diagrams of a set of matching blocks provided by embodiments of the present application;
41D-41F are schematic diagrams of predicted position coordinates in a set of camera coordinate systems according to an embodiment of the present application;
FIG. 42 is a diagram illustrating a method for determining vertices of a prediction block according to an embodiment of the present application;
FIG. 43 is a diagram illustrating a prediction block generation provided by an embodiment of the present application;
FIGS. 44A-44B are a set of schematic diagrams related to a prediction block provided by an embodiment of the present application;
FIGS. 45A-45B are schematic diagrams of a set of prediction blocks provided by an embodiment of the present application;
fig. 46A to 46I are schematic diagrams of a group of generated image frames provided by an embodiment of the present application;
fig. 47 is a block diagram of a software structure of an electronic device 100 provided in an embodiment of the present application;
fig. 48 is a schematic structural diagram of another electronic device 100 according to an embodiment of the present application;
fig. 49A is a schematic diagram of creating a frame buffer object according to an embodiment of the present application;
FIG. 49B is a diagram illustrating a rendering and display of an original image frame according to an embodiment of the present application;
FIG. 49C is a diagram illustrating a method for rendering and displaying a predicted image frame according to an embodiment of the present application;
FIG. 50A is a diagram illustrating a customized dynamic layer provided by an embodiment of the present application;
FIG. 50B is a schematic diagram of a customized UI layer provided by an embodiment of the application;
FIG. 50C is a schematic diagram of an image frame provided by an embodiment of the present application;
FIG. 50D is a schematic illustration of a rendering process provided by an embodiment of the present application;
FIG. 50E is a diagram illustrating an example of generating an image frame according to the present application;
fig. 51A is a schematic flowchart of a method for generating an image frame according to an embodiment of the present application;
fig. 51B and 51C are schematic diagrams of a combined image frame according to an embodiment of the present application;
fig. 51D and 51E are diagrams illustrating possible effects presented by a drawing result according to an embodiment of the present application;
fig. 51F is a schematic diagram of generating an image frame according to an embodiment of the present application;
FIG. 52 is a schematic view of a modular interaction provided by an embodiment of the present application;
fig. 53A is a schematic flowchart of a method for generating an image frame according to an embodiment of the present application;
FIGS. 53B-53D are schematic diagrams of a set of prediction processes provided by embodiments of the present application;
FIGS. 53E-53I are diagrams of a set of rendering processes provided by embodiments of the present application;
fig. 54A-54C are schematic diagrams of a set of user interfaces of the tablet pc 10 according to an embodiment of the present application;
fig. 55A is a schematic diagram of a rendering frame a, a rendering frame B, and a predicted frame obtained from the rendering frame a and the rendering frame B according to an embodiment of the present application;
fig. 55B is a schematic view of a camera shooting view provided by an embodiment of the present application;
FIG. 56 is a flow chart of a method for image frame prediction according to an embodiment of the present application;
FIG. 57 is a diagram illustrating default memory space provided by an embodiment of the present application;
fig. 58 is a schematic diagram of a twenty-first memory space provided in the embodiment of the present application;
fig. 59 is a schematic diagram of a twenty-second memory space provided in an embodiment of the present application;
fig. 60 is a schematic diagram of a twenty-third memory space provided in the present embodiment;
fig. 61 is a schematic diagram of a first drawing range, a twenty-first drawing result, and a U-th drawing frame in a twenty-first memory space according to an embodiment of the present application;
fig. 62 is a schematic diagram of a second rendering range, a twenty-second rendering result, and a U +2 th rendering frame in a twenty-second memory space according to the embodiment of the present application;
FIGS. 63A-63C are diagrams illustrating a process for calculating a motion vector of a pixel block 6205 in a U +2 th rendering frame by a diamond search according to an embodiment of the present application;
Fig. 64 is a schematic diagram of a third rendering range, a twenty-third rendering result, and a U +3 th predicted frame in a twenty-third memory space according to an embodiment of the present application.
Detailed Description
The terminology used in the following embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application. As used in the specification of the present application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the listed items.
In the following, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the application, unless stated otherwise, "plurality" means two or more.
Since the embodiments of the present application relate to an application of an image frame prediction method, for ease of understanding, related terms and concepts related to the embodiments of the present application will be described below.
(1) Image frame
In the embodiment of the present application, each frame of image that the electronic device displays in the display screen is called an image frame. The image frame may be a frame image of a certain application; it may be a drawing result drawn by the electronic device according to a drawing instruction of the application, or a prediction result predicted according to an existing drawing result. As shown in fig. 1, the electronic device (i.e., tablet computer 10) displays a user interface 100. At time T0, an Nth image frame is displayed in the user interface 100. The Nth image frame is a drawing frame. The timing diagram 101 in fig. 1 shows the image frames that the electronic device can display from time T0 to time Tn.
(2) Drawing frame
In the embodiment of the present application, when the electronic device runs an application program, an image frame drawn according to the drawing instructions and drawing parameters of the application program is called a drawing frame. The drawing instructions and drawing parameters of the application program can be automatically generated by an application graphics framework and an application engine, and can also be written by an application developer. The drawing parameters corresponding to a drawing frame may include one or more objects. The electronic device may render the one or more objects as corresponding elements in the drawing frame. For example, the user interface 100 shown in fig. 1 may be one drawing frame. The element 102 (shown in fig. 2) and the element 103 (shown in fig. 3) in the user interface 100 are both obtained by drawing and rendering objects in the drawing parameters.
It is understood that the drawing parameters of a drawing frame include attributes of a plurality of objects. The attributes of an object may include one or more of a color value (e.g., an RGB value) of each pixel point in the object, a depth value of each pixel point in the object, a stencil buffer (stencil buffer), a transfer matrix of the object, and so on. The CPU may send, to the GPU, drawing instructions for instructing the GPU to perform drawing according to the drawing parameters. The GPU may draw an object according to a drawing instruction. Objects drawn by the GPU according to drawing instructions that carry a transfer matrix may be referred to as moving objects. Objects drawn by the GPU according to drawing instructions that do not carry a transfer matrix may be referred to as static objects. Optionally, the drawing instruction corresponding to a static object may carry a transfer matrix that is an all-0 matrix, that is, the element in each row and each column of the matrix is 0.
In the embodiment of the present application, taking a game application as an example, a moving object may move in the game scene. The position of a first element, drawn and rendered from the moving object, changes between two adjacent image frames in the user interface. The user can see in the user interface that the position of the first element is moving. A static object is stationary in the game scene, but the position of the static object in different image frames may differ due to changes in the shooting angle of the camera. The position of a second element, drawn and rendered from the static object, can change between two adjacent image frames, and the size of the position change is related to the shooting position and shooting angle of the camera. For example, the first element may be element 103 in fig. 3, where element 103 is a moving cart. The second element may be element 102 in fig. 2, where element 102 is a static background. The position of the static background in different image frames may change due to the change in the shooting angle of the camera.
(3) Predicted frame
In the embodiment of the application, a new image frame generated by the electronic device according to existing drawing frame data is called a predicted frame. The drawing parameters of the predicted frame are obtained according to the drawing parameters of two drawn frames. For example, the electronic device may generate a first predicted frame from a first drawing frame and a second drawing frame. The first predicted frame is an image frame subsequent to the second drawing frame. That is, after the electronic device displays the second drawing frame, the first predicted frame is displayed. The first drawing frame is an image frame before the second drawing frame (other image frames may exist between the first drawing frame and the second drawing frame). That is, the first drawing frame is displayed in the display screen of the electronic device before the second drawing frame. It is understood that if the Nth image frame is a drawing frame, it may be referred to as an Nth drawing frame in the embodiment of the present application. If the Nth image frame is a predicted frame, it may be referred to as an Nth predicted frame in the embodiments of the present application.
It is understood that the object contained in the drawing parameter of the predicted frame is the same as the object contained in the drawing parameter of the drawing frame displayed in the frame preceding the predicted frame. The object contained in the drawing parameters of the image frame herein may be simply referred to as an object contained in the image frame. For example, an object included in the rendering parameters of the prediction frame may be simply referred to as an object included in the prediction frame.
Here, the specific process of the electronic device generating the predicted frame through the two-frame drawing frame may refer to the following, which is not described herein at first.
(4) Image frame prediction
In the embodiment of the application, the process of generating the first prediction frame by the electronic device through the first drawing frame and the second drawing frame is called image frame prediction.
(5) Color attachment (color attachment)
In this embodiment of the present application, a color attachment (color attachment) is a memory space, and is used to store color data (for example, RGB values of pixels) of each pixel in a drawing result when an electronic device draws according to a drawing instruction. The color attachments may be part of the FBO.
(6) Depth attachment (depth attachment)
In this embodiment of the present application, a depth attachment (depth attachment) is a memory space used for storing depth data of each pixel point in a drawing result when the electronic device performs drawing according to a drawing instruction. The depth attachment may be part of the FBO. It can be appreciated that the smaller the depth value of a pixel point in the depth attachment, the closer that pixel point is to the camera. When synthesizing an image frame, for two pixel points with equal coordinate values in two color attachments, the pixel point with the smaller depth value covers the pixel point with the larger depth value. That is, the color displayed by the pixel point on the final display screen is the color of the pixel point with the smaller depth value in the two color attachments.
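For intuition, a minimal CPU-side sketch of this per-pixel depth comparison is shown below; the structure and function names are illustrative only (in this application the synthesis is performed by the GPU, as described later):

// Composite two color attachments using their depth attachments: at each
// pixel, the color whose depth value is smaller (closer to the camera) wins.
typedef struct { unsigned char r, g, b, a; } Pixel;

void composite_by_depth(const Pixel *colorA, const float *depthA,
                        const Pixel *colorB, const float *depthB,
                        Pixel *out, int pixelCount)
{
    for (int i = 0; i < pixelCount; i++) {
        out[i] = (depthA[i] < depthB[i]) ? colorA[i] : colorB[i];
    }
}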
To increase the frame rate and improve the fluency of video, the electronic device may insert predicted frames between the drawing frames of an application. The electronic device may perform image frame prediction based on the application's drawing frames to obtain a predicted frame. The moving speed and moving direction of a moving object and a static object in an image frame may be different, i.e., their motion vectors are different. In the process of image frame prediction, if a single motion vector is calculated over the moving object and the static object in the image frame together, the calculated motion vector may be inaccurate. This can result in distortion or holes in the predicted image frame.
Fig. 4A exemplarily shows a schematic diagram of a drawing frame. Fig. 4B schematically shows a predicted frame obtained from the drawing frame in fig. 4A. As shown in fig. 4A, a drawing frame 300A may include a static object (stationary background) 301 and a moving object (moving cart) 302. The electronic device may predict the predicted frame 300B from the drawing frame 300A and the motion vectors of the drawing frame, such as the predicted frame 300B shown in fig. 4B. The objects contained in the predicted frame 300B are the same as those in the drawing frame 300A. That is, the predicted frame 300B may contain the static object 301 and the moving object 302. The moving object 302 in the predicted frame 300B has moved forward relative to the moving object 302 in the drawing frame 300A, and the vacated portion is the portion 303 shown in fig. 4B. When the moving object moves in the direction indicated in the figure, the region 303 of the predicted frame 300B, from which the moving object 302 has moved away, may lack pixel information. Since that part of the pixels in the drawing frame 300A is covered by the moving object 302, if the electronic device predicts the predicted frame from the drawing frame as a whole, the electronic device cannot obtain the pixel information of that part after the moving object moves. Thus, the region 303 in the predicted frame 300B may be missing pixel information.
Fig. 5A exemplarily shows a drawing frame 500. The drawing frame 500 may include a static background and a moving cart 521. The electronic device may divide the drawing frame 500 into Q pixel blocks. The value of Q is related to the resolution of the display screen and the size f of a tile in the GPU. Here, Q is taken to be 20 for explanation. The drawing frame 500 may be divided into 20 pixel blocks 501-520. The pixel block 513 includes both the moving cart 521 and the static background. Pixel blocks 501-512 and 514-520 contain only the static background. The motion of the static background in the drawing frame 500 is different from the motion of the moving cart 521. However, when a motion vector is calculated for an entire pixel block, the static background in pixel block 513 is assigned the same motion vector as the moving cart. Thus, the static background in the predicted frame may not be complete, and there may be a hole area (i.e., an area with no pixel information) in the static background. Next, a description will be given taking pixel blocks 507-509 and pixel blocks 512-514 in the drawing frame 500 as an example.
Fig. 5B exemplarily shows pixel blocks 507-509 and pixel blocks 512-514 of the drawing frame 500 in fig. 5A. Pixel blocks 507-509, 512, and 514 include only the static background. The motion vectors of pixel blocks 507-509, 512, and 514 calculated by the electronic device may therefore be the same, i.e., the motion vector of the static background. The pixel block 513 includes the moving cart 521 and the static background. The motion vector of pixel block 513 computed by the electronic device may be the motion vector of the moving cart 521. Thus, the motion vector of pixel block 513 calculated by the electronic device is not the same as the motion vectors of pixel blocks 507-509, 512, and 514. The electronic device can predict pixel blocks 507-509 and 512-514 in the predicted frame from pixel blocks 507-509 and 512-514.
Fig. 5C shows pixel blocks 507-509 and 512-514 in the predicted frame, predicted by the electronic device from the pixel blocks in fig. 5B. Since the motion vector of pixel block 513 in fig. 5B is different from the motion vectors of the other pixel blocks, the predicted displacement of pixel block 513 differs from that of the other pixel blocks. As a result, pixel block 513 no longer adjoins pixel block 512, leaving a hollow region 522 between pixel block 512 and pixel block 513. Moreover, the displacement of pixel block 513 is larger than the displacement of pixel block 514, resulting in pixel block 513 overlapping pixel block 514.
In order to improve fluency of an application program video interface displayed by electronic equipment and save power consumption of the electronic equipment, the embodiment of the application provides an image frame prediction method. The method can comprise the following steps: firstly, when a first drawing frame is drawn, the electronic equipment writes color data of a first drawing object into a first color accessory and writes color data of a second drawing object into a second color accessory, wherein the drawing instruction of the first drawing object indicates that spatial information of the first drawing object changes, and the drawing instruction of the second drawing object does not indicate that the spatial information of the second drawing object changes. Then, when the second drawing frame is drawn, the electronic device writes color data of a third drawing object in the third color accessory and color data of a fourth drawing object in the fourth color accessory, wherein the drawing instruction of the third drawing object indicates that spatial information of the third drawing object changes, and the drawing instruction of the fourth drawing object does not indicate that spatial information of the fourth drawing object changes. Then, the electronic device predicts a fifth color attachment of the first predicted frame from the first color attachment and the third color attachment, and predicts a sixth color attachment of the first predicted frame from the second color attachment and the fourth color attachment. Finally, the electronic device synthesizes the fifth color attachment and the sixth color attachment into the first predicted frame.
The spatial information indicating the first drawing object in the drawing instruction of the first drawing object is changed, that is, the first drawing object is a moving object. The drawing instruction of the second drawing object does not indicate that the spatial information of the second drawing object is changed, that is, the second drawing object is a static object. The electronic equipment writes color data of the moving object in the first drawing frame into the first color attachment and writes color data of the static object in the first drawing frame into the second color attachment. In this way, the electronic device stores color data of the dynamic object and color data of the static object in the first drawing frame in different color attachments, respectively. Likewise, when the second drawing frame is drawn, spatial information indicating the third drawing object in the drawing instruction of the third drawing object in the second drawing frame is changed, that is, the third drawing object is a moving object. The drawing instruction of the fourth drawing object does not indicate that the spatial information of the fourth drawing object is changed, that is, the fourth drawing object is a static object. In this way, the electronic device stores the color data of the dynamic object and the color data of the static object in the second drawing frame in different color attachments, respectively. Then, a moving object in the prediction frame is predicted according to the moving object in the drawing frame, a static object in the prediction frame is predicted according to the static object in the drawing frame, and then the color attachment of the moving object and the color attachment of the static object are combined into an image frame. In this way, the predicted frame can be predicted more accurately.
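Restated as a sketch in C-style pseudocode (every type and function name below is a hypothetical stand-in for the steps detailed in the remainder of this text, not an actual interface):

// Outline of the method above; all names are illustrative only.
typedef struct ColorAttachment ColorAttachment;

extern ColorAttachment *drawMoving(int frame);  // draw moving objects of a frame
extern ColorAttachment *drawStatic(int frame);  // draw static objects of a frame
extern ColorAttachment *predict(const ColorAttachment *older,
                                const ColorAttachment *newer);
extern void synthesize(const ColorAttachment *moving,
                       const ColorAttachment *statics);

void predictFirstPredictedFrame(int firstFrame, int secondFrame)
{
    ColorAttachment *c1 = drawMoving(firstFrame);   // first color attachment
    ColorAttachment *c2 = drawStatic(firstFrame);   // second color attachment
    ColorAttachment *c3 = drawMoving(secondFrame);  // third color attachment
    ColorAttachment *c4 = drawStatic(secondFrame);  // fourth color attachment

    ColorAttachment *c5 = predict(c1, c3);  // fifth: predicted moving objects
    ColorAttachment *c6 = predict(c2, c4);  // sixth: predicted static objects

    synthesize(c5, c6);  // merge into the first predicted frame
}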
A method for image frame prediction according to an embodiment of the present application will be described in detail below with reference to the accompanying drawings. First, fig. 6A exemplarily shows a process of drawing an N-th drawing frame, an N + 2-th drawing frame, and obtaining an N + 3-th prediction frame by an electronic device in a method for image frame prediction provided by an embodiment of the present application.
Fig. 6A (a) illustrates an exemplary process in which the electronic device draws an nth drawing frame. As shown in (a) of fig. 6A, the electronic device drawing the nth drawing frame may include the steps of:
601. The electronic device acquires a drawing instruction of the Nth drawing frame and judges whether the drawing instruction carries a transfer matrix. If yes, 602a is executed; if no, 602b is executed.
There may be a plurality of drawing instructions in the Nth drawing frame. The electronic device may draw one object according to one drawing instruction. It is to be understood that after the electronic device finishes drawing according to one drawing instruction, it draws according to another drawing instruction of the Nth drawing frame, until all drawing instructions in the Nth drawing frame have been drawn. One drawing instruction may carry information such as the vertex coordinates, vertex IDs, depth information, and color information of the object drawn according to that draw call instruction. If the drawing instruction carries a transfer matrix, the object drawn by the electronic device according to the drawing instruction is a moving object. If the drawing instruction does not carry a transfer matrix, the object drawn by the electronic device according to the drawing instruction is a static object.
Optionally, all drawing instructions in the Nth drawing frame may carry a transfer matrix. If the transfer matrix carried in a drawing instruction is an all-0 matrix, the object drawn by the electronic device according to the drawing instruction is a static object. If the values of the elements in the transfer matrix carried by a drawing instruction are not all 0, the object drawn by the electronic device according to the drawing instruction is a moving object. If the drawing content of a draw call instruction is a moving object, the electronic device draws the drawing content of the draw call instruction in the first memory space. The first memory space may be referred to as D1FBO (dynamic frame buffer object) for short. If the drawing content of a draw call instruction is a static object, the electronic device draws the drawing content of the draw call instruction in the second memory space. The second memory space may be referred to as S1FBO (static frame buffer object).
Here, the frame buffer object FBO is a block of memory space that can be used to store color data, depth data, and the like of a drawing object.
In this embodiment, the electronic device may draw the moving object and the static object in the drawing frame according to information carried in the drawing instruction. In the embodiment of the present application, the drawing instruction may be referred to as a draw call instruction. In the embodiment of the application, the drawing instructions can be distinguished according to whether the transfer matrix is carried or not, or whether elements in the carried transfer matrix are all 0 or not.
If the drawing instructions are distinguished according to whether the drawing instructions carry the transfer matrix, the electronic device may divide the drawing instructions into two types, one type is a drawing instruction carrying the transfer matrix, and the other type is a drawing instruction not carrying the transfer matrix. And drawing contents of the drawing instruction carrying the transfer matrix are moving objects. The drawing content of the drawing instruction which does not carry the transfer matrix is a static object. In this embodiment of the present application, the electronic device may draw the two types of drawing instructions in different memory spaces.
If the drawing instructions are distinguished according to whether the transfer matrix carried in the drawing instruction is an all-0 matrix, the electronic device may divide the drawing instructions into two types: one type is drawing instructions whose carried transfer matrix has elements that are not all 0, and the other type is drawing instructions whose carried transfer matrix has elements that are all 0 (i.e., an all-0 matrix). The drawing content of a drawing instruction that carries a transfer matrix whose elements are not all 0 is a moving object. The drawing content of a drawing instruction whose transfer matrix is an all-0 matrix is a static object. In this embodiment, the electronic device may draw these two types of drawing instructions in different memory spaces.
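The classification rule can be illustrated with a minimal sketch; the function names and the flat 4x4 matrix layout are assumptions of the sketch, not part of the application's interfaces:

#include <stdbool.h>

// True if every element of the 4x4 transfer matrix (stored row-major) is 0.
static bool is_all_zero(const float m[16])
{
    for (int i = 0; i < 16; i++) {
        if (m[i] != 0.0f) {
            return false;
        }
    }
    return true;
}

// A draw call with no transfer matrix, or with an all-0 transfer matrix,
// draws a static object; otherwise it draws a moving object.
static bool is_moving_object(bool hasTransferMatrix, const float m[16])
{
    return hasTransferMatrix && !is_all_zero(m);
}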
602a, the electronic device writes the drawing content of the drawing instruction into the color attachment a and the depth attachment a in the first memory space.
The electronic device writes the drawing contents of the drawing instructions into the color attachment A and the depth attachment A in the first memory space. The Nth drawing frame may have a plurality of drawing instructions whose drawing contents are moving objects, and the electronic device may draw the drawing contents of the plurality of drawing instructions into the color attachment A and the depth attachment A in sequence. In an implementation manner, if L drawing instructions in the Nth drawing frame carry a transfer matrix, the drawing contents of these drawing instructions are moving objects. The electronic device may sequentially draw the drawing contents with color information of the L drawing instructions in the Nth drawing frame in the canvas 1, and the resulting drawing result with color information of all moving objects may be referred to as a color attachment A. Specifically, the electronic device may draw the drawing content having the color information of the first drawing instruction of the L drawing instructions in the Nth drawing frame onto the canvas 1 of the electronic device. Then, the electronic device draws the drawing content with the color information of the second drawing instruction of the L drawing instructions in the Nth drawing frame onto the canvas 1 of the electronic device. This continues until the electronic device draws the drawing content with the color information of the Lth drawing instruction of the L drawing instructions in the Nth drawing frame onto the canvas 1 of the electronic device; the finally obtained drawing result with color information of all moving objects may be referred to as a color attachment A in the embodiment of the present application.
The electronic device may draw the drawing contents having depth information and not having color information of the L drawing instructions in the Nth drawing frame in the canvas 2 in sequence, and the drawing result finally having the depth information of all moving objects may be referred to as a depth attachment A. Specifically, the electronic device may draw the drawing content having depth information, not including color information, of the first drawing instruction of the L drawing instructions onto the canvas 2. Then, the electronic device may draw the drawing content of the second of the L drawing instructions, which has depth information and does not contain color information, onto the canvas 2. This continues until the electronic device draws the drawing content, which has depth information and does not contain color information, of the Lth drawing instruction of the L drawing instructions onto the canvas 2; the resulting drawing result having the depth information of all moving objects may be referred to as a depth attachment A in the embodiment of the present application.
It can be understood that the canvas 1 and the canvas 2 are both in the first memory space.
For depth attachment a and color attachment a, reference may be made to the description in step S105 of fig. 6B, which is not repeated herein.
602B, the electronic device writes the drawing content of the drawing instruction into the color attachment B and the depth attachment B in the second memory space.
A description will be given taking as an example M drawing instructions in the Nth drawing frame whose drawing contents are static objects. The electronic device may sequentially draw the drawing contents with color information of the M drawing instructions in the Nth drawing frame in the canvas 3. Specifically, the electronic device may draw the drawing content with color information of the first of the M drawing instructions in the canvas 3, and then draw the drawing content with color information of the second of the M drawing instructions in the canvas 3. This continues until the electronic device draws the drawing content with color information of the Mth drawing instruction among the M drawing instructions in the canvas 3; the resulting drawing result with color information of all static objects may be referred to as a color attachment B.
The electronic device may draw the drawing contents of the M drawing instructions in the nth drawing frame, which have depth information and do not include color information, into the canvas 4 in sequence, and finally obtain the depth attachment B. Specifically, the electronic device may draw the drawing content with depth information and without color information of the first of the M drawing instructions in the canvas 4, and then draw the drawing content with depth information and without color information of the second of the M drawing instructions in the canvas 4. Until the electronic device draws the drawing content with depth information and without color information of the mth drawing instruction of the M drawing instructions in the canvas 4, the resulting drawing result with depth information of all static objects may be referred to as a depth attachment B.
It will be appreciated that canvas 3 and canvas 4 are in the second memory space.
603. The electronic device synthesizes color attachment A and color attachment B into the image frame of the Nth drawing frame according to depth attachment A and depth attachment B.
The electronic device may synthesize the color attachment A and the color attachment B into the image frame of the Nth drawing frame in the seventh memory space. It is understood that the color attachment A may contain a plurality of moving objects in the Nth drawing frame, and the color attachment B may contain a plurality of static objects in the Nth drawing frame. The electronic device can acquire, from the depth attachment A, the depth information of each pixel point of each moving object in the color attachment A. The electronic device can acquire, from the depth attachment B, the depth information of each pixel point of each static object in the color attachment B. The electronic device may combine the color attachment A and the color attachment B into an image frame according to the depth information of each pixel point in the color attachment A and the depth information of each pixel point in the color attachment B. The image frame may contain the moving objects in the color attachment A and the static objects in the color attachment B. For example, at a first pixel point, if the depth value of the first pixel point in the color attachment A is smaller than the depth value of the first pixel point in the color attachment B, the first pixel point of the color attachment A covers the first pixel point of the color attachment B in the image frame. If the depth value of the first pixel point in the color attachment A is greater than the depth value of the first pixel point in the color attachment B, the first pixel point of the color attachment B covers the first pixel point of the color attachment A in the image frame.
604. The electronic device displays the nth drawing frame.
The electronic device may send the image frame of the nth drawing frame to the display screen for displaying, and finally, the display screen in the electronic device may display the nth drawing frame.
Fig. 6A (b) illustrates an exemplary process in which the electronic device draws an N +2 th drawing frame. As shown in (b) of fig. 6A, the electronic device drawing the (N + 2) th drawing frame may include the steps of:
605. The electronic device acquires a drawing instruction of the (N + 2)th drawing frame and judges whether the drawing instruction carries a transfer matrix. If yes, 606a is performed; otherwise, 606b is performed.
The electronic device may obtain a plurality of drawing instructions for the (N + 2)th drawing frame. A drawing instruction of the (N + 2)th drawing frame may or may not carry a transfer matrix. If the drawing instruction carries a transfer matrix, go to step 606a; if the drawing instruction does not carry a transfer matrix, go to step 606b.
Step 605 may refer to the description in step 601, which is not described herein again.
606a, the electronic device writes the drawing content of the drawing instruction into the color attachment C and the depth attachment C in the third memory space.
If the drawing content of a drawing instruction is a moving object, the electronic device writes the drawing content of the drawing instruction into the color attachment C and the depth attachment C in the third memory space. Suppose K drawing instructions in the (N + 2)th drawing frame carry a transfer matrix; the drawing contents of these drawing instructions are moving objects. The electronic device may draw the drawing contents with color information of the K drawing instructions in the (N + 2)th drawing frame in turn in the canvas 5, and the resulting drawing result with color information of all moving objects may be referred to as a color attachment C. Specifically, the electronic device may draw the drawing content having the color information of the first drawing instruction of the K drawing instructions in the (N + 2)th drawing frame onto the canvas 5 of the electronic device. Then, the electronic device draws the drawing content with the color information of the second drawing instruction of the K drawing instructions in the (N + 2)th drawing frame onto the canvas 5 of the electronic device. This continues until the electronic device draws the drawing content with color information of the Kth drawing instruction of the K drawing instructions in the (N + 2)th drawing frame onto the canvas 5 of the electronic device; the finally obtained drawing result with color information of all moving objects may be referred to as a color attachment C in the embodiment of the present application.
The electronic device may sequentially draw the drawing contents having the depth information and not having the color information of the K drawing instructions in the N +2 th drawing frame in the canvas 6, and finally obtain a drawing result having the depth information of all moving objects, which may be referred to as a depth attachment C. Specifically, the electronic device may draw the drawing content of the first drawing instruction of the K drawing instructions, which has depth information and does not contain color information, onto the canvas 6. Then, the electronic device may draw the drawing content of the second drawing instruction of the K drawing instructions, which has depth information and does not contain color information, onto the canvas 6. Until the electronic device draws the drawing content, which has depth information and does not contain color information, of the kth drawing instruction of the K drawing instructions onto the canvas 6, and finally, the image having the drawing result of the depth information of all moving objects may be referred to as a depth attachment C in the embodiment of the present application.
It will be appreciated that both canvas 5 and canvas 6 are in the third memory space.
For the depth attachment C and the color attachment C, the following description in step S109 of fig. 6B may be specifically referred to, and details are not repeated here.
606b, the electronic device writes the drawing content of the drawing instruction into the color attachment D and the depth attachment D in the fourth memory space.
If the drawing content of the drawing instruction of the (N + 2) th drawing frame is a static object, the electronic device writes the drawing content of the drawing instruction into the color attachment D and the depth attachment D in the fourth memory space. If J drawing instructions in the (N + 2) th drawing frame do not carry the transfer matrix, the drawing content of the drawing instructions is a static object. The electronic device may draw the drawing contents with color information of the J drawing instructions in the N +2 th drawing frame in turn in the canvas 7, and the resulting drawing result with color information of all the static objects may be referred to as a color attachment D. Specifically, the electronic device may draw the drawing content with color information of the first drawing instruction of the J drawing instructions in the N +2 th drawing frame onto the canvas 7 of the electronic device. Then, the electronic device draws the drawing content with the color information of the second drawing instruction of the J drawing instructions in the N +2 th drawing frame onto the canvas 7 of the electronic device. Until the electronic device draws the drawing content with the color information of the J-th drawing instruction in the N + 2-th drawing frame onto the canvas 7 of the electronic device, the finally obtained drawing result with the color information of all the static objects may be referred to as a color attachment D in the embodiment of the present application.
The electronic device may sequentially draw the drawing contents having the depth information and not having the color information of the J drawing instructions in the N +2 th drawing frame in the canvas 8, and finally obtain a drawing result having the depth information of all the static objects, which may be referred to as a depth attachment D. Specifically, the electronic device may draw the drawing content of the first of the J drawing instructions, which has depth information and does not contain color information, onto the canvas 8. Then, the electronic device may draw the drawing content of the second of the J drawing instructions, which has depth information and does not contain color information, onto the canvas 8. Until the electronic device draws the drawing content, which has depth information and does not contain color information, of the J-th drawing instruction of the J drawing instructions onto the canvas 8, the finally obtained drawing result having depth information of all static objects may be referred to as a depth attachment D in the embodiment of the present application.
It will be appreciated that canvas 7 and canvas 8 are both in the fourth memory space.
For the depth attachment D and the color attachment D, the following description in step S109 of fig. 6B may be specifically referred to, and details are not repeated here.
607. The electronic device synthesizes color attachment C and color attachment D into the image frame of the (N + 2)th drawing frame according to depth attachment C and depth attachment D.
The electronic device may combine the color attachment C and the color attachment D into the image frame of the (N + 2)th drawing frame in the seventh memory space. It is understood that the color attachment C may contain a plurality of moving objects in the (N + 2)th drawing frame, and the color attachment D may contain a plurality of static objects in the (N + 2)th drawing frame. The electronic device can acquire, from the depth attachment C, the depth information of each pixel point of each moving object in the color attachment C. The electronic device can acquire, from the depth attachment D, the depth information of each pixel point of each static object in the color attachment D. The electronic device may combine the color attachment C and the color attachment D into an image frame according to the depth information of each pixel point in the color attachment C and the depth information of each pixel point in the color attachment D. The image frame may contain the moving objects in the color attachment C and the static objects in the color attachment D. For example, at a first pixel point, if the depth value of the first pixel point in the color attachment C is smaller than the depth value of the first pixel point in the color attachment D, the first pixel point of the color attachment C covers the first pixel point of the color attachment D in the image frame. If the depth value of the first pixel point in the color attachment C is greater than the depth value of the first pixel point in the color attachment D, the first pixel point of the color attachment D covers the first pixel point of the color attachment C in the image frame.
608. The electronic device displays the (N + 2) th drawing frame.
The electronic device may send the image frame of the (N + 2) th drawing frame to the display screen for displaying, and finally, the display screen in the electronic device may display the (N + 2) th drawing frame.
Fig. 6A (c) illustrates an example of how the electronic device predicts the N +3 th predicted frame. As shown in (c) of fig. 6A, the process is as follows:
1. The electronic device calculates a motion vector A according to the color attachment A and the color attachment C, and calculates a motion vector B according to the color attachment B and the color attachment D.

As shown in (c) of fig. 6A, the electronic device may calculate the motion vector A from the color attachment A and the color attachment C, and calculate the motion vector B from the color attachment B and the color attachment D. For a specific calculation process, reference may be made to the description in step S112 in fig. 6B, which is not repeated herein.
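The specific procedure belongs to step S112; purely for intuition, a brute-force SAD block-matching estimator over two color attachments might look like the following sketch (the application may instead use, e.g., the diamond search mentioned in connection with fig. 63A-63C; single-channel images, a 16x16 block, and in-bounds block coordinates are assumptions of this sketch):

#include <limits.h>
#include <stdlib.h>

// Finds the offset (*dx, *dy) such that the 16x16 block at (bx+*dx, by+*dy)
// in `ref` best matches (minimum sum of absolute differences) the 16x16
// block at (bx, by) in `cur`, searching within +/-`range` pixels.
void block_motion(const unsigned char *ref, const unsigned char *cur,
                  int width, int height, int bx, int by, int range,
                  int *dx, int *dy)
{
    long best = LONG_MAX;
    *dx = 0; *dy = 0;
    for (int oy = -range; oy <= range; oy++) {
        for (int ox = -range; ox <= range; ox++) {
            if (bx + ox < 0 || by + oy < 0 ||
                bx + ox + 16 > width || by + oy + 16 > height)
                continue;  // candidate block falls outside the reference image
            long sad = 0;
            for (int y = 0; y < 16; y++)
                for (int x = 0; x < 16; x++)
                    sad += labs((long)cur[(by + y) * width + (bx + x)] -
                                (long)ref[(by + oy + y) * width + (bx + ox + x)]);
            if (sad < best) { best = sad; *dx = ox; *dy = oy; }
        }
    }
}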
2. The electronic device obtains a color attachment E according to the color attachment C and the motion vector A, and obtains a color attachment F according to the color attachment D and the motion vector B.
The electronic device may obtain the color attachment E of the (N + 3)th predicted frame according to the color attachment C of the (N + 2)th drawing frame and the motion vector A. That is, the electronic device can predict the moving object in the (N + 3)th frame from the moving object of the (N + 2)th drawing frame and the motion vector of the moving object.

The electronic device may obtain the color attachment F of the (N + 3)th predicted frame according to the color attachment D of the (N + 2)th drawing frame and the motion vector B. That is, the electronic device may predict the static object in the (N + 3)th frame from the static object of the (N + 2)th drawing frame and the motion vector of the static object.
Here, the description in step S114 in fig. 6B may be specifically referred to, and is not repeated here.
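Likewise for intuition only (the actual prediction is described in step S114), a toy forward-warp that shifts one pixel block of a color attachment by its motion vector could look as follows; single-channel images and the function name are assumptions of this sketch:

// Copy the blockSize x blockSize block at (bx, by) in `src` to its
// predicted position, shifted by the block's motion vector (dx, dy).
void warp_block(const unsigned char *src, unsigned char *dst,
                int width, int height, int bx, int by, int blockSize,
                int dx, int dy)
{
    for (int y = 0; y < blockSize; y++) {
        for (int x = 0; x < blockSize; x++) {
            int sx = bx + x, sy = by + y;    // source pixel in the drawing frame
            int tx = sx + dx, ty = sy + dy;  // predicted position
            if (tx >= 0 && tx < width && ty >= 0 && ty < height)
                dst[ty * width + tx] = src[sy * width + sx];
        }
    }
}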
3. The electronic device synthesizes color attachment E and color attachment F into the image frame of the (N + 3)th predicted frame in the seventh memory space.
In this embodiment, the depth information of the pixel point at the first coordinate in the color attachment E of the (N + 3) th prediction frame may be the same as the depth information of the pixel point at the first coordinate in the color attachment C of the (N + 2) th rendering frame. The depth information of the pixel point at the second coordinate in the color attachment F of the (N + 3) th predicted frame may be the same as the depth information of the pixel point at the second coordinate in the color attachment D of the (N + 2) th drawn frame. That is, the electronic device can take the depth value of each pixel in the depth attachment C as the depth value of each pixel in the color attachment E. The electronic device can take the depth value of each pixel point in the depth attachment D as the depth value of each pixel point in the color attachment F. The electronic device may take out the depth values of the pixel points in the depth attachment C and the depth attachment D, and combine the color attachment E and the color attachment F into the image frame of the (N + 3) th predicted frame.
4. The electronic device displays the N +3 th predicted frame.
The electronic device may send the image frame of the (N + 3) th predicted frame to the display screen for displaying, and finally, the display screen in the electronic device may display the (N + 3) th predicted frame.
Fig. 6A schematically illustrates the basic steps of the image frame prediction method provided in the embodiment of the present application, and fig. 6B illustrates a detailed flowchart of that method. As shown in fig. 6B, a method for image frame prediction provided in an embodiment of the present application may include:
S101-S102, the electronic equipment starts to execute the image frame prediction method.
S101, when the target application starts to draw, the CPU of the electronic equipment sends an instruction for instructing the GPU to create a memory space to the GPU.
The target application is an application with animation effects in the user interface, such as a game-like application. The embodiments of the present application are described below by taking a target application as an example of a game application. When a game application installed in the electronic device runs, the CPU of the electronic device sends an instruction to the GPU instructing the GPU to create a memory space.
S102, a GPU of the electronic equipment creates a first memory space, a second memory space, a third memory space, a fourth memory space, a fifth memory space, a sixth memory space and a seventh memory space in a memory.
In response to the instruction sent by the CPU instructing the GPU to create memory spaces, the GPU may create a first memory space, a second memory space, a third memory space, a fourth memory space, a fifth memory space, a sixth memory space, and a seventh memory space in the memory. The first memory space may be used to store the drawing results (e.g., a depth attachment and a color attachment) of the drawing instructions carrying a transfer matrix in the Nth drawing frame. The second memory space may be used to store the drawing results of the drawing instructions not carrying a transfer matrix in the Nth drawing frame. The third memory space may be used to store the drawing results of the drawing instructions carrying a transfer matrix in the (N + 2)th drawing frame. The fourth memory space may be used to store the drawing results of the drawing instructions not carrying a transfer matrix in the (N + 2)th drawing frame. The GPU may predict the predicted drawing result A of the (N + 3)th predicted frame in the fifth memory space according to the drawing result in the first memory space and the drawing result in the third memory space. The GPU may predict the predicted drawing result B of the (N + 3)th predicted frame in the sixth memory space according to the drawing result in the second memory space and the drawing result in the fourth memory space. The GPU may synthesize image frames in the seventh memory space; for example, the GPU synthesizes the Nth drawing frame, the (N + 2)th drawing frame, and the (N + 3)th predicted frame in the seventh memory space.
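Each of these memory spaces corresponds to a frame buffer object. As a minimal sketch (the texture sizes, formats, and helper name are illustrative, not the application's actual implementation), one such memory space with a color attachment and a depth attachment could be created in OpenGL ES as follows:

#include <GLES3/gl3.h>

// Create an FBO backed by one color texture and one depth texture.
GLuint create_fbo(int width, int height, GLuint *colorTex, GLuint *depthTex)
{
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glGenTextures(1, colorTex);  // color attachment
    glBindTexture(GL_TEXTURE_2D, *colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, *colorTex, 0);

    glGenTextures(1, depthTex);  // depth attachment
    glBindTexture(GL_TEXTURE_2D, *depthTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
                 GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, *depthTex, 0);
    return fbo;
}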
S103-S106, the electronic equipment draws the Nth drawing frame.
S103, the CPU of the electronic equipment acquires the drawing parameters of the Nth drawing frame.
When a target application in the electronic device performs drawing, the target application may call a drawing instruction to perform drawing. The CPU of the electronic device 100 may acquire the drawing parameters of the nth drawing frame of the application program through an interface in the three-dimensional image processing library. And the drawing parameters of the Nth drawing frame are used for drawing and rendering the Nth drawing frame. The drawing parameters of the nth drawing frame may include information carried in a drawing instruction (e.g., a draw call instruction) of the nth drawing frame, such as coordinates, pixel values, depth values, and the like of each vertex in the drawing contents of the draw call instruction.
S104, the CPU of the electronic device sends a drawing instruction for instructing the GPU to draw the Nth drawing frame to the GPU according to the drawing parameters of the Nth drawing frame.
The CPU of the electronic device may send, to the GPU, a drawing instruction for instructing the GPU to draw the nth drawing frame according to the drawing parameter of the nth drawing frame. It is understood that the drawing parameters of the nth drawing frame acquired by the CPU may include information of a plurality of drawing instructions. In this way, the CPU may sequentially send a plurality of drawing instructions for instructing the GPU to draw the nth drawing frame to the GPU. In the embodiment of the present application, the drawing instruction includes an execution drawing (draw call) instruction and a drawing state setting instruction.
The execution of the drawing instruction may be used to trigger the GPU to perform drawing rendering on the current drawing state data, and generate a drawing result, for example, a glDrawElements instruction in OpenGL. OpenGL is a cross-language, cross-platform Application Programming Interface (API) for rendering 2D, 3D vector graphics.
The drawing state setting instruction may be configured to set the current drawing state data on which the execution drawing instruction depends, for example, to set state data including a vertex information cache index on which drawing depends, e.g., glBindBuffer in OpenGL. The vertex information cache index is used to indicate the vertex information data of a drawing object, and the vertex information data is a set of data, such as coordinate positions and colors, used to describe the vertices of the two-dimensional or three-dimensional vector model used for drawing in the drawing process.
The drawing state setting instruction may further include an instruction to set a vertex index, texture information, a spatial position, and the like of the drawing object, for example, a glActiveTexture, a glBindBufferRange instruction, and the like in OpenGL. A drawing object may be an object that can be drawn by the electronic device according to all vertices and vertex information included in one drawing instruction.
For a more visual illustration, one possible sequence of OpenGL rendering instructions, in execution order, may be as follows:

glBindBufferRange(target=GL_UNIFORM_BUFFER, index=1, buffer=738, offset=0, size=352) // instructs the GPU to modify part of the global rendering information, such as the position of the cart;

glBindBuffer(target=GL_ARRAY_BUFFER, buffer=buffer0) // instructs the GPU to store the index information of buffer0, which holds the vertex information of the cart (e.g., the position and color of each vertex), into GL_ARRAY_BUFFER;

glBindBuffer(target=GL_ELEMENT_ARRAY_BUFFER, buffer=buffer1) // instructs the GPU to store the index of buffer1, which holds the vertex index information of the cart (e.g., the drawing order of the vertices), into GL_ELEMENT_ARRAY_BUFFER;

glActiveTexture(texture=GL_TEXTURE0) // instructs the GPU to activate texture unit GL_TEXTURE0;

glBindTexture(target=GL_TEXTURE_2D, texture=texture1) // instructs the GPU to bind texture1, which holds the texture information of the cart, to GL_TEXTURE0;

glDrawElements(GLenum mode, GLsizei count, GLenum type, const void *indices) // instructs the GPU to perform the drawing of the cart.
In one possible implementation, the CPU may determine whether to modify the memory space (FBO) in which the drawing result of a drawing instruction based on the current drawing state data is stored, according to whether the drawing state setting instruction indicates that the spatial position of the drawing object in the world coordinate system has changed relative to the previous frame. The parameter or data carrying the spatial information of the drawing object may be a transfer matrix parameter in the corresponding instruction (e.g., glBindBufferRange in OpenGL). The transfer matrix is used to describe the mapping relationship between the local coordinate system (Local/Object Space) of the model and the world coordinate system. For example, if a vertex of the drawing object has coordinates U(x1, y1, z1, 1) in the local coordinate system and the transfer matrix is T, then the relationship between the position W(x2, y2, z2, 1) of the vertex in the world coordinate system and the coordinates U(x1, y1, z1, 1) in the local coordinate system is: W = U × T.
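As a small worked example of W = U × T under this row-vector convention (all numbers and the helper name are illustrative only):

// Multiply a homogeneous row vector U by a 4x4 transfer matrix T: W = U x T.
void transform_vertex(const float U[4], const float T[4][4], float W[4])
{
    for (int c = 0; c < 4; c++) {
        W[c] = U[0] * T[0][c] + U[1] * T[1][c] +
               U[2] * T[2][c] + U[3] * T[3][c];
    }
}
// Example: with U = (1, 2, 3, 1) and T equal to the identity matrix except
// T[3][0] = 5 (a translation of 5 units along x), the result is
// W = (6, 2, 3, 1): the vertex has moved 5 units along the x axis.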
One possible embodiment is as follows: if there is a transfer matrix parameter in the drawing state setting instruction, that is, the transfer matrix parameter of the corresponding drawing object is refreshed, the electronic device determines that the spatial position of the drawing object in the world coordinate system has changed relative to the previous frame, and the electronic device then sets the memory space in which the drawing result of the drawing instruction based on the current drawing state data is stored to the memory space for storing moving objects (for example, the first memory space). If there is no transfer matrix parameter in the drawing state setting instruction, that is, the transfer matrix parameter of the corresponding drawing object is not refreshed, the electronic device determines that the spatial position of the drawing object in the world coordinate system has not changed relative to the previous frame. The electronic device sets the memory space in which the drawing result of the drawing instruction based on the current drawing state data is stored to the memory space for storing static objects (e.g., the second memory space).
Another possible embodiment is as follows: if the transfer matrix parameter in the drawing state setting instruction is different from the previous transfer matrix parameter of the corresponding drawing object, the electronic device determines that the spatial position of the drawing object in the world coordinate system has changed relative to the previous frame, and the electronic device then sets the memory space in which the drawing result of the drawing instruction based on the current drawing state data is stored to the memory space for storing moving objects (for example, the first memory space). If the transfer matrix parameter in the drawing state setting instruction is the same as the previous transfer matrix parameter of the corresponding drawing object, the electronic device determines that the spatial position of the drawing object in the world coordinate system has not changed relative to the previous frame. The electronic device sets the memory space in which the drawing result of the drawing instruction based on the current drawing state data is stored to the memory space for storing static objects (e.g., the second memory space).
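In either embodiment, the decision amounts to redirecting the render target before the draw call executes, roughly as in the following sketch (the FBO handles and the flag are illustrative names):

#include <GLES3/gl3.h>

GLuint movingFbo, staticFbo;  // e.g., the first and second memory spaces

// Redirect the draw result to the moving-object FBO when the transfer
// matrix parameter was refreshed, otherwise to the static-object FBO.
void route_draw_result(int transferMatrixRefreshed)
{
    glBindFramebuffer(GL_FRAMEBUFFER,
                      transferMatrixRefreshed ? movingFbo : staticFbo);
}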
For example, in glBindBufferRange(target = GL_UNIFORM_BUFFER, index = 1, buffer = 738, offset = 0, size = 352), the parameter target represents the type of the bound buffer, the parameter index represents the index of the binding point, the parameter buffer represents the index of the buffer participating in the binding operation, the parameter offset represents the offset of the position to be bound relative to the start of the buffer, and the parameter size represents the size of the part of the buffer participating in the binding. If the parameter size of the glBindBufferRange call is 352 and the buffer ID (738) has already been recorded, the spatial position information corresponding to this buffer is the position information modified by the glBufferSubData instruction, and the electronic device determines that the spatial position of the drawing object in the world coordinate system has changed relative to the previous frame. Otherwise, the electronic device determines that the spatial position of the drawing object in the world coordinate system has not changed relative to the previous frame. The electronic device may obtain the transfer matrix parameters in a drawing frame of the program through a hook function, for example through the glBindBufferRange function.
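As an illustration of the hooking described above, the following C sketch intercepts glBindBufferRange to classify the upcoming draw call as moving or static; the size threshold (352), the dirty-buffer bookkeeping, and all names here are assumptions for illustration, not the patent's implementation.

#include <stdbool.h>
#include <GLES3/gl3.h>

#define MAX_TRACKED 256

/* buffer IDs refreshed by glBufferSubData, recorded by a companion hook (not shown) */
static GLuint g_dirty_ubos[MAX_TRACKED];
static int    g_dirty_count = 0;
static bool   g_current_draw_is_moving = false;

static bool is_dirty_ubo(GLuint buffer) {
    for (int i = 0; i < g_dirty_count; i++)
        if (g_dirty_ubos[i] == buffer)
            return true;
    return false;
}

/* installed in place of the application's glBindBufferRange calls */
void hooked_glBindBufferRange(GLenum target, GLuint index, GLuint buffer,
                              GLintptr offset, GLsizeiptr size) {
    /* assumption: a uniform block of size 352 carries the transfer matrix */
    if (target == GL_UNIFORM_BUFFER && size == 352 && is_dirty_ubo(buffer))
        g_current_draw_is_moving = true;   /* moving object: first memory space */
    else
        g_current_draw_is_moving = false;  /* static object: second memory space */
    glBindBufferRange(target, index, buffer, offset, size);  /* forward to the driver */
}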
S105, if the drawing instruction of the Nth drawing frame carries the transfer matrix, the GPU writes the drawing content of the drawing instruction into color attachment A and depth attachment A in the first memory space; if the drawing instruction of the Nth drawing frame does not carry the transfer matrix, the GPU writes the drawing content of the drawing instruction into color attachment B and depth attachment B in the second memory space.
The drawing instructions of the nth drawing frame may include drawing instructions carrying a transition matrix and drawing instructions not carrying a transition matrix. As shown in fig. 7A, fig. 7A exemplarily shows an nth drawing frame, which may include a static background 102 and a moving cart 103. The drawing instructions for the nth drawing frame may include drawing instructions for drawing the static background 102 and drawing instructions for drawing the moving cart 103.
If a drawing instruction of the Nth drawing frame carries the transfer matrix, the GPU writes the drawing content of that drawing instruction into color attachment A and depth attachment A in the first memory space. For example, the transfer matrix may be carried in the drawing instruction for drawing the moving cart, so the GPU writes the drawing content of the drawing instruction for the moving cart 103 into color attachment A and depth attachment A in the first memory space. As shown in fig. 7B, fig. 7B exemplarily shows color attachment A and depth attachment A of the first memory space, into which the drawing content of the drawing instruction for the moving cart 103 has been written. It will be appreciated that color attachment A is actually a block of memory space in memory; fig. 7B schematically shows the color data in that memory space in image form for ease of visual understanding. Similarly, depth attachment A is also actually a block of memory space in memory, and fig. 7B schematically shows the depth data in image form for ease of visual understanding.
In the embodiment of the present application, color attachment A may be referred to as a first color attachment, and depth attachment A may be referred to as a first depth attachment.
If a drawing instruction of the Nth drawing frame does not carry the transfer matrix, the GPU writes the drawing content of that drawing instruction into color attachment B and depth attachment B in the second memory space. For example, the drawing instruction for drawing the static background 102 shown in fig. 7A may not carry a transfer matrix, so the GPU writes the drawing content of the drawing instruction for the static background 102 into color attachment B and depth attachment B of the second memory space. As shown in fig. 7C, fig. 7C exemplarily shows color attachment B and depth attachment B of the second memory space, into which the drawing content of the drawing instruction for the static background 102 has been written. It will be appreciated that color attachment B is actually a block of memory space in memory; fig. 7C schematically shows the color data in that memory space in image form for ease of visual understanding. Likewise, depth attachment B is also actually a block of memory space in memory, and fig. 7C schematically shows the depth data in image form for ease of visual understanding.
In the embodiment of the present application, color attachment B may be referred to as a second color attachment, and depth attachment B may be referred to as a second depth attachment.
It is understood that there may be multiple drawing instructions carrying a transfer matrix and multiple drawing instructions not carrying a transfer matrix in the Nth drawing frame. The GPU executes each drawing instruction of the Nth drawing frame in turn. When the GPU executes a drawing instruction, it may determine whether the drawing instruction carries a transfer matrix. If the drawing instruction carries the transfer matrix, the GPU writes the drawing content of the drawing instruction into color attachment A and depth attachment A in the first memory space. If the drawing instruction does not carry the transfer matrix, the GPU writes the drawing content of the drawing instruction into color attachment B and depth attachment B in the second memory space. Finally, color attachment A of the embodiment of the present application has sequentially written into it the color information of all drawing instructions in the Nth drawing frame that carry a transfer matrix, and depth attachment A has sequentially written into it the depth information of all drawing instructions in the Nth drawing frame that carry a transfer matrix. Similarly, color attachment B has sequentially written into it the color information of all drawing instructions in the Nth drawing frame that do not carry a transfer matrix, and depth attachment B has sequentially written into it the depth information of all drawing instructions in the Nth drawing frame that do not carry a transfer matrix.
It is understood that color attachment A and depth attachment A may be two separate memory spaces in the first memory space. Optionally, color attachment A and depth attachment A may also be a single block of memory space in the first memory space, that is, both the color data and the depth data may be written into that block of memory space. Likewise, color attachment B and depth attachment B may be two separate memory spaces in the second memory space, or a single block of memory space into which both the color data and the depth data are written.
Step S105 can refer to steps 602a and 602b above.
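For illustration, a minimal sketch of the routing in step S105 might look as follows, assuming two framebuffer objects were created beforehand and that a flag such as the one set by the glBindBufferRange hook sketched earlier marks the current draw call; all names are illustrative.

#include <stdbool.h>
#include <GLES3/gl3.h>

extern GLuint fbo_moving;   /* first memory space: color attachment A + depth attachment A */
extern GLuint fbo_static;   /* second memory space: color attachment B + depth attachment B */
extern bool   g_current_draw_is_moving;  /* set by the glBindBufferRange hook */

/* replay one draw call of the Nth drawing frame into the FBO chosen in S105 */
void route_draw(GLenum mode, GLsizei count, GLenum type, const void *indices) {
    glBindFramebuffer(GL_FRAMEBUFFER,
                      g_current_draw_is_moving ? fbo_moving : fbo_static);
    glDrawElements(mode, count, type, indices);
}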
S106, the GPU of the electronic equipment synthesizes the color attachment A and the color attachment B of the Nth drawing frame into the Nth drawing frame according to the depth attachment A and the depth attachment B in the seventh memory space.
The GPU of the electronic device may synthesize, in the seventh memory space, color attachment A and color attachment B of the Nth drawing frame into the Nth drawing frame according to depth attachment A and depth attachment B. For synthesizing color attachment A and color attachment B according to the depth attachments, reference may be made to the description of the depth test for synthesizing two color attachments based on depth information in the prior art, and details are not described herein.
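As an illustration of this depth-test-based composition, the following CPU-side C sketch keeps, for each pixel, the color whose depth value is nearer; on the GPU this would normally be done by the hardware depth test, and the buffer layout assumed here (one 32-bit color and one float depth per pixel) is only for illustration.

#include <stdint.h>
#include <stddef.h>

/* keep, for every pixel, the color whose depth is nearer (GL convention:
 * a smaller depth value is closer to the camera) */
void composite_by_depth(const uint32_t *color_a, const float *depth_a,
                        const uint32_t *color_b, const float *depth_b,
                        uint32_t *out_color, size_t pixel_count) {
    for (size_t i = 0; i < pixel_count; i++)
        out_color[i] = (depth_a[i] <= depth_b[i]) ? color_a[i] : color_b[i];
}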
S107-S110, the electronic equipment draws the (N + 2) th drawing frame.
S107, the CPU of the electronic device obtains the drawing parameters of the (N + 2) th drawing frame.
The CPU of the electronic device may acquire the rendering parameter of the (N + 2) th rendering frame. Specifically, the CPU of the electronic device 100 may acquire the rendering parameter of the N +2 th rendering frame of the application program through an interface in the three-dimensional image processing library. And the drawing parameters of the (N + 2) th drawing frame are used for drawing and rendering the (N + 2) th drawing frame. The drawing parameters of the N +2 th drawing frame may include information carried in a drawing instruction (e.g., draw call instruction) of the N +2 th drawing frame, such as coordinates of each vertex in the drawing contents of the draw call instruction, pixel values, depth values, and the like.
It is to be appreciated that the electronic device displays the (N + 1) th frame before the (N + 2) th frame is rendered. If the (N + 1) th frame is a drawing frame, the electronic device may draw the (N + 1) th frame according to the steps of drawing the (N + 2) th frame in steps S107 to S110. If the (N + 1) th frame is a predicted frame, the electronic device may predict the (N + 1) th predicted frame according to steps S111-S115.
S108, the CPU of the electronic device sends, to the GPU, a drawing instruction for instructing the GPU to draw the (N+2)th drawing frame according to the drawing parameters of the (N+2)th drawing frame.
The CPU of the electronic device may send, to the GPU, a drawing instruction for instructing the GPU to draw the N +2 th drawing frame according to the drawing parameter of the N +2 th drawing frame. It is understood that the drawing parameters of the N +2 th drawing frame acquired by the CPU may include information of a plurality of drawing instructions. In this way, the CPU may sequentially send a plurality of drawing instructions for instructing the GPU to draw the N +2 th drawing frame to the GPU. Here, the description in step S104 may be specifically referred to, and is not repeated here.
S109, if the drawing instruction of the (N + 2) th drawing frame carries the transfer matrix, the GPU writes the drawing content of the drawing instruction of the (N + 2) th frame into a color attachment C and a depth attachment C in a third memory space; and if the drawing instruction of the (N + 2) th drawing frame does not carry the transfer matrix, the GPU writes the drawing content of the drawing instruction of the (N + 2) th frame into the color attachment D and the depth attachment D in the fourth memory space.
The drawing instructions of the N +2 th drawing frame may include drawing instructions carrying a transition matrix and drawing instructions not carrying a transition matrix. As shown in fig. 8A, fig. 8A exemplarily shows an N +2 th drawing frame, which may include a static background 102 and a moving cart 103. The drawing instruction of the N +2 th drawing frame may include a drawing instruction for drawing the static background 102 and a drawing instruction for drawing the moving cart 103.
If the drawing instruction carries the transfer matrix, the GPU writes the drawing content of the drawing instruction into color attachment C and depth attachment C in the third memory space. For example, the transfer matrix may be carried in the drawing instruction for drawing the moving cart, so the GPU writes the drawing content of the drawing instruction for the moving cart 103 into color attachment C and depth attachment C in the third memory space. As shown in fig. 8B, fig. 8B exemplarily shows color attachment C and depth attachment C of the third memory space, into which the drawing content of the drawing instruction for the moving cart 103 has been written. It will be appreciated that color attachment C is actually a block of memory space in memory; fig. 8B schematically shows the color data in that memory space in image form for ease of visual understanding. Similarly, depth attachment C is also actually a block of memory space in memory, and fig. 8B schematically shows the depth data in image form for ease of visual understanding.

If the drawing instruction does not carry the transfer matrix, the GPU writes the drawing content of the drawing instruction into color attachment D and depth attachment D in the fourth memory space. For example, the drawing instruction for drawing the static background 102 shown in fig. 8A may not carry the transfer matrix, so the GPU writes the drawing content of the drawing instruction for the static background 102 into color attachment D and depth attachment D of the fourth memory space. As shown in fig. 8C, fig. 8C exemplarily shows color attachment D and depth attachment D of the fourth memory space, into which the drawing content of the drawing instruction for the static background 102 has been written. It will be appreciated that color attachment D is actually a block of memory space in memory; fig. 8C schematically shows the color data in that memory space in image form for ease of visual understanding. Similarly, depth attachment D is also actually a block of memory space in memory, and fig. 8C schematically shows the depth data in image form for ease of visual understanding.
In the embodiment of the present application, color attachment C may be referred to as a third color attachment, and color attachment D may be referred to as a fourth color attachment. Depth attachment C may be referred to as a third depth attachment, and depth attachment D may be referred to as a fourth depth attachment.
It is understood that there may be multiple drawing instructions carrying a transfer matrix and multiple drawing instructions not carrying a transfer matrix in the (N+2)th drawing frame. The GPU executes each drawing instruction of the (N+2)th drawing frame in turn. When the GPU executes a drawing instruction, it may determine whether the drawing instruction carries a transfer matrix. If the drawing instruction carries the transfer matrix, the GPU writes the drawing content of the drawing instruction into color attachment C and depth attachment C in the third memory space. If the drawing instruction does not carry the transfer matrix, the GPU writes the drawing content of the drawing instruction into color attachment D and depth attachment D in the fourth memory space. Finally, color attachment C of the embodiment of the present application has sequentially written into it the color information of all drawing instructions in the (N+2)th drawing frame that carry a transfer matrix, and depth attachment C has sequentially written into it the depth information of all drawing instructions in the (N+2)th drawing frame that carry a transfer matrix. Similarly, color attachment D has sequentially written into it the color information of all drawing instructions in the (N+2)th drawing frame that do not carry a transfer matrix, and depth attachment D has sequentially written into it the depth information of all drawing instructions in the (N+2)th drawing frame that do not carry a transfer matrix.
It is understood that color attachment C and depth attachment C may be two separate memory spaces in the third memory space. Optionally, color attachment C and depth attachment C may also be a single block of memory space in the third memory space, that is, both the color data and the depth data may be written into that block of memory space. Likewise, color attachment D and depth attachment D may be two separate memory spaces in the fourth memory space, or a single block of memory space into which both the color data and the depth data are written. Step S109 may refer to steps 606a and 606b above.
S110, in the seventh memory space, the GPU synthesizes color attachment C and color attachment D into the (N+2)th drawing frame according to depth attachment C and depth attachment D.
The GPU of the electronic device may synthesize, in the seventh memory space, color attachment C and color attachment D of the (N+2)th drawing frame into the (N+2)th drawing frame according to depth attachment C and depth attachment D. For synthesizing color attachment C and color attachment D according to the depth attachments, reference may be made to the description of the depth test for synthesizing two color attachments based on depth information in the prior art, and details are not described herein.
It can be understood that, after the image frame of the Nth frame in the seventh memory space has been displayed, the image frame of the Nth frame stored in the seventh memory space may be cleared. The GPU may then synthesize the image frame of the (N+2)th frame in the seventh memory space. Similarly, after the image frame of the (N+2)th frame in the seventh memory space has been displayed, the image frame of the (N+2)th frame stored in the seventh memory space may be cleared.
S111-S115, the electronic device predicts the (N + 3) th predicted frame.
S111, the CPU of the electronic device sends an instruction for instructing the GPU to calculate the motion vector to the GPU.
The CPU of the electronic device may send instructions to the GPU instructing the GPU to calculate motion vectors. The instruction is to instruct a shader in the GPU to compute a motion vector. The instruction may be a dispatch instruction. The embodiment of the present application does not limit the specific form of the instruction for calculating the motion vector.
S112, the GPU of the electronic device calculates motion vector A of color attachment C of the (N+2)th drawing frame by using color attachment A, and calculates motion vector B of color attachment D of the (N+2)th drawing frame by using depth attachment B and depth attachment D.

The GPU of the electronic device may calculate the motion vector of the moving object and the motion vector of the static object of the (N+2)th frame, respectively. The GPU may calculate the motion vector of the moving object (color attachment C) in the (N+2)th frame, i.e., motion vector A, from color attachment A and color attachment C. The GPU may calculate the motion vector of the static object (color attachment D) in the (N+2)th frame, i.e., motion vector B, from depth attachment B and depth attachment D.

In the embodiment of the present application, motion vector A may be referred to as a first motion vector, and motion vector B may be referred to as a second motion vector.
In a possible implementation, the GPU of the electronic device calculating motion vector A of color attachment C of the (N+2)th drawing frame by using color attachment A may specifically include the following steps (a simplified block-matching sketch follows the steps):
1. The GPU may divide color attachment C of the (N+2)th frame into Q pixel blocks. Each pixel block may contain f × f (e.g., 16 × 16) pixels, as shown in fig. 9A (b).
2. The GPU takes a first pixel block in color attachment C (e.g., pixel block 902 in fig. 9A (b)) and searches color attachment A of the Nth drawing frame for a matching pixel block that matches the first pixel block (e.g., pixel block 901 in fig. 9A (a)).
In the embodiment of the present application, among all candidate blocks in the Nth frame, the candidate block having the smallest absolute difference from the RGB values of the first pixel block is referred to as the matching pixel block of the first pixel block. The electronic device needs to find the matching pixel block 901 of that pixel block in the Nth drawing frame. Optionally, the GPU of the electronic device may find the matching pixel block in the Nth drawing frame by a diamond search algorithm. As shown in fig. 9B, the GPU may search the Nth frame in diagram (b) by the diamond search algorithm for a matching pixel block that matches pixel block 902 in the (N+2)th frame in diagram (a). The electronic device may perform the diamond search starting from pixel point 9011 at the upper left corner of pixel block 902, and may match pixel block 901, whose upper left corner is pixel point 1104, with pixel block 902 in the Nth drawing frame. The diamond search algorithm may specifically refer to descriptions in the prior art, and details are not described here.
As shown in fig. 9C, fig. 9C exemplarily shows the electronic device finding the matching pixel block 901 for pixel block 902 in the Nth drawing frame. As shown in fig. 9C (a), the electronic device performs a diamond search starting from the upper left pixel point 9011 of pixel block 902. In one implementation, as shown in fig. 9C (b), the electronic device first performs a large diamond search centered on pixel point 9012 in the Nth drawing frame, where the coordinates of pixel point 9012 are the same as those of pixel point 9011. The electronic device calculates the difference between pixel block 902 and each candidate block whose upper left corner pixel point is, respectively, pixel point 9012, 1001, 1002, 1003, 1004, 1005, 1006, 1007, or 1008. The electronic device then performs a small diamond search centered on the upper left pixel point of the candidate block with the smallest pixel value difference. For example, among the above candidate blocks, the block whose upper left corner is pixel point 1003 differs least from pixel block 902, so the electronic device performs a small diamond search centered on pixel point 1003; that is, it calculates the difference between pixel block 902 and each candidate block whose upper left corner pixel point is pixel point 1101, 1102, 1103, or 1104. Finally, the electronic device may determine that the candidate block whose upper left corner is pixel point 1104 (i.e., pixel block 901 shown in fig. 9B) has the smallest pixel value difference from pixel block 902, and thus that pixel block 901 in the Nth frame is the matching pixel block of pixel block 902.
3. The GPU calculates a first displacement from the matching pixel block to the first pixel block, and determines the motion vector V1 of the first pixel block according to the first displacement. For example, as shown in fig. 9B, the matching block of pixel block 902 is pixel block 901, and the motion vector of pixel block 902 is the motion vector V1 illustrated in fig. 9B (b).

4. Following steps 1 to 3 above, the GPU can calculate the motion vector of each of the Q pixel blocks in color attachment C, i.e., V1, V2, …, VQ. The motion vector of color attachment C, i.e., motion vector A, is (V1, V2, …, VQ).
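The following C sketch illustrates the block matching of steps 1 to 3, using an exhaustive search over a small window for clarity; the patent itself uses a diamond search to reduce the number of candidates, and the names and the search radius here are illustrative.

#include <stdint.h>
#include <stdlib.h>

#define F 16  /* block edge length f, e.g. 16 x 16 pixels per block */

/* sum of absolute RGB differences between the f x f block at (bx, by) in
 * frame N+2 and the candidate block at (cx, cy) in frame N; the caller must
 * keep both blocks inside the frame */
static long block_sad(const uint8_t *rgb_n, const uint8_t *rgb_n2, int width,
                      int bx, int by, int cx, int cy) {
    long s = 0;
    for (int y = 0; y < F; y++)
        for (int x = 0; x < F; x++)
            for (int c = 0; c < 3; c++)
                s += labs((long)rgb_n2[((by + y) * width + (bx + x)) * 3 + c]
                        - (long)rgb_n [((cy + y) * width + (cx + x)) * 3 + c]);
    return s;
}

/* motion vector (from the matching block in frame N to the block in frame N+2)
 * of the block whose upper left corner is (bx, by); radius >= 0 */
static void block_motion_vector(const uint8_t *rgb_n, const uint8_t *rgb_n2,
                                int width, int bx, int by, int radius,
                                int *mv_x, int *mv_y) {
    long best = -1;
    *mv_x = 0; *mv_y = 0;
    for (int dy = -radius; dy <= radius; dy++)
        for (int dx = -radius; dx <= radius; dx++) {
            long s = block_sad(rgb_n, rgb_n2, width, bx, by, bx + dx, by + dy);
            if (best < 0 || s < best) { best = s; *mv_x = -dx; *mv_y = -dy; }
        }
}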
In a possible implementation, the GPU of the electronic device calculating motion vector B of color attachment D of the (N+2)th drawing frame by using depth attachment B and depth attachment D may specifically include the following steps (a reprojection sketch follows the steps):
1. The GPU may divide color attachment D of the (N+2)th frame into Q pixel blocks. Each pixel block may contain f × f (e.g., 16 × 16) pixels.
2. When the GPU draws the Nth frame, the pose information of the camera of the Nth frame (stored as a 4 × 4 matrix) may be acquired through the hooked glBufferSubData interface, and recorded as M1.

3. When the GPU draws the (N+2)th frame, the pose information of the camera of the (N+2)th frame (stored as a 4 × 4 matrix) may be acquired through the hooked glBufferSubData interface, and recorded as M2. The GPU calculates a transformation matrix T between M1 and M2. The matrix T characterizes the transformation between the cameras of the two frames.
Here, the two formulas defining the transformation matrix T (rendered as images in the original publication, PCTCN2021106928-APPB-000001 and -000002) express T in terms of the rotation matrices R1 and R2, the translation matrices T1 and T2, and the projection matrix P described below.
Here, the matrix R1 represents the rotation attribute of the camera in the Nth drawing frame, and the matrix T1 represents the translation attribute of the camera in the Nth drawing frame. The matrix R2 represents the rotation attribute of the camera in the (N+2)th drawing frame, and the matrix T2 represents the translation attribute of the camera in the (N+2)th drawing frame. The matrix P is the projection matrix, which the GPU can acquire directly through the hooked glBufferSubData interface.
4. The GPU takes a second pixel block in color attachment D of the (N+2)th frame and its position position2 in the (N+2)th frame, and obtains the depth value D2 of the second pixel block from depth attachment D.

5. The GPU calculates the position position1 of the second pixel block in the Nth drawing frame using position2, the depth value D2, and the matrix T. The motion vector of the second pixel block is then B1 = position2 − position1. Following the steps for calculating the motion vector of the second pixel block, the GPU may calculate the motion vectors B1, B2, …, BQ of each pixel block in color attachment D. The motion vector of color attachment D is B = (B1, B2, …, BQ).
Here, the vector G2 = (position2, D2, 1) and G1 = G2 × T, so that

position1 = (G1.x / G1.w, G1.y / G1.w)

where G1 is a 1 × 4 vector, G1.x is the first element of G1, G1.y is the second element of G1, and G1.w is the fourth element of G1.
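The reprojection of step 5 can be sketched as follows in C, under the row-vector convention G1 = G2 × T of this passage; the matrix T is assumed to be given, and the helper types repeat the earlier illustrative definitions.

typedef struct { float v[4]; } vec4;    /* row vector */
typedef struct { float m[4][4]; } mat4; /* camera transformation matrix T */

/* row vector times matrix: r = g x t */
static vec4 mul_row(vec4 g, const mat4 *t) {
    vec4 r = {{0.0f, 0.0f, 0.0f, 0.0f}};
    for (int col = 0; col < 4; col++)
        for (int k = 0; k < 4; k++)
            r.v[col] += g.v[k] * t->m[k][col];
    return r;
}

/* motion vector B1 = position2 - position1 of one static pixel block */
static void static_block_motion(float pos2_x, float pos2_y, float depth_d2,
                                const mat4 *t, float *b1_x, float *b1_y) {
    vec4 g2 = {{pos2_x, pos2_y, depth_d2, 1.0f}};  /* G2 = (position2, D2, 1) */
    vec4 g1 = mul_row(g2, t);                      /* G1 = G2 x T */
    float pos1_x = g1.v[0] / g1.v[3];              /* position1 = (G1.x/G1.w, */
    float pos1_y = g1.v[1] / g1.v[3];              /*              G1.y/G1.w) */
    *b1_x = pos2_x - pos1_x;
    *b1_y = pos2_y - pos1_y;
}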
S113, the CPU of the electronic device sends, to the GPU, an instruction for instructing the GPU to draw the predicted frame.

After the GPU calculates motion vector A of color attachment C and motion vector B of color attachment D, the CPU of the electronic device 100 may send an instruction to the GPU to draw the (N+3)th predicted frame.
S114, the GPU predicts color attachment E of the (N+3)th predicted frame according to color attachment C and motion vector A, and predicts color attachment F of the (N+3)th predicted frame according to color attachment D and motion vector B.

In response to the instruction instructing the GPU to draw the (N+3)th predicted frame, the GPU may draw color attachment E, into which the moving object of the (N+3)th predicted frame is written, and color attachment F, into which the static object is written. Specifically, the GPU may predict color attachment E of the (N+3)th predicted frame from color attachment C and motion vector A, and predict color attachment F of the (N+3)th predicted frame from color attachment D and motion vector B. Illustratively, color attachment E may be as shown in fig. 10A, and color attachment F may be as shown in fig. 10B.
In the embodiment of the present application, color attachment E may be referred to as a fifth color attachment, and color attachment F may be referred to as a sixth color attachment.
It is understood that the object in the nth drawing frame and the object in the N +2 th drawing frame are the same. For the same object a, the position of the object a in the nth drawing frame may be different from the position of the object a in the N +2 th drawing frame. Likewise, the objects in the N +2 th rendered frame and the N +3 th predicted frame are the same, except that the positions of the objects in the N +2 th rendered frame and the N +3 th predicted frame are different. The GPU may use the object of the (N + 2) th draw frame and the motion vector of the object to predict the (N + 3) th predicted frame. In the embodiment of the present application, the nth drawing frame may be referred to as a first drawing frame. The N +2 th drawing frame may be referred to as a second drawing frame. The N +3 th predicted frame may be referred to as a first predicted frame.
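As an illustration of step S114, the following C sketch shifts each pixel block of color attachment C along its motion vector to build color attachment E; the half-step extrapolation (scaling the N to N+2 displacement down for the single-frame step to N+3) is an assumption, since the patent does not spell out the scaling.

#include <stdint.h>

#define F 16  /* block edge length */

/* copy one f x f block of color attachment C to its extrapolated position in
 * color attachment E; mv_x/mv_y is the block's N -> N+2 motion vector */
static void predict_block(const uint32_t *color_c, uint32_t *color_e,
                          int width, int height, int bx, int by,
                          int mv_x, int mv_y) {
    int dst_x = bx + mv_x / 2;  /* assumed half-step extrapolation for N+3 */
    int dst_y = by + mv_y / 2;
    for (int y = 0; y < F; y++) {
        int dy = dst_y + y;
        if (dy < 0 || dy >= height) continue;  /* clip at the frame edge */
        for (int x = 0; x < F; x++) {
            int dx = dst_x + x;
            if (dx < 0 || dx >= width) continue;
            color_e[dy * width + dx] = color_c[(by + y) * width + (bx + x)];
        }
    }
}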
S115, in the seventh memory space, the GPU synthesizes color attachment E and color attachment F into the image frame of the (N+3)th predicted frame according to depth attachment C and depth attachment D.

The GPU of the electronic device may synthesize, in the seventh memory space, color attachment E and color attachment F into the image frame of the (N+3)th predicted frame according to depth attachment C and depth attachment D. Illustratively, the (N+3)th predicted frame may be as shown in fig. 10C. For synthesizing color attachment E and color attachment F according to the depth attachments, reference may be made to the description of the depth test for synthesizing two images based on depth attachments, and details are not described herein again. It can be understood that, since the time between the (N+2)th drawing frame and the (N+3)th predicted frame is short, the (N+3)th predicted frame can be synthesized using the depth information of the (N+2)th drawing frame. Alternatively, the GPU may also predict the depth attachment of the moving object and the depth attachment of the static object of the (N+3)th frame, and then synthesize color attachment E and color attachment F using those predicted depth attachments. This is not limited in the present application.
The main process of drawing a frame by the electronic device is that the GPU executes a number of drawcall instructions, drawing the content of each drawcall instruction one by one onto the FBO. Each drawcall instruction requires the GPU to execute the rendering pipeline once. The rendering pipeline is mainly divided into the vertex shader, tessellation (optional), geometry shader (optional), primitive assembly (optional), rasterization, fragment shader, and test-and-blend stage (optional). Drawing a frame is therefore expensive for the GPU. In addition, the vertex information, coordinate information, and the like required for each drawcall instruction must be prepared in advance, and these preparations also impose a high computation load on the CPU.
The process of predicting a frame is computationally cheaper for the CPU, which only needs to send a portion of the instructions to the GPU. The GPU only needs to calculate the motion vectors of the moving objects and the motion vectors of the static objects. These calculations are all parallel and need to be performed only once, with each calculation unit executing a small number of basic operations, so the computation load of the GPU can be reduced and performance can be improved.
It can be understood that the embodiment of the present application is not limited to predicting the (N+3)th predicted frame from the Nth drawing frame and the (N+2)th drawing frame. Optionally, the electronic device may also predict the (N+2)th frame from the Nth frame and the (N+1)th frame, or predict the (N+4)th frame from the Nth frame and the (N+3)th frame. This is not limited in the embodiment of the present application.

It is understood that the electronic device may switch between prediction strategies while displaying the video interface of the target application: for example, it may predict the (N+3)th frame from the Nth frame and the (N+2)th frame in a first time period, and predict the (N+2)th frame from the Nth frame and the (N+1)th frame in a second time period. For instance, when the GPU is performing more tasks, the electronic device may predict the (N+3)th frame from the Nth frame and the (N+2)th frame; when the GPU is performing fewer tasks, it may predict the (N+2)th frame from the Nth frame and the (N+1)th frame. This is not limited in the embodiment of the present application.
A logical block diagram of frame insertion for delivering 90 frames per second (90 fps) is shown in fig. 11. In fig. 11, the 0th frame to the 2nd frame are drawing frames. Starting from the 3rd frame, odd frames are predicted frames and even frames are drawing frames; for example, the 3rd, 5th, and 7th frames are predicted frames, and the 4th, 6th, and 8th frames are drawing frames. The 3rd frame is generated by the electronic device from the 0th frame and the 2nd frame. The 5th frame is generated from the 2nd and 4th frames. The 7th frame is generated from the 4th and 6th frames. The 89th frame is generated from the 86th and 88th frames. The 0th frame may be referred to herein as the 0th drawing frame, and the 3rd frame as the 3rd predicted frame.
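The interleaving pattern of fig. 11 can be summarized by the following C sketch; the function names are illustrative.

#include <stdbool.h>

/* frames 0-2 are drawn; from frame 3 on, odd frames are predicted */
static bool is_predicted(int frame) {
    return frame >= 3 && (frame % 2 == 1);
}

/* a predicted frame k is generated from drawn frames k-3 and k-1,
 * e.g. frame 3 from frames 0 and 2, and frame 89 from frames 86 and 88 */
static void prediction_sources(int frame, int *early, int *late) {
    *early = frame - 3;
    *late  = frame - 1;
}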
An exemplary electronic device 100 provided by embodiments of the present application is first described below.
Fig. 12 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application.
The following describes an embodiment specifically by taking the electronic device 100 as an example. It should be understood that electronic device 100 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The electronic device 100 may include: the mobile terminal includes a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), among others. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
The I2C interface is a bidirectional synchronous serial bus including a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through an I2C bus interface to implement a touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 through an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the I2S interface, so as to implement a function of answering a call through a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 and the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, and the like.
The SIM interface may be used to communicate with the SIM card interface 195, implementing functions to transfer data to or read data from the SIM card.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. It can also be used to connect an earphone and play audio through the earphone. The interface may also be used to connect other electronic devices, such as AR devices.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative and is not limited to the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger can be a wireless charger or a wired charger.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), global Navigation Satellite System (GNSS), frequency Modulation (FM), near Field Communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of electronic device 100 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160 so that electronic device 100 can communicate with networks and other devices through wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), general Packet Radio Service (GPRS), code Division Multiple Access (CDMA), wideband Code Division Multiple Access (WCDMA), time division code division multiple access (time-division multiple access, TD-SCDMA), long Term Evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou satellite navigation system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, and the application processor, etc.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor, which processes input information quickly by referring to a biological neural network structure, for example, by referring to a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The internal memory 121 may include one or more Random Access Memories (RAMs) and one or more non-volatile memories (NVMs).
The random access memory may include static random-access memory (SRAM), dynamic random-access memory (DRAM), synchronous dynamic random-access memory (SDRAM), double data rate synchronous dynamic random-access memory (DDR SDRAM, such as fifth generation DDR SDRAM, generally referred to as DDR5 SDRAM), and the like.
The nonvolatile memory may include a magnetic disk storage device, a flash memory (flash memory).
The FLASH memory may include NOR FLASH, NAND FLASH, 3D NAND FLASH, etc. according to the operation principle, may include single-level cells (SLC), multi-level cells (MLC), three-level cells (TLC), four-level cells (QLC), etc. according to the level order of the memory cells, and may include universal FLASH memory (UFS), embedded multimedia memory cards (eMMC), etc. according to the storage specification.
The random access memory may be read and written directly by the processor 110, may be used to store executable programs (e.g., machine instructions) of an operating system or other programs in operation, and may also be used to store data of users and applications, etc.
The nonvolatile memory may also store executable programs, data of users and application programs, and the like, and may be loaded into the random access memory in advance for the processor 110 to directly read and write.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into analog audio signals for output, and also used to convert analog audio input into digital audio signals. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into a sound signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it is possible to receive voice by placing the receiver 170B close to the human ear.
The microphone 170C, also referred to as a "microphone," is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a voice signal to the microphone 170C by uttering a voice signal close to the microphone 170C through the mouth of the user. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A is used for sensing a pressure signal, and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A can be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic apparatus 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, the touch operations that are applied to the same touch position but have different touch operation intensities may correspond to different operation instructions. For example: and when the touch operation with the touch operation intensity smaller than the first pressure threshold value acts on the short message application icon, executing an instruction for viewing the short message. And when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the short message application icon, executing an instruction of newly building the short message.
The gyroscope sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyroscope sensor 180B. The gyroscope sensor 180B may be used for anti-shake photographing. Illustratively, when the shutter is pressed, the gyroscope sensor 180B detects the shake angle of the electronic device 100, calculates, according to the shake angle, the distance that the lens module needs to compensate, and allows the lens to counteract the shake of the electronic device 100 through reverse movement, thereby achieving anti-shake. The gyroscope sensor 180B may also be used for navigation and motion-sensing gaming scenarios.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude from barometric pressure values measured by barometric pressure sensor 180C to assist in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip according to the magnetic sensor 180D. Features such as automatic unlocking upon flip opening can then be set according to the detected open or closed state of the holster or of the flip.

The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The acceleration sensor 180E may also be used to recognize the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and other applications.

The distance sensor 180F is used to measure distance. The electronic device 100 may measure distance by infrared or laser. In some embodiments, in a photographing scenario, the electronic device 100 may use the distance sensor 180F to measure distance for fast focusing.

The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light outward through the light emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, the electronic device 100 can determine that there is an object nearby; when insufficient reflected light is detected, the electronic device 100 may determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense ambient light brightness. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L can also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect fingerprints. The electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, application-lock access, fingerprint-based photographing, fingerprint-based call answering, and so on.

The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 100 heats the battery 142 when the temperature is below another threshold, to avoid an abnormal shutdown of the electronic device 100 caused by low temperature. In other embodiments, when the temperature is below still another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
The touch sensor 180K is also referred to as a "touch panel." The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touchscreen, also called a "touch screen." The touch sensor 180K is used to detect a touch operation applied on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of touch event. Visual output associated with the touch operation may be provided via the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on a surface of the electronic device 100 at a position different from that of the display screen 194.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.

The motor 191 may generate a vibration prompt. The motor 191 may be used for incoming-call vibration prompts as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing) may correspond to different vibration feedback effects, and touch operations applied to different areas of the display screen 194 may also correspond to different vibration feedback effects. Different application scenarios (e.g., time reminders, receiving messages, alarm clocks, games) may likewise correspond to different vibration feedback effects. The touch vibration feedback effects may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be brought into and out of contact with the electronic apparatus 100 by being inserted into the SIM card interface 195 or being pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, etc. The same SIM card interface 195 can be inserted with multiple cards at the same time. The types of the plurality of cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as communication and data communication.
Fig. 13 is a block diagram of a software configuration of the electronic device 100 according to the embodiment of the present application.
The system framework 1300 for implementing image frame prediction provided by the embodiments of the present application includes a software architecture and hardware devices. The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the system is divided into four layers: from top to bottom, an application layer, an application framework layer, a system library, and a kernel layer.
The application layer may include a series of application packages.
As shown in fig. 13, the application layer may include a target application 1301. The application layer may also include applications (which may also be referred to as apps) such as camera, gallery, calendar, phone, map, and navigation (none of which are shown in fig. 13). The target application 1301 may be a game application.

The application framework layer provides an application programming interface (API) and a programming framework for the application programs of the application layer. The application framework layer includes a number of predefined functions. In an embodiment of the present application, the application framework layer may include an application engine 1310. The application engine 1310 may include a rendering system (Rendering System) 1311. When the electronic device 100 runs the target application 1301, the rendering system 1311 in the application engine 1310 corresponding to the target application 1301 can obtain the drawing parameters of the target application 1301. The rendering system 1311 may also call interfaces in the three-dimensional graphics processing library 1330 according to the drawing parameters to render the image frames of the target application 1301. The application engine 1310 may be a game engine corresponding to a game application. The three-dimensional graphics processing library 1330 may be Vulkan, OpenGL, or OpenGL ES.

The system library may include a plurality of functional modules, for example: a surface manager (not shown in fig. 13), a media library (not shown in fig. 13), a platform interface 1320, a three-dimensional graphics processing library 1330 (e.g., OpenGL ES), and a two-dimensional graphics engine (e.g., SGL) (not shown in fig. 13).
The surface manager is used to manage the display subsystem and provides a fusion of two-Dimensional (2-Dimensional, 2D) and three-Dimensional (3-Dimensional, 3D) layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The platform interface 1320 may be used to receive an API for configuring the cache transmitted by the three-dimensional graphics processing library 1330. In response to the API for configuring the cache, the platform interface 1320 may drive the graphics memory through a driver in the driver layer. In turn, the platform interface 1320 may configure memory space in the graphics memory for use by the target application. The platform interface 1320 in the embodiments of the present application may be EGL. EGL is the interface between Khronos rendering APIs (e.g., OpenGL ES or OpenVG) and the underlying native platform windowing system. EGL handles graphics context management, surface/buffer binding, and rendering synchronization, and enables high-performance, accelerated, mixed-mode 2D and 3D rendering using other Khronos APIs.
The three-dimensional graphics processing library is used to implement 3D graphics drawing, image rendering, synthesis, layer processing, and the like. The three-dimensional graphics processing library 1330 may be OpenGL ES. OpenGL ES is an application programming interface/function library that is a subset of the OpenGL three-dimensional graphics API. OpenGL ES includes various functions/application programming interfaces, such as the glBindFrameBuffer interface 1333' and the glDrawArrays interface (not shown). The electronic device 100 may call OpenGL ES to draw image frames.
The HOOK module (HOOK System) 1331 may obtain the parameters for calling interfaces such as the glBindFrameBuffer interface 1333' and the glDrawArrays interface in the three-dimensional graphics processing library 1330 by hooking some interfaces in the three-dimensional graphics processing library 1330. For example, the HOOK module (HOOK System) 1331 hooks the glBindFrameBuffer interface 1333' in the three-dimensional graphics processing library 1330 through the glBindFrameBuffer interface 1333, and may thereby obtain the parameters for calling the glBindFrameBuffer interface 1333' in the three-dimensional graphics processing library 1330.
In the embodiment of the present application, when the target application 1301 performs drawing, the rendering system 1311 in the application engine 1310 may call interfaces such as the eglSwapBuffers interface 1332 and the glBindFrameBuffer interface 1333 in the hook module 1331. The HOOK module (HOOK System) 1331 then hooks some interfaces in the three-dimensional graphics processing library 1330, so as to obtain the parameters for calling interfaces such as the glBindFrameBuffer interface 1333' and the glDrawArrays interface in the three-dimensional graphics processing library 1330, to insert predicted frames into the target application 1301, and to calculate motion vectors according to the drawn frames of the target application 1301 and obtain the predicted frames.
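Conceptually, hooking an interface such as glBindFrameBuffer can be understood as replacing the entry point with a wrapper that records the call parameters and then forwards the call to the real library function. The following is a minimal C++ sketch; the names realGlBindFramebuffer, hookedGlBindFramebuffer, and recordFramebufferBinding are illustrative assumptions, not the actual implementation of the hook module 1331:

    #include <GLES3/gl3.h>

    // Hypothetical recorder that stores the call parameters for later
    // motion-vector calculation and predicted-frame insertion.
    void recordFramebufferBinding(GLenum target, GLuint framebuffer);

    // Pointer to the real glBindFramebuffer entry point in the
    // three-dimensional graphics processing library (resolved at hook time).
    static void (*realGlBindFramebuffer)(GLenum target, GLuint framebuffer) = nullptr;

    // Wrapper installed by the hook module: record the parameters, then
    // forward the call so that normal rendering is unaffected.
    void hookedGlBindFramebuffer(GLenum target, GLuint framebuffer) {
        recordFramebufferBinding(target, framebuffer);
        realGlBindFramebuffer(target, framebuffer);
    }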
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer may include drivers 1340, which may include various drivers for the hardware devices. For example, the drivers 1340 may include a graphics memory driver 1341, a GPU driver 1342, and the like.
The hardware devices may include: a display device Display 1350, a graphics processor GPU 1351, a cache 1352, and an application processor 1353. The display device 1350 may be the display screen 194 shown in fig. 12. The graphics processor GPU 1351 and the application processor 1353 may be integrated in the processor 110 shown in fig. 12. The cache 1352 may be the internal memory 121 shown in fig. 12. For the display device 1350, refer to the description of the display screen 194 above; for the graphics processor 1351, refer to the description of the GPU above; for the application processor 1353, refer to the description of fig. 12 above; for the cache 1352, refer to the description of the internal memory 121 above. Details are not repeated here.
The following exemplarily describes the workflow of the software and hardware of the electronic device 100 with reference to a photographing scenario.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into an original input event (including touch coordinates, a time stamp of the touch operation, and other information). The raw input events are stored at the kernel layer. And the application program framework layer acquires the original input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation as a touch click operation, and taking a control corresponding to the click operation as a control of a camera application icon as an example, the camera application calls an interface of the application framework layer, starts the camera application, further starts the camera drive by calling the kernel layer, and captures a still image or a video through the camera 193.
In the embodiment of the present application, the color attachment C is a drawing result of a moving drawing object in the drawing instruction of the N+2 th frame. For the method of calculating the motion vector of the color attachment C, refer to the method described in fig. 9A to 9C above. Alternatively, the motion vector of the color attachment C may also be calculated with reference to the process of calculating the motion vector of a moving object in fig. 17 to 22B.
In one possible implementation, the electronic device performing image frame prediction may specifically include the following. First, the electronic device divides the first and second drawing frames into R × S blocks, respectively. The electronic device then determines, in the second drawing frame, a first matching block that matches a first candidate block in the first drawing frame. Next, the electronic device calculates a motion vector E from the first candidate block to the first matching block. The electronic device determines, based on the motion vector E, a motion vector F of the first matching block from the second drawing frame to the first predicted frame. The electronic device draws the first matching block in the first predicted frame based on the motion vector F and the position of the first matching block in the second drawing frame. Following the above steps, the electronic device may determine, for each candidate block in the first drawing frame, the matching block in the second drawing frame, and may thereby draw the first predicted frame. Here, the value of R × S is determined by the tile size and the display resolution of the GPU of the electronic device (e.g., a mobile electronic device such as a mobile phone or a tablet computer). The GPU may divide the display screen into tiles for rendering; tiles are typically 16 × 16 or 32 × 32 in size. If the display resolution of the electronic device is U × V (e.g., 1580 × 720), then R × S is equal to (U/16) × (V/16) or (U/32) × (V/32). For example, the W-th drawing frame is divided into (1580/16) × (720/16) small blocks. Further, for the process of the electronic device determining, in the second drawing frame, the first matching block that matches the first candidate block in the first drawing frame, refer to the description of the W-th frame and the W+2 th frame in fig. 9A to 9C, which is not repeated here.
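As a minimal C++ sketch of the quantities involved (the tile size and resolution follow the description above; the assumption F = E/2 follows the uniform-motion assumption used later in this document, and all names are illustrative):

    struct MotionVector { float dx; float dy; };

    // Number of blocks R x S for a U x V display divided into square tiles.
    void blockGrid(int u, int v, int tileSize, int &r, int &s) {
        r = u / tileSize;  // e.g., 1580 / 16
        s = v / tileSize;  // e.g., 720 / 16
    }

    // Motion vector F from the second drawing frame to the first predicted
    // frame; E spans two frames, so under uniform motion F is half of E.
    MotionVector predictF(const MotionVector &e) {
        return { e.dx * 0.5f, e.dy * 0.5f };
    }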
As shown in fig. 14, the electronic device (i.e., the tablet computer 10) displays a user interface 1400. At time T0, the W-th image frame is displayed in the user interface 1400. The W-th image frame is a drawing frame. The timing diagram 1403 in fig. 14 shows the image frames that the electronic device can display from time T0 to time Tn.
Fig. 15 exemplarily shows the W-th frame divided into 16 × 16 small blocks. In the embodiment of the present application, R × S is equal to 16 × 16. The object 1501 and the object 1502 are each composed of a plurality of small blocks. The electronic device needs to calculate a motion vector for each block in the object 1501 from the W-th frame to the W+2 th frame. Take the candidate block 1511 in the object 1501 as an example: the electronic device needs to determine a matching block in the W+2 th frame that matches the candidate block 1511. Fig. 16 exemplarily shows the W+2 th frame divided into 16 × 16 small blocks. The object 1601 and the object 1602 are each composed of a plurality of small blocks. The block 1611 in the object 1601 shown in fig. 16 matches the candidate block 1511. However, the electronic device needs to search to find the block 1611 that matches the candidate block 1511. The electronic device may use a diamond search algorithm to find the block 1611 matching the candidate block 1511; for the specific process of the diamond search, refer to the descriptions in fig. 9B to 9C, which are not repeated here.
In order to improve the fluency of an application's video interface displayed by an electronic device and to save power consumption of the electronic device, an embodiment of the present application provides a method for image frame prediction. The method may include: first, the electronic device determines a first moving object in a first drawing frame according to a first drawing instruction, and determines a second moving object in a second drawing frame according to a second drawing instruction. Then, the electronic device determines that the first moving object and the second moving object match according to the attributes of the first moving object and the attributes of the second moving object. Next, the electronic device calculates a first coordinate of the center point of the first moving object and a second coordinate of the center point of the second moving object. Then, the electronic device determines a motion vector E from the first moving object to the second moving object according to the displacement from the first coordinate to the second coordinate. Finally, the electronic device determines the first pixel point of a third moving object in a first predicted frame according to the motion vector E and the first pixel point of the second moving object. In this way, the electronic device may insert a predicted image frame between every two drawing frames of the application, which can increase the frame rate at which the electronic device displays the application's video interface, and thus the fluency of the displayed video interface can be improved. Moreover, the electronic device may represent the motion vector of an entire object using the motion vector of the object's geometric center point, without separately determining the motion vector of each block of the object. In this way, the computation load of the electronic device can be reduced, and power consumption can be saved.
In the embodiment of the present application, the attribute of the first moving object may be referred to as a first attribute, and the attribute of the second moving object may be referred to as a second attribute.
A method for image frame prediction according to an embodiment of the present application will be described in detail below with reference to the software and hardware structure of the above exemplary electronic device 100. The embodiment of the application is explained by taking the example that the electronic equipment predicts the W +3 th predicted frame according to the W th drawing frame and the W +2 th drawing frame. As shown in fig. 17, a method for image frame prediction provided by an embodiment of the present application may include:
S701-S709: the electronic device draws the W-th drawing frame.
S701, when the target application performs drawing, the CPU of the electronic device 100 acquires drawing parameters of the W-th drawing frame.
The target application is an application with animation effects in its user interface, such as a game application. The embodiments of the present application are described below by taking a game application as the target application. When a game application installed in the electronic device runs, the game application may call drawing instructions to draw. The CPU of the electronic device 100 may acquire the drawing parameters of the W-th drawing frame of the application program through an interface in the three-dimensional graphics processing library. The drawing parameters of the W-th drawing frame are used for drawing and rendering the W-th drawing frame. The drawing parameters of the W-th drawing frame may include attributes of each object in the W-th drawing frame, such as the vertex cache ID, number of vertices, texture ID, shader ID, transformation matrix, and stencil cache of each object.
Specifically, the CPU may acquire the drawing parameter of the W-th drawing frame of the application program through the GL HOOK interface. As shown in fig. 18, the GL HOOK interface may obtain the rendering parameters for each frame of image frame of the application from the game application.
Further, in some embodiments, the CPU may obtain the texture ID from OpenGL through the hooked glBindTexture interface and store it in the object structure. The CPU may acquire the vertex buffer ID from OpenGL through the hooked glBindBuffer interface and store it in the object structure. The CPU may acquire data for calculating the number of vertices from OpenGL through the hooked glBufferData interface, and store the calculated number of vertices in the object structure. The CPU may acquire the transformation matrix of an object in the world coordinate system from OpenGL through the hooked glBufferSubData interface and store it in a global matrix variable.
Further, an object structure for storing object information may be defined by the following code:
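A minimal C-style sketch of such an object structure (field names are illustrative assumptions; the fields follow the enumeration below):

    // Illustrative object structure; field names are assumptions.
    struct ObjectInfo {
        unsigned int vertexBufferID;  // vertex cache ID of the object
        int          vertexNum;       // number of vertices of the object
        unsigned int textureID;       // texture ID of the object
        float xMin, xMax;             // bounding box X-axis minimum / maximum
        float yMin, yMax;             // bounding box Y-axis minimum / maximum
        float centerX, centerY;       // bounding box center X / Y coordinates
    };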
It is understood that an object structure may be used to store the vertexBufferID, vertexNum, textureID, the bounding box X-axis coordinate minimum, the bounding box X-axis coordinate maximum, the bounding box Y-axis coordinate minimum, the bounding box Y-axis coordinate maximum, the bounding box center X-axis coordinate, and the bounding box center Y-axis coordinate. Here, the minimum value of the bounding box X-axis coordinate of an object is the minimum X-axis coordinate among all vertices of the object, and the maximum value of the bounding box X-axis coordinate is the maximum X-axis coordinate among all vertices of the object. Likewise, the minimum value of the bounding box Y-axis coordinate of the object is the minimum Y-axis coordinate among all vertices of the object, and the maximum value of the bounding box Y-axis coordinate is the maximum Y-axis coordinate among all vertices of the object. The CPU can determine the X-axis coordinate and the Y-axis coordinate of the bounding box center according to the bounding box X-axis minimum, X-axis maximum, Y-axis minimum, and Y-axis maximum. It is to be understood that the object structure in the embodiments of the present application is not limited to the above-described structure; this is not limited in the embodiments of the present application.
It is to be understood that, in the embodiment of the present application, the attributes of an object may or may not include a transformation matrix. If the attributes of the object include a transformation matrix, the object is a moving object in the embodiment of the present application. As shown in fig. 19A, take the object (i.e., the sphere) corresponding to the element 1401 in fig. 14 as an example: the sphere in fig. 19A is a moving object, and its attributes include a transformation matrix. If the attributes of the object do not include a transformation matrix, the object is a static object in the embodiment of the present application. As shown in fig. 19B, take the object (i.e., the rectangular parallelepiped) corresponding to the element 1402 in fig. 14 as an example: the rectangular parallelepiped in diagrams (a) and (b) of fig. 19B does not contain a transformation matrix, and is therefore a static object. When the CPU determines that an object is a moving object, the moving object may be set with a first identifier; when the CPU determines that an object is a static object, the static object may be set with a second identifier. For example, the CPU may set a first numerical value at the pixel bits of the stencil buffer of the moving object, and set a second numerical value at the pixel bits of the stencil buffer of the static object. In some possible examples, the first value may be 1 and the second value may be 0.
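In OpenGL ES terms, writing the first and second values into the stencil buffer can be sketched as follows (the exact call sequence is an assumption; the values 1 and 0 follow the example above):

    glEnable(GL_STENCIL_TEST);
    glStencilMask(0xFF);                        // allow writing to all stencil bits
    // Before drawing a moving object: write the first value (1).
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);  // replace stencil with 1 where drawn
    // ... draw the moving object ...
    // Before drawing a static object: write the second value (0).
    glStencilFunc(GL_ALWAYS, 0, 0xFF);
    // ... draw the static object ...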
In an embodiment of the present application, the electronic device 100 may be a smart phone, a tablet device, a smart watch, a computer, and the like, which is not limited herein.
S702, the CPU of the electronic device 100 sends an instruction a for applying for a memory space to the GPU.
When or before the CPU acquires the drawing parameters of the image frame, the CPU of the electronic device 100 may send an instruction A for applying for a memory space to the GPU. That is, when the target application program starts to call drawing instructions for drawing, the CPU applies to the GPU for a memory space A, where the memory space A is used for storing the stencil buffer of the W-th drawing frame, that is, stencil buffer 1. A stencil buffer (template buffer) is another data buffer, in addition to the color buffer, pixel buffer, and depth buffer, commonly used in computer graphics hardware such as OpenGL three-dimensional graphics. The stencil buffer is a pixel-by-pixel, integer-value buffer, typically assigning each pixel a value one byte in length.
Here, the instruction a may be an instruction glGenTextures (1, & texture) and an instruction glBindTexture (target, texture). The CPU may send an instruction glGenTextures (1, & texture) and an instruction glBindTexture (target, texture) to the GPU for applying for memory space A from the GPU. It should be understood that the embodiment of the present application does not limit the specific form of the instruction a.
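A minimal sketch of the two calls named above (the texture target and the variable name are assumptions):

    GLuint stencilTexture = 0;
    glGenTextures(1, &stencilTexture);             // generate one texture object name
    glBindTexture(GL_TEXTURE_2D, stencilTexture);  // bind it so that storage for the
                                                   // stencil buffer can be allocated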
S703, the GPU of the electronic device 100 allocates a memory space a for the stencil buffer of the W-th rendering frame.
In response to the instruction A sent by the CPU, the GPU allocates the memory space A for the stencil buffer of the W-th drawing frame. For example, as shown in fig. 14, during the time period from T0 to Tn shown in fig. 14, the drawing frames of the application are the W-th, W+2 th, W+4 th, W+6 th, …, W+n th frames. The memory space A stores the stencil buffer of the W-th drawing frame. After the GPU sends the W-th drawing frame to the display screen for display, the memory space A is used for storing the template buffer of the W+4 th drawing frame. After the GPU sends the W+2 th drawing frame to the display screen for display, the memory space A is used for storing the template buffer of the W+6 th drawing frame. After the GPU sends the W+4 th drawing frame to the display screen for display, the memory space A is used for storing the template buffer of the W+8 th drawing frame. After the GPU sends the W+6 th drawing frame to the display screen for display, the memory space A is used for storing the template buffer of the W+10 th drawing frame. After all the drawing frames of the application program have been drawn, rendered, and displayed on the display screen, the GPU may release the memory space A.
It can be understood that, after the GPU sends the W-th drawing frame to the display screen for display, the memory space a is used for storing the stencil buffer of the W + 4-th drawing frame, and specifically includes: and when the GPU sends the W-th drawing frame to the display screen for displaying, the GPU clears the template buffer of the W-th drawing frame in the memory space A and stores the template buffer of the W + 4-th drawing frame.
Further, the memory space A may include a first template cache and a second template cache. The first template cache is used by the GPU to draw the template image corresponding to each object of the W-th drawing frame; that is, the GPU may draw the objects in the first template cache. The second template cache is used for backing up the objects drawn in the first template cache. The template image corresponding to an object refers to an image drawn by the GPU according to the stencil buffer of the object; this image is not rendered using the shader corresponding to the object's shader ID. If the pixel bits in the stencil buffer of the object are the first numerical value (for example, 1), that is, the object is a moving object, the GPU may draw the template image of the moving object as a template image with a first color. If the pixel bits in the stencil buffer of the object are the second numerical value (for example, 0), the GPU may draw the template image of the static object as a template image with a second color. The first color and the second color are different. Optionally, the first color is black and the second color is white.
S704, the CPU of the electronic apparatus 100 determines the moving object included in the W-th drawing frame and the attribute of the moving object from the drawing parameters.
The CPU of the electronic device 100 determines a plurality of objects included in the W-th drawing frame and determines the attributes of the plurality of objects according to the drawing parameters of the W-th drawing frame. For example, if the drawing parameters of the W-th drawing frame include the parameters of two objects (object A and object B), the CPU can determine the object A and the object B in the W-th drawing frame.
S705, the CPU of the electronic apparatus 100 creates a first index table according to the moving object and the attribute of the moving object included in the W-th drawing frame.
The CPU may create a first index table based on the plurality of objects included in the W-th drawing frame and the attributes of the plurality of objects. The first index table stores only the index information of the moving objects in the W-th drawing frame (for example, the ID of each moving object and the attributes corresponding to that moving object). Suppose there are only two moving objects in the W-th drawing frame: moving object 1 and moving object 2. The first index table of the W-th drawing frame may then be as shown in table 1. The moving object 1 corresponds to the object ID 01 in table 1, i.e., the ID of the moving object 1 is 01. The object ID 01 may include attributes such as vertex cache ID01, number of vertices, texture ID01, shader ID01, transformation matrix M01, and stencil cache. The object ID 01 is a moving object, and therefore the attributes of the object ID 01 include the transformation matrix M01. The moving object 2 corresponds to the object ID 11 in table 1, i.e., the ID of the moving object 2 is 11. The object ID 11 may include attributes such as vertex cache ID11, number of vertices, texture ID11, shader ID11, transformation matrix M11, and stencil cache. The object ID 11 is a moving object, and therefore the attributes of the object ID 11 include the transformation matrix M11.
TABLE 1

  Object ID    Attributes
  01           vertex cache ID01, number of vertices, texture ID01, shader ID01, transformation matrix M01, stencil cache
  11           vertex cache ID11, number of vertices, texture ID11, shader ID11, transformation matrix M11, stencil cache
S706, the CPU of the electronic device 100 sends an instruction B for memory mapping to the GPU.
The CPU of electronic device 100 may send instruction B to the GPU for mapping the first index table. Here, instruction B may be glMapBufferRange (). The CPU may cause the GPU to map the first index table by sending an instruction glMapBufferRange () to the GPU. The embodiment of the present application does not limit the specific form of the instruction B.
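As a hedged sketch of how glMapBufferRange() is typically called (the buffer target, offset, length, and access flag here are assumptions; the embodiment does not specify its exact parameters):

    // Hypothetical size of the index table in bytes.
    GLsizeiptr indexTableSize = 256;
    // Map the buffer currently bound to GL_UNIFORM_BUFFER for reading.
    void *mappedTable = glMapBufferRange(GL_UNIFORM_BUFFER,  // assumed buffer target
                                         0,                  // offset from the start
                                         indexTableSize,     // length to map
                                         GL_MAP_READ_BIT);   // read-only access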
S707, responding to the instruction B, the GPU of the electronic device 100 maps the first index table in the CPU.
In response to instruction B, the GPU of electronic device 100 may map a first index table in the CPU. Here, the CPU may store the first index table on hardware, and the GPU maps the first index table in the CPU, which means that the GPU can access and read the first index table on the hardware.
S708, the CPU of the electronic device 100 sends a first rendering instruction for rendering the W-th rendering frame to the GPU.
The CPU may send a draw instruction to the GPU for drawing a W-th draw frame. The drawing instruction of the W-th drawing frame may be glDrawArrays (). That is, the CPU may send an instruction glDrawArrays () to the GPU, in response to which the GPU starts drawing the W-th drawing frame. It is to be understood that a plurality of instructions for drawing the object in the W-th drawing frame may be included in the drawing instruction of the W-th drawing frame. The GPU may render the plurality of objects in the W-th rendering frame according to a plurality of instructions for rendering the objects.
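A minimal sketch of a per-object draw call as issued for each object in the W-th drawing frame (the primitive type and variable names are assumptions):

    glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID);  // bind the object's vertex cache
    glDrawArrays(GL_TRIANGLES,                      // assumed primitive type
                 0,                                 // index of the first vertex
                 vertexNum);                        // number of vertices of the object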
S709, the GPU of the electronic device 100 renders the object in the W-th rendering frame and renders the template image of the moving object in the memory space a.
The GPU may draw all objects in the Wth draw frame and render for display. The process of drawing a complete drawing frame by the GPU may refer to the process of drawing by the GPU in the prior art, and is not described herein again.
The GPU may also draw a template image of the moving object in the memory space a, which is used for motion vector calculation in the subsequent steps. The specific process of the motion vector calculation is described in S719-S721 and will not be described here.
The process of the GPU drawing the moving object template image may refer to the template image drawing process illustrated in fig. 20A to 20C. As shown in fig. 20A (b), the GPU may render a template image of the object 1 of the W-th rendering frame in the first template cache. Object 1 is a moving object, and the pixel bit of the template image of object 1 drawn by the GPU is 1 (which may be illustrated in black). As shown in fig. 20A (a), the second stencil buffer initial state is empty. That is, in the initial state, no content is stored in the second template cache. And the GPU stores the objects drawn in the first template cache into the second template cache after drawing one object in the first template cache. As shown in fig. 20B (c), the GPU saves the object drawn in the first stencil buffer to the second stencil buffer. As shown in (d) in fig. 20B, the GPU may render a template image of the object 2 of the W-th rendering frame in the first template buffer. Object 2 is a static object, so the pixel bit of the template image of object 2 drawn by the GPU is 0 (which may be illustrated in white). As shown in fig. 20C (e), the GPU may again save the template image of the object 2 drawn in the first template cache into the second template cache. Here, the W-th drawing frame may be the W-th drawing frame shown in fig. 15 above. Object 1 may be a spherical object corresponding to element 1501 in fig. 15. Object 2 may be a cuboid object corresponding to element 1502.
Alternatively, the GPU may draw the template image of the moving object of the W-th drawing frame only in the first template buffer.
S710-S718: the electronic device draws the W +2 th drawing frame.
S710, the CPU of the electronic device 100 acquires the drawing parameter of the W +2 th drawing frame.
The CPU may acquire the drawing parameter of the W +2 th drawing frame of the application program through the GL HOOK interface. And the drawing parameters of the W +2 th drawing frame are used for drawing and rendering the W +2 th drawing frame. The rendering parameters of the W +2 th rendering frame may include object attributes of each object in the W +2 th rendering frame, such as vertex cache ID, number of vertices, texture ID, shader ID, transformation matrix, stencil cache, and the like attributes of each object. Here, the description in step S701 may be specifically referred to, and is not repeated here.
S711, the CPU of the electronic device 100 sends an instruction C for applying for the memory space to the GPU.
The CPU of the electronic device 100 may send an instruction C to the GPU to apply for a memory space. The instruction C may be used to apply for a memory space B. The memory space B is used for storing the stencil buffer of the W+2 th drawing frame, that is, stencil buffer 2. Here, the description of the instruction C may refer to the description of the instruction A, and is not repeated here. Step S711 may refer to the description of step S702, and is not described here again.
S712, the GPU of the electronic device 100 allocates the memory space B for the stencil buffer of the W +2 th drawing frame.
In response to the instruction C sent by the CPU, the GPU allocates the memory space B for the stencil buffer of the W+2 th drawing frame. For example, as shown in fig. 14, during the time period from T0 to Tn shown in fig. 14, the drawing frames of the application are the W-th, W+2 th, W+4 th, W+6 th, …, W+n th frames. The memory space B stores the stencil buffer of the W+2 th drawing frame. After the GPU sends the W+2 th drawing frame to the display screen for display, the memory space B is used for storing the template buffer of the W+6 th drawing frame.
After the GPU sends the W+2 th drawing frame to the display screen for display, the memory space B is used for storing the template buffer of the W+6 th drawing frame, which specifically includes: when the GPU sends the W+2 th drawing frame to the display screen for display, the GPU clears the template buffer of the W+2 th drawing frame in the memory space B and stores the template buffer of the W+6 th drawing frame.
Further, the memory space B may include a third template cache and a fourth template cache. The third template cache is used by the GPU to draw the template image corresponding to each object of the W+2 th drawing frame; that is, the GPU may draw the objects in the third template cache. The fourth template cache is used for backing up the objects drawn in the third template cache.
Here, step S712 refers to the description in step S703, and is not described herein again.
Alternatively, the GPU may draw only the template image of the moving object of the W +2 th drawing frame in the third template buffer.
S713, the CPU of the electronic device 100 determines the moving object included in the W +2 th rendering frame and the attribute of the moving object from the rendering parameters.
The CPU of the electronic device may determine a plurality of objects included in the W +2 th drawing frame and determine attributes of the plurality of objects according to the drawing parameters of the W +2 th drawing frame. Wherein the CPU may determine the moving object and the attribute of the moving object in the W +2 th drawing frame. Step 713 may refer to the description in step 704, which is not described herein.
S714, the CPU of the electronic device 100 creates a second index table according to the moving object and the attribute of the moving object included in the W +2 th drawing frame.
The CPU may establish a second index table based on the plurality of objects included in the W+2 th drawing frame and the attributes of the plurality of objects. For example, suppose the W+2 th drawing frame includes only two moving objects: moving object 1 and moving object 2. The second index table of the W+2 th drawing frame may then be as shown in table 2. The moving object 1 corresponds to the object ID 02 in table 2, i.e., the ID of the moving object 1 is 02. The object ID 02 may include attributes such as vertex cache ID02, number of vertices, texture ID02, shader ID02, transformation matrix M02, and stencil cache. The object ID 02 is a moving object, and therefore the attributes of the object ID 02 include the transformation matrix M02. The moving object 2 corresponds to the object ID 12 in table 2, i.e., the ID of the moving object 2 is 12. The object ID 12 may include attributes such as vertex cache ID12, number of vertices, texture ID12, shader ID12, transformation matrix M12, and stencil cache. The object ID 12 is a moving object, and therefore the attributes of the object ID 12 include the transformation matrix M12.
TABLE 2

  Object ID    Attributes
  02           vertex cache ID02, number of vertices, texture ID02, shader ID02, transformation matrix M02, stencil cache
  12           vertex cache ID12, number of vertices, texture ID12, shader ID12, transformation matrix M12, stencil cache
S715, the CPU of the electronic device 100 sends an instruction D for memory mapping to the GPU.
The CPU may also send an instruction D to the GPU. For the instruction D, refer to the description of the instruction B above; details are not repeated here. The instruction D is used to instruct the GPU to map the second index table in the CPU. Here, step S715 may refer to the description of step S706, and is not repeated here.
S716, responding to the instruction D, the GPU of the electronic device 100 maps a second index table in the CPU.
In response to instruction D, the GPU may map a second index table in the CPU. I.e. the GPU may access and read the second index table stored by the CPU in some hardware. The second index table may be the object index table shown in table 2 above. Step S716 can refer to step S707, and is not described herein.
S717, the CPU of the electronic apparatus 100 transmits a drawing instruction for drawing the W +2 th drawing frame to the GPU.
The CPU may send a draw instruction to the GPU for drawing the W +2 th draw frame. The drawing instruction of the W +2 th drawing frame may refer to the description of the drawing instruction of the W th drawing frame. In response to the draw instruction for the W +2 th draw frame, the GPU begins drawing the W +2 th draw frame. It is to be understood that the drawing instruction of the W +2 th drawing frame may include a plurality of instructions for drawing the object in the W +2 th drawing frame. That is, when there are a plurality of objects in the W +2 th drawing frame, the GPU may draw the plurality of objects in the W +2 th drawing frame according to a plurality of instructions for drawing the objects.
S718, the GPU of the electronic device 100 renders the object in the W +2 th rendering frame and renders the template image of the moving object in the memory space B.
The GPU may draw all objects in the W +2 th draw frame and render for display. The process of drawing a complete drawing frame by the GPU may refer to the process of drawing by the GPU in the prior art, and is not described herein again.
The GPU may also draw a template image of the moving object in the memory space B, which is used for motion vector calculation in the subsequent steps. It is to be understood that the moving object in step S718 refers to a moving object in the W +2 th drawing frame. The specific process of calculating the motion vector is described in S719-S721, and will not be described herein.
The process of the GPU drawing the template image of the moving object may refer to the template image drawing process illustrated in fig. 21A to 21C. As shown in (b) of fig. 21A, the GPU may draw the template image of the object 1 of the W+2 th drawing frame in the third template cache. Object 1 is a moving object, so the pixel bits of the template image of object 1 drawn by the GPU are 1 (which may be illustrated in black). As shown in (a) of fig. 21A, the initial state of the fourth template cache is empty; that is, in the initial state, no content is stored in the fourth template cache. Each time the GPU draws an object in the third template cache, it saves the object drawn in the third template cache into the fourth template cache. As shown in (c) of fig. 21B, the GPU saves the object drawn in the third template cache into the fourth template cache. As shown in (d) of fig. 21B, the GPU may draw the template image of the object 2 of the W+2 th drawing frame in the third template cache. Object 2 is a static object, so the pixel bits of the template image of object 2 drawn by the GPU are 0 (which may be illustrated in white). As shown in (e) of fig. 21C, the GPU may again save the template image of the object 2 drawn in the third template cache into the fourth template cache. Here, the W+2 th drawing frame may be the W+2 th frame shown in fig. 16 above. Object 1 may be the sphere object corresponding to the element 1601 in fig. 16. Object 2 may be the rectangular parallelepiped object corresponding to the element 1602.
S719-S724: predict the W +3 th predicted frame.
S719, the CPU in the electronic device 100 sends an instruction to the GPU to instruct the GPU to calculate a motion vector.
The CPU in electronic device 100 may send instructions to the GPU instructing the GPU to calculate motion vectors. The instruction is to instruct a shader in the GPU to compute a motion vector. The instruction may be a dispatch instruction. The embodiment of the present application does not limit the specific form of the instruction for calculating the motion vector.
It will be appreciated that during the time period that the electronic device is displaying the video interface of the application, the CPU may loop to send instructions to the GPU for calculating the motion vector. After the GPU has drawn a frame of a drawing frame, the CPU may send an instruction to the GPU to instruct the GPU to calculate a motion vector once. That is, after step S709, the CPU may execute step S719.
S720-S722: a motion vector is calculated.
S720, the GPU in the electronic device 100 determines a first moving object in the first index table, and the first moving object is matched with a second moving object in the second index table.
The GPU may retrieve the second moving object from the second index table, and then find the first moving object matching the second moving object from the first index table.
If only one moving object exists in the second index table corresponding to the W+2 th drawing frame, that moving object is the second moving object. The GPU can distinguish moving objects and static objects according to the identifier carried by each object in the W+2 th drawing frame: an object carrying the first identifier is a moving object, and an object carrying the second identifier is a static object. Only the moving objects in the W+2 th drawing frame are saved in the second index table. The second moving object may be any one of the moving objects in the second index table. The W+2 th drawing frame may include a plurality of objects; if there is only one moving object among the plurality of objects, that moving object is the second moving object. In one possible implementation, the GPU may take the second moving object from the second index table of the W+2 th drawing frame, and then find, in the first index table of the W-th drawing frame, the first moving object that matches the second moving object.
Here, the first moving object and the second moving object being matched means that M attributes of the first moving object are the same as M attributes of the second moving object, where M is a positive integer (for example, M may be 1, 2, 3, or another integer; the value of M is not limited here). For example, the number of vertices, texture ID, shader ID, transformation matrix, stencil cache, and other attributes of the first moving object are the same as those of the second moving object. It will be appreciated that the first moving object and the second moving object may be the same object in the image frame sequence of the target application; however, the coordinates of the pixel points of the first moving object in the W-th drawing frame are different from the coordinates of the pixel points of the second moving object in the W+2 th drawing frame. In one possible implementation, the GPU may determine that the first moving object and the second moving object match according to the following code.
The code judges whether the indexes of the two objects are the same, that is, whether the two objects represent the same object.
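A minimal C++ sketch of such a check, reusing the illustrative ObjectInfo structure sketched earlier (the particular M attributes compared here are assumptions):

    // Two objects are regarded as the same object when their M attributes match;
    // here M = 2 (number of vertices and texture ID) is assumed for illustration.
    bool isSameObject(const ObjectInfo &a, const ObjectInfo &b) {
        return a.vertexNum == b.vertexNum
            && a.textureID == b.textureID;
    }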
S721, the GPU in the electronic device 100 calculates a first coordinate of the geometric center point of the first moving object in the W-th drawing frame according to the template image of the first moving object, and calculates a second coordinate of the geometric center point of the second moving object in the W + 2-th drawing frame according to the template image of the second moving object.
The GPU of the electronic device 100 may obtain coordinates of all pixel points of the first moving object according to the template image of the first moving object by using the shader. Firstly, the GPU may determine the bounding box of the first moving object according to the minimum value X1min of the X axis, the maximum value X1max of the X axis, the minimum value Y1min of the Y axis, and the maximum value Y1max of the Y axis in all pixel point coordinates of the first moving object. The coordinates of the four vertices of the bounding box of the first moving object may be (X1 min, Y1 min), (X1 max, Y1 min), (X1 min, Y1 max), (X1 max, Y1 max), respectively. The GPU may determine the first coordinates of the first moving object from the bounding box of the first moving object. The first coordinate is ((X1 min + X1 max)/2, (Y1 min + Y1 max)/2).
The GPU of the electronic device 100 may obtain the coordinates of all pixel points of the second moving object according to the template image of the second moving object by using the shader. First, the GPU may determine the bounding box of the second moving object according to the minimum value X2min of the X axis, the maximum value X2max of the X axis, the minimum value Y2min of the Y axis, and the maximum value Y2max of the Y axis among the coordinates of all pixel points of the second moving object. The coordinates of the four vertices of the bounding box of the second moving object may be (X2min, Y2min), (X2max, Y2min), (X2min, Y2max), and (X2max, Y2max), respectively. The GPU may determine the second coordinate of the second moving object from the bounding box of the second moving object. The second coordinate is ((X2min + X2max)/2, (Y2min + Y2max)/2).
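A minimal sketch of the center-point computation described in this step (type and function names are illustrative):

    struct Point { float x; float y; };

    // Geometric center of an object's bounding box, given the minimum and
    // maximum pixel coordinates taken over all pixel points of the object.
    Point boundingBoxCenter(float xMin, float xMax, float yMin, float yMax) {
        return { (xMin + xMax) / 2.0f, (yMin + yMax) / 2.0f };
    }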
S722, the GPU in the electronic device 100 determines a motion vector E of the first moving object from the W-th drawing frame to the W+2 th drawing frame according to the displacement from the first coordinate to the second coordinate, and determines a motion vector F of the second moving object from the W+2 th drawing frame to the W+3 th predicted frame according to the motion vector E.
The GPU may determine the motion vector E of the first moving object moving to the second moving object according to the displacement from the first coordinate to the second coordinate. Then, the GPU determines the motion vector F of the second moving object from the W+2 th drawing frame to the W+3 th predicted frame according to the motion vector E. The motion vector F is equal to K times the motion vector E, where K is greater than 0 and less than 1. Optionally, K = 0.5. That is, the motion of the same moving object in the image frame sequence of the application program is assumed to be uniform: the speed at which a moving object moves from the first frame to the second frame is the same as the speed at which it moves from the second frame to the third frame. Since the motion vector E spans two frames (from the W-th frame to the W+2 th frame) while the predicted frame is one frame after the W+2 th frame, K = 0.5 under this uniform-motion assumption.
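Reusing the Point and MotionVector types from the sketches above, the two vectors of this step can be written as follows (a sketch; K defaults to 0.5 per the optional value above):

    // Motion vector E: displacement from the first coordinate (center of the
    // first moving object) to the second coordinate (center of the second).
    MotionVector computeE(const Point &first, const Point &second) {
        return { second.x - first.x, second.y - first.y };
    }

    // Motion vector F = K * E, with 0 < K < 1; K = 0.5 under uniform motion.
    MotionVector computeF(const MotionVector &e, float k = 0.5f) {
        return { e.dx * k, e.dy * k };
    }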
As shown in fig. 22A, the first moving object may be the moving object 2201 illustrated in (a) of fig. 22A, and the second moving object may be the moving object 2202 illustrated in (b) of fig. 22A. As shown in (c) of fig. 22B, the motion vector E of the first moving object moving to the second moving object is the motion vector of the moving object 2201 moving to the moving object 2202. As shown in (d) of fig. 22B, the geometric center point of the moving object 2201 is the point 2203, and the geometric center point of the moving object 2202 is the point 2204. The motion vector from the moving object 2201 to the moving object 2202 is the motion vector 2205 from the point 2203 to the point 2204. The motion vector E may be the motion vector 2205 in fig. 22B, and the motion vector F may be K times the motion vector 2205.
S723, the CPU in the electronic device 100 sends an instruction E for drawing the W+3 th predicted frame to the GPU.
The CPU of the electronic device 100 may send the instruction E for drawing the W+3 th predicted frame to the GPU after the GPU calculates the motion vector F. After receiving the instruction E, the GPU draws the W+3 th predicted frame in response to the instruction E. It will be appreciated that the instruction E may include a plurality of instructions for drawing objects, i.e., the GPU may draw a plurality of objects in the W+3 th predicted frame according to the plurality of instructions for drawing objects.
S724, the GPU in the electronic device 100 draws the third moving object in the W+3 th predicted frame according to the motion vector F and the coordinates of each pixel point of the second moving object in the W+2 th drawing frame.
In response to the instruction E, the GPU draws the W+3 th predicted frame. Specifically, the GPU draws the third moving object in the W+3 th predicted frame according to the motion vector F and the coordinates of each pixel point of the second moving object in the W+2 th drawing frame, where the third moving object matches the second moving object. That is, the pixel points obtained by shifting each pixel point of the second moving object by the motion vector F are the pixel points constituting the third moving object. The third moving object has the same number of vertices, shader ID, texture ID, transformation matrix, and the like as the second moving object.
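A minimal sketch of this per-pixel shift, reusing the Point and MotionVector types from the sketches above (names are illustrative):

    #include <vector>

    // Each pixel point of the second moving object, shifted by the motion
    // vector F, yields a pixel point of the third moving object.
    std::vector<Point> shiftByF(const std::vector<Point> &secondObject,
                                const MotionVector &f) {
        std::vector<Point> thirdObject;
        thirdObject.reserve(secondObject.size());
        for (const Point &p : secondObject) {
            thirdObject.push_back({ p.x + f.dx, p.y + f.dy });
        }
        return thirdObject;
    }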
It can be understood that after the electronic device finishes drawing the W-th drawing frame, i.e., after the electronic device finishes performing step S709, the electronic device may further draw the W+1 th frame. The W+1 th frame may be a drawing frame or a predicted frame. As shown in fig. 11, if the W-th drawing frame is the 0 th frame, the W+1 th frame is the 1 st frame; in this case the W+1 th frame is the W+1 th drawing frame, and the electronic device may draw the W+1 th drawing frame according to the drawing process shown in steps S701-S709. If the W-th drawing frame is the 2 nd frame shown in fig. 11, the W+1 th frame is the 3 rd frame in fig. 11, and the W+1 th frame is the W+1 th predicted frame. The electronic device may draw the W+1 th predicted frame according to the steps shown in steps S719-S724.
It is understood that the electronic device may repeatedly perform steps S719-S722 until the motion vector of each moving object is predicted.
It can be understood that the embodiments of the present application are not limited to the electronic device predicting the W+3 th predicted frame from the W-th drawing frame and the W+2 th drawing frame. Optionally, the electronic device may also predict the W+2 th frame from the W-th frame and the W+1 th frame. Optionally, the electronic device may also predict the W+4 th frame from the W-th frame and the W+3 th frame. The embodiments of the present application do not limit this.
It can be understood that, compared with predicting the W+3 th predicted frame from the W-th and W+2 th drawing frames, predicting the W+2 th frame from the W-th and W+1 th frames requires the GPU to finish calculating the motion vectors and drawing the W+2 th frame before the W+1 th frame has finished being displayed. If the computing capability of the GPU does not meet this requirement, the video interface of the electronic device will stutter after the electronic device displays the W+1 th frame.
It is understood that the electronic device may predict predicted frames according to different strategies during the process of displaying the video interface of the target application program. For example, the electronic device may use the strategy of predicting the W+3 th frame from the W-th frame and the W+2 th frame in a first time period, and use the strategy of predicting the W+2 th frame from the W-th frame and the W+1 th frame in a second time period. For example, when the GPU is executing more tasks, the electronic device may predict the W+3 th frame from the W-th frame and the W+2 th frame; when the GPU is executing fewer tasks, the electronic device may predict the W+2 th frame from the W-th frame and the W+1 th frame. The embodiments of the present application do not limit this.
By implementing the image frame prediction method provided by the embodiments of the application, the electronic device can determine a first moving object in a first drawing frame according to a first drawing instruction, and determine a second moving object in a second drawing frame according to a second drawing instruction. Then, the electronic device determines that the first moving object and the second moving object match according to a first attribute of the first moving object and a second attribute of the second moving object. Next, the electronic device calculates a first coordinate of the center point of the first moving object and a second coordinate of the center point of the second moving object. The electronic device then determines a motion vector E from the first moving object to the second moving object according to the displacement from the first coordinate to the second coordinate. Finally, the electronic device determines a first pixel point of a third moving object in the first predicted frame according to the motion vector E and the first pixel point of the second moving object. In this way, the electronic device may insert a predicted image frame between every two drawn frames of the application program, which increases the frame rate at which the electronic device displays the video interface of the application program, and therefore the fluency of that video interface can be improved. Moreover, the electronic device may represent the motion vector of the entire object by the motion vector of the object's geometric center point, without determining a motion vector for each block of the object separately. This reduces the computation load of the electronic device and saves power.
The following describes, with reference to fig. 23, the related modules of an image frame prediction apparatus implementing the image frame prediction method provided by the present application. As shown in fig. 23, the image frame prediction apparatus may include the following modules: a hook (HOOK System) module 2300, a frame composition (Frame Buffer Object Composition) module 2310, a UI control separation module 2320, a motion vector calculation (Motion Vector Calculation) module 2330, a frame prediction (Frame Prediction) module 2340, a system debugging (Debug System) module 2350, and a frame rate control (Frame Rate Control) module 2360. Wherein:
The hook (HOOK System) module 2300 is used for calling the related interfaces of the three-dimensional graphics processing library to acquire the drawing parameters of the target application program. Reference may be made to the description of the hook module 1331 in fig. 13, which is not repeated here.
The frame composition module 2310 is configured to compose the separated UI controls and the interface content of the target application program into one frame of data.
The UI control separation module 2320 is configured to separate the UI controls from the drawing parameters obtained from the target application program. The UI controls here refer to controls, such as buttons and input boxes, in the user interface of the target application program that can receive user operations. The UI control separation module 2320 may include a UI control detection (UI Detection) module 2321 and a UI frame buffer dump (UI Frame Buffer Dump) module 2322. The UI control detection module 2321 is configured to detect UI controls. The UI frame buffer dump module 2322 is used to store the UI controls detected by the UI detection module 2321.
The motion vector calculation module 2330 is used to calculate motion vectors for image frames. The motion vector calculation module 2330 may include a diamond search algorithm (Diamond Search Algorithm) module 2331, a reprojection algorithm (Reprojection Algorithm) module 2332, and an object-based algorithm (Object Based Algorithm) module 2333. The diamond search algorithm module 2331 is configured to calculate the motion vector of each small block in the image frame by using a diamond search algorithm. The reprojection algorithm module 2332 is used to calculate motion vectors for static objects in image frames. The object-based algorithm module 2333 is used to calculate motion vectors for moving objects in image frames.
The frame prediction module 2340 may be used to predict one image frame from two drawn frames of the target application program. The predicted image frame is the predicted frame in the embodiments of the present application.
The system debugging module 2350 is used for outputting test data and dumping predicted frames. The system debugging module 2350 may include a power key debugging (Debug Power Key) module 2351, a motion vector display module 2352, a UI information (UI Information) module 2353, and a frame dump (Frame Dump) module 2354. The power key debugging module 2351 may be configured to trigger switching among the diamond search algorithm module 2331, the reprojection algorithm module 2332, the object-based algorithm module 2333, and the like in the motion vector calculation module 2330. The motion vector display module 2352 may be used to draw motion vector arrows on the frame data to visualize the motion vectors. The UI information module 2353 may be used to display information such as the frame rate. The frame dump module 2354 may be used to dump the predicted frame data into image files for debugging.
The frame rate control module 2360 is used for controlling the frame rate of the image frames of the target application program displayed by the electronic device.
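The cooperation of these modules can be pictured with a skeleton control flow. Everything below is an assumption for illustration: all function names are invented stand-ins for the modules of fig. 23, and the bodies are placeholder stubs rather than the apparatus's implementation.

```python
# Illustrative control flow only; every name and body here is an assumption.
def detect_ui_controls(params):            # UI control separation module 2320
    return [p for p in params if p.get("is_ui")]

def calculate_motion_vectors(prev, cur):   # motion vector calculation module 2330
    return (0.0, 0.0)                      # stub: a real module picks an algorithm

def predict_frame(cur, mv):                # frame prediction module 2340
    return {"content": cur, "mv": mv}      # stub: shifts content by mv

def compose_frame(frame, ui_controls):     # frame composition module 2310
    return {"frame": frame, "ui": ui_controls}

def predict_next_frame(params_n, params_n2):
    ui = detect_ui_controls(params_n2)                  # separate UI controls
    mv = calculate_motion_vectors(params_n, params_n2)  # estimate motion
    return compose_frame(predict_frame(params_n2, mv), ui)
```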
With the rapid development of multimedia technology, the demand for higher video resolution keeps increasing. However, the frame rate of many current videos is only 30 frames/second; when people watch such videos, they perceive stuttering and similar problems, and it is difficult to meet users' requirements. Generally, the higher the frame rate of a video, the better its fluency and smoothness. Therefore, in the prior art, the frame rate of a video can be increased by inserting predicted frames among the original frames of the video stream, improving the fluency of the video and the user experience.
In one possible implementation, the electronic device typically generates the predicted frame using a block matching algorithm. Specifically, after the electronic device partitions a video frame into blocks, the motion vector of each block is directly applied to all pixels of that block to obtain the predicted frame. However, when the difference between the motion vectors of adjacent blocks is large, this method may cause holes and picture blur in the predicted frame.
The embodiments of the present application provide a frame prediction method, an electronic device, and a computer-readable storage medium. In this method, the electronic device first determines the position of each vertex of a block in the predicted frame according to the predicted motion vectors, from the target reference frame to the predicted frame, of the blocks around that vertex; it then calculates the position of each pixel of the block in the predicted frame according to the correspondence between the four vertices of the block in the reference frame and in the predicted frame; finally, it obtains the whole predicted frame.
In the above frame prediction method, the vertex positions are determined first, which effectively stretches the first block according to the predicted motion vectors of the blocks around it, where the first block is a block in the reference frame; as a result, adjacent blocks in the predicted frame share overlapping vertices, so the blocks are continuous and no holes appear.
Some concepts related to embodiments of the present application are described below.
1. Video post-processing techniques
In the embodiments of the present application, video post-processing technology refers to video processing technology that improves some aspect of a captured video by processing it with a video post-processing algorithm, and includes video stabilization technology, high dynamic range (HDR) imaging technology, frame rate conversion technology, and the like. A video post-processing algorithm is used to process the captured pictures or videos.
In this embodiment, the video post-processing algorithm may be called by the image processing module provided in this embodiment to process the acquired picture or video.
2. Frame rate
The frame rate is the number of image frames per second in a video, i.e., the number of times the picture is updated per second. Due to the persistence of vision, when images are played continuously at 16 frames per second or more, the image sequence appears visually smooth and coherent. In general, normal-resolution video at 30 frames/second is considered acceptable, and video with a frame rate of 60 frames/second is considered high-definition video. Generally, the higher the frame rate of the video, the better its fluency and smoothness.
With the rapid development of intelligent terminals and multimedia technologies, people's requirements for video resolution keep rising. At present, the frame rate of many videos is only 30 frames/second, and when people watch such videos, they perceive stuttering and similar problems, giving a poor user experience. Therefore, the electronic device can interpolate a video with a low frame rate into a video with a high frame rate through video frame rate conversion technology; for example, the electronic device can increase the frame rate of a video from 30 frames/second to 60 frames/second, making the video smoother and more continuous and improving the sense of realism and interaction when people watch it.
3. Frame rate conversion technique
Frame rate conversion techniques are mainly classified into two major categories, one is a non-motion compensation algorithm, and the other is an algorithm based on motion estimation and motion compensation.
Among them, the non-motion-compensated algorithms include simple frame repetition and frame averaging algorithms. Specifically, the frame repetition algorithm completes frame interpolation by directly copying frames in the input video sequence; this method is simple and efficient, but the quality of the interpolated frames is poor for objects in motion, with very obvious blurring and overlapping problems. The frame averaging algorithm synthesizes an intermediate frame by weighted averaging of adjacent frames in the input video sequence according to a certain rule; this method has low spatio-temporal complexity, but for scenes with motion, moving objects in the interpolated frame image show an obvious aliasing problem.
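The two non-motion-compensated strategies can be stated in a few lines. Below is a minimal sketch, assuming frames are equal-length lists of grayscale pixel values; it only illustrates why frame averaging aliases moving content.

```python
# Minimal sketch of frame repetition and frame averaging (assumed representation).
def frame_repetition(prev_frame):
    return list(prev_frame)  # the interpolated frame is a copy of the input frame

def frame_averaging(prev_frame, next_frame):
    # Equal-weight average; static regions are preserved, moving objects ghost.
    return [(a + b) / 2.0 for a, b in zip(prev_frame, next_frame)]
```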
When processing video that contains motion scenes, to synthesize high-quality interpolated frames, the electronic device may employ an algorithm based on motion estimation and motion compensation to generate the interpolated frames. The main idea of such algorithms is to capture the motion information of the objects in the video frame sequence and then fuse this motion information with the input adjacent frames to synthesize the interpolated frame.
4. Block matching algorithm
Algorithms based on motion estimation and motion compensation include Block Matching Algorithms (BMA). The algorithm mainly divides the original video frame into a number of pixel blocks, assumes that all pixels within a pixel block have the same displacement, finds the block matching the current pixel block in the reference frame according to a certain matching search criterion, and calculates the relative displacement between the two matched blocks as the motion vector of each block between the two frames.
In the prior art, the main idea of a frame prediction method based on a block matching algorithm is that, after obtaining the predicted motion vector of each block in the reference frame, the electronic device directly applies the block's predicted motion vector to all pixels of the block to obtain the corresponding block in the predicted frame, and finally obtains the whole predicted frame; the predicted motion vector is the motion vector of the block from the reference frame to the predicted frame. When the difference between the motion vectors of adjacent blocks is large, this method may cause local discontinuities in the predicted frame, resulting in a hole effect.
The frame prediction method provided in the embodiment of the present application is specifically described below with reference to fig. 12.
The method can be implemented by the electronic device 100 shown in fig. 12. Referring to fig. 24, fig. 24 is a flowchart of a frame prediction method disclosed in an embodiment of the present application. As shown in fig. 24, the frame prediction method includes the following steps:
S101, acquiring a first reference frame and a second reference frame.
Specifically, the electronic device may acquire two adjacent images from a video stream, and determine the acquired two adjacent images as a first reference frame and a second reference frame, respectively, where the video stream may be a video, a game, or other multimedia resources, and is not limited herein.
Specifically, please refer to fig. 25, fig. 25 is a schematic diagram of acquiring a reference frame according to an embodiment of the present application, in which a horizontal axis represents time of a video stream, and an arrow on the horizontal axis represents a video frame in the video stream. As shown in the figure, the video stream is an image sequence composed of a plurality of video frames arranged in time sequence, and specifically, the video stream includes a plurality of frames of images, such as a first frame, a second frame, a third frame, and an nth frame. For example, the electronic device may obtain a second frame and a third frame in the video stream, determine the second frame in the video stream as a first reference frame, and determine the third frame in the video stream as a second reference frame.
Please refer to fig. 26; fig. 26 is a schematic diagram of another example of acquiring a reference frame disclosed in the present application. In fig. 26, (A) shows a video of a moving ball: there is a ball in the video picture, the ball moves in the direction indicated by the dashed arrow, the video background is a still picture with a radial pattern, and the line at the top of the ball is a reference line indicating the ball's position. Images (B) and (C) in fig. 26 are two reference frames obtained by the electronic device from the video, where (B) in fig. 26 is the first reference frame and (C) in fig. 26 is the second reference frame. As shown in the figure, taking the line at the top of the ball as a reference, the ball in the first reference frame is located at an eleventh position and the ball in the second reference frame is located at a twelfth position; the ball moves relative to the background and has a displacement from the first reference frame to the second reference frame.
S102, determining a target reference frame from the first reference frame and the second reference frame according to the position of the frame to be predicted.
Specifically, the electronic device may determine one frame image from the first reference frame and the second reference frame as the target reference frame according to the position of the frame to be predicted. It should be noted that the position of the frame to be predicted can be determined according to the application scenario and user requirements. For example, to increase the frame rate of a game, the electronic device can generate the next frame image from the previous two frame images, which prevents the game delay from becoming too long; as another example, when processing a video without real-time requirements, the electronic device can generate an intermediate image from two adjacent video frames, which produces a more accurate predicted frame.
Please refer to fig. 27, fig. 27 is a schematic diagram illustrating a method for determining a target reference frame according to an embodiment of the present disclosure. As shown in fig. 27 (a), the electronic device may determine the second frame and the third frame in the video stream as the first reference frame and the second reference frame, and the electronic device may determine the target reference frame according to the position of the frame to be predicted according to two situations shown in fig. 27 (B) and (C). As shown in fig. 27 (B), when the position of the frame to be predicted is located between the first reference frame and the second reference frame, the electronic device may determine that the first reference frame is a target reference frame, and correspondingly, the second reference frame is a matching frame; as shown in (C) of fig. 27, when the position of the frame to be predicted is located between the second reference frame and the fourth frame, the electronic device may determine that the second reference frame is the target reference frame, and accordingly, the first reference frame is the matching frame. It should be noted that the position of the predicted frame may be located at a middle position between two frame images, or may be located at other positions between two frame images, and is not limited herein.
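Step S102 reduces to a position test. Below is a hedged sketch, assuming the frames are identified by timestamps; the function name and return labels are illustrative.

```python
# Sketch of step S102 under the assumption that ref1_t < ref2_t are the
# timestamps of the first/second reference frames and predict_t is the
# position of the frame to be predicted.
def choose_target_reference(ref1_t, ref2_t, predict_t):
    if ref1_t < predict_t < ref2_t:
        return "first"   # interpolation case of fig. 27 (B)
    return "second"      # extrapolation past the second reference, fig. 27 (C)
```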
S103, dividing the target reference frame into square blocks of a first size.
Specifically, the electronic device may first divide the target reference frame into blocks, where the division size may be 4 × 4, 8 × 8, 16 × 16, and the like. Referring to fig. 28, fig. 28 is a schematic diagram of dividing a target reference frame into blocks according to an embodiment of the present application. In fig. 28, (A) is the target reference frame; after the electronic device divides the target reference frame into blocks, the divided target reference frame shown in fig. 28 (B) is obtained. To better illustrate the reference frame after block division, the region at the edge of the ball is used to illustrate the solution; this region can be enlarged as shown in fig. 28 (C), where the hatched region is the ball and the irregular linear region is the static background region. Note that fig. 28 (C) shows the ball enlarged, which is why the stripe spacing of the ball in fig. 28 (C) appears wider than in figs. 28 (A) and (B); this is only an artifact of the enlarged drawing.
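Block division itself is straightforward; here is a sketch under the assumption that the frame is a width x height pixel grid, with edge blocks clipped when the block size does not divide the frame evenly.

```python
# Sketch of step S103: enumerate (x, y, w, h) blocks of a given square size.
def divide_into_blocks(width, height, block=16):
    return [(x, y, min(block, width - x), min(block, height - y))
            for y in range(0, height, block)
            for x in range(0, width, block)]
```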
S104, calculating a predicted motion vector of a first block from the target reference frame to the predicted frame, where the first block is one block in the target reference frame.
Specifically, the electronic device may first obtain a motion vector of the block from the target reference frame to the matching frame, and then calculate a predicted motion vector of the block from the target reference frame to the predicted frame according to the motion vector of the block from the target reference frame to the matching frame. For example, when the target reference frame is the first reference frame, the electronic device may first obtain a motion vector of the block from the first reference frame to the second reference frame, and then calculate a predicted motion vector of the block from the first reference frame to the predicted frame according to the motion vector of the block from the first reference frame to the second reference frame.
Referring to fig. 29, fig. 29 is a schematic flowchart illustrating a process of calculating a predicted motion vector of a block from a target reference frame to a predicted frame according to an embodiment of the present application, where step S104 may include some or all of the following steps:
S1041, acquiring a motion vector of the block from the target reference frame to the matching frame.
Specifically, the electronic device may determine the matching block, in the matching frame, of a block in the target reference frame, where the region in the matching frame corresponding to the block in the target reference frame is the matching block; it then determines the motion vector of the block from the target reference frame to the matching frame according to the displacement from the block in the target reference frame to its matching block in the matching frame. For example, when the target reference frame is the first reference frame, the electronic device may obtain the motion vector of each block in the first reference frame from the first reference frame to the second reference frame; as another example, when the target reference frame is the second reference frame, the electronic device may obtain the motion vector of each block in the second reference frame from the second reference frame to the first reference frame.
Referring to fig. 30, fig. 30 is a schematic diagram of acquiring the motion vector of a block from the target reference frame to the matching frame according to an embodiment of the present application. In fig. 30, (A) is the target reference frame, and the matching blocks determined in the matching frame from the blocks in the target reference frame may be as shown in fig. 30 (B). Further, from figs. 30 (A) and (B), the motion vectors shown in fig. 30 (C) can be obtained, where an arrow indicates the displacement of a block from the target reference frame to the matching frame. As shown in fig. 30 (C), three blocks in the figure have a displacement from the target reference frame to the matching frame; the electronic device may determine the displacement from a block in the target reference frame to its matching block in the matching frame as the motion vector of that block from the target reference frame to the matching frame, and the motion vectors of the other blocks in the figure, which have no displacement, are 0.
The methods for the electronic device to find the matching block in the matching frame according to the block in the target reference frame include the full search method (FS), the three-step method, the diamond search method, and the like. Specifically, the full search method calculates the sum of absolute differences at every candidate position in the search range, and the motion vector is the offset corresponding to the position with the minimum sum of absolute differences. Since the best matching point mostly appears near the center of the search area, the FS algorithm mainly uses a spiral search order centered on the area: starting from the center point, it searches spirally outward counterclockwise in the current frame until all candidate positions in the search range are traversed. It should be noted that the blocks the electronic device determines in the matching frame may overlap one another, and correspondingly, some regions of the matching frame may not be determined as the matching block of any block.
In some other embodiments, the electronic device may also obtain the motion vector of the block from the target reference frame to the matching frame according to a hardware device, and there may be other methods to obtain the motion vector of the block from the target reference frame to the matching frame, which is not limited herein.
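For illustration, the sum-of-absolute-differences matching underlying these search methods can be sketched as follows. This is a simplified full search over a rectangular window (the spiral visiting order of the FS algorithm is omitted), assuming frames are 2-D lists of grayscale values; it is not the patent's implementation.

```python
# Simplified full-search block matching with SAD cost (assumed data layout).
def sad(frame_a, frame_b, ax, ay, bx, by, size):
    total = 0
    for dy in range(size):
        for dx in range(size):
            total += abs(frame_a[ay + dy][ax + dx] - frame_b[by + dy][bx + dx])
    return total

def full_search(ref, match, bx, by, size=8, radius=8):
    """Motion vector of the block at (bx, by) in `ref` that best matches a
    block in `match`, searched exhaustively within +/- radius pixels."""
    best_cost, best_mv = float("inf"), (0, 0)
    h, w = len(match), len(match[0])
    for vy in range(-radius, radius + 1):
        for vx in range(-radius, radius + 1):
            nx, ny = bx + vx, by + vy
            if 0 <= nx <= w - size and 0 <= ny <= h - size:
                cost = sad(ref, match, bx, by, nx, ny, size)
                if cost < best_cost:
                    best_cost, best_mv = cost, (vx, vy)
    return best_mv
```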
S1042, according to the motion vector of the block from the target reference frame to the matching frame, determining the predicted motion vector of the block from the target reference frame to the predicted frame.
In one implementation, where the target reference frame is a first reference frame, the electronic device may determine a predicted motion vector for the block from the first reference frame to the predicted frame based on motion vectors for the block from the first reference frame to a second reference frame.
In some embodiments, the motion of the blocks may be considered uniform motion; in this case, it can be understood that the electronic device may determine half of the motion vector of a block from the first reference frame to the second reference frame as the predicted motion vector of that block. Specifically, referring to fig. 31A, fig. 31A is a schematic diagram of a method for determining the predicted motion vector of a block according to an embodiment of the present application. As shown, one block in the target reference frame is drawn schematically, where Mv is the motion vector of the block from the first reference frame to the second reference frame, indicated by a solid arrow, and Mv' is the predicted motion vector of the block from the first reference frame to the predicted frame, indicated by a dashed arrow. As shown, the motion vector of the block from the first reference frame to the second reference frame is (3, 5), and the electronic device may determine half of this motion vector as the predicted motion vector of the block from the target reference frame to the predicted frame; that is, according to Mv' = 1/2 · Mv, the predicted motion vector of the block is (1.5, 2.5).
In another implementation, where the target reference frame is a second reference frame, the electronic device can determine a predicted motion vector for the block from the second reference frame to the predicted frame based on a motion vector of the block from the second reference frame to the first reference frame.
In some embodiments, the motion of the blocks is considered uniform motion; in this case, the electronic device may determine half of the negative of the motion vector of a block from the second reference frame to the first reference frame as the predicted motion vector of that block. Specifically, referring to fig. 31B, fig. 31B is a schematic diagram of another method for determining the predicted motion vector of a block according to an embodiment of the present application. As shown, one block in the target reference frame is drawn schematically, where Mv is the motion vector of the block from the second reference frame to the first reference frame, indicated by a solid arrow, and Mv' is the predicted motion vector of the block from the second reference frame to the predicted frame, indicated by a dashed arrow. As shown, the motion vector of the block from the second reference frame to the first reference frame is (-3, -5), and the electronic device may determine half of the negative of this motion vector as the predicted motion vector of the block from the target reference frame to the predicted frame; that is, according to Mv' = -1/2 · Mv, the predicted motion vector of the block is (1.5, 2.5).
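Both cases of step S1042 collapse into one rule under the uniform-motion assumption, as the sketch below shows; the function name is illustrative.

```python
# Sketch of step S1042: Mv' = 1/2 * Mv when the target is the first reference
# frame, and Mv' = -1/2 * Mv when the target is the second reference frame.
def predicted_mv(mv, target_is_first_reference):
    mvx, mvy = mv
    if target_is_first_reference:
        return (mvx / 2.0, mvy / 2.0)
    return (-mvx / 2.0, -mvy / 2.0)

# Matches figs. 31A/31B:
# predicted_mv((3, 5), True)    -> (1.5, 2.5)
# predicted_mv((-3, -5), False) -> (1.5, 2.5)
```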
In other embodiments, the electronic device may also obtain the predicted motion vector of the block from the target reference frame to the predicted frame through a network model that perceives the motion acceleration in the video. Specifically, the network model uses a quadratic optical flow prediction method, whose main idea is that, assuming the object moves with uniform acceleration during the interval, the electronic device can calculate the displacement of the object at any time within the interval from the object's initial velocity and its acceleration during the interval. In this way, the motion vectors between adjacent frames can be estimated more accurately. It should be noted that the electronic device may also obtain the predicted motion vector of the block from the target reference frame to the predicted frame by other methods, which are not limited here.
S105, determining the predicted motion vector of a first vertex from the target reference frame to the predicted frame according to the predicted motion vectors, from the target reference frame to the predicted frame, of the blocks around the first vertex, where the first vertex is one vertex of the first block.
Specifically, the electronic device may first determine a block around a vertex in the target reference frame, and then determine a predicted motion vector of the vertex according to a predicted motion vector of the block around the vertex, where the predicted motion vector of the block is a motion vector from the target reference frame to the predicted frame, and the predicted motion vector of the vertex is a motion vector from the target reference frame to the predicted frame.
In some embodiments, for each vertex of each block in the target reference frame, the electronic device may determine the blocks around that vertex in the target reference frame and determine the predicted motion vector of each vertex based on the predicted motion vectors of the blocks around it. In some embodiments, the electronic device may determine the blocks closest to the vertex as the blocks around the vertex. Specifically, referring to fig. 32A, fig. 32A is a schematic diagram of determining the blocks around a vertex according to an embodiment of the present application. As shown in fig. 32A, taking one block in the target reference frame as an example, the four blocks closest to vertex a of block A are block A, block B, block C, and block D, and the electronic device may determine these four blocks as the four blocks around vertex a.
Further, the electronic device may determine the average of the predicted motion vectors of the blocks around the vertex as the predicted motion vector of the vertex. Specifically, referring to fig. 32B, fig. 32B is a schematic diagram of a method for determining the predicted motion vector of a vertex according to an embodiment of the present application. As shown in fig. 32B, the predicted motion vector of block A is Mv'A (where Mv'A = 0), that of block B is Mv'B, that of block C is Mv'C, and that of block D is Mv'D; the electronic device determines the average of the predicted motion vectors of the blocks around the vertex as the predicted motion vector of the vertex, obtaining the predicted motion vector of vertex a. It should be noted that the vertices of blocks on the edge of the target reference frame do not have four surrounding blocks; optionally, the electronic device may determine the predicted motion vector of a vertex on the edge of the target reference frame to be zero.
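The vertex rule of step S105 is a plain average, as the following sketch shows. It assumes the caller passes the predicted motion vectors of the surrounding blocks (up to four), and, per the option above, an empty list for an edge vertex.

```python
# Sketch of step S105: average the predicted motion vectors of the blocks
# around a vertex; edge vertices (no surrounding blocks passed) get zero.
def vertex_mv(surrounding_block_mvs):
    if not surrounding_block_mvs:
        return (0.0, 0.0)
    n = len(surrounding_block_mvs)
    return (sum(v[0] for v in surrounding_block_mvs) / n,
            sum(v[1] for v in surrounding_block_mvs) / n)

# For vertex a of fig. 32B: vertex_mv([mv_a, mv_b, mv_c, mv_d])
```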
S106, determining the coordinates of the first vertex in the predicted frame according to the coordinates of the first vertex in the target reference frame and the predicted motion vector of the first vertex.
Specifically, the electronic device may sum the coordinates of the vertex in the target reference frame and the predicted motion vector of the vertex to obtain the coordinates of the vertex in the predicted frame. Referring to fig. 33, fig. 33 is a schematic diagram of determining vertex coordinates according to an embodiment of the present application. As shown in the figure, the coordinates of vertex a in the target reference frame are (xa1, ya1), and the predicted motion vector of vertex a is (1, 2); the electronic device adds the coordinates of vertex a in the target reference frame to the predicted motion vector of vertex a, obtaining the coordinates of vertex a in the predicted frame, (xa1 + 1, ya1 + 2).
In some embodiments, the electronic device may perform this calculation for each vertex in the target reference frame and eventually obtain the coordinates, in the predicted frame, of all vertices of the target reference frame. Note that, when the predicted motion vector of a vertex on the edge of the target reference frame is determined to be zero, the coordinates of that vertex in the predicted frame are equal to its coordinates in the target reference frame.
S107, determining a pixel block of the first block in the predicted frame according to the coordinates of the vertices of the first block in the predicted frame and their coordinates in the target reference frame.
Specifically, the electronic device may first obtain, from the coordinates of the vertices of the first block in the target reference frame and their coordinates in the predicted frame, the correspondence between the two; it then takes this correspondence between vertex coordinates as the correspondence between the coordinates of the pixels of the first block in the target reference frame and their coordinates in the predicted frame; finally, it determines the coordinates of the pixels of the block in the predicted frame according to this correspondence, where the first block is one block in the target reference frame.
Referring to fig. 34, fig. 34 is a flowchart for determining coordinates of pixels in a block in a predicted frame according to an embodiment of the present application, and step S107 may include the following steps:
S1071, obtaining, according to the coordinates of the vertices of the block in the target reference frame and in the predicted frame, the correspondence between the coordinates of the pixels of the block in the target reference frame and their coordinates in the predicted frame.
Specifically, the electronic device may obtain a homography transformation formula corresponding to the block according to four pairs of coordinates of the block, where a coordinate of one vertex in the target reference frame and a coordinate in the prediction frame are a pair of coordinates, and the homography transformation formula corresponding to the block is used to represent a correspondence between the coordinates of the pixel of the block in the target reference frame and the coordinates in the prediction frame. It can be understood that, when the electronic device performs calculation on each block in the target reference frame, a homography transformation formula corresponding to each block can be obtained.
Please refer to fig. 35A; fig. 35A is a schematic diagram of obtaining the homography transformation formula corresponding to a block according to an embodiment of the present application. As shown in the figure, the four vertices of block A are vertex a, vertex b, vertex c, and vertex d; the coordinates of a vertex in the target reference frame together with its coordinates in the predicted frame form a pair of coordinates. Specifically, the pair of coordinates of vertex a is (xa, ya) and (xa', ya'); the pair of coordinates of vertex b is (xb, yb) and (xb', yb'); the pair of coordinates of vertex c is (xc, yc) and (xc', yc'); and the pair of coordinates of vertex d is (xd, yd) and (xd', yd').
As shown in the figure, the electronic device may substitute the 4 pairs of coordinates of block A into the homography transformation formula to obtain the equation set corresponding to block A, where the equation set includes 4 equations. The homography transformation formula is as follows:
(x1', y1', 1)^T = H · (x1, y1, 1)^T    (up to a scale factor)
where (x1, y1) are the coordinates of a vertex in the target reference frame, (x1', y1') are the coordinates in the predicted frame of the vertex whose coordinates in the target reference frame are (x1, y1), and H is the unknown homography transformation matrix. The homography transformation matrix is as follows:
        | h11 h12 h13 |
    H = | h21 h22 h23 |
        | h31 h32 h33 |
Further, the electronic device may obtain the homography matrix H_A corresponding to block A by solving the equation set corresponding to block A, and then substitute the homography matrix H_A into the homography transformation formula to obtain the homography transformation formula corresponding to block A, which is as follows:
(x', y', 1)^T = H_A · (x, y, 1)^T    (up to a scale factor)
The homography transformation formula corresponding to the block A is used for expressing the corresponding relation between the coordinates of the pixels of the block in the target reference frame and the coordinates in the predicted frame.
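With four vertex correspondences, the eight unknowns of H (fixing h33 = 1) can be solved as a linear system. Below is a sketch assuming numpy is available; the function name and the h33 = 1 normalization are illustrative choices, not mandated by the text above.

```python
# Sketch of step S1071: solve the 8x8 linear system given by 4 point pairs.
import numpy as np

def solve_homography(src_pts, dst_pts):
    """src_pts: four vertex (x, y) coordinates in the target reference frame;
    dst_pts: the corresponding (x', y') coordinates in the predicted frame."""
    a, b = [], []
    for (x, y), (xp, yp) in zip(src_pts, dst_pts):
        a.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        a.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b += [xp, yp]
    h = np.linalg.solve(np.array(a, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)   # homography matrix H with h33 = 1
```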
S1072, determining the coordinates of the pixels of the block in the predicted frame according to the correspondence between the coordinates of the pixels of the block in the target reference frame and their coordinates in the predicted frame.
Specifically, the electronic device may determine a pixel block of the first block in the predicted frame according to a homography transformation formula corresponding to the first block.
The electronic device may input coordinates of a pixel in the block in the target reference frame into a homography transformation formula corresponding to the block to obtain coordinates of the pixel in the prediction frame.
Please refer to fig. 35B; fig. 35B is a schematic diagram of determining the coordinates of a pixel in the predicted frame according to an embodiment of the present application. As shown in the figure, taking two pixel points in block A as an example: the coordinates of one pixel point of block A in the target reference frame are (x1, y1); inputting (x1, y1) into the homography transformation formula corresponding to block A yields the coordinates of that pixel point in the predicted frame, (x1', y1'). The coordinates of another pixel point of block A in the target reference frame are (x2, y2); inputting (x2, y2) into the homography transformation formula corresponding to block A yields the coordinates of that pixel point in the predicted frame, (x2', y2').
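Applying the formula to a pixel is then a matrix-vector product followed by the perspective division, as sketched below (continuing the solve_homography sketch above; numpy assumed).

```python
# Sketch of step S1072: map a pixel through H with perspective division.
import numpy as np

def apply_homography(h_matrix, x, y):
    xp, yp, w = h_matrix @ np.array([x, y, 1.0])
    return (xp / w, yp / w)   # coordinates of the pixel in the predicted frame
```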
In some embodiments, the electronic device may first determine the region of the pixel block of the first block in the predicted frame according to the four vertices of the first block; for example, the quadrilateral connecting the four vertices of the first block in the predicted frame is determined as the region of the pixel block of the first block in the predicted frame.
In one implementation, when the area of the pixel block of the first block in the predicted frame is larger than the area of the first block in the target reference frame, the electronic device may input the coordinates of each pixel of the first block in the target reference frame into the homography transformation formula corresponding to the block to obtain the coordinates of each pixel of the first block in the predicted frame; for example, the number of coordinates of the first block so obtained in the predicted frame may be N, where N is a positive integer greater than 0.
Further, the electronic device may acquire the coordinates within the region of the pixel block, for example M coordinates in the region, where it can be understood that M > N and M is a positive integer greater than 0. Further, the electronic device may remove, from the M coordinates contained in the pixel-block region of the predicted frame, the N coordinates of the pixels of the first block in the predicted frame, obtaining the first coordinates, where the first coordinates include at least two coordinates; it then inputs the first coordinates into the homography transformation formula corresponding to the first block (applied in the inverse direction) to obtain the pixels in the target reference frame corresponding to the first coordinates.
In another implementation, when the area of the pixel block of the first block in the predicted frame is larger than the area of the first block in the target reference frame, the electronic device may acquire the coordinates within the pixel-block region, input each coordinate in the region into the homography transformation formula corresponding to the first block (applied in the inverse direction), obtain the coordinate in the target reference frame corresponding to each coordinate in the region, and thereby obtain the pixel block corresponding to the first block.
Referring to fig. 35C, fig. 35C is a schematic diagram of a predicted frame disclosed in an embodiment of the present application; taking block A as an example, the pixel block corresponding to block A, obtained by the electronic device according to the homography transformation formula corresponding to block A, may be as shown in fig. 35C.
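The hole-free fill can be illustrated by mapping every coordinate of the stretched region back through the inverse of H, so no coordinate in the region is left unassigned. Below is a hedged sketch under assumed data structures (a coordinate list for the region and a dict of reference pixels); nearest-neighbor sampling is an illustrative simplification.

```python
# Sketch of the inverse-mapping fill described above (assumed data layout).
import numpy as np

def fill_block(h_matrix, region_coords, reference_pixels):
    """region_coords: integer (x, y) coordinates of the pixel-block region in
    the predicted frame; reference_pixels: dict (x, y) -> color value."""
    h_inv = np.linalg.inv(h_matrix)
    out = {}
    for (xp, yp) in region_coords:
        x, y, w = h_inv @ np.array([xp, yp, 1.0])
        src = (int(round(x / w)), int(round(y / w)))  # nearest source pixel
        if src in reference_pixels:
            out[(xp, yp)] = reference_pixels[src]
    return out
```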
S108, displaying the predicted frame, where the predicted frame includes the pixel block of the first block in the predicted frame.
The electronic device may process each block through steps S101 to S107 to obtain the pixel block of each block in the predicted frame, where each coordinate in a pixel block corresponds to a pixel in the target reference frame; the electronic device may display each pixel block on the display screen, and the finally displayed predicted frame is continuous. Referring to fig. 36, fig. 36 is a schematic diagram of a predicted frame disclosed in an embodiment of the present application; taking part of the predicted frame as an example, the blocks in the predicted frame finally obtained by the electronic device are in a stretched state, and the blocks in the predicted frame are continuous with no hole regions.
In the process of predicting the image frame, the electronic device can predict not only the color information of the predicted frame, but also the depth information of the predicted frame. In this way, the electronic device may also synthesize a predicted frame based on the predicted depth information.
The present application provides an image frame generation method, which may include: determining a tenth position coordinate of an eleventh vertex of a prediction block in the predicted image frame according to the depth values of a tenth block and a twelfth block and the position coordinates of the tenth block and the twelfth block in their image frames, where the tenth block is a block in a first image frame and the twelfth block is a block, determined in a second image frame according to a matching algorithm, that matches the tenth block; generating the prediction block according to the color data of a reference block and the tenth position coordinate, where the reference block is one of the tenth block and the twelfth block; and generating the predicted image frame, the predicted image frame including the prediction block.
Wherein the color data may be RGB values of each pixel included in the block.
As can be seen, in the embodiments of the present application, blocks are predicted in combination with depth values, so that the generated prediction block can be scaled according to the depth value, making the picture displayed in the predicted image frame more consistent with the scene the electronic device draws from actual data. When predicted image frames are inserted among the original image frames drawn from actual data by the electronic device, the transition between the original image frames and the predicted image frames is more natural and smooth, improving the user experience.
In one embodiment, during the running of an application program, the electronic device may generate an image frame f3 from an image frame f1 and an image frame f2, where f1 and f2 are original image frames and f3 is a predicted image frame. Inserting predicted image frames among the original image frames can achieve the purpose of increasing the frame rate. For example, the electronic device may determine, according to a block matching algorithm, the blocks in image frame f1 and image frame f2 that match each other; each set of matched blocks includes a block B1 and a block B2, where block B1 is in image frame f1 and block B2 is in image frame f2. The electronic device performs the following for each set of matched blocks: determining the displacement vector corresponding to the set of matched blocks according to the coordinates of block B1 in image frame f1 and the coordinates of block B2 in image frame f2; and determining the coordinates of block B1 or block B2 in image frame f3 according to the coordinates of block B1 or block B2 in its image frame and the displacement vector. The blocks of image frame f3 are composed of blocks B1 or blocks B2; for example, image frame f3 is recombined (through position conversion) from a plurality of blocks B1. It can be seen that performing the above operation for each set of matched blocks determines the coordinates, in image frame f3, of one block in each set of matched blocks; image frame f3 is then generated according to each set of matched blocks and the coordinates of that block in image frame f3. This embodiment generates a predicted frame and can achieve the purpose of increasing the frame rate.

In the above embodiment, when the display content of an image frame represents a scene in three-dimensional space, the image frame has the characteristic that near objects appear large and far objects appear small. Specifically, the display content of the image frame can be regarded as the effect of projecting a scene in three-dimensional space onto a projection plane, and the projection process can be regarded as projecting the scene in three-dimensional space onto the projection plane through a camera point. According to the principle by which a camera photographs an object, the camera point may be regarded as a focal point; specifically, the camera point may be the camera point shown in fig. 46E. If the distance between an object and the camera point is short, the display area of the object in the image frame is large; if the distance is long, the display area is small. In this embodiment, the electronic device determines the blocks in image frame f3 only according to the displacement vectors; because a displacement vector is a vector in a plane and contains only two of the three dimensions of three-dimensional space, the data of the remaining dimension is lost, and the finally generated image frame f3 cannot present the near-large, far-small effect. Thus, image frame f3 deviates to a certain extent from the three-dimensional scene constructed from the running data of the application program, so that when the electronic device inserts the predicted frame (f3 in this example) among the original image frames (f1 and f2 in this example), the transition between image frames displayed by the electronic device is not smooth.
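The displacement-only placement in this embodiment can be reduced to two lines. Below is a sketch assuming f3 follows f2 at the same frame interval (the second positional relationship discussed below); the function name is illustrative.

```python
# Sketch of the displacement-only embodiment: extrapolate a matched block's
# position by its planar displacement vector (depth is ignored, hence no
# near-large, far-small effect in the generated frame f3).
def block_coordinate_in_f3(b1_xy, b2_xy):
    dx = b2_xy[0] - b1_xy[0]   # displacement vector, f1 -> f2
    dy = b2_xy[1] - b1_xy[1]
    return (b2_xy[0] + dx, b2_xy[1] + dy)   # one more step past f2
```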
In some embodiments, a method of generating an image frame is provided. The electronic device may generate a predicted image frame from the three-dimensional data of the original image frames, so that the display areas of objects in the predicted frame have the near-large, far-small effect. By implementing the method of this embodiment, when the frame rate of the electronic device is increased by inserting predicted image frames, the scene restored by the predicted frame is closer to the three-dimensional scene constructed from the application program's data, and thus the transition between the original image frames and the predicted image frames is more natural and smooth.
The following embodiment describes an image frame generation method provided by the embodiment of the present application. As shown in fig. 37, the method may include:
s3701, the electronic device performs block division on the reference image frame to obtain a reference block.
The electronic device first determines one of image frame f1 and image frame f2 as the reference image frame according to the position of the image frame f3 to be predicted relative to image frame f1 and image frame f2.
Image frame f1 and image frame f2 are image frames drawn according to the data of the application program; image frame f3 is a predicted image frame obtained by prediction from image frame f1 and image frame f2.
Where image frame f1 precedes image frame f2 in the data stream, the electronic device may preset one of image frame f1 and image frame f2 as the reference image frame; for example, the electronic device may set the earlier image frame f1 in the data stream as the reference image frame. The earlier an image frame is positioned in the data stream, the earlier the time point at which the electronic device displays that image frame when displaying according to the data stream.
Optionally, the reference image frame is determined according to the position of image frame f3 in the data stream relative to image frame f1 and image frame f2. For example, the following two cases may be included: (1) The first positional relationship shown in fig. 38: image frame f1 precedes image frame f2 in the data stream; if image frame f3 is between image frame f1 and image frame f2, the reference image frame is the earlier of the two, image frame f1. (2) The second positional relationship shown in fig. 38: image frame f1 precedes image frame f2 in the data stream; if image frame f3 follows image frame f2, the reference image frame is the later of the two, image frame f2. Note that the position of image frame f3 in the data stream may be determined according to the application scenario and user requirements. For example, to increase the frame rate of a game, the electronic device may predict the next image frame from the previous two image frames, which avoids excessive game delay, i.e., the second positional relationship in fig. 38; as another example, when processing a video with no or low delay requirements, the electronic device may generate an intermediate image frame from two adjacent image frames, and the predicted frame generated by this method is more accurate, i.e., the first positional relationship in fig. 38.
The electronic device divides the reference image frame into blocks of a first size, where the first size may be a preset size. For example, the electronic device may set the first size to one fiftieth of the area of the reference image frame; as another example, the electronic device may set the first size to the size made up of nine pixels in the reference image frame.
For example, image frame f1 and image frame f2 are shown in fig. 39A; if image frame f1 is the reference image frame, performing block division on image frame f1 yields the effect shown in fig. 39B.
S3702, the electronic device determines, in the matching image frame, the matching block that matches the reference block, obtaining that block B1 of image frame f1 and block B2 of image frame f2 are blocks that match each other.
The matching image frame is the image frame, of image frame f1 and image frame f2, other than the reference image frame.
Specifically, when image frame f1 is the reference image frame, image frame f2 is the matching image frame; when image frame f2 is the reference image frame, image frame f1 is the matching image frame.
Specifically, referring to fig. 40A, step S3702 specifically includes the following steps:
S1021, the electronic device determines the position coordinates (xc, yc) and the depth value dc of the reference block in the reference image frame.
The position coordinates in an image frame specifically refer to coordinates in the screen coordinate system. The screen coordinate system is a two-dimensional coordinate system. The maximum values of the horizontal and vertical coordinates of the screen coordinate system are determined by the number of pixels contained in the display area (the image frame / the display area of the screen) of the electronic device.
The electronic device stores the position coordinates corresponding to each pixel in the image frame, and the position coordinates of a block in the image frame may be obtained by averaging the position coordinates, in the image frame, of all pixels in the block. Alternatively, the coordinates of a specified point in the block may be taken as the coordinates of the block in the image frame; for example, the coordinates, in the image frame, of the pixel closest to the top-left vertex of the block are taken as the coordinates of the block; or the position, in the image frame, of the pixel closest to the center point of the block is taken as the coordinates of the block in the image frame.
The electronic device stores a depth value corresponding to each pixel in an image frame, where the depth value can represent the coordinate of the pixel in the Z dimension of the camera coordinate system, the Z dimension being the dimension perpendicular to the projection plane in the camera coordinate system. When the electronic device obtains an image frame by drawing, it may be regarded as first constructing a three-dimensional scene in the camera coordinate system according to the data of the application program, and then projecting the constructed three-dimensional scene onto the projection plane, thereby obtaining the image frame on the projection plane. For example, when the electronic device runs the Open Graphics Library (OpenGL), the electronic device stores the drawing result of a drawn image frame into a Frame Buffer Object (FBO), in which a color attachment (Color Attachment) and a depth attachment (Depth Attachment) are disposed; the color attachment stores the color value information of each pixel and the position coordinates of the pixel in the image frame, and the depth attachment stores the depth value corresponding to each pixel.
Specifically, the depth value of a block may be calculated from the depth values of the pixels included in the block, for example, by averaging the depth values of all pixels included in the block. Alternatively, the depth value of the block may be the depth value corresponding to a specified pixel in the block; for example, the depth value of the pixel at the center point of the block (the pixel closest to the center point) is used as the depth value of the block.
S1022, the electronic device determines a region of a second size according to the depth value dc.
The smaller dc is, the larger the second size is; the larger dc is, the smaller the second size is. For example, denote the second size by SA, the size of the reference block by SB, and the size of the whole image frame by SC; let Q = SC ÷ SB. Then the following formula may be used:

SA = Q^(1-dc) * SB

In this example, the depth value ranges from 0 to 1. It can be seen from the above formula that when dc = 0, SA = SC; when dc = 1, SA = SB; and when 0 < dc < 1, SA gradually decreases as dc gradually increases.
Optionally, the electronic device may preset a mapping relationship between dc and the second size SA. For example: when w1 < dc ≤ w2, the area of the second size SA is S1; when w3 < dc ≤ w4, the area of the second size SA is S2; ...; when w(H-1) < dc ≤ wH, the area of the second size SA is SH; and when dc > wH, the second size SA is equal to the size SB of the block. Here, w1 is a positive number; w1 < w2 < w3 < w4 < ... < w(H-1) < wH; and S1 > S2 > ... > SH.
The shape of the region may be preset, for example, the shape of the region may be square, circular, triangular, and the like.
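As an illustrative aid only (not part of the claimed method), the following minimal Python sketch shows how the search-region area could be derived from the block depth value using the exponential formula above; the function and argument names are hypothetical.

```python
def search_region_area(d_c: float, block_area: float, frame_area: float) -> float:
    """Second-size area S_A = Q^(1 - d_c) * S_B, where Q = S_C / S_B.

    d_c in [0, 1]: d_c = 0 yields the whole frame, d_c = 1 a single block.
    """
    q = frame_area / block_area
    return (q ** (1.0 - d_c)) * block_area

# d_c = 0 gives the whole frame area; d_c = 1 gives exactly one block.
assert search_region_area(0.0, 9.0, 900.0) == 900.0
assert search_region_area(1.0, 9.0, 900.0) == 9.0
```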
S1023, the electronic device determines, within the region of the second size, the block with the highest similarity to the reference block as the matching block, thereby obtaining block B1 and block B2 as a set of mutually matched blocks.
Note that the electronic device may calculate, from the depth value, the distance in the Z dimension between the coordinate represented by the depth value in the camera coordinate system and the camera point (camera). The display content of an image frame can be regarded as the effect of projecting the scene in the camera coordinate system onto the projection plane; the intersection of the projection plane with the line connecting a position point in the three-dimensional scene and the camera point is the position of that point projected onto the projection plane.
For example, assume the reference image frame is image frame f1; the matching image frame is then image frame f2. From the depth value d1, z1 is calculated, where z1 represents the depth value d1 converted into a coordinate expressed in the camera coordinate system. Referring to FIG. 40B, when z1 = z4, block I1' is the projection of the quadrilateral I1 on the plane Z = z4; in this case the search range is 25 blocks centered on (x1, y1). Referring to FIG. 40C, when z1 = z5 with z5 > z4, the quadrilateral I1 lies on the plane Z = z5, and the block I1'' obtained by projecting the quadrilateral I1 onto the projection plane is smaller than block I1'. It can therefore be understood that the farther an object is from the camera point, the smaller the area it projects onto the projection plane; likewise, the farther an object is from the camera point, the smaller the projected displacement on the projection plane for the same displacement of the object in the X and Y dimensions. Thus, the electronic device may set the search range under the condition z1 = z5 to be smaller than the search range under the condition z1 = z4; as shown in FIG. 40C, when z1 = z5 the search range is 9 blocks centered on (x1, y1). It can be seen that, since d1 can indirectly express the distance between the position coordinates of the block in the camera coordinate system and the camera point (camera), the embodiment of the present application can determine the region of the second size directly from d1.
It should be noted that, when searching for the matching block within the search range, the electronic device may move one pixel at a time. For example, within the search range, the electronic device first determines whether a block BS1, whose top-left vertex coincides with the top-left vertex of the search range, matches the reference block; if not, BS1 is translated one pixel to the right and the electronic device again judges whether it matches; if not, this is repeated until the top-right vertex of block BS1 coincides with the top-right vertex of the search range; if still no match is found, the electronic device moves the block down one row (one pixel) and repeats the above operations until the block has traversed the entire search range.
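A minimal sketch of such an exhaustive one-pixel-step search is given below, assuming grayscale frames stored as numpy arrays; the sum of absolute differences (SAD) is used as the similarity measure, which is an assumption, since the embodiment does not fix a particular metric.

```python
import numpy as np

def find_matching_block(ref_block, frame, rx, ry, rw, rh):
    """Slide a candidate window one pixel at a time over the second-size
    region (top-left (rx, ry), width rw, height rh) and keep the block
    most similar to ref_block."""
    bh, bw = ref_block.shape
    best_xy, best_cost = None, np.inf
    for y in range(ry, ry + rh - bh + 1):
        for x in range(rx, rx + rw - bw + 1):
            cand = frame[y:y + bh, x:x + bw].astype(np.int32)
            cost = np.abs(cand - ref_block.astype(np.int32)).sum()  # SAD
            if cost < best_cost:
                best_cost, best_xy = cost, (x, y)
    return best_xy, best_cost
```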
S3703, the electronic device determines, according to the depth values of block B1 and block B2 and the position coordinates of block B1 and block B2 in their image frames, the position coordinates (x31, y31) of vertex Yp1 of the prediction block in the predicted image frame.
Here, block B1 and block B2 are blocks that match each other; block B1 is a block of image frame f1, and block B2 is a block of image frame f2.
In this example, the position coordinates of a block are represented by the position coordinates of one of its vertices in the image frame; the electronic device may use a vertex p1 of the block to represent the coordinates of the block, where the position of vertex p1 relative to the block is preset. Specifically, the block is square, and the electronic device uses the top-left vertex of the block in the screen coordinate system as p1. In this example, the position coordinates of the top-left vertex B1p1 of block B1 in image frame f1 represent the position coordinates of block B1 in image frame f1; the position coordinates of the top-left vertex B2p1 of block B2 in image frame f2 represent the position coordinates of block B2 in image frame f2; and the position of the top-left vertex Yp1 of the prediction block represents the position coordinates of the prediction block in the predicted image frame.
Referring to fig. 41A, step S3703 specifically includes the following steps:
S1031, the electronic device calculates, according to the depth value d1 of block B1 and the position coordinates (x1, y1) of block B1 in image frame f1, the position coordinates (xe1, ye1, ze1) in the camera coordinate system.
In this example, the position coordinates of one vertex of a block in the image frame represent the position coordinates of the block. Specifically, the position coordinates of vertex Bp1 of block B1 in image frame f1 may be set to represent the position coordinates of block B1. For example, block B1 is square; with block B1 placed in image frame f1, vertex Bp1 is the top-left vertex of block B1 in the screen coordinate system.
The position coordinates in the image frame refer to coordinates in the screen coordinate system. The specific process by which the electronic device calculates the position coordinates (xe1, ye1, ze1) from the depth value d1 and the position coordinates (x1, y1) is as follows:
Let the resolution of the screen in the screen coordinate system be (Sw, Sh), i.e., the abscissa covers Sw pixels and the ordinate covers Sh pixels; the position coordinates of the bottom-left pixel in the screen coordinate system are (0, 0), and those of the top-right pixel are (Sw-1, Sh-1). The electronic device first normalizes the position coordinates (x1, y1) in the screen coordinate system to obtain (xn1, yn1), where xn1 = x1/(Sw-1)*2-1 and yn1 = y1/(Sh-1)*2-1. The depth value ranges over (0, 1); normalizing the depth value d1 yields zn1, specifically zn1 = d1*2-1. The normalized xn1, yn1 and zn1 all range over (-1, 1). Therefore, from the position coordinates (x1, y1) and d1, the normalized coordinates (xn1, yn1, zn1, wn1) are obtained, where wn1 = -1.
Then, the normalized coordinates (xn1, yn1, zn1, wn1) are converted into the clip coordinate system to obtain (xc1, yc1, zc1, wc1); the clip-space coordinates (xc1, yc1, zc1, wc1) are then converted into the camera coordinate system to obtain (xe1, ye1, ze1, 1), where wc1 = -ze1. The relationship between (xc1, yc1, zc1, wc1) and (xe1, ye1, ze1, 1) is as follows:
(xc1, yc1, zc1, wc1)^T = P · (xe1, ye1, ze1, 1)^T
where wc1 = -ze1, and the matrix P is a known projection matrix, specified as follows:
P = [ n/r    0      0               0
      0      n/t    0               0
      0      0      -(f+n)/(f-n)    -2fn/(f-n)
      0      0      -1              0 ]

where r, n, f and t are known constants (the right, near, far and top parameters of a symmetric viewing frustum); the last row of P is what yields wc1 = -ze1.
In the clip coordinate system, wc1 = -ze1. Thus, given wn1 = -1, wc1 = -ze1 and (xn1, yn1, zn1, wn1), the coordinates in the clip coordinate system can be calculated; and given (xc1, yc1, zc1, wc1) in the clip coordinate system, wc1 = -ze1 and the matrix P, the electronic device can calculate (xe1, ye1, ze1, 1) in the camera coordinate system.
It can be seen that the electronic device can calculate, from d1 and (x1, y1) of block B1, the corresponding position coordinates (xe1, ye1, ze1, 1) of block B1 in the camera coordinate system.
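As a sketch only, the screen-to-camera conversion of S1031 might be implemented as follows, assuming the standard symmetric OpenGL projection matrix given above (so that wc = -ze) and the usual NDC convention with w = 1, which is equivalent up to homogeneous scale to the wn1 = -1 bookkeeping in the text; all names are illustrative.

```python
import numpy as np

def projection_matrix(r, t, n, f):
    """Symmetric-frustum perspective projection; its last row makes w_c = -z_e."""
    return np.array([
        [n / r, 0.0,   0.0,                0.0],
        [0.0,   n / t, 0.0,                0.0],
        [0.0,   0.0,   -(f + n) / (f - n), -2.0 * f * n / (f - n)],
        [0.0,   0.0,   -1.0,               0.0],
    ])

def screen_to_camera(x, y, d, sw, sh, P):
    """Screen coords (x, y) plus depth d in (0, 1) -> camera-space (xe, ye, ze)."""
    xn = x / (sw - 1) * 2 - 1          # normalization from S1031
    yn = y / (sh - 1) * 2 - 1
    zn = d * 2 - 1
    e = np.linalg.inv(P) @ np.array([xn, yn, zn, 1.0])
    e /= e[3]                          # recover (xe, ye, ze, 1)
    return e[:3]
```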
S1032, the electronic device calculates, according to the depth value d2 of block B2 and the position coordinates (x2, y2) of block B2 in image frame f2, the position coordinates (xe2, ye2, ze2) in the camera coordinate system.
Specifically, referring to the calculation process of S1031, the electronic device can calculate, from d2 and (x2, y2) of block B2, the position coordinates (xe2, ye2, ze2, 1) of block B2 in the camera coordinate system.
S1033, the electronic device calculates the position coordinates (xe3, ye3, ze3, 1) in the camera coordinate system according to the corresponding position coordinates (xe1, ye1, ze1, 1) of block B1 and (xe2, ye2, ze2, 1) of block B2 in the camera coordinate system.
Here, (xe3, ye3, ze3, 1) is the predicted result; (xe1, ye1, ze1, 1) represents the position coordinates of block B1 in the camera coordinate system, and (xe2, ye2, ze2, 1) represents the position coordinates of block B2 in the camera coordinate system. In the embodiment of the present application, the position of one vertex of the prediction block in the predicted image frame is obtained by prediction from block B1 and block B2. In this example, the electronic device first predicts, from the corresponding position coordinates of block B1 and block B2 in the camera coordinate system, the position coordinates (xe3, ye3, ze3, 1) in the camera coordinate system; this makes it convenient to later convert (xe3, ye3, ze3, 1) into the screen coordinate system, obtaining the position coordinates (x31, y31) of one vertex of the prediction block in the screen coordinate system.
In (xe1, ye1, ze1, 1) and (xe2, ye2, ze2, 1), the last component is 1 merely for convenience of calculation. The camera coordinate system is a three-dimensional coordinate system; the electronic device can calculate (xe3, ye3, ze3) from (xe1, ye1, ze1) and (xe2, ye2, ze2) in the camera coordinate system, and thereby obtain (xe3, ye3, ze3, 1).
Image frame f1 precedes image frame f2 in the data stream. The corresponding position point of block B1 in the camera coordinate system is A1(xe1, ye1, ze1), and the corresponding position point of block B2 in the camera coordinate system is A2(xe2, ye2, ze2). The electronic device can calculate the displacement vector β1 from A1 to A2. Then A3(xe3, ye3, ze3) is determined to be equal to the position coordinates of the reference block in the camera coordinate system (if the reference block is block B1, its corresponding position point in the camera coordinate system is A1; if the reference block is block B2, its corresponding position point in the camera coordinate system is A2) plus a first proportion of the displacement vector β1. The first proportion is equal to the ratio of (the time point of the predicted image frame in the data stream minus the time point of the reference image frame in the data stream) to (the time point of image frame f2 in the data stream minus the time point of image frame f1 in the data stream).
For example, referring to FIG. 41B, FIG. 41B shows a set of mutually matched blocks, block B1 and block B2, of image frame f1 and image frame f2 according to an embodiment of the present application. Referring to FIG. 41C, block B1 and block B2 of FIG. 41B are placed in the screen coordinate system, yielding the schematic shown in FIG. 41C. Referring to FIG. 41D, FIG. 41D is a schematic diagram of the camera coordinate system according to an embodiment of the present application; FIG. 41D includes the corresponding position point A1(xe1, ye1, ze1) of block B1 in the camera coordinate system and the corresponding position point A2(xe2, ye2, ze2) of block B2 in the camera coordinate system. The electronic device can calculate the displacement vector from A1 to A2: β1 = (xe2-xe1, ye2-ye1, ze2-ze1). Denote the predicted image frame by f3. (1) Referring to FIG. 41E, if image frame f1, image frame f2 and image frame f3 are in the first positional relationship shown in FIG. 41E, i.e., image frame f1 corresponds to time point t1 in the data stream, image frame f2 corresponds to time point t3, and image frame f3 corresponds to time point t2, then image frame f1 may be set as the reference image frame, i.e., block B1 is the reference block; the first proportion equals the ratio of (t2-t1) to (t3-t1), so the displacement vector β2 = (t2-t1)/(t3-t1)*β1, and A3(xe3, ye3, ze3) = (xe1, ye1, ze1) + (t2-t1)/(t3-t1)*β1. (2) Referring to FIG. 41F, if image frame f1, image frame f2 and image frame f3 are in the second positional relationship shown in FIG. 41F, i.e., image frame f1 corresponds to time point t1 in the data stream, image frame f2 corresponds to time point t2, and image frame f3 corresponds to time point t3, then image frame f2 may be set as the reference image frame, i.e., block B2 is the reference block; the first proportion equals the ratio of (t3-t2) to (t2-t1), so the displacement vector β2 = (t3-t2)/(t2-t1)*β1, and A3(xe3, ye3, ze3) = (xe2, ye2, ze2) + (t3-t2)/(t2-t1)*β1.
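A compact sketch of the S1033 extrapolation/interpolation under the uniform-motion assumption (function and argument names are illustrative):

```python
import numpy as np

def predict_camera_point(a1, a2, t1, t2, t3, interpolate):
    """Predict A3 in camera space from matched points A1 and A2.

    interpolate=True : positional relationship one -- f1 at t1, f3 at t2,
                       f2 at t3; reference block B1, so A3 starts from A1.
    interpolate=False: positional relationship two -- f1 at t1, f2 at t2,
                       f3 at t3; reference block B2, so A3 starts from A2.
    """
    beta1 = np.asarray(a2, float) - np.asarray(a1, float)
    if interpolate:
        return np.asarray(a1, float) + (t2 - t1) / (t3 - t1) * beta1
    return np.asarray(a2, float) + (t3 - t2) / (t2 - t1) * beta1
```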
S1034, the electronic device converts (xe3, ye3, ze3, 1) in the camera coordinate system into the screen coordinate system to obtain (x31, y31).
Specifically, the conversion from the camera coordinate system to the screen coordinate system is the inverse of the whole process in step S1031, and may specifically be completed by the following steps:
(xc3, yc3, zc3, wc3)^T = P · (xe3, ye3, ze3, 1)^T
where wc3 = -ze3. From wc3 = -ze3 and [xc3 yc3 zc3 wc3], the normalized coordinates (xn3, yn3, zn3, wn3) are calculated; from the normalized coordinates (xn3, yn3, zn3, wn3) and the screen resolution (Sw, Sh), the coordinates (x31, y31) in the screen coordinate system are obtained.
Wherein the normalized coordinates are considered as coordinates in the normalized coordinate system.
It should be noted that, for explanations of the screen coordinate system, the normalized coordinate system, the clip coordinate system and the camera coordinate system in the embodiments of the present application, reference may be made to the explanation of coordinate spaces in OpenGL given in the seventh edition of the OpenGL SuperBible, Chapter 4 ("Math for 3D Graphics", section "Coordinate Spaces in OpenGL"). Specifically, the screen coordinate system corresponds to window space; the normalized coordinate system corresponds to normalized device coordinate (NDC) space; the clip coordinate system corresponds to clip space; and the camera coordinate system corresponds to view space.
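Complementing the earlier unprojection sketch, the camera-to-screen direction of S1034 could look as follows under the same assumed projection matrix:

```python
import numpy as np

def camera_to_screen(xe, ye, ze, sw, sh, P):
    """Camera-space point -> screen coordinates (x31, y31), inverting S1031."""
    clip = P @ np.array([xe, ye, ze, 1.0])  # (xc3, yc3, zc3, wc3), wc3 = -ze
    ndc = clip / clip[3]                    # perspective divide
    x31 = (ndc[0] + 1) / 2 * (sw - 1)       # undo the S1031 normalization
    y31 = (ndc[1] + 1) / 2 * (sh - 1)
    return x31, y31
```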
S3704, the electronic device determines, according to the depth values of block B3 and block B4 and the position coordinates of block B3 and block B4 in their image frames, the corresponding position coordinates (x32, y32) of vertex Yp2 of the prediction block in the predicted image frame.
Note that the execution process of step S3704 is the same as that of step S3703: the depth value and the position coordinates in the image frame of block B1 in S3703 are replaced with those of block B3, and the depth value and the position coordinates in the image frame of block B2 in S3703 are replaced with those of block B4; the output result is the corresponding position coordinates (x32, y32) of vertex Yp2 in the predicted image frame.
Here, block B3 is a block adjacent to block B1 in image frame f1, and block B4 is a block adjacent to block B2 in image frame f2; block B3 and block B4 are blocks that match each other (for the explanation of block matching, refer to step S3702).
The position of vertex Yp2 relative to the prediction block is different from the position of Yp1 relative to the prediction block.
Since block B1 and block B2 match each other, block B3 and block B4 match each other, block B3 is adjacent to block B1, and block B4 is adjacent to block B2, the position of block B3 relative to block B1 is the same as the position of block B4 relative to block B2. In the embodiment of the present application, vertex Yp2 of the prediction block has a preset association with block B3 and block B4; for example, vertex Yp2 is the top-right vertex of the prediction block, and block B3 is the block to the right of block B1. (Note that, in the embodiment of the present application, vertex Yp1 of the prediction block is determined from block B1 and block B2, and the color data of the prediction block is determined according to the color data of block B1 and/or block B2; therefore, vertex Yp1 in this example represents one vertex of the prediction block, while vertex Yp2 is determined from the blocks adjacent to block B1 and block B2. In the case of a square block, Yp2 may represent any one of the three vertices of the prediction block other than vertex Yp1.)
In some embodiments, the block is square, and the position coordinates of the top-left vertex of the block in the screen coordinate system are regarded as the position coordinates of the block in the screen coordinate system. Referring to FIG. 42, FIG. 42 is a schematic diagram of determining the vertices of the prediction block according to an embodiment of the present application. Vertex Yp1 of the prediction block denotes the top-left vertex of the prediction block; the corresponding position coordinates of vertex Yp1 in predicted image frame f3 are calculated from the top-left vertex B1p1 of block B1 and the top-left vertex B2p1 of block B2. Vertex Yp2 denotes the top-right vertex of the prediction block; its corresponding position coordinates in predicted image frame f3 are calculated from the top-left vertex B3p1 of block B3 and the top-left vertex B4p1 of block B4, where block B3 is the block adjacent to the right of block B1 and block B4 is the block adjacent to the right of block B2. Vertex Yp3 denotes the bottom-right vertex of the prediction block; its corresponding position coordinates in predicted image frame f3 are calculated from the top-left vertex B5p1 of block B5 and the top-left vertex B6p1 of block B6, where the top-left vertex of block B5 coincides with the bottom-right vertex of block B1 and the top-left vertex of block B6 coincides with the bottom-right vertex of block B2. Vertex Yp4 denotes the bottom-left vertex of the prediction block; its corresponding position coordinates in predicted image frame f3 are calculated from the top-left vertex B7p1 of block B7 and the top-left vertex B8p1 of block B8, where the top-left vertex of block B7 coincides with the bottom-left vertex of block B1 and the top-left vertex of block B8 coincides with the bottom-left vertex of block B2.
S3705, the electronic device generates the predicted image frame according to the color data of the reference block and the position coordinates of vertex Yp1 and vertex Yp2 of the prediction block in the predicted image frame.
Specifically, the electronic device first generates the prediction block according to the color data of the reference block and vertices Yp1 and Yp2 of the prediction block; since the predicted image frame is composed of a plurality of prediction blocks, the predicted image frame may then be generated from the plurality of prediction blocks.
The color data may specifically refer to a color value of a pixel, for example, an RGB value of the pixel. The color data may be stored in a color attachment of the FBO.
Specifically, the vertices of the reference block and those of the prediction block correspond to each other, and the electronic device generates the prediction block according to this vertex correspondence, the positional relationship of the pixels in the block relative to the vertices, and the color data of the reference block. The top-left vertex of the reference block corresponds to the top-left vertex of the prediction block, the top-right vertex to the top-right vertex, the bottom-right vertex to the bottom-right vertex, and the bottom-left vertex to the bottom-left vertex. According to this correspondence, the display content of the reference block is placed into the region formed by the vertices of the prediction block, generating the prediction block; specifically, the electronic device may need to perform one or both of the following operations on the display content of the reference block: stretching and shrinking. For example, in the case where the reference block is block B1, referring to FIG. 43, the electronic device maps the content of block B1 into image frame f3 according to the correspondence between the vertices of block B1 and the vertices of the prediction block, generating the prediction block of image frame f3. It should be noted that the above process is called "texture mapping" in some embodiments, i.e., the process of mapping the texture of block B1 into the four vertices of the prediction block. Specifically, the electronic device maps in a triangular manner during the mapping process; it may split block B1 into two isosceles right triangles, and then map the contents of the two triangles of block B1 into the two triangles of the prediction block, respectively, according to the vertex correspondence.
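A toy sketch of such triangle-wise mapping via inverse affine warping is shown below (nearest-neighbor sampling; names are illustrative and this is one of several possible implementations):

```python
import numpy as np

def affine_from_triangles(src_tri, dst_tri):
    """Solve the 3x2 matrix M so that [dx, dy, 1] @ M = (sx, sy) (inverse warp)."""
    A = np.hstack([np.asarray(dst_tri, float), np.ones((3, 1))])
    return np.linalg.solve(A, np.asarray(src_tri, float))

def point_in_triangle(x, y, tri):
    (x1, y1), (x2, y2), (x3, y3) = tri
    d1 = (x - x2) * (y1 - y2) - (x1 - x2) * (y - y2)
    d2 = (x - x3) * (y2 - y3) - (x2 - x3) * (y - y3)
    d3 = (x - x1) * (y3 - y1) - (x3 - x1) * (y - y1)
    return not (((d1 < 0) or (d2 < 0) or (d3 < 0)) and
                ((d1 > 0) or (d2 > 0) or (d3 > 0)))

def warp_block(src, src_quad, dst, dst_quad):
    """Map a square source block into a destination quad, one triangle at a
    time (vertex order: top-left, top-right, bottom-right, bottom-left)."""
    for tri in ((0, 1, 2), (0, 2, 3)):       # split the quad diagonally
        s = [src_quad[i] for i in tri]
        d = [dst_quad[i] for i in tri]
        M = affine_from_triangles(s, d)
        xs = [p[0] for p in d]; ys = [p[1] for p in d]
        for y in range(int(min(ys)), int(max(ys)) + 1):
            for x in range(int(min(xs)), int(max(xs)) + 1):
                if point_in_triangle(x, y, d):
                    sx, sy = np.array([x, y, 1.0]) @ M
                    sy = min(max(int(round(sy)), 0), src.shape[0] - 1)
                    sx = min(max(int(round(sx)), 0), src.shape[1] - 1)
                    dst[y, x] = src[sy, sx]
```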
Optionally, the electronic device may obtain the homography transformation formula corresponding to the two blocks according to the four pairs of coordinates formed by the four vertices of the reference block and of the prediction block, where the coordinates of a vertex in the reference image frame and the coordinates of the corresponding vertex in the predicted image frame form one pair of coordinates; the homography transformation formula corresponding to a block represents the correspondence between the position coordinates of the pixels of that block in the reference image frame and their coordinates in the predicted frame. It can be understood that the electronic device performs this calculation for each block in the reference image frame, obtaining a homography transformation formula for each block.
Specifically, suppose the reference block is block B1; the four vertices of block B1 are vertex B1p1, vertex B1p2, vertex B1p3 and vertex B1p4, and the four vertices of the prediction block are vertex Yp1, vertex Yp2, vertex Yp3 and vertex Yp4. Vertex B1p1 corresponds to vertex Yp1, vertex B1p2 to vertex Yp2, vertex B1p3 to vertex Yp3, and vertex B1p4 to vertex Yp4. The position coordinates of a vertex in the reference image frame together with the position coordinates of the corresponding vertex in the predicted image frame form one pair of position coordinates; specifically, the pair for vertex B1p1 is (xa, ya) and (xa', ya'), for vertex B1p2 is (xb, yb) and (xb', yb'), for vertex B1p3 is (xc, yc) and (xc', yc'), and for vertex B1p4 is (xd, yd) and (xd', yd').
The electronic device may input the 4 pairs of position coordinates of block B1 into the homography transformation formula, respectively, to obtain the equation system corresponding to block B1, where the equation system contains 4 equations. The homography transformation formula is as follows:
(x1', y1', 1)^T = H · (x1, y1, 1)^T
where (x1, y1) are the position coordinates of a vertex in the reference block, (x1', y1') are the position coordinates of the corresponding vertex in the predicted image frame, and H is the unknown homography transformation matrix. The homography transformation matrix is as follows:
H = [ h11  h12  h13
      h21  h22  h23
      h31  h32  h33 ]
Further, by solving the equation system corresponding to block B1, the electronic device can obtain the homography matrix HA corresponding to block B1, and then substitute HA into the homography transformation formula to obtain the homography transformation formula corresponding to block B1, which is as follows:
(x1', y1', 1)^T = HA · (x1, y1, 1)^T
The homography transformation formula corresponding to block B1 represents the correspondence between the position coordinates of the pixels of block B1 in the reference image frame and their position coordinates in the predicted image frame.
Then, the electronic device determines the position coordinates of the pixels of the reference block in the predicted image frame according to this correspondence between the position coordinates of the pixels of the block in the reference image frame and their position coordinates in the predicted image frame.
The electronic device may input the position coordinates of the pixel in the reference block in the reference image frame into the homography transformation formula corresponding to the block, so as to obtain the position coordinates of the pixel in the prediction frame.
Then, based on the color data of each pixel in the reference block and the position coordinates of each pixel in the predicted image frame, the electronic device maps the color data to the position coordinates in the predicted image frame, generating the prediction block.
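The following sketch shows one standard way, the direct linear transform (DLT), to recover such a homography from the four vertex pairs and apply it to pixel coordinates; this is a generic technique and not necessarily the exact solver used by the embodiment.

```python
import numpy as np

def homography_from_4pts(src_pts, dst_pts):
    """Solve H (3x3, h33 fixed to 1) so that dst ~ H @ src for 4 point pairs."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src_pts, dst_pts):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); b.append(yp)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def map_pixel(H, x, y):
    """Map one pixel of the reference block into the predicted frame."""
    v = H @ np.array([x, y, 1.0])
    return v[0] / v[2], v[1] / v[2]   # perspective divide
```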
Specifically, in the case where the positional relationship among image frame f1, image frame f2 and image frame f3 is the first positional relationship shown in FIG. 41E, the image frame f3 generated from image frame f1 and image frame f2 is as shown in FIG. 44A. To facilitate viewing the position change and zoom degree of the car across image frame f1, image frame f2 and image frame f3, the cars of the three image frames are placed in the same coordinate system, yielding FIG. 44B.
Specifically, in the case where the positional relationship among image frame f1, image frame f2 and image frame f3 is the second positional relationship shown in FIG. 41F, the image frame f3 generated from image frame f1 and image frame f2 is as shown in FIG. 45A. To facilitate viewing the position change and zoom degree of the car across image frame f1, image frame f2 and image frame f3, the cars of the three image frames are placed in the same coordinate system, yielding FIG. 45B.
The following embodiment describes an image frame generation method provided by an embodiment of the present application. As shown in FIG. 46A, the method may include:
S201, the electronic device performs block division on the reference image frame to obtain a reference block.
Here, the divided image frame f1 includes block B1 and the divided image frame f2 includes block B2; the reference block is one of block B1 and block B2. Block B3 is obtained by scaling the reference block; block B3 refers to a prediction block in the predicted image frame.
For a detailed explanation, refer to S3701.
S202, the electronic device determines the predicted displacement vector from the reference block to block B3; the predicted displacement vector is a displacement vector in the plane in which the image frame lies.
The display content of block B3 is obtained by scaling the content of the reference block; the position coordinates of block B3 in image frame f3 are obtained by translating the coordinates of the reference block in the reference image frame. Image frame f3 is the predicted image frame, i.e., block B3 is obtained by prediction.
The electronic device executes a block matching algorithm on image frame f1 and image frame f2 shown in FIG. 39A to determine the block B2 that matches block B1, where block B1 is a block of image frame f1 and block B2 is a block of image frame f2.
The reference block may be block B1 of image frame f1, or block B2 of image frame f2. Specifically, the reference block may be preset. For example, image frame f1 precedes image frame f2 in the data stream; the reference block may be set to the block of image frame f1, i.e., block B1, or to the block of image frame f2, i.e., block B2.
Alternatively, the reference block may be determined according to the position of image frame f3 in the data stream relative to image frame f1 and image frame f2, which may include the following two cases. (1) If image frame f3 is between image frame f1 and image frame f2, the reference block is the block of the earlier image frame f1 in the data stream; as shown in FIG. 46C, positional relationship one: in the data stream, image frame f1 precedes image frame f3, and image frame f3 precedes image frame f2; the reference block is block B1 of image frame f1. (2) If image frame f3 follows image frame f1 and image frame f2, the reference block is the block of the later image frame f2 in the data stream; as shown in FIG. 46D, positional relationship two: in the data stream, image frame f1 precedes image frame f2, and image frame f2 precedes image frame f3; the reference block is block B2 of image frame f2. It should be noted that the position of image frame f3 in the data stream may be determined according to the application scenario and user requirements. For example, to increase the frame rate of a game, the electronic device may generate the next image frame by prediction from the previous two image frames; this avoids excessive game latency, i.e., positional relationship two in FIG. 46D. For another example, when processing a video with no or low latency requirements, the electronic device may generate an intermediate image frame from two adjacent image frames; the predicted frame generated in this way is more accurate, i.e., positional relationship one in FIG. 46C.
The electronic device may determine the predicted displacement vector of the reference block according to the coordinates of block B1 and block B2 in the plane in which the image frame lies. It should be noted that, when the electronic device generates or displays an image frame, each pixel in the image frame has corresponding coordinates. The electronic device may determine the coordinates of a block from the coordinates of the pixels in the block, and then determine the predicted displacement vector of the reference block from the coordinates of block B1 and block B2. Specifically, the coordinates of a block may be determined in any one of the following three ways: (1) averaging the coordinates of all pixels contained in the block; (2) using the coordinates of the center point (pixel) of the block; (3) using the coordinates of a designated vertex of the block, for example, the top-left vertex of the block.
Specifically, image frame f1 includes block B1, whose coordinates in the plane of the image frame are (x1, y1); image frame f2 includes block B2, whose coordinates in the plane of the image frame are (x2, y2). To facilitate viewing the relation between block B1 and block B2, the car image of block B1 in image frame f1 and the car image of block B2 in image frame f2 are placed in the rectangular coordinate system shown in FIG. 46E, yielding FIG. 46B. As shown in FIG. 46B, the displacement vector from block B1 to block B2 is β1 = (x2-x1, y2-y1).
As shown in FIG. 46C, in the case where the relative positions of image frames f1, f2 and f3 in the data stream are positional relationship one, image frame f1 corresponds to time point t1 in the data stream, image frame f2 corresponds to time point t3, and image frame f3 corresponds to time point t2; the reference block is block B1 of image frame f1. Calculate the difference between time point t2 of image frame f3 and time point t1 of image frame f1 to obtain d1 = (t2-t1); calculate the difference between time point t3 of image frame f2 and time point t1 of image frame f1 to obtain d2 = (t3-t1); and obtain the ratio d1/d2. Assuming that the object in the image frames moves uniformly within the time period from t1 to t3, the predicted displacement vector of the reference block can be determined as β2 = d1/d2*β1 = (t2-t1)/(t3-t1)*(x2-x1, y2-y1) = ((t2-t1)/(t3-t1)*(x2-x1), (t2-t1)/(t3-t1)*(y2-y1)).
As shown in FIG. 46D, in the case where the relative positions of image frames f1, f2 and f3 in the data stream are positional relationship two, image frame f1 corresponds to time point t1 in the data stream, image frame f2 corresponds to time point t2, and image frame f3 corresponds to time point t3; the reference block is block B2 of image frame f2. Calculate the difference between time point t3 of image frame f3 and time point t2 of image frame f2 to obtain d3 = (t3-t2); calculate the difference between time point t2 of image frame f2 and time point t1 of image frame f1 to obtain d4 = (t2-t1); and obtain the ratio d3/d4. Assuming that the object in the image frames moves uniformly within the time period from t1 to t3, the predicted displacement vector of the reference block can be determined as β2 = d3/d4*β1 = (t3-t2)/(t2-t1)*(x2-x1, y2-y1) = ((t3-t2)/(t2-t1)*(x2-x1), (t3-t2)/(t2-t1)*(y2-y1)).
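A compact sketch of the S202 displacement prediction under the stated uniform-motion assumption (names illustrative):

```python
def predicted_displacement(beta1, dt_predict, dt_observed):
    """Scale the observed in-plane displacement beta1 by the time ratio.

    positional relationship one: dt_predict = t2 - t1, dt_observed = t3 - t1
    positional relationship two: dt_predict = t3 - t2, dt_observed = t2 - t1
    """
    r = dt_predict / dt_observed
    return (r * beta1[0], r * beta1[1])

# relationship one: f1 at t = 0 s, f3 at t = 1 s, f2 at t = 2 s
assert predicted_displacement((4.0, 2.0), 1.0, 2.0) == (2.0, 1.0)
```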
It should be noted that the time point corresponding to an image frame in the data stream may represent the time point at which the image frame is displayed according to the data stream. For example, if the current data stream is video data with a duration of 30 seconds, image frame f1 corresponds to the 15th second in the data stream and image frame f2 corresponds to the 17th second; when the electronic device plays the video, it displays image frame f1 at the 15th second and image frame f2 at the 17th second.
Optionally, if the data stream does not include time data of the image frames, the electronic device may calculate the time point at which each image frame is displayed according to a frame rate preset by the system, and use that display time point in place of the time point of the image frame in the data stream. For example, if the preset frame rate of the electronic device is 30 frames per second and the data stream includes 300 image frames, the playing duration of the data stream is 10 seconds; the electronic device can calculate the time point at which each image frame in the data stream is displayed, and then perform the methods shown in FIG. 46C and FIG. 46D.
In FIG. 46C and FIG. 46D, the coordinates of the reference block in the plane of the image frame and the predicted displacement vector of the reference block are added to obtain (x3, y3); (x3, y3) are the coordinates of block B3 in the plane of the image frame, block B3 being in image frame f3. Block B3 is then obtained by scaling the reference block.
S203, the electronic device determines the scaling ratio from the reference block to block B3 according to the Z-dimension coordinates of block B1 and block B2.
The scaling ratio of the reference block refers to the ratio of the area of block B3 relative to the area of the reference block.
To better understand the embodiment of the present application, the principle applied when the electronic device generates image frames is first described. When a camera captures an image frame, the image exhibits the "near objects look large, far objects look small" property: the closer an object is to the camera, the larger it appears in the generated image. Similarly, in the embodiment of the present application, the data of an image frame includes three-dimensional data; when the electronic device generates an image frame, it may be regarded as first constructing a three-dimensional scene in three-dimensional space according to the data of the image frame, and then projecting the three-dimensional scene onto a projection plane located on one side of the scene, generating the image frame. It can be understood that the closer an object in the three-dimensional scene is to the projection plane, the larger its display area in the image frame; the farther the object is from the projection plane, the smaller its display area in the image frame.
For example, referring to FIG. 46E, FIG. 46E illustrates the principle applied when the electronic device generates an image frame. In FIG. 46E, a three-dimensional rectangular coordinate system is established, and the three-dimensional scene constructed from the data of the image frame includes a car. The projection plane may be a plane constructed in the X and Y dimensions. The electronic device may set a camera point (camera) in the three-dimensional space and project the three-dimensional scene through the camera point onto projection plane N; the intersection of the projection plane with the straight line formed by any point in the three-dimensional scene and the camera point is the position of that point projected onto the projection plane. The Z-dimension coordinate of the camera point is zc, and the Z-dimension coordinate of projection plane N is zn. It should be noted that, in the embodiment of the present application, projection plane N and the camera point are on the same side of the three-dimensional scene. FIG. 46E includes scene one and scene two; any point on the car has the same X- and Y-dimension coordinates in scene one and scene two, differing only in the Z dimension. The car includes point p1 and point p2. In scene one, the Z-dimension coordinates of point p1 and point p2 are z5, and their projections onto the projection plane are point p1' and point p2', respectively. In scene two, the Z-dimension coordinates of point p1 and point p2 are z6, and their projections onto the projection plane are point p1'' and point p2'', respectively, with z5 < z6. Since z5 < z6, z5-zc < z6-zc; thus, in scene one, point p1 and point p2 are closer to the camera point. From geometric knowledge it can be determined that the distance between point p1' and point p2' is greater than that between point p1'' and point p2''; specifically, |p2''-p1''|/|p2'-p1'| = |z5-zc|/|z6-zc|. It can be seen that, in three-dimensional space, when the distance between two points is fixed and the straight line formed by the two points is parallel to the projection plane, the closer the two points are to the projection plane, the farther apart their projections onto the projection plane are; similarly, the closer an object in three-dimensional space is to the projection plane, the larger the area it occupies when projected onto the projection plane.
In this embodiment, image frame f3 is predicted from image frame f1 and image frame f2; therefore, the Z-dimension coordinate of block B3 can be predicted from the Z-dimension coordinates of block B1 in image frame f1 and block B2 in image frame f2. Specifically, FIG. 46C and FIG. 46D respectively illustrate the two positional relationships: positional relationship one and positional relationship two.
For positional relationship one: combining FIG. 46C and FIG. 46E, block B1 is in image frame f1 and its Z-dimension coordinate is z1; block B2 is in image frame f2 and its Z-dimension coordinate is z2; block B3 is in image frame f3, and suppose its Z-dimension coordinate is z3; the reference block is B1. For z1, z2 and z3, refer to FIG. 46F. Image frame f1 corresponds to time point t1 in the data stream, image frame f2 to time point t3, and image frame f3 to time point t2. Calculate the difference between time point t2 of image frame f3 and time point t1 of image frame f1 to obtain the time difference d1 = (t2-t1); calculate the difference between time point t3 of image frame f2 and time point t1 of image frame f1 to obtain the time difference d2 = (t3-t1); and obtain the ratio d1/d2. Calculate the Z-dimension distance difference between block B2 and block B1, obtaining z2-z1. Then z3 can be calculated from z1, z2-z1 and d1/d2; specifically, z3 = z1 + (z2-z1)*d1/d2. Referring to FIG. 46E, let the Z-dimension position of the camera point be zc; the distance between the Z-dimension coordinate of block B1 and the camera point is Δ1 = |z1-zc|, and the distance between the Z-dimension coordinate of block B3 and the camera point is Δ2 = |z3-zc|. From the conclusion that can be drawn from FIG. 46E, the farther the same object is from the camera point in three-dimensional space, the smaller the area projected onto the projection plane. In the embodiment of the present application, block B1 and block B2 are matched blocks, displaying the same object in three-dimensional space; since block B3 is predicted from block B1 and block B2, blocks B1, B2 and B3 all display the same object in three-dimensional space. Combining FIG. 46E, the ratio of the side length LB3 of block B3 to the side length LB1 of block B1 equals the ratio of Δ1 to Δ2, i.e., LB3/LB1 = Δ1/Δ2, and the ratio of the area SB3 of block B3 to the area SB1 of block B1 equals the square of that ratio, i.e., SB3/SB1 = (Δ1/Δ2)^2. Therefore, in the case of positional relationship one, the scaling ratio of reference block B1 is (Δ1/Δ2)^2. Expanding the formula: (Δ1/Δ2)^2 = (z1-zc)^2/(z3-zc)^2 = (z1-zc)^2/(z1-zc+(z2-z1)*d1/d2)^2 = (z1-zc)^2/(z1-zc+(z2-z1)*(t2-t1)/(t3-t1))^2.
For positional relationship two: combining FIG. 46D and FIG. 46E, block B1 is in image frame f1 and its Z-dimension coordinate is z1; block B2 is in image frame f2 and its Z-dimension coordinate is z2; block B3 is in image frame f3, and suppose its Z-dimension coordinate is z3; the reference block is B2. For z1, z2 and z3, refer to FIG. 46H. Image frame f1 corresponds to time point t1 in the data stream, image frame f2 to time point t2, and image frame f3 to time point t3. Calculate the difference between time point t3 of image frame f3 and time point t2 of image frame f2 to obtain the time difference d3 = (t3-t2); calculate the difference between time point t2 of image frame f2 and time point t1 of image frame f1 to obtain the time difference d4 = (t2-t1); and obtain the ratio d3/d4. Calculate the Z-dimension distance difference between block B2 and block B1, obtaining z2-z1. Then z3 can be calculated from z2, z2-z1 and d3/d4; optionally, z3 = z2 + (z2-z1)*d3/d4. Referring to FIG. 46E, let the Z-dimension position of the camera point be zc; the distance between the Z-dimension coordinate of block B2 and the camera point is Δ3 = |z2-zc|, and the distance between the Z-dimension coordinate of block B3 and the camera point is Δ4 = |z3-zc|. From the conclusion that can be drawn from FIG. 46E, the farther the same object is from the camera point in three-dimensional space, the smaller the area projected onto the projection plane. In the embodiment of the present application, block B1 and block B2 are matched blocks, displaying the same object in three-dimensional space; since block B3 is predicted from block B1 and block B2, blocks B1, B2 and B3 all display the same object in three-dimensional space. Combining FIG. 46E, the ratio of the side length LB3 of block B3 to the side length LB2 of block B2 equals the ratio of Δ3 to Δ4, i.e., LB3/LB2 = Δ3/Δ4, and the ratio of the area SB3 of block B3 to the area SB2 of block B2 equals the square of that ratio, i.e., SB3/SB2 = (Δ3/Δ4)^2. Therefore, in the case of positional relationship two, the scaling ratio of reference block B2 is (Δ3/Δ4)^2. Expanding the formula: (Δ3/Δ4)^2 = (z2-zc)^2/(z3-zc)^2 = (z2-zc)^2/(z2-zc+(z2-z1)*d3/d4)^2 = (z2-zc)^2/(z2-zc+(z2-z1)*(t3-t2)/(t2-t1))^2.
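A sketch of the S203 scaling-ratio computation covering both positional relationships (uniform motion along Z assumed; names illustrative):

```python
def scaling_ratio(z1, z2, zc, dt_predict, dt_observed, relationship_one):
    """Area scaling ratio (delta_ref / delta_pred)^2 of the reference block.

    relationship_one=True : reference block B1, z3 = z1 + dz
    relationship_one=False: reference block B2, z3 = z2 + dz
    """
    dz = (z2 - z1) * dt_predict / dt_observed
    z_ref = z1 if relationship_one else z2
    z3 = z_ref + dz
    return ((z_ref - zc) / (z3 - zc)) ** 2
```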
S204, the electronic device generates block B3 according to the reference block and the scaling ratio of the reference block.
Specifically, as described in S203: for positional relationship one, reference block B1 is scaled by (Δ1/Δ2)^2 to obtain block B3; for positional relationship two, reference block B2 is scaled by (Δ3/Δ4)^2 to obtain block B3.
S205, the electronic device determines the coordinates of block B3 in the plane of the image frame according to the coordinates of the reference block in the plane of the image frame, the predicted displacement vector of the reference block, and the scaling ratio of the reference block.
According to the explanations of S202 and S203, in conjunction with FIG. 46C, in the case of positional relationship one: if the center point coordinates of block B1 are (x1, y1), then any point (xb1, yb1) on block B1, after scaling about the center point, has coordinates (xb1*Δ1/Δ2 + x1*(Δ2-Δ1)/Δ2, yb1*Δ1/Δ2 + y1*(Δ2-Δ1)/Δ2); adding the scaled coordinates and the predicted displacement vector, combining FIG. 46C, yields the coordinates of block B3 in the plane of the image frame, i.e., B3(xb1*Δ1/Δ2 + x1*(Δ2-Δ1)/Δ2 + (t2-t1)/(t3-t1)*(x2-x1), yb1*Δ1/Δ2 + y1*(Δ2-Δ1)/Δ2 + (t2-t1)/(t3-t1)*(y2-y1)).
Referring to FIG. 46D, in the case of positional relationship two: if the center point coordinates of block B2 are (x2, y2), then any point (xb2, yb2) on block B2, after scaling about the center point, has coordinates (xb2*Δ3/Δ4 + x2*(Δ4-Δ3)/Δ4, yb2*Δ3/Δ4 + y2*(Δ4-Δ3)/Δ4); adding the scaled coordinates and the predicted displacement vector, combining FIG. 46D, yields the coordinates of block B3 in the plane of the image frame, i.e., B3(xb2*Δ3/Δ4 + x2*(Δ4-Δ3)/Δ4 + (t3-t2)/(t2-t1)*(x2-x1), yb2*Δ3/Δ4 + y2*(Δ4-Δ3)/Δ4 + (t3-t2)/(t2-t1)*(y2-y1)).
S206, the electronic device generates image frame f3 according to block B3 and the coordinates of block B3 in the plane of the image frame.
Combining FIG. 46B and FIG. 46C, and referring to FIG. 46G for S204-S206: the electronic device first scales block B1, and the scaled block B1 is denoted block B4; then block B4 is translated according to the predicted displacement vector β2, obtaining block B3 of image frame f3; finally, the plurality of blocks B3 form image frame f3.
Combining FIG. 46B and FIG. 46D, and referring to FIG. 46I for S204-S206: the electronic device first scales block B2, and the scaled block B2 is denoted block B5; then block B5 is translated according to the predicted displacement vector β2, obtaining block B3 of image frame f3; finally, the plurality of blocks B3 form image frame f3.
It should be noted that FIG. 46G and FIG. 46I are provided for better understanding of the present solution; in practice, the electronic device may generate the data of image frame f3 from the data of the reference block, the scaling ratio and the predicted displacement vector β2, and display image frame f3 from the data of image frame f3.
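Putting S202-S206 together, a per-block pipeline could look like the following sketch (nearest-neighbor resampling chosen for brevity; all names illustrative):

```python
import numpy as np

def predict_block(ref_block, ref_center_xy, beta2_xy, area_ratio):
    """Scale the reference block about its center, then translate it.

    area_ratio is the scaling ratio from S203; returns the resampled
    block B3 and its center coordinates in image frame f3.
    """
    s = np.sqrt(area_ratio)                        # side-length scale factor
    h, w = ref_block.shape[:2]
    nh, nw = max(1, round(h * s)), max(1, round(w * s))
    ys = np.clip((np.arange(nh) / s).astype(int), 0, h - 1)
    xs = np.clip((np.arange(nw) / s).astype(int), 0, w - 1)
    scaled = ref_block[np.ix_(ys, xs)]             # nearest-neighbor rescale
    center = (ref_center_xy[0] + beta2_xy[0], ref_center_xy[1] + beta2_xy[1])
    return scaled, center
```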
It can be seen that, in the embodiment of the present application, when generating the predicted image frame (image frame f3) from the original image frames (image frame f1 and image frame f2), the electronic device can scale the blocks in the image frame by taking into account the Z-dimension coordinate perpendicular to the plane of the image frame, so that after the predicted image frame is inserted, the transition between the original image frames and the predicted image frame is more natural; the predicted image frame observed by the user is more consistent with the scene constructed by the electronic device from the data of the application program, improving the visual experience when the predicted image frame is inserted.
Referring to FIG. 47, in the embodiment of the present application, the electronic device may establish a storage space 401 in the cache 1352, where the storage space 401 includes attachments; the type and number of the attachments may be determined according to the actual situation. Combining S3701 to S3705 and S201 to S206, the electronic device may set a type-A attachment and a type-B attachment, and store the corresponding drawing results in them. Specifically, the storage space 401 may refer to a Frame Buffer Object (FBO) in OpenGL. The type-A attachment may be a color attachment (ColorAttachment in OpenGL), which may be used to store the color data of the image frame, for example, the RGB values of the pixels in the image frame; the type-A attachment may further include the coordinates of the pixels in the image frame; for example, in S3702, the electronic device may determine the coordinates of a block from the coordinates of its pixels and determine the predicted displacement vector from the coordinates of the block. The type-B attachment may be a depth attachment (DepthAttachment in OpenGL) storing the depth information of the image frame; the depth information may specifically be the Z-dimension data described in S3701 to S3705 and S201 to S206.
The storage space in the embodiment of the present application is a block of space defined by the electronic device in the cache. The storage space provides the following functions. (1) Storing the drawing results generated according to the data of the application program: specifically, the storage space may include at least one attachment, and each attachment has at least one buffer type, where the buffer types may include a color type, a depth type and a template (stencil) type; the electronic device can store the drawing results to different attachments according to the drawing instructions, finally forming an image frame in the storage space. (2) Storing the image frames composed in the storage space: the electronic device may draw the drawing results into a storage space and then generate an image frame. At least one storage space may be provided for storing the image frames; the electronic device may store different image frames in different storage spaces, e.g., a first image frame in a first storage space and a second image frame in a second storage space. In the embodiment of the present application, the electronic device may generate the predicted image frame in a third storage space from the first image frame in the first storage space and the second image frame in the second storage space.
In this embodiment, the storage space may be located, on the hardware device, in the random access memory of the graphics processor 1351, or in a random access memory communicatively connected to the GPU.
Some common names for the storage space in the embodiments of this application are: frame buffer memory, frame buffer for short, or framebuffer; it should be noted that the storage space may have other names in different programming languages.
A frame buffer includes at least one attachment; an attachment corresponds to at least one buffer type; the buffer types may include: a color type, a depth type, and a stencil type. For example, a frame buffer may include three attachments, specifically: a color-type attachment, a depth-type attachment, and a stencil-type attachment.
For example, the storage space in this embodiment of the application may be referred to as VkFramebuffer in the Vulkan programming language, and as a frame buffer object (FBO) in the Open Graphics Library (OpenGL) programming language.
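As an illustration of the storage space and its attachments described above, the following is a minimal sketch, assuming OpenGL ES 3.0; the function name createFrameBuffer and its parameters are hypothetical and not part of this application:

    #include <GLES3/gl3.h>

    // Create an FBO with a color attachment (type A) and a depth
    // attachment (type B); both are textures held in the cache.
    GLuint createFrameBuffer(GLsizei width, GLsizei height,
                             GLuint* colorTex, GLuint* depthTex) {
        GLuint fbo;
        glGenFramebuffers(1, &fbo);              // create the FBO
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);  // make it the current target

        // Type A attachment: color texture storing RGBA pixel data.
        glGenTextures(1, colorTex);
        glBindTexture(GL_TEXTURE_2D, *colorTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, *colorTex, 0);

        // Type B attachment: depth texture storing per-pixel Z data.
        glGenTextures(1, depthTex);
        glBindTexture(GL_TEXTURE_2D, *depthTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
                     GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_2D, *depthTex, 0);

        glBindFramebuffer(GL_FRAMEBUFFER, 0);    // unbind; the FBO is ready
        return fbo;
    }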
Referring to fig. 48, fig. 48 is another schematic diagram of an electronic device 100 according to an embodiment of this application, to be read in conjunction with fig. 43 and fig. 44A and with the software block diagram of the electronic device 100 shown in fig. 12 above. The electronic device 100 may include an application processor 1353, a graphics processor 1351, and a display device 1350. In this embodiment of the application, the application processor 1353 may run the target application 1301 and call the APIs in the three-dimensional graphics processing framework according to the data and requirements of the target application 1301 at run time, so as to process the application data. The application processor 1353 and the graphics processor 1351 may communicate data with each other: the three-dimensional graphics processing framework in the application processor 1353 may transmit the data of the target application 1301 and the instructions to execute the APIs to a three-dimensional graphics processing server in the graphics processor 1351; in response to the application data and the API execution instructions sent by the three-dimensional graphics processing framework, the three-dimensional graphics processing server may call the three-dimensional graphics processing library 1330 to perform the drawing operations, and finally store the drawn image frames in the cache 1352; the graphics processor 1351 then transfers the buffered image frames to the display device 1350 for display. In this embodiment, the cache 1352 may specifically be a dynamic random access memory in the graphics processor 1351. For example, the three-dimensional graphics processing library 1330 in this example may be OpenGL ES; the three-dimensional graphics processing framework may be the OpenGL ES framework; and the three-dimensional graphics processing server may be the OpenGL ES server.
Before performing the steps S3701 to S106, the electronic device starts the application program and allocates and/or creates storage spaces according to the requirements of the application program; the steps S3701 to S106 are then performed in those storage spaces. For example, the electronic device runs OpenGL with the FBO as the storage space. Before the steps S3701 to S106: when the electronic device starts the application program, the electronic device allocates FBO 0 corresponding to the application program, and then the electronic device calls a create instruction according to the data of the application program to create at least three FBOs. In this embodiment of the application, when the steps S3701 to S106 are executed, the electronic device may create only three FBOs: FBO 1, FBO 2, and FBO 3; FBO 1 is used to store the drawing result corresponding to image frame f1, FBO 2 is used to store the drawing result corresponding to image frame f2, and FBO 3 is used to store the drawing result corresponding to image frame f3. The specific processes of allocating FBO 0 and creating FBO 1, FBO 2, and FBO 3 are shown in fig. 49A, which includes S301 and S302:
S301: the application processor 1353 starts an application program, and the application processor 1353 instructs the graphics processor 1351 to configure FBO 0 for the application program.
S302, the application processor 1353 calls a create instruction for creating a storage space and sends the create instruction to the graphics processor 1351, and the graphics processor 1351 creates FBO 1, FBO 2, and FBO 3.
The create instruction may specifically be the glGenFramebuffers API in OpenGL.
FBO 1, FBO 2, and FBO 3 may be used to store the drawing results corresponding to image frame f1, image frame f2, and image frame f3, respectively.
After the electronic device completes the execution of S301, the electronic device draws and displays image frame f1 or image frame f2; refer to fig. 49B, which takes the electronic device drawing and displaying image frame f1 as an example and includes S401 to S406.
S401, the application processor 1353 invokes a binding instruction for binding the application program with the storage space, and sends the binding instruction to the graphics processor 1351, and the graphics processor 1351 binds the application program with the FBO 1.
The binding instruction may specifically be the glBindFramebuffer API in OpenGL. When the application program is bound to an FBO, the electronic device performs drawing according to the data of the application program, obtains a drawing result, and stores the drawing result in the bound FBO.
S402, the application processor 1353 invokes an attachment addition instruction for adding an attachment in the storage space and sends the attachment addition instruction to the graphics processor 1351, and the graphics processor 1351 adds an a-type attachment and a B-type attachment in the FBO 1.
The attachment addition instruction is used to add an attachment in the FBO, and may specifically be the glFramebufferTexture2D API in OpenGL. The type A attachment may be a color attachment, specifically ColorAttachment in OpenGL; the type B attachment may be a depth attachment, specifically DepthAttachment in OpenGL. The type A attachment stores the color values of the pixels and the coordinate information of the pixels in the image frame, specifically including the coordinates in the plane of the image frame as described in S3702. The type B attachment stores the depth information, specifically including the depth value, which may represent the Z-dimension data perpendicular to the projection plane in the camera coordinate system.
S403: the application processor 1353 calls a drawing instruction for drawing and sends the drawing instruction to the graphics processor 1351; the graphics processor 1351 performs the drawing operation and stores the drawing results in the type A attachment and the type B attachment of FBO 1 to obtain image frame f1.
The drawing instruction may include a plurality of different drawing instructions that implement different functions when the drawing operation is performed. The drawing instruction may specifically be a glDraw API in OpenGL; the glDraw APIs include the glDrawArrays API and the glDrawElements API.
S404: the application processor 1353 calls the attachment binding instruction and sends it to the graphics processor 1351, and the graphics processor 1351 binds the type A attachment and the type B attachment in FBO 1 to FBO 0.
In this embodiment, FBO 0 and FBO K have different functions, where K is a positive integer.
FBO K is used to store the drawing results/data of the application program. The role of FBO 0 is that the electronic device can send the image frames in FBO 0 to the display screen. Specifically, when the electronic device runs an application program, it may create at least one FBO K for the application program according to the data storage requirements of the application program; FBO 0 is assigned to the application program when the application program of the electronic device is started. When the electronic device displays an image frame, the image frame in FBO K first needs to be drawn (copied) into FBO 0; the electronic device can then send the image frame in FBO 0 for display when the vertical synchronization signal arrives, so that the image frame is displayed on the display screen. The electronic device sets at least one FBO 0.
In the case where the electronic device sets two or more FBO 0: the electronic device can display the image frame in one FBO 0 on the display device while drawing an image frame in the other FBO 0, thereby shortening the overall time from drawing to display and improving the efficiency of generating and displaying image frames. For example, the application program of the electronic device is provided with FBO 1, FBO 2, and FBO 3, and FBO 0 of the application program includes FBO 01 and FBO 02. After the electronic device draws the drawing result corresponding to image frame f1 in FBO 1, it draws that drawing result into FBO 01; the electronic device then calls a rotation instruction, which may be the eglSwapBuffers API in OpenGL, to send the drawing result corresponding to image frame f1 in FBO 01 to the display device for display. While the drawing result corresponding to image frame f1 in FBO 01 is being sent to the display device for display, the electronic device can draw the drawing result corresponding to image frame f2 in FBO 2; after completing the drawing for image frame f2, the drawing result corresponding to image frame f2 in FBO 2 is drawn into FBO 02. Then, the electronic device calls the rotation instruction to send the drawing result corresponding to image frame f2 in FBO 02 to the display device for display; meanwhile, the electronic device can draw, according to the drawing result corresponding to image frame f1 in FBO 1 and the drawing result corresponding to image frame f2 in FBO 2, the drawing result corresponding to image frame f3 in FBO 3; after completing the drawing for image frame f3, the drawing result corresponding to image frame f3 in FBO 3 is drawn into FBO 01.
The electronic device binding the type A attachment and the type B attachment in FBO 1 to FBO 0 may be understood as the electronic device taking the type A attachment and the type B attachment in FBO 1 as inputs of FBO 0, so that the subsequent processing is performed according to the inputs of FBO 0 and an image frame that can finally be displayed on the display screen is obtained in FBO 0.
The attachment binding instruction may specifically be a glBindTexture API in OpenGL.
S405: the application processor 1353 calls a drawing instruction and sends it to the graphics processor 1351; the graphics processor 1351 performs a drawing operation on the drawing results in the type A attachment and the type B attachment to generate image frame f1, and stores image frame f1 in FBO 0.
The drawing instruction may specifically be a glDraw API in OpenGL. The drawing operation performed in S405 is different from the drawing operation performed in S403.
S406: the application processor 1353 calls a rotation instruction for displaying image frames and sends it to the graphics processor 1351; the graphics processor 1351 transmits the image frame in FBO 0 to the display screen 340, and the display screen 340 displays the image frame.
The rotation instruction may specifically be the eglSwapBuffers API in OpenGL. When FBO 0 contains two FBOs, the electronic device calls the rotation instruction to alternately display the FBOs in FBO 0. Specifically, FBO 0 includes FBO 01 and FBO 02; if the electronic device currently displays the image frame in FBO 01 and the electronic device calls the rotation instruction eglSwapBuffers API, the electronic device displays the image frame in FBO 02; when the rotation instruction eglSwapBuffers API is called again, the electronic device displays the image frame in FBO 01. Repeating this operation realizes the process of alternately sending and displaying the image frames in FBO 01 and FBO 02.
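As an illustration of S401 to S406, the following is a minimal sketch, assuming OpenGL ES 3.0 with EGL; the handles fbo1, colorTexF1, depthTexF1, sceneProgram, resolveProgram, indexCount, dpy, and surf are hypothetical placeholders assumed to have been created beforehand, and the shader programs themselves are omitted:

    #include <GLES3/gl3.h>
    #include <EGL/egl.h>

    // Sketch of S401 to S406: draw the scene into FBO 1, use its
    // attachments as inputs of FBO 0 (the default framebuffer, id 0),
    // and finally present FBO 0.
    void drawAndDisplayF1(GLuint fbo1, GLuint colorTexF1, GLuint depthTexF1,
                          GLuint sceneProgram, GLuint resolveProgram,
                          GLsizei indexCount,
                          EGLDisplay dpy, EGLSurface surf) {
        // S401: binding instruction (glBindFramebuffer).
        glBindFramebuffer(GL_FRAMEBUFFER, fbo1);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // S403: drawing instruction (glDrawElements); the results land in
        // the type A and type B attachments of FBO 1.
        glUseProgram(sceneProgram);
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, nullptr);

        // S404: attachment binding instruction (glBindTexture); the
        // attachments of FBO 1 become inputs of FBO 0.
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glUseProgram(resolveProgram);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, colorTexF1);
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, depthTexF1);

        // S405: drawing instruction; a full-screen quad samples the
        // attachments and produces image frame f1 in FBO 0.
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

        // S406: rotation instruction (eglSwapBuffers) presents FBO 0.
        eglSwapBuffers(dpy, surf);
    }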
After the electronic device completes the execution of S301, the electronic device draws and displays image frame f3; refer to fig. 49C, which includes S501 to S507. From S3701 to S106 it can be determined that, before generating image frame f3, the electronic device has already finished drawing image frame f1 and image frame f2. In this example, the drawing result corresponding to image frame f1 is stored in FBO 1, and the drawing result corresponding to image frame f2 is stored in FBO 2; the type A attachment corresponding to image frame f1 is denoted Af1, and the type B attachment corresponding to image frame f1 is denoted Bf1; the type A attachment corresponding to image frame f2 is denoted Af2, and the type B attachment corresponding to image frame f2 is denoted Bf2.
S501, the application processor 1353 invokes a binding instruction for binding the application program with the storage space, and sends the binding instruction to the graphics processor 1351, and the graphics processor 1351 binds the application program with the FBO 3.
The binding instruction may specifically be the glBindFramebuffer API in OpenGL.
S502: the application processor 1353 calls the attachment binding instruction and sends it to the graphics processor 1351, and the graphics processor 1351 binds Af1 and Bf1 in FBO 1, and Af2 and Bf2 in FBO 2, to FBO 3.
Binding Af1, Bf1, Af2, and Bf2 to FBO 3 has the following specific function: when the graphics processor 1351 subsequently performs the drawing operation, it takes Af1, Bf1, Af2, and Bf2 as inputs, and the drawing result obtained according to Af1, Bf1, Af2, and Bf2 is stored in FBO 3.
The attachment binding instruction may specifically be a glBindTexture API in OpenGL.
S503: the application processor 1353 calls the attachment addition instruction and sends it to the graphics processor 1351, and the graphics processor 1351 adds a type A attachment, denoted Af3, in FBO 3.
The type A attachment is specifically a color attachment, namely ColorAttachment in OpenGL.
The attachment addition instruction may specifically be the glFramebufferTexture2D API in OpenGL.
S504: the application processor 1353 calls the drawing instruction and sends it to the graphics processor 1351, and the graphics processor 1351 performs the drawing operation according to Af1, Bf1, Af2, and Bf2, and stores the obtained drawing result in Af3.
The drawing instruction may specifically be a glDraw API in OpenGL.
In S504, when the drawing operation is performed according to Af1, Bf1, Af2, and Bf2, the specific algorithm executed may be the algorithms described in S3701 to S106 and S201 to S205.
S505: the application processor 1353 calls the attachment binding instruction and sends it to the graphics processor 1351, and the graphics processor 1351 binds Af3 in FBO 3 to FBO 0.
For the explanation of FBO 0, refer to the description following S404 above.
The electronic device binding Af3 in FBO 3 to FBO 0 may be understood as the electronic device taking Af3 in FBO 3 as the input of FBO 0, so that a drawing operation is subsequently performed according to the input of FBO 0 to obtain an image frame.
The attachment binding instruction may specifically be a glBindTexture API in OpenGL.
S506: the application processor 1353 calls the drawing instruction and sends it to the graphics processor 1351, and the graphics processor 1351 performs a drawing operation according to the drawing result in Af3, generates image frame f3, and stores image frame f3 in FBO 0.
The drawing instruction may specifically be a glDraw API in OpenGL. The drawing operation in S506 is different from the drawing operation in S504.
S507: the application processor 1353 calls a rotation instruction for displaying the image frame and sends it to the graphics processor 1351; the graphics processor 1351 transmits image frame f3 in FBO 0 to the display screen 340, and the display screen 340 displays image frame f3.
The rotation instruction may be an eglSwapBuffers API in OpenGL.
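As an illustration of S501 to S504, the following is a minimal sketch, assuming OpenGL ES 3.0; fbo3, texAf1, texBf1, texAf2, texBf2, and predictProgram are hypothetical placeholder names, and the prediction algorithm inside the shader (the algorithms of S3701 to S106) is not reproduced here:

    #include <GLES3/gl3.h>

    // Sketch of S501 to S504: bind FBO 3 (already carrying Af3) and run a
    // full-screen pass whose shader takes Af1, Bf1, Af2, Bf2 as inputs and
    // writes the predicted colors of f3 into Af3.
    void drawPredictedF3(GLuint fbo3, GLuint texAf1, GLuint texBf1,
                         GLuint texAf2, GLuint texBf2, GLuint predictProgram) {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo3);     // S501: bind FBO 3
        glUseProgram(predictProgram);                // prediction shader (sketch)

        // S502: bind Af1, Bf1, Af2, Bf2 to the texture units sampled by
        // the prediction shader.
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, texAf1);        // color of f1
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, texBf1);        // depth of f1
        glActiveTexture(GL_TEXTURE2);
        glBindTexture(GL_TEXTURE_2D, texAf2);        // color of f2
        glActiveTexture(GL_TEXTURE3);
        glBindTexture(GL_TEXTURE_2D, texBf2);        // depth of f2

        // S504: drawing instruction; the obtained result is stored in Af3.
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }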
The game interface displayed by the electronic device may contain game content and UI controls. Generally, the position and size of the UI controls in the game interface do not change. Therefore, in this embodiment of the application, when the electronic device draws the drawing frames issued by an application, it may draw the UI controls in the first drawing frame, and the subsequent drawing frames may directly reuse the drawing result of the UI controls from the first drawing frame. This avoids repeated drawing and saves power consumption of the electronic device.
In this embodiment of the application, the image frame generated according to the data of the target application program is divided, by definition, into a dynamic layer and a user interface (UI) layer. It should be noted that this division is only for a better understanding of the technical solutions in the embodiments of this application; in practice, the electronic device performs drawing operations according to the data of the target application program and finally generates an image frame without any notion of a dynamic layer or a UI layer.
For example, playing video games on electronic devices has become one of the important forms of entertainment for many users. When running a game application, the electronic device builds a smooth game picture by displaying image frames one after another on the display screen. Referring to fig. 50A, fig. 50A is a schematic diagram of a dynamic layer in a game scene; the dynamic layer may be the game picture in the game scene, which changes continuously according to the actual running state of the game application.
The UI layer defined in this embodiment of the application mainly includes the controls in the display interface; under normal circumstances the position of a control in the display interface is fixed, and the control exists on the display interface throughout the running of the target application, for example, a navigation bar, a window frame, a text box, a button, or a drop-down menu. Specifically, in a game scene, the controls may be the control buttons, setting buttons, and the like in a game application. For example, referring to fig. 50B, fig. 50B is a schematic diagram of a UI layer in a game scene; as shown in the figure, the controls may include a direction indicator, the game duration, the game delay, a game setting button, and game control buttons. It can be seen that these controls are always present on the display screen in the game scene, and only the displayed data changes; for example, the game delay control updates the delay value according to changes in the network conditions, but the game delay control is always displayed on the display screen.
The image frame finally displayed by the electronic device, as shown in fig. 50C, may be composed of the dynamic layer shown in fig. 50A and the UI layer shown in fig. 50B defined in this embodiment of the application.
The storage space in this embodiment of the application is a block of space defined by the electronic device in the cache; the storage space can be regarded as a mounting point, through which the association between a texture image and a rendering object is established. The storage space provides the following functions. (1) Storing a drawing result generated according to the data of the target application program: specifically, the storage space may include at least one attachment, each attachment corresponds to at least one buffer type, and the buffer types may include a color type, a depth type, and a stencil type; the electronic device can store the drawing results in different attachments according to the drawing instructions, finally forming an image frame in the storage space. (2) Storing and composing image frames in the storage space: the electronic device may store the drawing results held in a plurality of storage spaces into the same storage space, and synthesize an image frame in that storage space. (3) Transmitting the stored image frame to the display screen for display when the vertical synchronization signal VSYNC arrives: at least one storage space for storing image frames may be provided; when the number of storage spaces for storing image frames is greater than or equal to 2, the target application program can call a first instruction to rotate the storage spaces that store image frames. For example, if the ninth storage space for storing image frames is composed of a thirteenth storage space and a fourteenth storage space, then when the target application program calls the first instruction, the electronic device may first rotate the thirteenth storage space, which stores an image frame, into the first state; when the thirteenth storage space is in the first state, the image frame in the thirteenth storage space may be transmitted to the display device when the vertical synchronization signal VSYNC arrives, and the display device then displays the image frame. While the thirteenth storage space is in the first state, the electronic device may generate an eighth image frame in the fourteenth storage space according to the drawing instructions of the target application program; when the target application program calls the first instruction again, the electronic device rotates the fourteenth storage space into the first state and draws and generates a ninth image frame in the thirteenth storage space; by repeating these operations, the display of the picture is realized. When a storage space is in the second state, the electronic device can draw or compose an image frame in that storage space according to the instructions of the target application program. In this embodiment of the application, the ninth storage space can implement an off-screen rendering function when it is composed of at least two storage spaces.
When the target application program in the electronic device is started, the electronic device can configure at least one storage space for the target application program. While running the target application program, the electronic device can select one storage space and bind it; the electronic device then stores the drawing results and rendering results obtained by drawing into the bound storage space according to the drawing instructions of the target application program. Specifically, the electronic device may store the obtained drawing results in different storage spaces according to the drawing instructions. In this embodiment of the application, when one storage space is said to be composed of a plurality of storage spaces, it means that those storage spaces share at least one characteristic in this embodiment of the application; for example, in the claims of this embodiment of the application, the seventh storage space is composed of a tenth storage space and an eleventh storage space, which means that the tenth storage space and the eleventh storage space have the same characteristic: both are used to store the drawing result when the drawing instruction satisfies the first condition.
In this embodiment, the location of the storage space on the hardware device may be specifically set in a random access memory of a Graphics Processing Unit (GPU), or may be set in a random access memory communicatively connected to the GPU.
Some common names for the storage space in the embodiments of this application are: frame buffer memory, frame buffer for short, or framebuffer; it should be noted that other names may be used in different programming languages.
A frame buffer includes at least one attachment; an attachment corresponds to at least one buffer type; the buffer types may include: a color type, a depth type, and a stencil type. For example, a frame buffer may include three attachments, specifically: a color-type attachment, a depth-type attachment, and a stencil-type attachment.
For example, the storage space in this embodiment of the application may be referred to as VkFramebuffer in the Vulkan programming language, and as a frame buffer object (FBO) in the Open Graphics Library (OpenGL) programming language. The FBO in OpenGL is described in detail below. In OpenGL, creating a usable FBO requires at least the following conditions: the FBO contains at least one attachment, which may be a color attachment, a depth attachment, or a stencil buffer attachment; at least one of the attachments is a color attachment; and the attachment is already in the cache. After the target application program is started, it can call the glGenFramebuffers application programming interface (API) in OpenGL to generate at least one FBO; the glGenFramebuffers API is specifically used to create and generate FBOs. Then, according to a drawing instruction of the target application program, the glBindFramebuffer API is called to bind the target application program with one of the at least one FBO; the glBindFramebuffer API is specifically used to bind the target application program with an FBO. With the target application program bound to an FBO, the target application program can call the glDrawElements API to perform the drawing operation; the glDrawElements API is a drawing instruction specifically used to perform the drawing operation in the FBO. The drawing result is then stored in the bound FBO. Next, the drawing results already held in the at least one FBO are stored into the rotation FBO, and an image frame is synthesized in that FBO. When the eglSwapBuffers API is received, the rotation FBO is rotated, so that, when it is in the first state, the image frame in the rotation FBO can be sent to the display device when VSYNC arrives. The eglSwapBuffers API is the concrete representation of the first instruction in OpenGL.
In some embodiments, the electronic device may configure the target application with at least one storage space; when the target application program is bound with different storage spaces, the electronic equipment can store different drawing results into different storage spaces according to the drawing instruction.
For example, in this embodiment of the application, the electronic device stores the drawing results into different storage spaces according to the drawing instructions of the target application program: when a drawing instruction satisfies the first condition, the drawing result is stored in the seventh storage space; when a drawing instruction satisfies the second condition, the drawing result is stored in the eighth storage space. In this embodiment of the application, the first condition is that the drawing instruction includes an instruction for enabling the depth test; such a drawing result can present a three-dimensional effect and can be regarded as the dynamic layer shown in fig. 50A. The second condition is that the drawing instruction includes an instruction for disabling the depth test; such a drawing result can only present a two-dimensional effect and can therefore be regarded as presenting the UI layer shown in fig. 50B. Finally, the electronic device mixes the drawing result in the seventh storage space and the drawing result in the eighth storage space in the ninth storage space, that is, the image frame shown in fig. 50C can be synthesized in the ninth storage space.
In this embodiment of the application, when a drawing instruction satisfies the first condition, the drawing operation is performed according to the drawing instruction to obtain a first drawing result; if the first drawing result were presented on the display screen, the presented effect could be the three-dimensional layer/image shown in fig. 50A. When a drawing instruction satisfies the second condition, the drawing operation is performed according to the drawing instruction to obtain a second drawing result; if the second drawing result were finally presented on the display screen, it would present the two-dimensional layer/image shown in fig. 50B. The electronic device finally mixes the drawing results obtained when the drawing instructions satisfy the different conditions to obtain the final image frame, as shown in fig. 50C. Specifically, the drawing process of the electronic device executing the target application program is shown in fig. 50D; it can be seen that, during the drawing process of the target application program, the drawing instructions keep alternating between satisfying the first condition and satisfying the second condition. When every two image frames share one drawing result obtained while the drawing instruction satisfies the second condition, half of the drawing operations performed when the drawing instructions satisfy the second condition can be eliminated when the electronic device generates image frames, reducing the power consumption of the electronic device. It should be noted that, in a game scene, the dynamic layer shown in fig. 50A is the root factor affecting the game experience: when the frame rate of the dynamic layer is high, the user perceives the game picture as smooth; otherwise, the user perceives the game picture as stuttering and considers the game application to be dropping frames. The UI layer shown in fig. 50B changes little in a game scene and is not a root factor of the game experience. Therefore, in this embodiment of the application, sharing one second-condition drawing result between every two frames reduces the power consumption of the electronic device with little or no influence on the game experience. Specifically, in this embodiment of the application, the first condition means that the drawing instruction includes an instruction for enabling the depth test, and the second condition means that the drawing instruction includes an instruction for disabling the depth test.
In this embodiment of the application, one drawing cycle represents the time occupied by the whole drawing process of one image frame from the start to the end of drawing. Specifically, when the electronic device runs the target application program, it can determine, according to the drawing instructions of the target application program, the time points at which an image frame starts and finishes being drawn, and thus determine the drawing cycle of the image frame. Specifically, the time point at which the target application program calls the first instruction can be taken as the time point at which one drawing cycle ends and the next begins; the specific function of the first instruction is explained in the description of the storage space.
The embodiment of the application provides a method for generating image frames, so as to reduce power consumption of electronic equipment when a game application program runs.
For illustration, refer to fig. 50E; fig. 50E is a schematic diagram of one possible way of generating image frames in an embodiment of this application. As shown in fig. 50E:
Step (1): when the electronic device detects that the target application program is started, the electronic device first creates a seventh storage space 5001, an eighth storage space 5002, and a ninth storage space 5003 associated with the target application program;
Step (2): when the drawing instruction of the target application program satisfies the first condition, the drawing result is stored in the seventh storage space 5001 as the first drawing result;
Step (3): when the drawing instruction of the target application program satisfies the second condition, the drawing result is stored in the eighth storage space 5002 as the second drawing result;
Step (4): the first drawing result in the seventh storage space 5001 and the second drawing result in the eighth storage space 5002 are stored into the ninth storage space 5003, and a seventh image frame is synthesized from the first drawing result and the second drawing result in the ninth storage space 5003;
Step (5): when the drawing instruction of the target application program satisfies the first condition, the drawing result is stored in the seventh storage space 5001, overwriting the previous content, as the third drawing result;
Step (6): the third drawing result in the seventh storage space 5001 and the second drawing result in the eighth storage space 5002 are stored into the ninth storage space 5003, and an eighth image frame is synthesized from the stored third drawing result and second drawing result in the ninth storage space 5003.
Steps (2), (3), and (4) may constitute the first drawing cycle; steps (5) and (6) may constitute the second drawing cycle.
It should be noted that, in this example, the ninth storage space may include at least two storage spaces.
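As an illustration of steps (4) and (6), the following is a minimal sketch of composing one image frame in the ninth storage space, assuming OpenGL ES 3.0; fbo9, texDynamic, texUI, and composeProgram are hypothetical placeholder names, and the UI layer is assumed to carry an alpha channel:

    #include <GLES3/gl3.h>

    // Compose one image frame in the ninth storage space (fbo9) from the
    // dynamic-layer result (texDynamic, seventh storage space) and the
    // UI-layer result (texUI, eighth storage space).
    void composeImageFrame(GLuint fbo9, GLuint texDynamic, GLuint texUI,
                           GLuint composeProgram) {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo9);
        glUseProgram(composeProgram);
        glActiveTexture(GL_TEXTURE0);

        // Pass 1: draw the dynamic layer as a full-screen quad, replacing
        // whatever the storage space held before.
        glDisable(GL_BLEND);
        glBindTexture(GL_TEXTURE_2D, texDynamic);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

        // Pass 2: alpha-blend the UI layer on top of the dynamic layer.
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glBindTexture(GL_TEXTURE_2D, texUI);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }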
After the electronic device synthesizes an image frame in the target storage space, the image frame needs to be transmitted to the display screen for display and then deleted, so that the next image frame can be synthesized and stored. For example, in fig. 50E, the electronic device first synthesizes the seventh image frame, then transfers the seventh image frame in the ninth storage space 5003 to the display screen for display, and then empties the ninth storage space 5003 so that the eighth image frame can be synthesized in the ninth storage space 5003. It should be noted that the electronic device may also provide a plurality of storage spaces that together form the ninth storage space 5003; the generated image frames are stored in different ones of those storage spaces, the storage spaces are rotated according to the first instruction, and the image frames in the storage spaces are thus continuously transmitted to the display screen for display. For example, the ninth storage space may be provided with a thirteenth storage space and a fourteenth storage space; when the thirteenth storage space storing the seventh image frame is in the first state, that is, while the seventh image frame is displayed on the display screen of the electronic device, the electronic device can synthesize the eighth image frame in the fourteenth storage space.
Therefore, in this embodiment of the application, the drawing results obtained when the drawing instructions satisfy different conditions are stored separately, part of the drawing results are reused, the amount of data the electronic device processes while the target application program runs is reduced, and the power consumption of the electronic device while the target application program runs is reduced.
As shown in fig. 51A, the method may include:
S5101: during the first drawing cycle, according to the drawing instructions of the target application program, when a drawing instruction satisfies the first condition, the drawing result is stored in the seventh storage space as the first drawing result; when a drawing instruction satisfies the second condition, the drawing result is stored in the eighth storage space as the second drawing result.
The time nodes at which two adjacent drawing cycles end and begin may be the time points at which the target application program calls the first instruction, where the first instruction is used to rotate the storage spaces. For example, when the drawing instructions are instructions in OpenGL, the first instruction may specifically be the eglSwapBuffers API.
In this example, when the electronic device intercepts the first instruction for the first time, the electronic device may create the seventh storage space and the eighth storage space; the seventh storage space is used to store the drawing result when a drawing instruction satisfies the first condition, and the eighth storage space is used to store the drawing result when a drawing instruction satisfies the second condition.
The electronic device is provided with an interception module; the interception module can intercept the instructions of the target application program and perform operations according to the intercepted instructions. When the electronic device intercepts a drawing instruction of the target application program, if the drawing instruction contains an instruction for enabling the depth test, it is determined that the drawing instruction satisfies the first condition; if the drawing instruction contains an instruction for disabling the depth test, it is determined that the drawing instruction satisfies the second condition. When the drawing instruction satisfies the first condition, the electronic device executes the drawing process according to the target application program with the depth test enabled; when the drawing instruction satisfies the second condition, the electronic device executes the drawing process according to the target application program with the depth test disabled.
In this embodiment of the application, the interception module provided in the electronic device can intercept the drawing instructions of the target application program, identify each drawing instruction, and judge whether it satisfies the first condition or the second condition.
The electronic device is provided with a counting unit; the initialized value of the counting unit is the first value, and each time the value of the counting unit is updated, it switches between the first value and the second value. The moment a drawing cycle starts is the moment the counting unit is updated. Within a drawing cycle, if the updated value of the counting unit is the second value, the eighth storage space is emptied, and when a drawing instruction satisfies the second condition, the drawing result is stored in the eighth storage space; if the updated value of the counting unit is the first value, the eighth storage space is not emptied, and when a drawing instruction satisfies the second condition, the drawing is not executed. The time nodes at which two adjacent drawing cycles end and begin may be the time points at which the target application program calls the first instruction; therefore, each time the interception module intercepts the first instruction, the counting unit is updated once. Specifically, the first value may be 0 and the second value may be 1.
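The following is a minimal sketch of this counting logic; all names are hypothetical, and in a real implementation the interception module would be invoked by hooking the application's calls rather than directly:

    // Counting unit kept by the interception module: its value toggles
    // between the first value (0) and the second value (1) each time the
    // first instruction (for example, eglSwapBuffers) is intercepted.
    struct FrameCounter {
        int value = 0;  // initialized to the first value

        // Called at the boundary between two adjacent drawing cycles.
        void onFirstInstruction() { value = 1 - value; }

        // Second value: empty the eighth storage space and redraw the
        // second-condition result; first value: reuse the stored result.
        bool shouldDrawUiLayer() const { return value == 1; }
    };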
If the first drawing result is presented on the display device, the display effect of the final presentation may be as shown in fig. 50A.
S5102, generating a seventh image frame according to the first drawing result and the second drawing result;
The electronic device can synthesize the seventh image frame in the seventh storage space 5001 or in the ninth storage space 5003. If the electronic device synthesizes the seventh image frame in the seventh storage space 5001, the seventh storage space 5001 may be composed of at least two storage spaces; if the electronic device synthesizes the seventh image frame in the ninth storage space 5003, the ninth storage space may be composed of at least two storage spaces. The electronic device can rotate the storage spaces storing the image frames, so that the image frames in them can be transmitted to the display device and then displayed by the display device.
Referring to fig. 51B, fig. 51B is a schematic diagram of synthesizing a seventh image frame in the ninth storage space 5003 according to the first drawing result in the seventh storage space 5001 and the second drawing result in the eighth storage space 5002.
Referring to fig. 51C, fig. 51C is a schematic diagram of synthesizing a seventh image frame in the seventh storage space 5001 according to the first rendering result in the seventh storage space 5001 and the second rendering result in the eighth storage space 5002.
Wherein the effect of the seventh image frame finally rendered on the display device may be as shown in fig. 50C.
S5103: during the second drawing cycle, according to the drawing instructions of the target application program, when a drawing instruction satisfies the first condition, the drawing result is stored in the seventh storage space as the third drawing result; when a drawing instruction satisfies the second condition, no drawing is performed;
if the third drawing result is presented on the display device, the display effect of the final presentation may be as shown in fig. 51D;
S5104: an eighth image frame is generated according to the third drawing result and the second drawing result.
Wherein the effect of the eighth image frame finally rendered on the display device may be as shown in fig. 51E.
As described in S5102, the eighth image frame may be synthesized in the seventh storage space 5001 or in the ninth storage space 5003.
As can be seen, in the above example, during the first drawing cycle the electronic device separately stores the first drawing result obtained when the drawing instruction satisfies the first condition and the second drawing result obtained when the drawing instruction satisfies the second condition, and generates the seventh image frame from the first drawing result and the second drawing result; during the second drawing cycle, the electronic device only needs to draw when the drawing instruction satisfies the first condition, obtaining the third drawing result, reuses the second drawing result, and generates the eighth image frame from the second drawing result and the third drawing result. This eliminates the drawing performed when the drawing instruction satisfies the second condition in the second drawing cycle, reducing the amount of data the electronic device processes and the power consumption of the electronic device.
The following is an explanation of the flow shown in fig. 51A with reference to one possible example.
Referring to fig. 51F, fig. 51F includes the seventh storage space 5001, the eighth storage space 5002, the ninth storage space 5003, and the counting unit 5104. The electronic device draws one image frame in each drawing cycle. As shown in the figure, the initial value of the counting unit is the first value, denoted 0 in the figure. When the target application program is started, the interception module intercepts the target application program's first call to the first instruction; the value of the counting unit 5104 is updated to the second value, denoted 1 in the figure, and the first drawing cycle begins. In the first drawing cycle, when the drawing instruction satisfies the first condition, the first drawing result is stored in the seventh storage space 5001; when the drawing instruction satisfies the second condition, since the value of the counting unit is the second value, the second drawing result is stored in the eighth storage space 5002. Then, the seventh image frame is synthesized in the ninth storage space 5003 from the first drawing result and the second drawing result. When the interception module intercepts the target application program's call to the first instruction again, the value of the counting unit 5104 is updated to the first value, and the second drawing cycle begins. In the second drawing cycle, when the drawing instruction satisfies the first condition, the third drawing result is stored in the seventh storage space 5001; when the drawing instruction satisfies the second condition, since the value of the counting unit is the first value, the drawing operation is not executed. Then, the eighth image frame is synthesized in the ninth storage space 5003 from the third drawing result and the second drawing result. The ninth image frame and the tenth image frame can be obtained in the same way. As can be seen, in this example, the value of the counting unit determines whether the drawing operation is performed when the drawing instruction satisfies the second condition in the current drawing cycle. Specifically, the first value may be 0 and the second value may be 1. It should be noted that the values of the counting unit can be adjusted according to the specific situation. For example, the counting unit can switch through the sequence 0, 1, ..., J, ..., N, where N is a positive integer greater than or equal to 2, with an initial value of 0. In a drawing cycle, if the updated value of the counting unit is 1, the eighth storage space is emptied at the start of the drawing cycle, and when the drawing instruction satisfies the second condition, the drawing result is stored in the eighth storage space; if the updated value of the counting unit is any value other than 1, the eighth storage space is not emptied at the start of the drawing cycle, and when the drawing instruction satisfies the second condition, the drawing is not executed.
Referring to fig. 52, fig. 52 is a schematic diagram of module interaction provided in an embodiment of this application; fig. 52 includes a target application 1301, an interception module 5201, a storage space configuration module 5202, a cache 1352, and a display device 1350. This example may be read in conjunction with fig. 13.
The interception module 5201 may be located in the three-dimensional graphics processing library 1330 shown in fig. 13. The interception module 5201 is configured to intercept the drawing instructions of the target application 1301 and, in response to intercepting a drawing instruction, call the APIs in the three-dimensional graphics processing library 1330 to complete the processing of the data of the target application. It should be noted that this embodiment of the application does not limit the position of the interception module in the software framework; the interception module may also be located between the application framework layer and the system library, or in the application framework layer.
The storage space configuration module 5202 may be located in the three-dimensional graphics processing library 1330 shown in fig. 13; the storage space configuration module can implement the configuration of the storage spaces by calling the functions/instructions/APIs in the three-dimensional graphics processing library 1330.
The display device 1350 is used to display the combined image frames.
The functions of some of the functions/instructions/APIs are explained below. Binding instruction: used to bind the target application program with a storage space, so that the drawing results are stored in the bound storage space. Drawing instruction: used to perform the drawing operations according to the target application 1301. First instruction: used to rotate the storage space storing an image frame into a state in which the display device 1350 can read it; the first instruction can serve as the time node at which two adjacent drawing cycles end and begin. Specifically, when the three-dimensional graphics processing library 1330 is OpenGL, the binding instruction may be the glBindFramebuffer API, the drawing instruction may be the glDrawElements API, and the first instruction may be the eglSwapBuffers API.
The image frame generation method in the embodiment of the present application may include the steps as shown in fig. 52:
Step (1): when the target application 1301 is started, the electronic device creates a seventh storage space 5001 and an eighth storage space 5002.
The seventh storage space 5001 is used to store the drawing results obtained when a drawing instruction satisfies the first condition;
the eighth storage space 5002 is used to store the drawing results obtained when a drawing instruction satisfies the second condition.
Step (2): the electronic device intercepts a binding instruction of the target application 1301 and first binds the target application 1301 with the seventh storage space 5001.
Step (3): the electronic device intercepts the drawing instructions and, according to each drawing instruction, stores the drawing result obtained during drawing into the seventh storage space 5001 when the drawing instruction satisfies the first condition, and into the eighth storage space 5002 when the drawing instruction satisfies the second condition. The electronic device can store the drawing result in the seventh storage space 5001 and the drawing result in the eighth storage space 5002 into the ninth storage space 5003, and synthesize the seventh image frame in the ninth storage space 5003.
Specifically, when a drawing instruction includes an instruction for enabling the depth test, it is determined that the drawing instruction satisfies the first condition; the electronic device calls the binding instruction to bind the target application program with the seventh storage space 5001, so that the drawing result generated according to the data of the target application program is stored in the seventh storage space 5001. When a drawing instruction includes an instruction for disabling the depth test, it is determined that the drawing instruction satisfies the second condition; the electronic device calls the binding instruction to bind the target application program with the eighth storage space 5002, so that the drawing result generated according to the data of the target application program is stored in the eighth storage space 5002.
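The following is a minimal sketch of this routing, reusing the FrameCounter type sketched above; the function name, fboDynamic, fboUI, and dropDraws are hypothetical and not from this application:

    #include <GLES3/gl3.h>

    // Hypothetical hook run by the interception module when it sees a
    // depth-test instruction: route the subsequent drawing either to the
    // seventh storage space (dynamic layer) or, in alternate drawing
    // cycles only, to the eighth storage space (UI layer).
    void onDepthTestInstruction(bool enable, GLuint fboDynamic, GLuint fboUI,
                                const FrameCounter& counter, bool* dropDraws) {
        if (enable) {
            // First condition: the dynamic layer is drawn in every cycle.
            glEnable(GL_DEPTH_TEST);
            glBindFramebuffer(GL_FRAMEBUFFER, fboDynamic);  // seventh space
            *dropDraws = false;
        } else {
            // Second condition: draw only when the counting unit holds
            // the second value; otherwise reuse the stored result.
            glDisable(GL_DEPTH_TEST);
            if (counter.shouldDrawUiLayer()) {
                glBindFramebuffer(GL_FRAMEBUFFER, fboUI);   // eighth space
                glClear(GL_COLOR_BUFFER_BIT);               // empty it first
                *dropDraws = false;
            } else {
                *dropDraws = true;  // intercepted glDraw* calls are skipped
            }
        }
    }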
Step (4): the electronic device intercepts the first instruction and rotates the ninth storage space 5003 into the state in which its image frame can be transmitted to the display device 1350, so that the seventh image frame is transmitted to the display device 1350 for display when the vertical synchronization signal arrives.
It should be noted that the time point when the electronic device intercepts the first instruction may represent time nodes when two adjacent drawing cycles end and start.
Step (5): the electronic device intercepts the drawing instructions and, according to each drawing instruction, stores the drawing result obtained during drawing into the seventh storage space 5001, overwriting the previous content, when the drawing instruction satisfies the first condition, and does not execute drawing when the drawing instruction satisfies the second condition. The electronic device may store the drawing result in the seventh storage space 5001 and the drawing result in the eighth storage space 5002 into the ninth storage space 5003, and synthesize the eighth image frame in the ninth storage space 5003.
Step (6): the electronic device intercepts the first instruction and rotates the ninth storage space 5003 into the state in which its image frame can be transmitted to the display device 1350, so that the eighth image frame is transmitted to the display device 1350 for display when the vertical synchronization signal arrives.
It should be noted that the ninth storage space may be composed of a plurality of storage spaces, that is, the storage spaces storing the seventh image frame and the eighth image frame may be different. In step (4), the electronic device rotates the storage space in the ninth storage space storing the seventh image frame into the state in which the image frame can be transmitted to the display device 1350; in step (6), the electronic device rotates the storage space in the ninth storage space storing the eighth image frame into that state; the storage space storing the seventh image frame and the storage space storing the eighth image frame are not the same storage space.
Currently, when an electronic device runs a target application program and the frame rate of the display picture corresponding to the target application program is to be increased, there are two common methods. One method is to generate more actually drawn image frames per unit time according to the data of the target application program, and then generate more display pictures per unit time from those image frames, thereby increasing the frame rate. The other method is for the electronic device to generate a predicted image frame from at least two actually drawn image frames; the electronic device then generates the display pictures on the display screen according to both the actually drawn image frames and the predicted image frames, and the addition of the predicted image frames increases the frame rate.
The following embodiment describes another method for generating an image frame provided by the embodiment of the present application, in order to reduce power consumption of an electronic device in the case of inserting a predicted frame. As shown in fig. 53A, the method may include:
s5301, during a first drawing cycle, storing, by the electronic device, a drawing result in a seventh storage space as a first drawing result when the drawing instruction satisfies a first condition, and storing, by the electronic device, the drawing result in an eighth storage space as a second drawing result when the drawing instruction satisfies a second condition, according to a drawing instruction of a target application program; and generating a seventh image frame according to the first drawing result and the second drawing result.
S5302, during a third drawing cycle, storing, by the electronic device, the drawing result in a seventh storage space as a fourth drawing result when the drawing instruction satisfies a first condition, and storing, by the electronic device, the drawing result in an eighth storage space as a fifth drawing result when the drawing instruction satisfies a second condition, where the third drawing cycle is located after the first drawing cycle, according to the drawing instruction of the target application program; and generating a ninth image frame according to the fourth drawing result and the fifth drawing result.
S5303, during a fourth drawing cycle, generating, by the electronic device, a sixth drawing result according to the first drawing result and the fourth drawing result; generating a tenth image frame according to the sixth drawing result and the seventh drawing result; the seventh drawing result is a drawing result obtained when the drawing instruction of the target application satisfies the second condition in a fifth drawing cycle, which is prior to the fourth drawing cycle.
The specific process of generating the sixth drawing result according to the first drawing result and the fourth drawing result may include the following steps:
It should be noted that, for convenience of describing the process of generating the sixth drawing result from the first drawing result and the fourth drawing result, fig. 53B to fig. 53D schematically show the display effects that the first drawing result, the fourth drawing result, and the sixth drawing result may present in this example: fig. 53B shows the effect the first drawing result may present; fig. 53C shows the effect the fourth drawing result may present; fig. 53D shows the effect the sixth drawing result may present.
S11: the electronic device executes a block matching algorithm on the first drawing result and the fourth drawing result, and determines the blocks in the first drawing result and the fourth drawing result that match each other;
Please refer to fig. 53B, where the position coordinate of the first target block is (x1, y1); the first target block shows a first area on the car. Please refer to fig. 53C, where the position coordinate of the second target block is (x2, y2); the second target block shows the same first area on the car as the first target block. Since the first target block and the second target block represent the same area on the car, they are blocks that match each other.
S12: the electronic device calculates a motion vector for each pair of matched blocks, obtaining a motion vector corresponding to each block in the fourth drawing result;
Referring to fig. 53C, the motion vector of the second target block relative to the first target block is (x2 - x1, y2 - y1); that is, the motion vector corresponding to the second target block is (x2 - x1, y2 - y1);
S13: the electronic device generates the position coordinate of each block in the sixth drawing result according to the motion vector and the position coordinate corresponding to each block in the fourth drawing result, thereby obtaining the sixth drawing result;
Referring to fig. 53D, a third target block is also used to represent the first area on the car; the coordinates of the third target block are (x3, y3), obtained by adding the coordinates (x2, y2) of the second target block and the motion vector (x2 - x1, y2 - y1): x3 = x2 + (x2 - x1); y3 = y2 + (y2 - y1).
It should be noted that fig. 53B-53D only exemplify target blocks representing the first area of the automobile, and in the process of actually generating the sixth rendering result, the electronic device may traverse each block in the first rendering result and the fourth rendering result, execute the above-mentioned processing flow of obtaining the third target block according to the first target block and the second target block for each block, and finally obtain coordinates of each block in the sixth rendering result, so as to generate the sixth rendering result.
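For concreteness, the following is a minimal C++ sketch of the block matching and extrapolation flow of S11-S13. The SAD (sum of absolute differences) matching criterion, the single-channel image layout, the search range, and all names are illustrative assumptions; the patent does not fix a particular matching algorithm.

#include <cstdint>
#include <cstdlib>
#include <climits>
#include <vector>

struct Image { int w, h; std::vector<uint8_t> px; };  // single-channel image for brevity

// Sum of absolute differences between the block of `a` at (ax, ay) and the
// block of `b` at (bx, by); `bs` is the block size in pixels.
static long sad(const Image& a, int ax, int ay,
                const Image& b, int bx, int by, int bs) {
    long s = 0;
    for (int dy = 0; dy < bs; ++dy)
        for (int dx = 0; dx < bs; ++dx)
            s += std::abs(a.px[(ay + dy) * a.w + ax + dx] -
                          b.px[(by + dy) * b.w + bx + dx]);
    return s;
}

// S11-S13 for one block: find the block (x1, y1) of the first drawing result
// that matches the block (x2, y2) of the fourth drawing result, derive the
// motion vector (x2 - x1, y2 - y1), and copy the block into the sixth drawing
// result at the extrapolated position (x3, y3) = (2*x2 - x1, 2*y2 - y1).
void extrapolateBlock(const Image& first, const Image& fourth, Image& sixth,
                      int x2, int y2, int bs, int range) {
    int x1 = x2, y1 = y2;
    long best = LONG_MAX;
    for (int oy = -range; oy <= range; ++oy)
        for (int ox = -range; ox <= range; ++ox) {
            int cx = x2 + ox, cy = y2 + oy;
            if (cx < 0 || cy < 0 || cx + bs > first.w || cy + bs > first.h) continue;
            long s = sad(fourth, x2, y2, first, cx, cy, bs);
            if (s < best) { best = s; x1 = cx; y1 = cy; }
        }
    int x3 = 2 * x2 - x1, y3 = 2 * y2 - y1;          // x3 = x2 + (x2 - x1)
    if (x3 < 0 || y3 < 0 || x3 + bs > sixth.w || y3 + bs > sixth.h) return;
    for (int dy = 0; dy < bs; ++dy)
        for (int dx = 0; dx < bs; ++dx)
            sixth.px[(y3 + dy) * sixth.w + x3 + dx] =
                fourth.px[(y2 + dy) * fourth.w + x2 + dx];
}

Traversing each block of the fourth drawing result with such a routine, as described above, yields the sixth drawing result.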
As can be seen, in this example, the electronic device reuses, every other image frame, the drawing result obtained when the drawing instruction meets the second condition, which reduces the power consumption of the electronic device; meanwhile, when the electronic device generates the tenth image frame to be inserted, calculation does not need to be performed over all the data of the seventh image frame and the ninth image frame, so the data processing amount of the electronic device in the drawing process is reduced, and the power consumption of the electronic device in the image frame generation process is reduced.
The electronic device reduces the number of drawing operations for the UI layer defined in the embodiment of the present application, and thus reduces its power consumption. Meanwhile, the electronic device can predict a dynamic layer and synthesize an image frame from the predicted dynamic layer and the UI layer, achieving the effect of frame interpolation and improving the frame rate and fluency when the electronic device displays images according to these image frames. Specifically, the scheme can be applied to a game scene: when the electronic device runs a game application program, it can reuse the data corresponding to the UI layer, reducing power consumption while the game application runs; meanwhile, the electronic device can predict and generate the next dynamic layer from two dynamic layers drawn according to the actual game running data, and then synthesize the image frame of the game application program according to the predicted dynamic layer, thereby realizing frame interpolation in the game scene and improving the refresh rate of the displayed image when the electronic device runs the game application program.
The following explains the flow shown in fig. 51A and 53A with reference to one possible example.
Referring to fig. 53E, fig. 53E is a schematic diagram illustrating ten image frames being generated; the seventh image frame, the eighth image frame, and so on are indicated by numerals 1, 2, 3, 4, ... in fig. 53E.
The seventh and eighth image frames, and the ninth and tenth image frames, may be considered to be generated by the flow shown in fig. 51A. In fig. 53E, the first, third, fourth, sixth, seventh, tenth, and thirteenth drawing results are all drawing results obtained when the drawing instruction satisfies the first condition; the second, fifth, eighth, eleventh, and fourteenth drawing results are drawing results obtained when the drawing instruction satisfies the second condition. A ninth drawing result is generated from the fourth and seventh drawing results; a twelfth drawing result is generated from the seventh and tenth drawing results; and a fifteenth drawing result is generated from the tenth and thirteenth drawing results.
The following is explained with respect to the flow shown in fig. 53A in conjunction with another possible example.
Referring to fig. 53F, fig. 53F is a schematic diagram illustrating the generation of six image frames. In this example, the first, third, sixth, and eighth drawing results are drawing results obtained when the drawing instruction satisfies the first condition; the second, fourth, seventh, and ninth drawing results are drawing results obtained when the drawing instruction satisfies the second condition. A fifth drawing result is generated from the first and third drawing results; a tenth drawing result is generated from the sixth and eighth drawing results.
In the example shown in fig. 53F, the counting unit may cycle through three values: a first value, a second value, and a third value, switching in order from the first value to the second value to the third value and then back. Specifically, the counting unit has an initial value of the first value; when it enters the first drawing period, it updates to the second value; in the next drawing period it updates to the third value; in the period after that it returns to the first value, and so on. When the value of the counting unit is the first value, the drawing operation is not executed when a drawing instruction meets the second condition; when the value is the second value or the third value, the drawing result is stored in the eighth storage space when the drawing instruction meets the second condition. Specifically, the first value may be 0, the second value may be 1, and the third value may be 2. A minimal sketch of this counter follows.
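The following C++ sketch assumes the values 0, 1, and 2 named above; the structure and method names are illustrative, not part of the patent.

struct CountingUnit {
    int value = 0;                                        // initial value: the first value (0)
    void enterDrawingCycle() { value = (value + 1) % 3; } // 0 -> 1 -> 2 -> 0 -> ...
    // Whether a drawing instruction that satisfies the second condition should be
    // executed (and its result stored in the eighth storage space) in this cycle.
    bool executeSecondCondition() const { return value == 1 || value == 2; }
};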
Referring to fig. 53G, fig. 53G illustrates a possible schematic diagram of a drawing process. Fig. 53G includes a seventh storage space 5001, an eighth storage space 5002, a ninth storage space 5003, a tenth storage space 5004, and an eleventh storage space 5005. In the first drawing cycle, according to a drawing instruction of the target application program, when the drawing instruction meets the first condition, the first drawing result is stored in the seventh storage space 5001; when the drawing instruction meets the second condition, the second drawing result is stored in the eighth storage space 5002; and a seventh image frame is generated in the ninth storage space 5003 from the first drawing result and the second drawing result. In the second drawing cycle, when the drawing instruction satisfies the first condition, the third drawing result is stored in the tenth storage space 5004; when the drawing instruction meets the second condition, drawing is not executed; and an eighth image frame is generated in the ninth storage space 5003 from the third drawing result and the second drawing result. In the third drawing cycle, the first drawing result in the seventh storage space 5001 and the third drawing result in the tenth storage space 5004 are stored in the eleventh storage space 5005; a fourth drawing result is generated from the first drawing result and the third drawing result in the eleventh storage space 5005; and a ninth image frame is generated from the fourth drawing result and the second drawing result.
Referring to fig. 53H, fig. 53H illustrates another possible schematic diagram of a drawing process. Fig. 53H includes a seventh storage space 5001, an eighth storage space 5002, a ninth storage space 5003, a tenth storage space 5004, and an eleventh storage space 5005. In the first drawing cycle, according to a drawing instruction of the target application program, when the drawing instruction satisfies the first condition, the first drawing result is stored in the seventh storage space 5001; when the drawing instruction satisfies the second condition, the second drawing result is stored in the eighth storage space 5002; and a seventh image frame is generated in the ninth storage space 5003 from the first drawing result and the second drawing result. In the second drawing cycle, when the drawing instruction satisfies the first condition, the third drawing result is stored in the tenth storage space 5004; when the drawing instruction satisfies the second condition, the fourth drawing result is stored in the eighth storage space 5002; and an eighth image frame is generated in the ninth storage space 5003 from the third drawing result and the fourth drawing result. In the third drawing cycle, the first drawing result in the seventh storage space 5001 and the third drawing result in the tenth storage space 5004 are stored in the eleventh storage space 5005; a fifth drawing result is generated from the first drawing result and the third drawing result in the eleventh storage space 5005; and a ninth image frame is generated in the ninth storage space 5003 from the fourth drawing result in the eighth storage space 5002 and the fifth drawing result in the eleventh storage space 5005.
Referring to fig. 53I, fig. 53I is another possible example of the process shown in fig. 53A. Fig. 53I illustrates the generation of eight image frames; the seventh image frame, the eighth image frame, and so on are indicated by numerals 1, 2, 3, 4, ..., 8 in fig. 53I.
In fig. 53I, the first, third, fourth, seventh, and tenth drawing results are all drawing results obtained when the drawing instruction satisfies the first condition; the second, fifth, eighth, and eleventh drawing results are all drawing results obtained when the drawing instruction satisfies the second condition. A sixth drawing result is generated from the first and fourth drawing results; a ninth drawing result is generated from the fourth and seventh drawing results; and a twelfth drawing result is generated from the seventh and tenth drawing results.
In the embodiment of the present application, each frame of image that the electronic device uses for display on the display screen is called an image frame. In the embodiment of the present application, an image frame may be a frame image of a certain application, may be a drawing result drawn by the electronic device according to a drawing instruction of the application, and may also be a prediction result predicted from existing drawing results. As shown in fig. 54A, the electronic device (i.e., tablet computer 10) displays a user interface 5400. At time T0, the Nth image frame is displayed in the user interface 5400. The Nth image frame is a drawing frame. The timing diagram 5401 in fig. 54A shows the image frames that the electronic device can display from time T0 to time Tn.
It will be appreciated that the size of an image frame, a drawing frame, or a predicted frame corresponds to the display size of the application to which it belongs. For example, as shown in fig. 54A, the display size in the tablet computer 10 of the application to which the Nth image frame belongs is: width L and height H. Then the size of the Nth image frame may be: width L and height H. As shown in fig. 54B, the display size in the tablet computer 10 of the application to which the image frame 5402 belongs is: width L0 and height H. Then the size of the image frame 5402 may be: width L0 and height H. As shown in fig. 54C, an image frame 5403 is shown together with a control bar 5404. The control bar 5404 can include a control 5405. The control bar 5404 may be drawn by the system of the tablet computer 10. The display size in the tablet computer 10 of the application to which the image frame 5403 belongs is: width L1 and height H0. Then the size of the image frame 5403 may be: width L1 and height H0.
To improve the frame rate and improve the fluency of the video, the electronic device may insert prediction frames between the rendered frames of the application. The electronic device may perform image frame prediction based on the applied rendering frame to obtain a predicted frame. As shown in fig. 54A, the electronic device may insert a frame of predicted image frames between every two rendered frames. In this way, the frame rate of the image frames displayed by the electronic device may be increased.
It is understood that the drawing result drawn by the electronic device according to the drawing instruction of the application program may be an image frame drawn and rendered by the electronic device according to a frame of image captured by the camera. Objects in an image frame may move due to positional changes of the camera, such as movement and rotation. For example, fig. 55A exemplarily shows the camera 5500 photographing the cylinder 5502, the cylinder 5503, and the cuboid 5504 at different positions; one frame captured in the shooting field of view of the camera 5500 may be an image frame. Fig. 55B exemplarily shows the camera shooting field of view 5501 of the camera 5500. As shown in fig. 55A, at position 1, one frame captured in the camera field of view 5505 of the camera 5500 may be image frame 5508 in fig. 55A, i.e., drawing frame A. At position 2, one frame captured in the camera field of view 5506 of the camera 5500 may be image frame 5509 in fig. 55A, i.e., drawing frame B. The electronic device may predict the image frame 5510 of the camera 5500 at position 3, i.e., the predicted frame 5510, from drawing frame A and drawing frame B. Because the predicted frame is predicted from drawing frame B and drawing frame A, the objects included in the predicted frame are the same as the objects included in drawing frame B. Drawing frame A includes the cylinder 5502 and the cuboid 5504, while drawing frame B includes only the cuboid 5504; therefore, only the cuboid 5504 is included in the predicted frame. If drawing were performed normally, at position 3 the drawing contents drawn by the electronic device would be the objects that can be captured by the shooting field of view 5507 of the camera 5500. At position 3, the field of view 5507 of the camera 5500 includes a partial region of the cuboid 5504 and a partial region of the cylinder 5503. The cylinder 5503, which is not present in drawing frame B, cannot be predicted in the predicted frame. This results in an inaccurate predicted frame.
In order to make the predicted image frame more accurate, an embodiment of the present application provides a method for predicting an image frame, which may include: when drawing a twenty-first drawing frame of a first application, the electronic device executes the drawing instruction of the twenty-first drawing frame within a first drawing range to obtain a twenty-first drawing result, where the size of the first drawing range is larger than the size of the twenty-first drawing frame of the first application; when drawing a twenty-second drawing frame of the first application, the electronic device executes the drawing instruction of the twenty-second drawing frame within a second drawing range to obtain a twenty-second drawing result, where the size of the second drawing range is larger than the size of the twenty-second drawing frame, and the size of the twenty-first drawing frame is the same as the size of the twenty-second drawing frame; and the electronic device predicts and generates a twenty-third predicted frame of the first application according to the twenty-first drawing result and the twenty-second drawing result, where the size of the twenty-third predicted frame is the same as the size of the twenty-first drawing frame. In this way, by drawing the twenty-first and twenty-second drawing frames within drawing ranges of a larger size, the electronic device can draw more drawing content than is shown in the displayed twenty-first and twenty-second drawing frames, and can thereby obtain a predicted frame. Without increasing the number of drawn frames, the frame rate of the electronic device can be improved, so the fluency of the video interface displayed by the electronic device can be improved while saving power. Further, the predicted frame may contain drawing content that is not present in the displayed twenty-first and twenty-second drawing frames; the drawing content in the predicted frame is thus closer to the content within the shooting field of view of the camera, and the image frames predicted by the electronic device may be more accurate.
It is understood that the drawing instruction of the U-th drawing frame may include the drawing content, and when the drawing range is relatively small, the U-th drawing frame drawn and rendered by the GPU may include only a portion of the drawing content included in the drawing instruction of the U-th drawing frame. When the drawing range is larger, the GPU can draw and render more drawing contents contained in the drawing instruction of the U-th drawing frame. For example, if the drawing range is a region having an abscissa of from 50 to 100 and an ordinate of from 50 to 100, one of the drawing instructions of the U-th drawing frame draws an object having an abscissa of from 10 to 60 and an ordinate of from 10 to 60. Then only the portion of the drawing content having the abscissa from 50 to 60 and the ordinate from 50 to 60 is rendered into the drawing range. If the drawing range is expanded to a region having an abscissa from 0 to 150 and an ordinate from 0 to 150, the drawing contents may be rendered in full within the drawing range.
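The clipping in the numeric example above amounts to a rectangle intersection; the following tiny C++ helper illustrates it (the names and the axis-aligned rectangle type are assumptions for illustration):

#include <algorithm>

struct Rect { int x0, y0, x1, y1; };                 // axis-aligned, spans [x0,x1] x [y0,y1]

// Part of `object` that falls inside `range` and is therefore rendered.
Rect visiblePart(const Rect& object, const Rect& range) {
    return { std::max(object.x0, range.x0), std::max(object.y0, range.y0),
             std::min(object.x1, range.x1), std::min(object.y1, range.y1) };
}
// visiblePart({10, 10, 60, 60}, {50, 50, 100, 100}) -> {50, 50, 60, 60}:
// only the part with coordinates from 50 to 60 is rendered; enlarging the
// range to {0, 0, 150, 150} yields the full object {10, 10, 60, 60}.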
In the embodiment of the present application, the U-th drawing frame may be referred to as a twenty-first drawing frame, the U + 2-th drawing frame may be referred to as a twenty-second drawing frame, and the U + 3-th prediction frame may be referred to as a twenty-third prediction frame.
A method for image frame prediction according to an embodiment of the present application will be described in detail below with reference to the accompanying drawings. Fig. 56 is a flowchart illustrating a method for image frame prediction according to an embodiment of the present application. As shown in fig. 56, a method for image frame prediction provided by an embodiment of the present application may include the following steps:
S5601-S5602, the electronic device begins to perform an image frame prediction method.
S5601, when the target application starts to draw, the CPU of the electronic device sends an instruction to instruct the GPU to create a memory space.
The target application is an application with animation effects in the user interface, such as a game-like application. The embodiments of the present application are described below by taking a target application as an example of a game application. When a game application installed in the electronic device runs, the CPU of the electronic device sends an instruction to the GPU instructing the GPU to create a memory space. Specifically, the instruction may carry information about the number and size of the created memory space.
S5602, based on the size of the default memory space (FBO0) of the electronic device, the GPU of the electronic device creates in memory a twenty-first memory space (FBO1), a twenty-second memory space (FBO2), and a twenty-third memory space (FBO3), each of which is larger than the default memory space (FBO0).
In response to the instruction sent by the CPU, the GPU creates, based on the size of the default memory space of the electronic device, a twenty-first memory space (FBO1), a twenty-second memory space (FBO2), and a twenty-third memory space (FBO3) in memory, each having a size larger than the default memory space. In the embodiment of the present application, the default memory space is a memory space provided by a system (e.g., a rendering system) of the electronic device for storing image frames for display. The default memory space may store image frames for display of the target application program, and may also store image frames for display of other application programs. As shown in fig. 57, the default memory space may include a plurality of attachments with consecutive logical addresses. For example, the default memory space shown in fig. 57 may include n attachments such as attachment 5701, attachment 5702, ..., attachment 570n. Typically, n is less than or equal to 3. Each of the n attachments of the default memory space may have a width of L and a height of H. In this embodiment, the width of the default memory space may be referred to as a third size, and the height of the default memory space may be referred to as a fourth size. It is understood that the width of the default memory space refers to the width of each attachment in the default memory space, and the height of the default memory space may refer to the height of each attachment in the default memory space.
Optionally, the n attachments in the default memory space may include a color attachment and a depth attachment. For example, the attachment 5701 may be a color attachment. In this embodiment, the color attachment is a memory used to store the color data (for example, the RGB values of the pixels) of each pixel in a drawing result when the electronic device draws according to the drawing instruction. The color attachment may be part of the FBO (frame buffer object). In this embodiment, the depth attachment is a memory used to store the depth data of each pixel point in a drawing result when the electronic device draws according to the drawing instruction. The depth attachment may also be part of the FBO. It can be appreciated that the smaller the depth value of a pixel point in the depth attachment, the closer it is to the camera. When synthesizing an image frame, for two pixel points with equal coordinate values in two color attachments, the pixel point with the smaller depth value covers the pixel point with the larger depth value; that is, the color finally shown on the display screen is the color of the pixel point with the smaller depth value of the two color attachments. A per-pixel illustration of this rule is sketched below.
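This C++ fragment only illustrates the comparison described above; in practice the GPU's depth test performs it, and the types and names here are assumptions:

#include <cstdint>

struct Fragment { float depth; uint32_t rgba; };     // depth + color of one pixel

// Of two fragments at the same coordinate in two color attachments, the one
// with the smaller depth value (closer to the camera) covers the other.
uint32_t composePixel(const Fragment& a, const Fragment& b) {
    return (a.depth <= b.depth) ? a.rgba : b.rgba;
}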
The twenty-first memory space is larger in size than the default memory space. In one possible implementation, the width of the twenty-first memory space is K1 times the width of the default memory space, and the height of the twenty-first memory space is K2 times the height of the default memory space. As shown in the twenty-first memory space diagram of fig. 58, the logical addresses of the twenty-first memory space may be arranged consecutively. The twenty-first memory space may include a plurality of attachments with sequentially arranged logical addresses, for example, the n attachments such as attachment 5801, attachment 5802, ..., attachment 580n shown in fig. 58. In general, n may be less than or equal to 3. Each of the n attachments of the twenty-first memory space may have a width of L·K1 and a height of H·K2. Both K1 and K2 are greater than 1. K1 may be equal to K2. K1 and K2 may be configured by the system of the electronic device. Optionally, each of the n attachments in the twenty-first memory space may have a width of L+Q1 and a height of H+Q2. Q1 and Q2 are both greater than 0. Q1 and Q2 may be equal. Q1 and Q2 may be configured by the system of the electronic device. Optionally, K1 and K2 configured by the system of the electronic device are both fixed values; that is, the values of K1 and K2 do not change each time the electronic device performs the image frame prediction method provided by the embodiment of the present application.
In this embodiment, a width of the twenty-first memory space may be referred to as a first size, and a height of the twenty-first memory space may be referred to as a second size.
In one possible implementation, K1 and K2 may be floating point numbers and Q1 and Q2 may be integers.
The size of the twenty-second memory space is greater than the size of the default memory space. In one possible implementation, the width of the twenty-second memory space is K1 times the width of the default memory space, and the height of the twenty-second memory space is K2 times the height of the default memory space. As shown in the twenty-second memory space diagram of fig. 59, the logical addresses of the twenty-second memory space may be arranged consecutively. The twenty-second memory space may include a plurality of attachments with sequentially arranged logical addresses, for example, the n attachments such as attachment 5901, attachment 5902, ..., attachment 590n shown in fig. 59. In general, n may be less than or equal to 3. Each of the n attachments in the twenty-second memory space may have a width of L·K1 and a height of H·K2. Both K1 and K2 are greater than 1. K1 may be equal to K2. K1 and K2 may be configured by the system of the electronic device. Optionally, each of the n attachments of the twenty-second memory space may have a width of L+Q1 and a height of H+Q2. Q1 and Q2 are both greater than 0. Q1 and Q2 may be equal. Q1 and Q2 may be configured by the system of the electronic device.
In this embodiment, a width of the twenty-second memory space may be referred to as a fifth dimension, and a height of the twenty-second memory space may be referred to as a sixth dimension.
The size of the twenty-third memory space is greater than the size of the default memory space. In one possible implementation, the width of the twenty-third memory space is K1 times the width of the default memory space, and the height of the twenty-third memory space is K2 times the height of the default memory space. As shown in the twenty-third memory space diagram of fig. 60, the logical addresses of the twenty-third memory space may be arranged consecutively. The twenty-third memory space may include a plurality of attachments with sequentially arranged logical addresses, for example, the n attachments such as attachment 6001, attachment 6002, ..., attachment 600n shown in fig. 60. Each of the n attachments of the twenty-third memory space may have a width of L·K1 and a height of H·K2. Both K1 and K2 are greater than 1. K1 may be equal to K2. K1 and K2 may be configured by the system of the electronic device. Optionally, each of the n attachments of the twenty-third memory space may have a width of L+Q1 and a height of H+Q2. Q1 and Q2 are both greater than 0. Q1 and Q2 may be equal. Q1 and Q2 may be configured by the system of the electronic device. Optionally, Q1 and Q2 configured by the system of the electronic device are both fixed values; that is, the values of Q1 and Q2 do not change each time the electronic device performs the image frame prediction method provided by the embodiment of the present application.
In this embodiment, a width of the twenty-third memory space may be referred to as a seventh size, and a height of the twenty-third memory space may be referred to as an eighth size.
In one possible implementation, the size of the twenty-first memory space is the same as the size of the twenty-second memory space and the size of the twenty-third memory space. That is, if the twenty-first memory space has a width of L·K1 and a height of H·K2, then the twenty-second memory space and the twenty-third memory space also each have a width of L·K1 and a height of H·K2.
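A hedged OpenGL ES sketch of step S5602, creating one enlarged framebuffer object with a color attachment and a depth attachment of size (L·K1) × (H·K2); the OpenGL headers are assumed to be included, and the texture formats and the helper name are assumptions the patent does not fix.

GLuint makeEnlargedFbo(int L, int H, float K1, float K2) {
    int w = (int)(L * K1), h = (int)(H * K2);        // enlarged size
    GLuint fbo, color, depth;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glGenTextures(1, &color);                        // color attachment
    glBindTexture(GL_TEXTURE_2D, color);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color, 0);
    glGenRenderbuffers(1, &depth);                   // depth attachment
    glBindRenderbuffer(GL_RENDERBUFFER, depth);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, w, h);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth);
    // Call three times to obtain FBO1, FBO2, and FBO3 of step S5602.
    return fbo;
}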
S5603-S5606, the electronic device draws the Uth drawing frame.
S5603, the electronic device obtains a drawing parameter of the U-th drawing frame.
When a target application in the electronic device performs drawing, the target application may call a drawing instruction to perform drawing. The CPU of the electronic device 100 may acquire the drawing parameter of the U-th drawing frame of the application program through an interface in the three-dimensional image processing library. And the drawing parameters of the U-th drawing frame are used for drawing and rendering the U-th drawing frame. The drawing parameters of the U-th drawing frame may include information carried in a drawing instruction (e.g., a draw call instruction) of the U-th drawing frame, such as coordinates, color values, depth values, and the like of each vertex in the drawing contents of the draw call instruction.
S5604, the CPU in the electronic device sends a drawing instruction for instructing the GPU to draw the Uth drawing frame to the GPU.
The CPU of the electronic device may send, to the GPU, a drawing instruction for instructing the GPU to draw the U-th drawing frame according to the drawing parameter of the U-th drawing frame. It is understood that the drawing parameters of the U-th drawing frame acquired by the CPU may include information of a plurality of drawing instructions. In this way, the CPU may sequentially send a plurality of drawing instructions for instructing the GPU to draw the U-th drawing frame to the GPU. In the embodiment of the present application, the drawing instruction includes an execution drawing (draw call) instruction and a drawing state setting instruction.
The execution of the drawing instruction may be used to trigger the GPU to perform drawing rendering on the current drawing state data, and generate a drawing result, for example, a glDrawElements instruction in OpenGL. OpenGL is a cross-language, cross-platform Application Programming Interface (API) for rendering 2D, 3D vector graphics.
The drawing state setting instruction may be used to set the current drawing state data on which the execution of the drawing instruction depends, for example, setting the state data to include the vertex information cache index on which drawing depends (e.g., glBindBuffer in OpenGL). The vertex information cache index indicates the vertex information data of a drawing object; the vertex information data is a set of data, such as coordinate positions and colors, that describes the vertices of a two-dimensional or three-dimensional vector model used in drawing.
The drawing state setting instruction may further include an instruction to set a vertex index, texture information, a spatial position, and the like of the drawing object, for example, a glActiveTexture, a glBindBufferRange instruction, and the like in OpenGL. A drawing object may be an object that can be drawn by the electronic device according to all vertices and vertex information included in one drawing instruction.
For a more visual illustration, taking as an example the electronic device drawing the cuboid 5504 in fig. 55A, one possible OpenGL drawing instruction sequence, in execution order, may be as follows:
glBindBufferRange(target = GL_UNIFORM_BUFFER, index = 1, buffer = 738, offset = 0, size = 352) // instructs the GPU to modify part of the global rendering information, such as the position of the cuboid 5504 in fig. 55A;
glBindBuffer(target = GL_ARRAY_BUFFER, buffer = buffer0) // instructs the GPU to store the index of buffer0, which holds the vertex information (e.g., the position, color, and similar data of each vertex) of the cuboid 5504, into GL_ARRAY_BUFFER;
glBindBuffer(target = GL_ELEMENT_ARRAY_BUFFER, buffer = buffer1) // instructs the GPU to store the index of buffer1, which holds the vertex index information (e.g., the drawing order of the vertices) of the cuboid 5504, into GL_ELEMENT_ARRAY_BUFFER;
glActiveTexture(texture = GL_TEXTURE0)
glBindTexture(target = GL_TEXTURE_2D, texture = texture1) // instructs the GPU to store the index of texture1, which holds the texture information of the cuboid 5504, into texture unit GL_TEXTURE0;
glDrawElements(GLenum mode, GLsizei count, GLenum type, const void *indices) // instructs the GPU to perform the drawing of the cuboid 5504.
S5605, in the twenty-first memory space, the GPU of the electronic device draws the drawing content in the drawing instruction of the U-th drawing frame into a first drawing range to obtain a twenty-first drawing result, where a size of the first drawing range is smaller than or equal to a size of the twenty-first memory space and larger than a size of the default memory space.
The GPU of the electronic device may draw the drawing content of the drawing instruction of the U-th drawing frame into the twenty-first memory space to obtain a twenty-first drawing result. Optionally, the GPU of the electronic device may draw the drawing content of the drawing instruction of the U-th drawing frame into the first drawing range of the twenty-first memory space to obtain the twenty-first drawing result. The size of the first drawing range is smaller than or equal to the size of the twenty-first memory space and larger than the size of the default memory space. It is to be understood that, in the embodiment of the present application, the drawing range in any one of the attachments in the twenty-first memory space may be referred to as a first drawing range.
In one possible implementation, the size of the first drawing range is determined by the electronic device according to a viewport parameter in the target application program. The viewport parameter of the target application is used to specify the width and height of the drawing range in which an image frame of the target application is drawn in the electronic device. Generally, the width of the drawing range specified by the viewport parameter of the target application program is the same as the width of the default memory space in the electronic device, and the height of that drawing range is the same as the height of the default memory space. The electronic device may specify the size of the first drawing range by modifying the viewport parameter of the target application program through hook technology, as sketched below. For example, if the drawing range of the U-th drawing frame specified by the viewport of the target application has a width of L and a height of H, the electronic device may modify the viewport parameter of the target application through hook technology to specify the width of the first drawing range as L·K3 and the height as H·K4. Both K3 and K4 are greater than 1. Optionally, K3 and K4 are floating point numbers greater than 1. K3 is less than or equal to K1, and K4 is less than or equal to K2. Here, the viewport parameter in the U-th drawing frame may be the first parameter in the embodiment of the present application.
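A hedged sketch of such a viewport hook; how the hook is installed (e.g., PLT/GOT patching) is platform-specific and outside what the patent describes, and the names and example values of K3/K4 are assumptions.

static void (*real_glViewport)(GLint, GLint, GLsizei, GLsizei);  // original entry point
static float g_K3 = 1.1f, g_K4 = 1.1f;                           // example values > 1

void hooked_glViewport(GLint x, GLint y, GLsizei w, GLsizei h) {
    // Enlarge the application's drawing range from (w, h) to (w*K3, h*K4).
    real_glViewport(x, y, (GLsizei)(w * g_K3), (GLsizei)(h * g_K4));
}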
It can be understood that the viewport parameter in a drawing instruction issued by an application program is used to specify the size of the canvas in which the drawing content of the application program's drawing instruction is drawn. Generally, the electronic device draws the drawing content of each frame of the application program according to the canvas size, and then renders the drawn canvas to the display screen. The canvas size specified by the viewport parameter of the application program may be the same as the size of the display screen of the electronic device, or may be smaller than the size of the display screen, which is not limited herein. As shown in fig. 61 (a), fig. 61 (a) illustrates an attachment 6102 in the twenty-first memory space and a first drawing range 6101. The attachment 6102 has a width of L·K1 and a height of H·K2. The first drawing range 6101 has a width of L·K3 and a height of H·K4. The following description takes as an example the electronic device drawing the drawing content of the drawing instruction in the U-th drawing frame into the first drawing range 6101.
In one possible implementation, K3 is equal to K1, a fixed value configured for a system of the electronic device. K4 is equal to K2, a fixed value configured for the system of the electronic device. It is to be understood that the size of the first rendering range is equal to the size of the twenty-first memory space.
In one possible implementation, the values of K3 and K4 may be determined from the camera rotation angle parameter included in the drawing parameters of the drawing frame before the U-th drawing frame. If the image frames displayed by the electronic device are as shown in fig. 54A, where the electronic device inserts a predicted frame between every two drawing frames, then the drawing frame before the U-th drawing frame is the U-2 th drawing frame.
Further, the specific calculation process of the electronic device to determine K3 may be as follows:
1. Calculate the coordinate conversion matrix T between the U-2 th drawing frame and the U-th drawing frame.
T = (P1·V1)^(-1) · (P2·V2) (equation 1)
where (P1·V1)^(-1) is the inverse matrix of (P1·V1); V1 is the observation matrix (view matrix) included in the drawing parameters of the U-2 th drawing frame, and P1 is the projection matrix included in the drawing parameters of the U-2 th drawing frame; V2 is the observation matrix included in the drawing parameters of the U-th drawing frame, and P2 is the projection matrix included in the drawing parameters of the U-th drawing frame.
In the embodiment of the present application, the observation matrix is a conversion matrix between world space (world space) and observation space (camera space). For example, the coordinates of vertex 1 in the drawing instruction of the U-th drawing frame may be converted from coordinates in the world space to coordinates in the observation space by the observation matrix. The projection matrix is a transformation matrix between a viewing space and a clip space (clip space). For example, the coordinates of vertex 1 may be converted from coordinates in view space to coordinates in clip space by a projection matrix. World space is the corresponding space in world coordinates. The observation space is a space corresponding to a camera coordinate system (a coordinate system constructed with a camera as a coordinate origin). The position of the object described in the observation space is a position in the camera coordinates. The cropping space specifies a range of coordinates of objects that may be displayed in a display screen of the electronic device.
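Equation 1 written with the GLM math library (the library choice is an assumption; the patent does not name one):

#include <glm/glm.hpp>

// T per equation 1: T = (P1*V1)^(-1) * (P2*V2), built from the view and
// projection matrices of the U-2 th and U-th drawing frames.
glm::mat4 coordTransform(const glm::mat4& P1, const glm::mat4& V1,
                         const glm::mat4& P2, const glm::mat4& V2) {
    return glm::inverse(P1 * V1) * (P2 * V2);
}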
2. And calculating the maximum value Z (Ai)' max of the X-axis component of the position offset between the Nth frame and the N-2 th frame of all the pixel points in the rightmost column in the Nth frame.
Suppose the rightmost column in the U-th frame includes r pixel points A1, A2, A3, ..., Ai, ..., Ar. The position offset Z(Ai) of pixel point Ai from the U-th frame to the U-2 th frame is:
Z(Ai) = Ai(xai, yai) - Aiprev (equation 2)
where Ai(xai, yai) is the coordinate of pixel point Ai in the pixel space of the U-th frame, and Aiprev is the coordinate of pixel point Ai in the pixel space of the U-2 th drawing frame. The electronic device establishes a coordinate system with the lower left corner of the U-th drawing frame as the origin, rightward as the X axis, and upward as the Y axis; the space corresponding to this coordinate system may be referred to as the pixel space of the U-th frame. Similarly, the electronic device establishes a coordinate system with the lower left corner of the U-2 th drawing frame as the origin, rightward as the X axis, and upward as the Y axis; the space corresponding to this coordinate system may be referred to as the pixel space of the U-2 th frame.
The electronic device can acquire the coordinate Ai(xai, yai) of pixel point Ai in the pixel space of the U-th frame, and obtains that the drawing range of the U-th frame has a width of L0 and a height of H0. With the depth value of pixel point Ai being Dai, the electronic device can calculate the coordinate Ai_clip of pixel point Ai in the clip space of the U-th frame as:
Ai_clip = (xai/L0, yai/H0, 2·Dai - 1, 1) (equation 3)
The coordinate Ai_clip_prev of pixel point Ai in the clip space of the U-2 th frame is:
Ai_clip_prev = T · Ai_clip (equation 4)
The coordinate Aiprev of pixel point Ai in the pixel space of the U-2 th frame is:
Aiprev = (Ai_clip_prev.x / Ai_clip_prev.w · L0, Ai_clip_prev.y / Ai_clip_prev.w · H0) (equation 5)
where Ai_clip_prev is a 1-by-4 vector, Ai_clip_prev.x is the first element of Ai_clip_prev, Ai_clip_prev.y is the second element, and Ai_clip_prev.w is the fourth element.
If Z(Ai).x >= 0, the value of Z(Ai) is modified to (0, 0); if Z(Ai).x < 0, each value in Z(Ai) is replaced by its absolute value, yielding Z(Ai)'.
The electronic device calculates, according to equations 2 to 5, the position offsets from the U-th frame to the U-2 th frame of the r pixel points A1, A2, A3, ..., Ai, ..., Ar, and determines the maximum value Z(Ai)'max of the X-axis components of these position offsets.
3. Calculate the maximum value Z(Bi)'max of the X-axis component of the position offset, from the U-th frame to the U-2 th frame, of all pixel points in the leftmost column of the U-th frame.
Suppose the leftmost column in the U-th frame includes r pixel points B1, B2, B3, ..., Bi, ..., Br. The position offset Z(Bi) of pixel point Bi from the U-th frame to the U-2 th frame is:
Z(Bi) = Bi(xbi, ybi) - Biprev (equation 6)
where Bi(xbi, ybi) is the coordinate of pixel point Bi in the pixel space of the U-th frame, and Biprev is the coordinate of pixel point Bi in the pixel space of the U-2 th drawing frame.
The electronic device can acquire the coordinate Bi(xbi, ybi) of pixel point Bi in the pixel space of the U-th frame, and obtains that the drawing range of the U-th frame has a width of L0 and a height of H0. With the depth value of pixel point Bi being Dbi, the electronic device can calculate the coordinate Bi_clip of pixel point Bi in the clip space of the U-th frame as:
Bi_clip = (xbi/L0, ybi/H0, 2·Dbi - 1, 1) (equation 7)
The coordinate Bi_clip_prev of pixel point Bi in the clip space of the U-2 th frame is:
Bi_clip_prev = T · Bi_clip (equation 8)
The coordinate Biprev of pixel point Bi in the pixel space of the U-2 th frame is:
Biprev = (Bi_clip_prev.x / Bi_clip_prev.w · L0, Bi_clip_prev.y / Bi_clip_prev.w · H0) (equation 9)
where Bi_clip_prev is a 1-by-4 vector, Bi_clip_prev.x is the first element of Bi_clip_prev, Bi_clip_prev.y is the second element, and Bi_clip_prev.w is the fourth element.
If Z(Bi).x >= 0, the value of Z(Bi) is modified to (0, 0); if Z(Bi).x < 0, each value in Z(Bi) is replaced by its absolute value, yielding the final Z(Bi)'.
The electronic device calculates, according to equations 6 to 9, the position offsets from the U-th frame to the U-2 th frame of the r pixel points B1, B2, B3, ..., Bi, ..., Br, and determines the maximum value Z(Bi)'max of the X-axis components of these position offsets.
4. Calculate K3 from the offsets Z(Ai)'max and Z(Bi)'max.
K3 = ((Z(Ai)'max + Z(Bi)'max)/2 + L)/L (equation 10)
Alternatively, if the width of the first drawing range is L + Q3, Q3 may be:
Q3 = (Z(Ai)'max + Z(Bi)'max)/2 (equation 11)
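A hedged GLM sketch of steps 2 to 4: equations 2-5 for one pixel, the zeroing rule for non-negative X offsets, and equation 10. The column sampling and all names are assumptions; K4 (steps 5 to 8 below) would be computed analogously from the Y components of the topmost and bottom rows.

#include <algorithm>
#include <cmath>
#include <vector>
#include <glm/glm.hpp>

struct PixelSample { float x, y, depth; };           // pixel-space coordinate + depth

// Offset of one pixel from the U-th frame back to the U-2 th frame (eq. 2-5).
static glm::vec2 offsetToPrev(const PixelSample& p, const glm::mat4& T,
                              float L0, float H0) {
    glm::vec4 clip(p.x / L0, p.y / H0, 2.0f * p.depth - 1.0f, 1.0f);  // eq. 3
    glm::vec4 prev = T * clip;                                        // eq. 4
    glm::vec2 prevPx(prev.x / prev.w * L0, prev.y / prev.w * H0);     // eq. 5
    return glm::vec2(p.x, p.y) - prevPx;                              // eq. 2
}

// Max |X component| over one column; offsets with X >= 0 count as (0, 0).
static float maxAbsX(const std::vector<PixelSample>& column, const glm::mat4& T,
                     float L0, float H0) {
    float m = 0.0f;
    for (const PixelSample& p : column) {
        glm::vec2 z = offsetToPrev(p, T, L0, H0);
        if (z.x < 0.0f) m = std::max(m, std::fabs(z.x));
    }
    return m;
}

float computeK3(const std::vector<PixelSample>& rightmost,
                const std::vector<PixelSample>& leftmost,
                const glm::mat4& T, float L, float L0, float H0) {
    float zAmax = maxAbsX(rightmost, T, L0, H0);     // Z(Ai)'max
    float zBmax = maxAbsX(leftmost,  T, L0, H0);     // Z(Bi)'max
    return ((zAmax + zBmax) / 2.0f + L) / L;         // equation 10
}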
Further, the specific calculation process of the electronic device to determine K4 may be as follows:
5. Calculate the coordinate conversion matrix T between the U-2 th drawing frame and the U-th drawing frame.
Step 1 above can be referred to herein, and will not be described herein again.
6. Calculate the maximum value Z(Ci)'max of the Y-axis component of the position offset, from the U-th frame to the U-2 th frame, of all pixel points in the topmost row of the U-th frame.
Here, the process of calculating Z (Ci) 'max may refer to the process of calculating Z (Ai)' max in the above formulas 2 to 5, and will not be described herein again.
7. Calculate the maximum value Z(Di)'max of the Y-axis component of the position offset, from the U-th frame to the U-2 th frame, of all pixel points in the bottom row of the U-th frame.
Here, the process of calculating Z (Di) 'max may refer to the process of calculating Z (Ai)' max in the above formula 2 to formula 5, and will not be described herein again.
8. Calculate K4 from the offsets Z(Ci)'max and Z(Di)'max.
K4 = ((Z(Ci)'max + Z(Di)'max)/2 + H)/H (equation 12)
Alternatively, if the height of the first drawing range is H + Q4, Q4 may be:
Q4 = (Z(Ci)'max + Z(Di)'max)/2 (equation 13)
The following description takes as an example the case where the size of the first drawing range is smaller than the size of the twenty-first memory space. For example, as shown in fig. 61 (a), one attachment 6102 of the twenty-first memory space has a width of L·K1 and a height of H·K2, and the first drawing range has a width of L·K3 and a height of H·K4. The attachment 6102 may be the attachment 5801, the attachment 5802, or the attachment 580n shown in fig. 58.
The electronic device may draw the drawing content of the drawing instruction of the U-th drawing frame within the first drawing range of the twenty-first memory space. The size of the first drawing range is larger than the size of the drawing range specified by the viewport parameter of the target application. Since the first drawing range is enlarged, the electronic device can draw more drawing content into it. However, if the drawing content of the U-th drawing frame were also scaled up by the same ratio as the drawing range, the drawing content within the first drawing range would not increase. The electronic device therefore needs to apply a similarity transformation to the drawing content of the U-th drawing frame in camera coordinates, with the camera-coordinate origin as the center point, so that the drawing content shrinks toward the middle and becomes smaller in camera coordinates. In this way, the electronic device can draw more drawing content within the first drawing range.
The electronic device may cause more drawing content of the U-th drawing frame to be drawn within the first drawing range by modifying the projection matrix in the drawing parameters of the U-th drawing frame. The electronic device may generate a matrix T1 based on the determined values of K3 and K4, and the projection matrix P may be modified to P·T1. The matrix T1 may be the scaling matrix:
T1 = [ 1/K3    0      0    0 ]
     [ 0       1/K4   0    0 ]
     [ 0       0      1    0 ]
     [ 0       0      0    1 ]
In the embodiment of the present application, the matrix T1 may be referred to as a first conversion matrix.
In one possible implementation, the electronic device may modify the projection matrix by hooking the glBufferSubData function. The glBufferSubData function is: glBufferSubData(GLenum target, GLintptr offset, GLsizeiptr size, const void *data).
The glBufferSubData function writes the size bytes pointed to by data into the buffer corresponding to target at position offset. After the electronic device hooks this function, it judges that data contains the information of the projection matrix P according to the target value GL_UNIFORM_BUFFER and the size value 2848. The electronic device takes P out of data using the memory layout information and writes P·T1 back into the corresponding position of data. The electronic device then draws the drawing content of the U-th drawing frame into the first drawing range according to the modified projection matrix (i.e., P·T1), and may obtain the twenty-first drawing result. In this way, the twenty-first drawing result may include more drawing content. For example, suppose a square with a size of 100 × 100 exists in the drawing content of the drawing instruction of the U-th drawing frame. With the modified projection matrix (i.e., P·T1), the drawn square has a size of (100 · 1/K3) × (100 · 1/K4). Since the electronic device enlarges the width of the first drawing range by a factor of K3 and the height by a factor of K4, the size of the square in the first drawing range is (100 · 1/K3 · K3) × (100 · 1/K4 · K4) = 100 × 100. Thus, the drawing range becomes larger but the drawn content does not, so more drawing content can be drawn within the enlarged drawing range. A hedged sketch of such a hook follows.
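In this sketch, the byte offset of P inside the uniform block (projOffset) and the detection heuristic (matching target and size) are application-specific assumptions, as are g_K3 and g_K4 from the viewport hook above.

#include <cstring>
#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

static void (*real_glBufferSubData)(GLenum, GLintptr, GLsizeiptr, const void*);

void hooked_glBufferSubData(GLenum target, GLintptr offset,
                            GLsizeiptr size, const void* data) {
    const GLsizeiptr kBlockSize = 2848;              // size observed for this game
    const size_t projOffset = 0;                     // assumed location of P in the block
    if (target == GL_UNIFORM_BUFFER && size == kBlockSize) {
        std::vector<char> buf((const char*)data, (const char*)data + size);
        glm::mat4 P;
        std::memcpy(&P, buf.data() + projOffset, sizeof(P));   // take P out of data
        glm::mat4 T1 = glm::scale(glm::mat4(1.0f),
                                  glm::vec3(1.0f / g_K3, 1.0f / g_K4, 1.0f));
        glm::mat4 modified = P * T1;                           // write P * T1 back
        std::memcpy(buf.data() + projOffset, &modified, sizeof(modified));
        real_glBufferSubData(target, offset, size, buf.data());
        return;
    }
    real_glBufferSubData(target, offset, size, data);
}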
The twenty-first rendering result of the U-th rendering frame may be as shown in the twenty-first rendering result 6103 in the (b) diagram in fig. 61. The twenty-first drawing result 6103 has a width L · K3 and a height H · K4. That is, the size of the twenty-first rendering result may be the same as the size of the first rendering range. Optionally, the size of the twenty-first rendering result may be the same as the size of the twenty-first memory space. The following is explained taking as an example that the size of the twenty-first rendering result is the same as the size of the first rendering range.
S5606, in the default memory space, the GPU cuts the twenty-first drawing result to the same size as the default memory space, obtaining the U-th drawing frame.
Generally, the size of the display screen of the electronic device is the same as the size of the default memory space. Therefore, before the image frame of the target application is displayed, the electronic device needs to cut the size of the image frame to be the same as the size of the default memory space. In the default memory space, the GPU may cut the size of the twenty-first rendering result to be the same as the size of the default memory space, to obtain a U-th rendering frame. The U-th drawing frame is an image frame for display by the electronic device. The U-th drawing frame may be as illustrated in a U-th drawing frame 6104 illustrated in (c) in fig. 61. The U-th drawing frame 6104 may move leftward.
In a possible implementation manner, the electronic device may cut the twenty-first drawing result through a function glBlitFramebuffer to obtain a U-th drawing frame.
The glBlitFramebuffer function may be:
void glBlitFramebuffer(
    GLint srcX0,      // abscissa of the first pixel in the twenty-first drawing result, srcX0 = (K3-1)·L/2
    GLint srcY0,      // ordinate of the first pixel in the twenty-first drawing result, srcY0 = (K4-1)·H/2
    GLint srcX1,      // abscissa of the second pixel in the twenty-first drawing result, srcX1 = srcX0 + L
    GLint srcY1,      // ordinate of the second pixel in the twenty-first drawing result, srcY1 = srcY0 + H
    GLint dstX0,      // abscissa of the lower-left vertex of the default memory space
    GLint dstY0,      // ordinate of the lower-left vertex of the default memory space
    GLint dstX1,      // abscissa of the upper-right vertex of the default memory space
    GLint dstY1,      // ordinate of the upper-right vertex of the default memory space
    GLbitfield mask,
    GLenum filter);
The glBlitFramebuffer function is used to clip the twenty-first drawing result into the default memory space. Specifically, the electronic device clips the twenty-first drawing result to the same size as the default memory space, and then writes the clipped twenty-first drawing result into the default memory space. The first pixel point is the pixel at the lower-left vertex of the clipped twenty-first drawing result; the second pixel point is the pixel at the upper-right vertex of the clipped twenty-first drawing result. A usage example follows.
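An example call with the parameter values given above: cropping the centered L × H region out of the enlarged twenty-first drawing result (read framebuffer) into the default memory space (draw framebuffer 0). The helper name and framebuffer bindings are assumptions.

void cropToDefault(GLuint fbo1, int L, int H, float K3, float K4) {
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo1);    // twenty-first drawing result
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);       // default memory space
    GLint srcX0 = (GLint)((K3 - 1.0f) * L / 2.0f);   // first pixel (lower left)
    GLint srcY0 = (GLint)((K4 - 1.0f) * H / 2.0f);
    glBlitFramebuffer(srcX0, srcY0, srcX0 + L, srcY0 + H,   // source rectangle
                      0, 0, L, H,                           // destination rectangle
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}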
S5607-S5610, the electronic device draws a U +2 th drawing frame.
S5607, the CPU of the electronic device obtains drawing parameters of the U +2 th drawing frame.
The CPU of the electronic device may acquire the drawing parameter of the U +2 th drawing frame. Specifically, the CPU of the electronic device 100 may acquire the drawing parameter of the U +2 th drawing frame of the application program through an interface in the three-dimensional image processing library. And the drawing parameters of the U +2 th drawing frame are used for drawing and rendering the U +2 th drawing frame. The drawing parameters of the U +2 th drawing frame may include information carried in a drawing instruction (e.g., a draw call instruction) of the U +2 th drawing frame, such as coordinates, color values, depth values, and the like of each vertex in the drawing contents of the draw call instruction.
It is to be appreciated that the electronic device displays the U +1 th frame before the electronic device draws the U +2 th drawing frame. If the U +1 th frame is a drawing frame, the electronic device may draw the U +1 th frame according to the steps of drawing the U +2 th drawing frame in steps S5607 to S5610. If the U +1 th frame is a predicted frame, the electronic device may predict the U +1 th predicted frame according to steps S5611-S5615.
S5608, a CPU in the electronic device sends a drawing instruction for instructing the GPU to draw the U +2 th drawing frame to the GPU.
The CPU of the electronic device may send, to the GPU, a drawing instruction for instructing the GPU to draw the U +2 th drawing frame according to the drawing parameter of the U +2 th drawing frame. It is understood that the drawing parameters of the U +2 th drawing frame acquired by the CPU may include information of a plurality of drawing instructions. In this way, the CPU may sequentially send a plurality of drawing instructions for instructing the GPU to draw the U +2 th drawing frame to the GPU. Here, the description in step S5604 may be specifically referred to, and is not described here again.
S5609, in the twenty-second memory space, the GPU of the electronic device draws the drawing content in the drawing instruction of the U+2 th drawing frame into a second drawing range to obtain a twenty-second drawing result, where the size of the second drawing range is smaller than or equal to the size of the twenty-second memory space and larger than the size of the default memory space.
The GPU of the electronic device may draw the drawing content of the drawing instructions of the U+2 th drawing frame into the twenty-second memory space to obtain a twenty-second drawing result. Optionally, the GPU may draw that content into a second drawing range of the twenty-second memory space. The size of the second drawing range is smaller than or equal to the size of the twenty-second memory space and larger than the size of the default memory space. It can be understood that, in the embodiment of the present application, the drawing range in any one of the attachments in the twenty-second memory space may be referred to as a second drawing range.
In one possible implementation, the size of the second rendering range is determined by the electronic device according to a viewport parameter in the target application program. The viewport parameter of the target application is used to specify the width and height of the drawing range of the image frames of the target application drawn in the electronic device. Generally, the width and height of the drawing range specified by the viewport parameter are the same as the width and height of the default memory space in the electronic device. The electronic device may specify the size of the second rendering range by modifying the viewport parameter of the target application program through the hook technique. For example, if the drawing range specified by the viewport of the target application for the U-th drawing frame has width L and height H, the electronic device may modify the viewport parameter through the hook technique to designate the width of the second rendering range as L·K5 and the height as H·K6. K5 and K6 are both greater than 1. Optionally, K5 and K6 are floating point numbers greater than 1, with K5 less than or equal to K1 and K6 less than or equal to K2. The viewport parameter of the U+2 th rendering frame may be the second parameter in the embodiment of the present application.
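As a minimal sketch of this viewport enlargement, assuming a hooking framework that resolves and forwards to the real glViewport (the hook plumbing and the concrete values of K5 and K6 are illustrative assumptions):

#include <GLES3/gl3.h>

static const float K5 = 1.2f, K6 = 1.2f;  /* assumed expansion factors (> 1) */

/* Pointer to the real glViewport, resolved by the hooking framework. */
static void (*real_glViewport)(GLint x, GLint y, GLsizei width, GLsizei height);

/* Replacement installed by the hook: the application requests an L x H
 * viewport, but the driver receives the enlarged second drawing range. */
void hooked_glViewport(GLint x, GLint y, GLsizei width, GLsizei height)
{
    GLsizei w = (GLsizei)(width  * K5);   /* L -> L * K5 */
    GLsizei h = (GLsizei)(height * K6);   /* H -> H * K6 */
    real_glViewport(x, y, w, h);
}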
As shown in (a) of fig. 62, the figure illustrates one attachment 6202 in the twenty-second memory space and the second rendering range 6201. Attachment 6202 has a width of L·K1 and a height of H·K2, and the second drawing range 6201 has a width of L·K5 and a height of H·K6. The following description takes as an example the case where the electronic device draws the drawing content of the drawing instructions of the U+2 th drawing frame into the second drawing range 6201. The attachment 6202 may be attachment 5901, attachment 5902, or attachment 590n in the twenty-second memory space shown in fig. 59.
In one possible implementation, K5 is equal to K1 and K6 is equal to K2, both being fixed values configured by the system of the electronic device. In this case, the size of the second rendering range is equal to the size of the twenty-second memory space.
In one possible implementation, the values of K5 and K6 may be determined from the rotation angle parameter of the camera included in the drawing parameters of the U+2 th drawing frame and of the drawing frame preceding it. If the image frames displayed by the electronic device are as shown in fig. 54A, where the electronic device inserts a predicted frame after every two drawing frames, the drawing frame preceding the U+2 th drawing frame is the U-th drawing frame. The calculation of K5 and K6 may refer to the calculation of K3 described above and is not repeated here.
The electronic device may draw the drawing content of the drawing instructions of the U+2 th drawing frame within the second drawing range of the twenty-second memory space. The size of the second rendering range is larger than the size of the rendering range specified by the viewport parameter of the target application. Because the second drawing range is enlarged, the electronic device can draw more drawing content into it. However, if the drawing content of the U+2 th drawing frame were simply scaled up by the same ratio as the drawing range, the amount of content in the second drawing range would not increase. The electronic device therefore needs to apply a similarity transformation to the drawing content of the U+2 th drawing frame in camera coordinates, with the origin of the camera coordinate system as the center point, so that the drawing content shrinks toward the center and becomes smaller in camera coordinates. In this way, the electronic device can draw more drawing content within the second drawing range.
The electronic device may cause more of the drawing content of the U+2 th drawing frame to be drawn within the second drawing range by modifying the projection matrix in the drawing parameters of the U+2 th drawing frame. The electronic device may generate a matrix T2 based on the determined values of K5 and K6, and the projection matrix P may be modified to P × T2. The matrix T2 may be:
T2 = diag(1/K5, 1/K6, 1, 1), that is:

    | 1/K5    0     0    0 |
    |   0    1/K6   0    0 |
    |   0     0     1    0 |
    |   0     0     0    1 |
In the embodiment of the present application, the matrix T2 may be referred to as a second conversion matrix.
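For ease of understanding, the construction of T2 and the product P × T2 may be sketched on the CPU side as follows. This is only an illustrative sketch: the function names build_T2 and mat4_mul are assumptions, OpenGL's column-major matrix layout is assumed, and the diagonal form of T2 is the reconstruction given above rather than a quotation of the original formula.

/* Build T2 = diag(1/K5, 1/K6, 1, 1) in column-major order. */
void build_T2(float K5, float K6, float T2[16])
{
    for (int i = 0; i < 16; i++) T2[i] = 0.0f;
    T2[0]  = 1.0f / K5;   /* scale camera-space x */
    T2[5]  = 1.0f / K6;   /* scale camera-space y */
    T2[10] = 1.0f;
    T2[15] = 1.0f;
}

/* out = A * B for 4x4 column-major matrices (out must not alias A or B). */
void mat4_mul(const float A[16], const float B[16], float out[16])
{
    for (int c = 0; c < 4; c++)         /* column of the result */
        for (int r = 0; r < 4; r++) {   /* row of the result    */
            float s = 0.0f;
            for (int k = 0; k < 4; k++)
                s += A[k * 4 + r] * B[c * 4 + k];
            out[c * 4 + r] = s;
        }
}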
In one possible implementation, the electronic device may modify the projection matrix by hooking the glBufferSubData function. The glBufferSubData function is: glBufferSubData(GLenum target, GLintptr offset, GLsizeiptr size, const void *data).
The glBufferSubData function writes size bytes of the data pointed to by data into the buffer bound to target, starting at the position offset. After the electronic device hooks this function, it determines that data contains the information of the projection matrix P based on the target value GL_UNIFORM_BUFFER and the size value 2848. The electronic device extracts P from data using the memory layout information and writes P × T2 into the corresponding position of data. The electronic device then draws the drawing content of the U+2 th drawing frame into the second drawing range according to the modified projection matrix (i.e., P × T2), and may obtain the twenty-second drawing result, which may contain more of the drawing content in the drawing instructions of the U+2 th drawing frame. For details of the glBufferSubData function, reference may be made to the description in step S305, which is not repeated here.
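Combining the sketches above, a hooked glBufferSubData might look as follows. Only the target value GL_UNIFORM_BUFFER and the size 2848 come from the text; the byte offset of P inside the buffer (PROJ_OFFSET), the hook plumbing, and the helper functions are illustrative assumptions.

#include <GLES3/gl3.h>
#include <string.h>

#define UBO_SIZE    2848   /* uniform block size observed in the text    */
#define PROJ_OFFSET 0      /* hypothetical byte offset of P in the block */

static const float K5 = 1.2f, K6 = 1.2f;  /* assumed expansion factors */

void build_T2(float K5, float K6, float T2[16]);                    /* from the sketch above */
void mat4_mul(const float A[16], const float B[16], float out[16]); /* from the sketch above */

static void (*real_glBufferSubData)(GLenum, GLintptr, GLsizeiptr, const void *);

void hooked_glBufferSubData(GLenum target, GLintptr offset,
                            GLsizeiptr size, const void *data)
{
    if (target == GL_UNIFORM_BUFFER && size == UBO_SIZE) {
        unsigned char patched[UBO_SIZE];
        float T2[16], newP[16];
        memcpy(patched, data, UBO_SIZE);           /* copy the whole block */
        build_T2(K5, K6, T2);
        mat4_mul((const float *)(patched + PROJ_OFFSET), T2, newP);
        memcpy(patched + PROJ_OFFSET, newP, sizeof newP);  /* write P * T2 back */
        real_glBufferSubData(target, offset, size, patched);
        return;
    }
    real_glBufferSubData(target, offset, size, data);  /* all other updates */
}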
The twenty-second rendering result of the U+2 th rendering frame may be as shown by the twenty-second rendering result 6203 in (b) of fig. 62. The twenty-second rendering result 6203 has a width of L·K5 and a height of H·K6. That is, the size of the twenty-second rendering result may be the same as the size of the second rendering range. Alternatively, the size of the twenty-second rendering result may be the same as the size of the twenty-second memory space. The following description takes the former case as an example.
S5610, in the default memory space, the GPU of the electronic device clips the twenty-second drawing result to the same size as the default memory space to obtain the U+2 th drawing frame.
Generally, the size of the display screen of the electronic device is the same as the size of the default memory space. Therefore, before an image frame of the target application is displayed, the electronic device needs to clip the image frame to the same size as the default memory space. In the default memory space, the GPU may clip the twenty-second rendering result to the same size as the default memory space to obtain the U+2 th rendering frame, which is an image frame for display by the electronic device. The U+2 th drawing frame may be as shown by the U+2 th drawing frame 6204 illustrated in (c) of fig. 62, in which the picture content has moved leftward.
In a possible implementation, the electronic device may clip the twenty-second rendering result through the glBlitFramebuffer function to obtain the U+2 th rendering frame.
The glBlitFramebuffer function may be:
void glBlitFramebuffer(
GLint srcX2,    // abscissa of the third pixel point in the twenty-second rendering result, srcX2 = (K5-1) × L/2
GLint srcY2,    // ordinate of the third pixel point in the twenty-second rendering result, srcY2 = (K6-1) × H/2
GLint srcX3,    // abscissa of the fourth pixel point in the twenty-second rendering result, srcX3 = srcX2 + L
GLint srcY3,    // ordinate of the fourth pixel point in the twenty-second rendering result, srcY3 = srcY2 + H
GLint dstX0,    // abscissa of the lower-left corner vertex of the default memory space
GLint dstY0,    // ordinate of the lower-left corner vertex of the default memory space
GLint dstX1,    // abscissa of the upper-right corner vertex of the default memory space
GLint dstY1,    // ordinate of the upper-right corner vertex of the default memory space
GLbitfield mask,    // bitmask selecting which buffers (e.g., GL_COLOR_BUFFER_BIT) are copied
GLenum filter);     // interpolation applied if the copied region is stretched
The glBlitFramebuffer function may be used to clip the twenty-second rendering result and write it into the default memory space. Specifically, the electronic device clips the twenty-second rendering result so that its size is the same as the size of the default memory space, and then writes the clipped twenty-second drawing result into the default memory space. The third pixel point is the pixel point at the lower-left corner vertex of the clipped twenty-second drawing result, and the fourth pixel point is the pixel point at the upper-right corner vertex of the clipped twenty-second drawing result.
S5611-S5615, the electronic device predicts the U +3 th predicted frame.
S5611, the electronic device sends, to the GPU, an instruction to instruct the GPU to calculate a motion vector.
The CPU of the electronic device may send instructions to the GPU instructing the GPU to calculate motion vectors. The instruction is to instruct a shader in the GPU to compute a motion vector. The instruction may be a dispatch instruction. The embodiment of the present application does not limit the specific form of the instruction for calculating the motion vector.
S5612, the GPU of the electronic device calculates a motion vector S of the twenty-second drawing result by using the twenty-first drawing result and the twenty-second drawing result.
The GPU of the electronic device may calculate a motion vector of the twenty-second rendering result using the twenty-first rendering result and the twenty-second rendering result.
In a possible implementation manner, the calculating, by the GPU of the electronic device, the motion vector of the twenty-second rendering result by using the twenty-first rendering result and the twenty-second rendering result may specifically include the following steps:
1. The GPU may divide the twenty-second rendering result 6203 into Q pixel blocks, each containing f × f (for example, 16 × 16) pixels, as shown in (b) of fig. 63A. The twenty-second rendering result 6203 is the rendering result of the U+2 th rendering frame.
2. The GPU takes the first pixel block (for example, the pixel block 6205 in (b) of fig. 63A) in the twenty-second rendering result 6203 and finds the matching pixel block that matches the first pixel block in the twenty-first rendering result 6103, for example, the pixel block 6105 in (a) of fig. 63A. The twenty-first drawing result 6103 is the drawing result of the U-th drawing frame.
In the embodiment of the present application, among all candidate pixel blocks in the twenty-first rendering result, the candidate pixel block whose RGB values have the smallest absolute difference from those of the first pixel block is referred to as the matching pixel block of the first pixel block. The electronic device needs to find the matching pixel block 6105 of the first pixel block in the twenty-first rendering result. Optionally, the GPU of the electronic device may find the matching pixel block of the first pixel block in the U-th drawing frame through a diamond search algorithm. As shown in fig. 63B, the GPU may find, through the diamond search algorithm, the pixel block in the rendering result of the U-th drawing frame (i.e., the twenty-first rendering result 6103) that matches the pixel block 6205 in the rendering result of the U+2 th drawing frame (i.e., the twenty-second rendering result 6203). The electronic device may perform the diamond search using the pixel point 6211 in the upper left corner of the pixel block 6205, and may find that the pixel block 6105, whose upper-left corner is the pixel point 6106, matches the pixel block 6205 in the rendering result of the U-th drawing frame. For the diamond search algorithm itself, reference may be made to the prior art, and details are not described here.
As shown in fig. 63C, diagrams (a) and (b) of fig. 63C exemplarily show how the electronic device finds the matching pixel block 6105 of the pixel block 6205 in the U-th rendering frame. As shown in (a) of fig. 63C, the electronic device performs the diamond search with the upper-left pixel point 6211 of the pixel block 6205. As shown in (b) of fig. 63C, in one implementation, the electronic device first performs a large diamond search centered on the pixel point 6212 in the U-th drawing frame, where the coordinates of the pixel point 6212 are the same as the coordinates of the pixel point 6211. The electronic device calculates the difference between the color values of the pixel block 6205 and each of the pixel blocks whose upper-left corner pixel points are the pixel points 6212, 6301, 6302, 6303, 6304, 6305, 6306, 6307, and 6308, respectively. The electronic device then performs a small diamond search centered on the upper-left pixel point of the pixel block with the smallest color value difference. For example, among the above pixel blocks, the pixel block whose upper-left corner is the pixel point 6307 has the smallest color value difference from the pixel block 6205, so the electronic device performs a small diamond search centered on the pixel point 6307. That is, the electronic device calculates the difference between the color values of the pixel block 6205 and each of the pixel blocks whose upper-left corner pixel points are the pixel points 6311, 6312, 6313, and 6314, respectively. Finally, the electronic device may determine that the pixel block whose upper-left corner is the pixel point 6314 (i.e., the pixel block 6105 shown in fig. 63B) has the smallest color value difference from the pixel block 6205. The electronic device can then determine that the pixel block 6105 in the U-th drawing frame is the matching pixel block of the pixel block 6205.
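For readability, the block matching described above may be sketched in C as follows (in the method it runs in a GPU shader). The sum of absolute RGB differences is used as the matching cost; the tightly packed RGBA image layout, the function names, and the exact search-pattern offsets are assumptions.

#include <stdlib.h>

typedef struct { int x, y; } Vec2;

/* Sum of absolute R, G, B differences between the f x f block at p in the
 * previous result (prev) and the f x f block at c in the current result
 * (cur); both images are W pixels wide, tightly packed RGBA. */
static long sad(const unsigned char *prev, const unsigned char *cur,
                int W, int f, Vec2 p, Vec2 c)
{
    long s = 0;
    for (int y = 0; y < f; y++)
        for (int x = 0; x < f; x++)
            for (int ch = 0; ch < 3; ch++)   /* R, G, B */
                s += labs((long)prev[((p.y + y) * W + (p.x + x)) * 4 + ch] -
                          (long)cur [((c.y + y) * W + (c.x + x)) * 4 + ch]);
    return s;
}

/* Diamond search: repeat the large diamond until the centre wins, then do
 * one small-diamond refinement. Returns the top-left corner of the block
 * in prev that best matches the block at c in cur. */
Vec2 diamond_search(const unsigned char *prev, const unsigned char *cur,
                    int W, int H, int f, Vec2 c)
{
    static const Vec2 large[8] = {{2,0},{-2,0},{0,2},{0,-2},
                                  {1,1},{1,-1},{-1,1},{-1,-1}};
    static const Vec2 small[4] = {{1,0},{-1,0},{0,1},{0,-1}};
    Vec2 best = c;
    long bestCost = sad(prev, cur, W, f, best, c);
    for (;;) {
        Vec2 centre = best;
        for (int i = 0; i < 8; i++) {
            Vec2 p = { centre.x + large[i].x, centre.y + large[i].y };
            if (p.x < 0 || p.y < 0 || p.x + f > W || p.y + f > H) continue;
            long cost = sad(prev, cur, W, f, p, c);
            if (cost < bestCost) { bestCost = cost; best = p; }
        }
        if (best.x == centre.x && best.y == centre.y) break;
    }
    for (int i = 0; i < 4; i++) {
        Vec2 p = { best.x + small[i].x, best.y + small[i].y };
        if (p.x < 0 || p.y < 0 || p.x + f > W || p.y + f > H) continue;
        long cost = sad(prev, cur, W, f, p, c);
        if (cost < bestCost) { bestCost = cost; best = p; }
    }
    return best;
}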
3. The GPU calculates a first displacement from the matching pixel block to the first pixel block, and determines the motion vector A1 of the first pixel block according to the first displacement.
For example, as shown in fig. 63B, the matching pixel block of the pixel block 6205 is the pixel block 6105, and the motion vector of the pixel block 6205 is the motion vector A1 illustrated in (b) of fig. 63B.
4. The GPU may calculate, according to steps 1 to 3 above, the motion vector of each of the Q pixel blocks in the twenty-second rendering result 6203, i.e., A1, A2, …, AQ. The motion vector of the twenty-second rendering result is S = (A1, A2, …, AQ).
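Continuing the sketch above, the motion vector field S = (A1, A2, …, AQ) may be assembled block by block as follows; diamond_search and Vec2 are taken from the previous sketch, and the full-image block scan is an assumption.

/* Build the motion field S for an image split into f x f pixel blocks.
 * S must have room for Q = (W / f) * (H / f) entries. Each entry is the
 * displacement from the matching block in prev to the block in cur. */
void motion_field(const unsigned char *prev, const unsigned char *cur,
                  int W, int H, int f, Vec2 *S)
{
    int q = 0;
    for (int by = 0; by + f <= H; by += f)
        for (int bx = 0; bx + f <= W; bx += f) {
            Vec2 c = { bx, by };
            Vec2 m = diamond_search(prev, cur, W, H, f, c);
            S[q].x = c.x - m.x;   /* displacement from matching block */
            S[q].y = c.y - m.y;   /* in prev to the block in cur      */
            q++;
        }
}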
S5613, the electronic device sends an instruction to the GPU for instructing the GPU to draw the prediction frame.
The CPU of the electronic device may send an instruction for drawing the U +3 th predicted frame to the GPU after the GPU calculates the motion vector S of the twenty-second drawing result.
S5614, in a twenty-third memory space, the GPU of the electronic device predicts the twenty-third drawing result according to the motion vector S and the twenty-second drawing result, where the size of the twenty-third drawing result is the same as the size of the twenty-second drawing result.
The objects in the U+2 th drawing frame and the U+3 th prediction frame are the same; only their positions differ. The GPU may generate the twenty-third drawing result using the twenty-second drawing result of the U+2 th drawing frame and its motion vector. Specifically, the GPU may predict the motion vector V of the twenty-third rendering result from the motion vector S of the twenty-second rendering result; the motion vector V may be equal to G times the motion vector S, where G is less than 1 and may be equal to 0.5. The GPU may then generate the twenty-third drawing result according to the color value of each pixel point of the twenty-second drawing result and the displacement of each pixel point under the motion vector V.
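As an illustrative sketch of this step, each block of the twenty-second drawing result can be shifted by V = G × S with G = 0.5. Forward warping is assumed here; the text does not describe how uncovered pixels are filled, so the sketch simply skips out-of-range targets and leaves uncovered pixels to whatever pred was pre-filled with. Vec2 comes from the sketches above.

/* Predict the next frame by moving each f x f block of cur by G times its
 * motion vector. pred should be pre-filled (e.g. with a copy of cur) so
 * that pixels no block lands on still hold a value. */
void predict_frame(const unsigned char *cur, unsigned char *pred,
                   int W, int H, int f, const Vec2 *S, float G)
{
    int q = 0;
    for (int by = 0; by + f <= H; by += f)
        for (int bx = 0; bx + f <= W; bx += f, q++) {
            int dx = (int)(G * (float)S[q].x);
            int dy = (int)(G * (float)S[q].y);
            for (int y = 0; y < f; y++)
                for (int x = 0; x < f; x++) {
                    int tx = bx + x + dx, ty = by + y + dy;
                    if (tx < 0 || ty < 0 || tx >= W || ty >= H) continue;
                    for (int ch = 0; ch < 4; ch++)   /* RGBA */
                        pred[(ty * W + tx) * 4 + ch] =
                            cur[((by + y) * W + (bx + x)) * 4 + ch];
                }
        }
}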
In one possible implementation, the GPU of the electronic device generates the twenty-third rendering result in a twenty-third memory space. Further, the GPU may generate the twenty-third rendering result within a third rendering range in the twenty-third memory space. The size of the third rendering range is smaller than or equal to the size of the twenty-third memory space and larger than the size of the default memory space. It can be understood that, in the embodiment of the present application, the drawing range in any one of the attachments in the twenty-third memory space may be referred to as a third drawing range.
In one possible implementation, the size of the third rendering range is determined by the electronic device according to a viewport parameter in the target application program. The viewport parameter of the target application is used to specify the width and height of the drawing range of the image frames of the target application drawn in the electronic device. Generally, the width and height of the drawing range specified by the viewport parameter are the same as the width and height of the default memory space in the electronic device. The electronic device may specify the size of the third rendering range by modifying the viewport parameter of the target application program through the hook technique. For example, if the drawing range of the U-th drawing frame specified by the viewport of the target application has width L and height H, the electronic device may modify the viewport parameter through the hook technique to designate the width of the third rendering range as L·K7 and the height as H·K8. K7 and K8 are both greater than 1. Optionally, K7 and K8 are floating point numbers greater than 1, with K7 less than or equal to K1 and K8 less than or equal to K2.
As shown in (a) of fig. 64, the figure illustrates one attachment 6402 in the twenty-third memory space and the third rendering range 6401. Attachment 6402 has a width of L·K1 and a height of H·K2, and the third rendering range 6401 has a width of L·K7 and a height of H·K8. The following description takes as an example the case where the electronic device generates the twenty-third rendering result within the third rendering range 6401. The attachment 6402 may be attachment 6001, attachment 6002, or attachment 600n in the twenty-third memory space shown in fig. 60.
In one possible implementation, K7 is equal to K1 and K8 is equal to K2, both being fixed values configured by the system of the electronic device. In this case, the size of the third rendering range is equal to the size of the twenty-third memory space.
In one possible implementation, the values of K7 and K8 may be determined from the rotation angle parameter of the camera included in the drawing parameters of the U-th drawing frame and the U+2 th drawing frame. The calculation of K7 and K8 may refer to the description of the calculation of K3 above and is not repeated here.
The twenty-third rendering result of the U+3 th prediction frame may be as shown by the twenty-third rendering result 6403 in (b) of fig. 64. The twenty-third rendering result 6403 has a width of L·K7 and a height of H·K8.
It can be understood that the expansion factors of the drawing range of a predicted frame (e.g., K7 and K8) may be determined according to the drawing parameters of the two drawing frames used to generate the predicted frame, while the expansion factors of the drawing range of a drawing frame (e.g., K3 and K4) may be determined by the drawing parameters of that drawing frame and of the drawing frame preceding it. If the drawing frame is the first frame for which the image frame prediction method is performed, the system of the electronic device may configure the expansion factors of its drawing range to fixed values.
S5615, in the default memory space, the GPU of the electronic device clips the twenty-third drawing result to the same size as the default memory space to obtain the U+3 th prediction frame.
Generally, the size of the display screen of the electronic device is the same as the size of the default memory space. Therefore, before an image frame of the target application is displayed, the electronic device needs to clip the image frame to the same size as the default memory space. In the default memory space, the GPU may clip the twenty-third rendering result to the same size as the default memory space to obtain the U+3 th predicted frame, which is an image frame for display by the electronic device. The U+3 th predicted frame may be as shown by the U+3 th predicted frame 6404 illustrated in (c) of fig. 64. For the specific way in which the electronic device clips the twenty-third drawing result, reference may be made to the descriptions of clipping the twenty-first and twenty-second drawing results, which are not repeated here.
In the embodiment of the present application, the U-th drawing frame may be referred to as a first drawing frame. The U +2 th drawing frame may be referred to as a second drawing frame. The U +3 th predicted frame may be referred to as a first predicted frame.
The main process by which an electronic device draws a drawing frame is that the GPU executes a number of drawcall instructions, drawing the content of each drawcall instruction one by one onto the FBO. Each drawcall instruction requires the rendering pipeline to be executed once on the GPU. The rendering pipeline mainly comprises the vertex shader, tessellation (optional), geometry shader (optional), primitive assembly (optional), rasterization, fragment shader, and test-and-blend (optional) stages. The GPU is therefore expensive to run for each drawing frame. In addition, the vertex information, coordinate information, and the like required for each drawcall instruction must be prepared by the CPU, and these preparations also represent a large amount of computation for the CPU.
In contrast, the process of predicting an image frame is computationally cheap for the CPU, which only needs to send a small number of instructions to the GPU. For the GPU, only the motion vectors of the image frames need to be calculated. These calculations are fully parallel: they need to be performed only once, and each computing unit executes only a small number of basic operations, so the computation on the GPU can be reduced and performance can be improved.
It can be understood that the embodiment of the present application is not limited to the electronic device predicting the U+3 th predicted frame from the U-th drawing frame and the U+2 th drawing frame. Optionally, the electronic device may also predict the N+2 th frame from the N-th frame and the N+1 th frame, or predict the N+4 th frame from the N-th frame and the N+3 th frame. The embodiment of the present application does not limit this.
It is understood that the electronic device may predict predicted frames according to different strategies while displaying the video interface of the target application. That is, in a first time period the electronic device may predict the N+3 th frame from the N-th frame and the N+2 th frame, and in a second time period predict the N+2 th frame from the N-th frame and the N+1 th frame. For example, when the GPU is executing more tasks, the electronic device may predict the N+3 th frame from the N-th frame and the N+2 th frame; when the GPU is executing fewer tasks, the electronic device may predict the N+2 th frame from the N-th frame and the N+1 th frame. The embodiment of the present application does not limit this.
In a possible implementation manner, in the embodiment of the present application, the twenty-first rendering result and the twenty-second rendering result are rendered from the interface scene content (for example, game content) of the target application and do not include the UI controls of the target application.
It can be understood that the twenty-first drawing result in the embodiment of the present application may be the first drawing range into which the drawing content of the drawing instructions of the U-th drawing frame has been drawn, for example, the twenty-first drawing result 6103 illustrated in (b) of fig. 61; or it may be the attachment of the twenty-first memory space into which that drawing content has been drawn, in which case the size of the twenty-first rendering result is equal to the size of the attachment in the twenty-first memory space, that is, its width and height are the same as the width and height of the twenty-first memory space. Likewise, the twenty-second rendering result may be the second rendering range into which the drawing content of the drawing instructions of the U+2 th drawing frame has been drawn, for example, the twenty-second rendering result 6203 illustrated in (b) of fig. 62; or it may be the attachment of the twenty-second memory space into which that drawing content has been drawn, in which case the size of the twenty-second rendering result is equal to the size of the attachment in the twenty-second memory space, that is, its width and height are the same as the width and height of the twenty-second memory space.
According to the image frame prediction method provided by the embodiment of the application, when drawing a first drawing frame, the electronic device draws the drawing content of the drawing instructions of the first drawing frame into a twenty-first memory space to obtain a twenty-first drawing result, where the size of the twenty-first memory space is larger than the size of a default memory space, the default memory space being the memory space provided by the system of the electronic device for storing image frames for display. When drawing a second drawing frame, the electronic device draws the drawing content of the drawing instructions of the second drawing frame into a twenty-second memory space to obtain a twenty-second drawing result, where the size of the twenty-second memory space is larger than the size of the default memory space. The electronic device then generates a twenty-third drawing result in a twenty-third memory space according to the twenty-first drawing result and the twenty-second drawing result, where the size of the twenty-third memory space is larger than the size of the default memory space, and clips the twenty-third drawing result to an image frame with the same size as the default memory space to obtain the first predicted frame. In this way, the electronic device draws the first drawing frame and the second drawing frame in enlarged memory spaces and can draw more drawing content than is shown in the displayed first and second drawing frames. The predicted frame may therefore contain drawing content that is not present in the displayed first and second drawing frames, so that the drawing content in the predicted frame is closer to the content within the shooting field of view of the camera, and the image frames predicted by the electronic device may be more accurate.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and these modifications or substitutions do not depart from the scope of the technical solutions of the embodiments of the present application.
As used in the above embodiments, the term "when…" may be interpreted to mean "if…", "after…", "in response to determining…", or "in response to detecting…", depending on the context. Similarly, the phrase "when it is determined that…" or "if (a stated condition or event) is detected" may be interpreted to mean "if it is determined that…", "in response to determining that…", "when (a stated condition or event) is detected", or "in response to detecting (a stated condition or event)", depending on the context.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk), among others.
Those skilled in the art can understand that all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer readable storage medium and can include the processes of the method embodiments described above when executed. And the aforementioned storage medium includes: various media capable of storing program codes, such as ROM or RAM, magnetic or optical disks, etc.

Claims (134)

  1. A method of image frame prediction, comprising:
    when a first drawing frame is drawn, the electronic device determines, according to a drawing instruction of a first drawing object, that spatial information of the first drawing object changes, determines, according to a drawing instruction of a second drawing object, that spatial information of the second drawing object does not change, writes color data of the first drawing object into a first color attachment, and writes color data of the second drawing object into a second color attachment;
    when a second drawing frame is drawn, the electronic device determines, according to a drawing instruction of a third drawing object, that the spatial information of the first drawing object changes, determines, according to a drawing instruction of a fourth drawing object, that spatial information of the fourth drawing object does not change, writes color data of the third drawing object into a third color attachment, and writes color data of the fourth drawing object into a fourth color attachment;
    the electronic device generates a fifth color attachment of a first predicted frame according to the first color attachment and the third color attachment, and generates a sixth color attachment of the first predicted frame according to the second color attachment and the fourth color attachment;
    the electronic device synthesizes the fifth color attachment and the sixth color attachment into the first predicted frame.
  2. The method of claim 1, wherein the electronic device generates a fifth color attachment for the first predicted frame from the first color attachment and the third color attachment, comprising:
    the electronic device determines a first motion vector of the third color attachment according to the first color attachment and the third color attachment;
    the electronic device generates a fifth color attachment for the first predicted frame from the third color attachment and the first motion vector.
  3. The method of claim 2, wherein the determining, by the electronic device, the first motion vector of the third color attachment according to the first color attachment and the third color attachment specifically comprises:
    the electronic device divides the third color attachment into Q pixel blocks, and takes a first pixel block among the Q pixel blocks of the third color attachment;
    the electronic device determines a second pixel block in the first color attachment that matches the first pixel block;
    the electronic device obtains a motion vector of the first pixel block according to the displacement from the second pixel block to the first pixel block;
    the electronic device determines the first motion vector of the third color attachment according to the motion vector of the first pixel block.
  4. The method according to claim 3, wherein the determining, by the electronic device, the second pixel block in the first color attachment that matches the first pixel block specifically comprises:
    the electronic device determines a plurality of candidate pixel blocks in the first color attachment through a first pixel point in the first pixel block;
    the electronic device respectively calculates the differences between the color values of the candidate pixel blocks and the color value of the first pixel block;
    the electronic device determines the second pixel block that matches the first pixel block according to the differences between the color values of the candidate pixel blocks and the first pixel block, wherein the second pixel block is the candidate pixel block with the smallest color value difference from the first pixel block among the plurality of candidate pixel blocks.
  5. The method of claim 1, wherein generating, by the electronic device, a sixth color attachment of the first predicted frame from the second color attachment and the fourth color attachment comprises:
    the electronic device determines a second motion vector of the fourth color attachment according to the second color attachment and the fourth color attachment;
    the electronic device generates a sixth color attachment for the first predicted frame from the fourth color attachment and the second motion vector.
  6. The method according to claim 5, wherein the determining, by the electronic device, the second motion vector of the fourth color attachment according to the second color attachment and the fourth color attachment specifically comprises:
    the electronic device divides the fourth color attachment into Q pixel blocks, and takes a third pixel block in the fourth color attachment;
    the electronic device calculates a first position of the third pixel block in the second color attachment;
    the electronic device determines a motion vector of the third pixel block according to the first position and a second position of the third pixel block in the fourth color attachment;
    the electronic device determines the second motion vector of the fourth color attachment according to the motion vector of the third pixel block.
  7. The method according to claim 6, wherein the calculating, by the electronic device, the first position of the third pixel block in the second color attachment specifically comprises:
    the electronic device acquires a first matrix in the drawing instructions of the first drawing frame and a second matrix in the drawing instructions of the second drawing frame, wherein the first matrix is used for recording rotation angle information of the camera position of the first drawing frame, and the second matrix is used for recording rotation angle information of the camera position of the second drawing frame;
    the electronic device calculates the first position of the third pixel block in the second color attachment according to the first matrix, the second matrix, and a depth value of the third pixel block.
  8. The method according to any one of claims 1 to 7, wherein the drawing instruction of the first drawing object comprises an execution drawing instruction of the first drawing object and a drawing state setting instruction of the first drawing object, wherein the execution drawing instruction of the first drawing object is used for triggering the electronic device to draw and render the drawing state data of the first drawing object and generate a drawing result, the drawing state setting instruction of the first drawing object is used for setting the drawing state data on which the execution drawing instruction of the first drawing object depends, and the drawing state data of the first drawing object comprises vertex information data, a vertex index, texture information, and a vertex information cache index of the first drawing object;
    the drawing instruction of the second drawing object comprises an execution drawing instruction of the second drawing object and a drawing state setting instruction of the second drawing object, wherein the execution drawing instruction of the second drawing object is used for triggering the electronic device to draw and render the drawing state data of the second drawing object and generate a drawing result, the drawing state setting instruction of the second drawing object is used for setting the drawing state data on which the execution drawing instruction of the second drawing object depends, and the drawing state data of the second drawing object comprises vertex information data, a vertex index, texture information, and a vertex information cache index of the second drawing object;
    the drawing instruction of the third drawing object comprises an execution drawing instruction of the third drawing object and a drawing state setting instruction of the third drawing object, wherein the execution drawing instruction of the third drawing object is used for triggering the electronic device to draw and render the drawing state data of the third drawing object and generate a drawing result, the drawing state setting instruction of the third drawing object is used for setting the drawing state data on which the execution drawing instruction of the third drawing object depends, and the drawing state data of the third drawing object comprises vertex information data, a vertex index, texture information, and a vertex information cache index of the third drawing object;
    the drawing instruction of the fourth drawing object comprises an execution drawing instruction of the fourth drawing object and a drawing state setting instruction of the fourth drawing object, wherein the execution drawing instruction of the fourth drawing object is used for triggering the electronic device to draw and render the drawing state data of the fourth drawing object and generate a drawing result, the drawing state setting instruction of the fourth drawing object is used for setting the drawing state data on which the execution drawing instruction of the fourth drawing object depends, and the drawing state data of the fourth drawing object comprises vertex information data, a vertex index, texture information, and a vertex information cache index of the fourth drawing object.
  9. The method according to any one of claims 1 to 8, wherein the determining, by the electronic device according to the drawing instruction of the first drawing object, that the spatial information of the first drawing object changes comprises: the electronic device determines that a transfer matrix parameter exists in the drawing instruction of the first drawing object, and determines that the transfer matrix parameter in the drawing instruction of the first drawing object is different from the transfer matrix parameter of the corresponding first drawing object, the transfer matrix being used for describing the mapping relationship from the local coordinate system of a drawing object to the world coordinate system;
    the determining, by the electronic device according to the drawing instruction of the second drawing object, that the spatial information of the second drawing object does not change comprises: the electronic device determines that no transfer matrix parameter exists in the drawing instruction of the second drawing object, or determines that the transfer matrix parameter in the drawing instruction of the second drawing object is the same as the transfer matrix parameter of the corresponding second drawing object;
    the determining, by the electronic device according to the drawing instruction of the third drawing object, that the spatial information of the first drawing object changes comprises: the electronic device determines that a transfer matrix parameter exists in the drawing instruction of the third drawing object, and determines that the transfer matrix parameter in the drawing instruction of the third drawing object is different from the transfer matrix parameter of the corresponding third drawing object;
    the determining, by the electronic device according to the drawing instruction of the fourth drawing object, that the spatial information of the fourth drawing object does not change comprises: the electronic device determines that no transfer matrix parameter exists in the drawing instruction of the fourth drawing object, or determines that the transfer matrix parameter in the drawing instruction of the fourth drawing object is the same as the transfer matrix parameter of the corresponding fourth drawing object.
  10. The method according to any one of claims 1 to 8, wherein, when the first drawing frame is drawn, the electronic device determines that the spatial information of the first drawing object changes according to the drawing instruction of the first drawing object and determines that the spatial information of the second drawing object does not change according to the drawing instruction of the second drawing object, and before the electronic device writes the color data of the first drawing object into the first color attachment and writes the color data of the second drawing object into the second color attachment, the method further comprises:
    the electronic device creates a first memory space, a second memory space, a third memory space, a fourth memory space, a fifth memory space, a sixth memory space, and a seventh memory space; the first memory space is used for storing the first color attachment, the second memory space is used for storing the second color attachment, the third memory space is used for storing the third color attachment, the fourth memory space is used for storing the fourth color attachment, the fifth memory space is used for storing the fifth color attachment, the sixth memory space is used for storing the sixth color attachment, and the seventh memory space is used for storing the first predicted frame.
  11. The method according to claim 10, wherein when the first drawing frame is drawn, the electronic device determines that spatial information of the first drawing object is changed according to a drawing instruction of the first drawing object and determines that spatial information of the second drawing object is not changed according to a drawing instruction of the second drawing object, and wherein after the electronic device writes color data of the first drawing object in the first color attachment and writes color data of the second drawing object in the second color attachment, the method further comprises:
    and the electronic equipment synthesizes the first color attachment and the second color attachment into the first drawing frame in the seventh memory space.
  12. The method according to claim 10, wherein when the second drawing frame is drawn, the electronic device determines that the spatial information of the first drawing object is changed according to a drawing instruction of a third drawing object and determines that the spatial information of a fourth drawing object is not changed according to a drawing instruction of a fourth drawing object, and after the electronic device writes color data of the third drawing object in a third color attachment and writes color data of the fourth drawing object in a fourth color attachment, the method further comprises:
    And the electronic equipment synthesizes the third color attachment and the fourth color attachment into the second drawing frame in the seventh memory space.
  13. The method according to claim 11, wherein the synthesizing, by the electronic device, the first color attachment and the second color attachment into the first drawing frame in the seventh memory space specifically includes:
    the electronic device synthesizes, in the seventh memory space, the first color attachment and the second color attachment into the first drawing frame according to a first depth attachment and a second depth attachment, wherein the first depth attachment is used for writing the depth data of the first drawing object, and the second depth attachment is used for writing the depth data of the second drawing object.
  14. The method according to claim 12, wherein the synthesizing, by the electronic device, the third color attachment and the fourth color attachment into the second drawing frame in the seventh memory space specifically includes:
    the electronic equipment synthesizes the third color attachment and the fourth color attachment into the second drawing frame in a seventh memory space according to the third depth attachment and the fourth depth attachment; the third depth attachment is for writing depth data of the third drawing object, and the fourth depth attachment is for writing depth data of the fourth drawing object.
  15. A method of image frame prediction, comprising:
    the electronic equipment determines a tenth moving object in the tenth drawing frame according to the tenth drawing instruction, and determines an eleventh moving object in the eleventh drawing frame according to the eleventh drawing instruction;
    the electronic equipment determines that the tenth moving object and the eleventh moving object are matched according to the attribute of the tenth moving object and the attribute of the eleventh moving object;
    and the electronic equipment determines a drawing result of a twelfth moving object in a tenth prediction frame according to the tenth moving object and the eleventh moving object.
  16. The method of claim 15, wherein the tenth rendered frame is displayed in the display screen of the electronic device before the eleventh rendered frame, and wherein the tenth predicted frame is displayed in the display screen of the electronic device after the eleventh rendered frame.
  17. The method according to any one of claims 15 or 16, wherein the tenth drawing frame and the eleventh drawing frame are separated by one image frame, and the tenth prediction frame is the next image frame adjacent to the eleventh drawing frame.
  18. The method according to any one of claims 15 to 17, wherein the determining, by the electronic device, that the tenth moving object and the eleventh moving object match according to the attribute of the tenth moving object and the attribute of the eleventh moving object specifically comprises:
    the electronic device establishes a first index table, wherein the first index table is used for storing the moving objects and the attributes of the moving objects in the tenth drawing frame, and the first index table comprises the tenth moving object and the attribute of the tenth moving object;
    the electronic device establishes a second index table, wherein the second index table is used for storing the moving objects and the attributes of the moving objects in the eleventh drawing frame, and the second index table comprises the eleventh moving object and the attribute of the eleventh moving object;
    the electronic device takes the eleventh moving object out of the second index table, and determines the tenth moving object matching the eleventh moving object from the first index table.
  19. The method of any of claims 15-18, wherein matching the tenth moving object with the eleventh moving object comprises: the attribute of the tenth moving object is the same as the attribute of the eleventh moving object.
  20. The method according to claim 15, wherein the determining, by the electronic device, a drawing result of a twelfth moving object in a tenth prediction frame according to the tenth moving object and the eleventh moving object specifically includes:
    The electronic device determines a first coordinate of a first point of the tenth moving object and determines a second coordinate of a second point of the eleventh moving object;
    the electronic equipment determines a tenth motion vector from the tenth moving object to the eleventh moving object according to the displacement from the first coordinate to the second coordinate;
    and the electronic equipment determines a drawing result of a twelfth moving object in the tenth prediction frame according to the tenth motion vector and the eleventh moving object.
  21. The method according to claim 20, wherein the electronic device determines the first coordinates of the first point of the tenth moving object, specifically comprising:
    the electronic equipment determines a first coordinate of a first point according to the coordinates of all pixel points of the tenth moving object;
    the determining, by the electronic device, a second coordinate of a second point of the eleventh moving object specifically includes:
    and the electronic equipment determines a second coordinate of a second point according to the coordinates of all pixel points of the eleventh moving object.
  22. The method according to any one of claims 20 or 21, wherein the first point is a geometric center point of the tenth moving object, and the second point is a geometric center point of the eleventh moving object.
  23. The method according to claim 20, wherein the determining, by the electronic device, a drawing result of a twelfth moving object in a tenth prediction frame according to the tenth motion vector and the eleventh moving object specifically includes:
    and the electronic equipment determines a second pixel point of a twelfth moving object in the tenth predicted frame according to the tenth motion vector and the first pixel point of the eleventh moving object.
  24. The method according to claim 23, wherein the determining, by the electronic device, a second pixel point of a twelfth moving object in the tenth predicted frame according to the tenth motion vector and the first pixel point of the eleventh moving object comprises:
    the electronic device determines an eleventh motion vector for the eleventh moving object to move to the twelfth moving object according to the tenth motion vector;
    the electronic device determines the second pixel point of the twelfth moving object in the tenth prediction frame according to the eleventh motion vector and the first pixel point of the eleventh moving object, wherein the second pixel point is the pixel point to which the first pixel point moves from the eleventh drawing frame to the tenth prediction frame according to the eleventh motion vector.
  25. The method of claim 24, wherein the eleventh motion vector is K times the tenth motion vector, and wherein K is greater than 0 and less than 1.
  26. The method of claim 25, wherein K is equal to 0.5.
  27. An electronic device, characterized in that the electronic device comprises: one or more processors (CPU), a graphics processor (GPU), a memory, and a display screen; the memory is coupled with the one or more processors; the CPU is coupled with the GPU; wherein:
    the memory for storing computer program code, the computer program code comprising computer instructions;
    the CPU is used for determining a tenth moving object in a tenth drawing frame according to a tenth drawing instruction, and determining an eleventh moving object in an eleventh drawing frame according to an eleventh drawing instruction;
    the GPU is used for determining that the tenth moving object is matched with the eleventh moving object according to the attribute of the tenth moving object and the attribute of the eleventh moving object; determining a drawing result of a twelfth moving object in a tenth prediction frame according to the tenth moving object and the eleventh moving object;
    the display screen is used for displaying the tenth drawing frame, the eleventh drawing frame and the tenth prediction frame.
  28. The electronic device of claim 27, wherein the tenth rendered frame is displayed in the display screen before the eleventh rendered frame, and wherein the tenth predicted frame is displayed in the display screen after the eleventh rendered frame.
  29. The electronic device of any one of claims 27 or 28, wherein the tenth rendering frame and the eleventh rendering frame are separated by one frame image frame, and wherein the tenth prediction frame is a next frame image frame adjacent to the eleventh rendering frame.
  30. The electronic device according to any of claims 27-29, wherein the CPU is specifically configured to:
    establishing a first index table, where the first index table is used to store the moving object and the attribute of the moving object in the tenth drawing frame, and the first index table includes the tenth moving object and the attribute of the tenth moving object;
    establishing a second index table, wherein the second index table is used for storing the moving objects and the attributes of the moving objects in the eleventh drawing frame, and the second index table comprises the eleventh moving object and the attribute of the eleventh moving object;
    the GPU is specifically configured to:
    taking the eleventh moving object out of the second index table, and determining the tenth moving object matched with the eleventh moving object from the first index table.
  31. The electronic device of any of claims 27-30, wherein matching the tenth moving object with the eleventh moving object comprises: the attribute of the tenth moving object is the same as the attribute of the eleventh moving object.
  32. The electronic device of claim 31, wherein the GPU is specifically configured to:
    determining a first coordinate of a first point of the tenth moving object, and determining a second coordinate of a second point of the eleventh moving object;
    determining a tenth motion vector of the tenth moving object to the eleventh moving object according to the displacement of the first coordinate to the second coordinate;
    and determining a drawing result of the twelfth moving object in the tenth prediction frame according to the tenth motion vector and the eleventh moving object.
  33. The electronic device of claim 32, wherein the GPU is specifically configured to:
    determining a first coordinate of a first point according to the coordinates of all pixel points of the tenth moving object;
    and determining a second coordinate of the second point according to the coordinates of all pixel points of the eleventh moving object.
  34. The electronic device according to claim 32 or 33, wherein the first point is a geometric center point of the tenth moving object, and the second point is a geometric center point of the eleventh moving object.
  35. The electronic device of claim 32, wherein the GPU is specifically configured to:
    and determining a second pixel point of a twelfth moving object in the tenth prediction frame according to the tenth motion vector and the first pixel point of the eleventh moving object.
  36. The electronic device of claim 35, wherein the GPU is specifically configured to:
    determining an eleventh motion vector of the eleventh moving object moving to the twelfth moving object according to the tenth motion vector;
    and determining a second pixel point of the twelfth moving object in the tenth prediction frame according to an eleventh motion vector and the first pixel point of the eleventh moving object, wherein the second pixel point is the pixel point obtained by moving the first pixel point from the eleventh drawing frame to the tenth prediction frame according to the eleventh motion vector.
  37. The electronic device of claim 36, wherein the eleventh motion vector is K times the tenth motion vector, and wherein K is greater than 0 and less than 1.
  38. The electronic device of claim 37, wherein K is equal to 0.5.
  39. A method of frame prediction, the method comprising:
    determining a predicted motion vector of a first vertex from a target reference frame to a predicted frame according to predicted motion vectors, from the target reference frame to the predicted frame, of blocks around the first vertex, wherein the first vertex is a vertex of a first block, the first block is a block in the target reference frame, the target reference frame is a frame selected from a first reference frame and a second reference frame according to the position of the predicted frame relative to the first reference frame or the second reference frame, and the first reference frame and the second reference frame are two adjacent frames in a video stream;
    determining coordinates of the first vertex in the predicted frame according to the coordinates of the first vertex in the target reference frame and the predicted motion vector of the first vertex;
    determining a pixel block of the first block in the predicted frame according to the coordinates of the vertex of the first block in the predicted frame and the coordinates in the target reference frame;
    displaying the predicted frame, the predicted frame including the pixel block.
  40. The method of claim 39, wherein the determining the pixel block of the first block in the predicted frame according to the coordinates of the vertex of the first block in the predicted frame and the coordinates in the target reference frame comprises:
    obtaining the corresponding relation between the coordinates of the pixels of the first block in the predicted frame and the coordinates of the pixels of the first block in the target reference frame according to the coordinates of the vertex of the first block in the predicted frame and the coordinates of the vertex of the first block in the target reference frame;
    and determining the coordinates of the pixels in the first block in the predicted frame according to the coordinates of the pixels in the first block in the target reference frame and the corresponding relation.
  41. The method according to claim 40, wherein said obtaining the corresponding relationship between the coordinates of the pixel of the first block in the predicted frame and the coordinates of the pixel of the first block in the target reference frame according to the coordinates of the vertex of the first block in the predicted frame and the coordinates of the vertex of the first block in the target reference frame comprises:
    respectively inputting coordinates of four vertexes of the first block in the prediction frame and coordinates of the four vertexes of the first block in the target reference frame into a homography transformation formula to obtain a homography equation set, wherein the homography equation set comprises four equations;
    solving the homography equation set to obtain a homography transformation matrix corresponding to the first block;
    and obtaining a homography transformation formula corresponding to the first block according to the homography transformation matrix corresponding to the first block, wherein the homography transformation formula corresponding to the first block is used for expressing the corresponding relation between the coordinates of the pixels of the first block in the prediction frame and the coordinates in the target reference frame.
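As a non-limiting illustration of claim 41, the homography transformation matrix of a block can be solved with a direct linear transform: each of the four vertex pairs contributes one pair of scalar equations (eight in all once the bottom-right entry of the matrix is fixed to 1), which is one way to read the claimed "four equations". The numpy-based solver below is an assumption of this sketch, not the claimed implementation.

```python
import numpy as np

def solve_homography(ref_pts, pred_pts):
    """Solve the 3x3 homography H mapping target-reference-frame
    vertex coordinates ref_pts to predicted-frame coordinates
    pred_pts; each of the four vertex pairs yields two rows."""
    a, b = [], []
    for (x, y), (u, v) in zip(ref_pts, pred_pts):
        a.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        a.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.asarray(a, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def map_point(h_matrix, point):
    """The homography transformation formula of the block: map one
    pixel coordinate from the target reference frame into the
    predicted frame (claim 42)."""
    x, y = point
    u, v, w = h_matrix @ np.array([x, y, 1.0])
    return u / w, v / w
```

Inverting the returned matrix with numpy.linalg.inv and applying map_point to a predicted-frame coordinate gives the reverse lookup that claim 43 uses to fill coordinates no forward-mapped pixel reached.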
  42. The method according to claim 41, wherein said determining coordinates of the pixel in the first block in the predicted frame according to the coordinates of the pixel in the first block in the target reference frame and the correspondence comprises:
    and inputting the coordinate of the first pixel in the target reference frame into a homography transformation formula corresponding to the first block to obtain the coordinate of the first pixel in the prediction frame, wherein the first pixel is a pixel of the first block.
  43. The method of claim 42, wherein a region of the pixel block includes a greater number of coordinates than the first block does in the predicted frame, the region of the pixel block being determined by the vertexes of the first block, and the determining the coordinates of the pixels in the first block in the predicted frame according to the coordinates of the pixels in the first block in the target reference frame and the correspondence further comprises:
    eliminating the coordinates of the pixels of the first block in the predicted frame from the coordinates in the pixel block region to obtain first coordinates;
    and inputting the first coordinate into a homography transformation formula corresponding to the first block to obtain a pixel of the first coordinate in the target reference frame.
  44. The method of claim 43, wherein prior to determining the predicted motion vector for the first vertex from the target reference frame to the predicted frame based on predicted motion vectors for blocks surrounding the first vertex from the target reference frame to the predicted frame, the method further comprises:
    acquiring the first reference frame and the second reference frame;
    determining the target reference frame from the first reference frame and the second reference frame according to the position of the frame to be predicted;
    dividing the target reference frame into blocks according to square blocks of a first size;
    calculating a motion vector of the block from the target reference frame to the predicted frame.
  45. The method of claim 44, wherein, when the target reference frame is the first reference frame, the calculating the motion vector of the block from the target reference frame to the predicted frame comprises:
    Obtaining a motion vector of the first block from the first reference frame to the second reference frame;
    determining half of a motion vector of the first block from the first reference frame to the second reference frame as a predicted motion vector of the first block from the target reference frame to the predicted frame.
  46. The method of claim 45, wherein when the target reference frame is the second reference frame, the calculating a motion vector for the first block from the target reference frame to the predicted frame comprises:
    obtaining a motion vector of the first block from the second reference frame to the first reference frame;
    determining half of a negative value of a motion vector of the first block from the second reference frame to the first reference frame as a predicted motion vector of the first block from the target reference frame to the predicted frame.
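As a non-limiting illustration of claims 45-46, the block's predicted motion vector is a simple scaling of the inter-reference vector; the Python sketch below mirrors the claimed arithmetic and assumes vectors are numpy arrays or plain floats.

```python
def prediction_mv_from_first_ref(mv_first_to_second):
    """Claim 45: the target reference frame is the first reference
    frame, so take half the first-to-second motion vector."""
    return 0.5 * mv_first_to_second

def prediction_mv_from_second_ref(mv_second_to_first):
    """Claim 46: the target reference frame is the second reference
    frame, so take half the negative of the second-to-first vector."""
    return 0.5 * -mv_second_to_first
```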
  47. The method of claim 46, wherein the determining the predicted motion vector of the first vertex from the target reference frame to the predicted frame according to the predicted motion vectors of the blocks surrounding the first vertex from the target reference frame to the predicted frame comprises:
    determining an average of predicted motion vectors of blocks around the first vertex from the target reference frame to the predicted frame as a predicted motion vector of the first vertex from the target reference frame to the predicted frame.
  48. The method of claim 47, wherein determining the coordinates of the first vertex in the predicted frame based on the coordinates of the first vertex in the target reference frame and the predicted motion vector of the first vertex comprises:
    and adding the coordinates of the first vertex in the target reference frame and the predicted motion vector of the first vertex to obtain the coordinates of the first vertex in the predicted frame.
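As a non-limiting illustration of claims 47-48, a vertex shared by several blocks takes the average of those blocks' predicted motion vectors, and its predicted-frame coordinate is its reference coordinate plus that average; the numpy representation is an assumption of this sketch.

```python
import numpy as np

def predict_vertex(vertex_xy, surrounding_block_mvs):
    """Average the surrounding blocks' predicted motion vectors
    (claim 47) and add the result to the vertex's coordinates in the
    target reference frame (claim 48)."""
    mv = np.mean(np.asarray(surrounding_block_mvs, dtype=float), axis=0)
    return np.asarray(vertex_xy, dtype=float) + mv
```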
  49. An electronic device, characterized in that the electronic device comprises: one or more processors, memory, and a display screen;
    the memory coupled with the one or more processors, the memory to store computer program code, the computer program code comprising computer instructions, the one or more processors to invoke the computer instructions to cause the electronic device to perform:
    determining a predicted motion vector of a first vertex from a target reference frame to a predicted frame according to predicted motion vectors, from the target reference frame to the predicted frame, of blocks around the first vertex, wherein the first vertex is a vertex of a first block, the first block is a block in the target reference frame, the target reference frame is a frame selected from a first reference frame and a second reference frame according to the position of the predicted frame relative to the first reference frame or the second reference frame, and the first reference frame and the second reference frame are two adjacent frames in a video stream;
    determining coordinates of the first vertex in the predicted frame according to the coordinates of the first vertex in the target reference frame and the predicted motion vector of the first vertex;
    determining a pixel block of the first block in the predicted frame according to the coordinates of the vertex of the first block in the predicted frame and the coordinates in the target reference frame;
    displaying the predicted frame, including the block of pixels.
  50. The electronic device of claim 49, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
    obtaining the corresponding relation between the coordinates of the pixels of the first block in the predicted frame and the coordinates of the pixels of the first block in the target reference frame according to the coordinates of the vertex of the first block in the predicted frame and the coordinates of the vertex of the first block in the target reference frame;
    and determining the coordinates of the pixels in the first block in the predicted frame according to the coordinates of the pixels in the first block in the target reference frame and the corresponding relation.
  51. The electronic device of claim 50, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
    respectively inputting coordinates of four vertexes of the first block in the prediction frame and coordinates of the four vertexes of the first block in the target reference frame into a homography transformation formula to obtain a homography equation set, wherein the homography equation set comprises four equations;
    solving the homography equation set to obtain a homography transformation matrix corresponding to the first block;
    and obtaining a homography transformation formula corresponding to the first block according to the homography transformation matrix corresponding to the first block, wherein the homography transformation formula corresponding to the first block is used for expressing the corresponding relation between the coordinates of the pixels of the first block in the prediction frame and the coordinates in the target reference frame.
  52. The electronic device of claim 51, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
    and inputting the coordinate of the first pixel in the target reference frame into a homography transformation formula corresponding to the first block to obtain the coordinate of the first pixel in the prediction frame, wherein the first pixel is a pixel of the first block.
  53. The electronic device of claim 52, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
    eliminating the coordinates of the pixels of the first block in the prediction frame from the coordinates in the pixel block area to obtain first coordinates;
    and inputting the first coordinate into a homography transformation formula corresponding to the first block to obtain a pixel of the first coordinate in the target reference frame.
  54. The electronic device of claim 49, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
    acquiring the first reference frame and the second reference frame;
    determining the target reference frame from the first reference frame and the second reference frame according to the position of the frame to be predicted;
    dividing the target reference frame into blocks according to square blocks of a first size;
    calculating a motion vector of the block from the target reference frame to the predicted frame.
  55. The electronic device of claim 54, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
    obtaining a motion vector of the first block from the first reference frame to the second reference frame;
    determining half of a motion vector of the first block from the first reference frame to the second reference frame as a predicted motion vector of the first block from the target reference frame to the predicted frame.
  56. The electronic device of claim 54, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
    obtaining a motion vector of the first block from the second reference frame to the first reference frame;
    determining half of a negative value of a motion vector of the first block from the second reference frame to the first reference frame as a predicted motion vector of the first block from the target reference frame to the predicted frame.
  57. The electronic device of claim 49, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
    determining an average of predicted motion vectors of blocks around the first vertex from the target reference frame to the predicted frame as a predicted motion vector of the first vertex from the target reference frame to the predicted frame.
  58. The electronic device of claim 49, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
    and adding the coordinates of the first vertex in the target reference frame and the predicted motion vector of the first vertex to obtain the coordinates of the first vertex in the predicted frame.
  59. A method of image frame generation, the method comprising:
    determining a tenth position coordinate of an eleventh vertex of the prediction block in the prediction image frame according to the depth values of the tenth block and the twelfth block and the position coordinates of the tenth block and the twelfth block in the image frame;
    wherein the tenth block is a block in the first image frame; the twelfth block is a block which is determined in the second image frame according to a matching algorithm and is matched with the tenth block;
    generating a prediction block according to the color data of a reference block and the tenth position coordinate, wherein the reference block is one of the tenth block or the twelfth block;
    generating the predicted image frame, the predicted image frame including the predicted block.
  60. The method of claim 59, wherein the position coordinates in the image frame are position coordinates in a first coordinate system, the first coordinate system is a two-dimensional coordinate system, and the determining the tenth position coordinate of the eleventh vertex of the prediction block in the prediction image frame comprises:
    according to a first depth value of the tenth block and a twelfth position coordinate of the tenth block in the first coordinate system, calculating to obtain a thirteenth position coordinate in a second coordinate system, wherein the second coordinate system is a three-dimensional coordinate system;
    calculating to obtain a fifteenth position coordinate under the second coordinate system according to a second depth value of the twelfth block and a fourteenth position coordinate of the twelfth block under the first coordinate system;
    according to the thirteenth position coordinate and the fifteenth position coordinate, a sixteenth position coordinate in the second coordinate system is obtained through calculation;
    and calculating the tenth position coordinate in the first coordinate system according to the sixteenth position coordinate.
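As a non-limiting illustration of claim 60, the 2D-to-3D and 3D-to-2D conversions can be sketched with a pinhole camera model; the intrinsics (FX, FY, CX, CY) and the use of the first-proportion rule of claims 66 and 70, taking the first image frame as reference, are assumptions of this sketch.

```python
import numpy as np

FX = FY = 1000.0          # assumed focal lengths, in pixels
CX, CY = 640.0, 360.0     # assumed principal point

def to_3d(xy, depth):
    """Lift a 2D block coordinate into the three-dimensional second
    coordinate system using the block's depth value."""
    x, y = xy
    return np.array([(x - CX) * depth / FX, (y - CY) * depth / FY, depth])

def to_2d(p):
    """Project a 3D point back into the two-dimensional first
    coordinate system."""
    return np.array([p[0] * FX / p[2] + CX, p[1] * FY / p[2] + CY])

def predicted_vertex_2d(xy_tenth, depth_tenth, xy_twelfth, depth_twelfth,
                        first_proportion=0.5):
    """Claim 60 sketch: lift both matched blocks to 3D, move along the
    first displacement vector by the first proportion (claims 66/70, a
    midpoint when the proportion is 0.5), and project the resulting
    sixteenth position coordinate back to 2D as the tenth position
    coordinate."""
    p13 = to_3d(xy_tenth, depth_tenth)        # thirteenth coordinate
    p15 = to_3d(xy_twelfth, depth_twelfth)    # fifteenth coordinate
    p16 = p13 + first_proportion * (p15 - p13)
    return to_2d(p16)
```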
  61. The method of claim 59 or 60, wherein before generating the prediction block, the method further comprises:
    determining a seventeenth position coordinate of a twelfth vertex of the prediction block in the prediction image frame according to the depth values of the thirteenth block and the fourteenth block and the position coordinates of the thirteenth block and the fourteenth block in the image frame;
    wherein the thirteenth block is a block adjacent to the tenth block, the fourteenth block is a block adjacent to the twelfth block, and the thirteenth block and the fourteenth block are mutually matched blocks determined according to the matching algorithm;
    determining an eighteenth position coordinate of a thirteenth vertex of the prediction block in the prediction image frame according to the depth values of the fifteenth block and the sixteenth block and the position coordinates of the fifteenth block and the sixteenth block in the image frame;
    Wherein the fifteenth block is a block adjacent to the thirteenth block, the sixteenth block is a block adjacent to the fourteenth block, and the fifteenth block and the sixteenth block are mutually matched blocks determined according to the matching algorithm;
    determining a nineteenth position coordinate of a fourteenth vertex of the prediction block in the prediction image frame according to the depth values of the seventeenth block and the eighteenth block and the position coordinates of the seventeenth block and the eighteenth block in the image frame;
    wherein the seventeenth block is a block adjacent to both the tenth block and the fifteenth block, the eighteenth block is a block adjacent to both the twelfth block and the sixteenth block, and the seventeenth block and the eighteenth block are mutually matched blocks determined according to the matching algorithm.
  62. The method of claim 61, wherein the position coordinate in the image frame is a position coordinate in a first coordinate system, and wherein determining a seventeenth position coordinate of a twelfth vertex of the prediction block in the prediction image frame according to the depth values of the thirteenth and fourteenth blocks and the position coordinates of the thirteenth and fourteenth blocks in the image frame comprises:
    Calculating to obtain a twenty-first position coordinate under a second coordinate system according to a third depth value of the thirteenth block and a twentieth position coordinate of the thirteenth block under the first coordinate system, wherein the second coordinate system is a three-dimensional coordinate system;
    calculating to obtain a twenty-third position coordinate under the second coordinate system according to a fourth depth value of the fourteenth block and a twenty-second position coordinate of the fourteenth block under the first coordinate system;
    calculating to obtain a twenty-fourth position coordinate under the second coordinate system according to the twenty-first position coordinate and the twenty-third position coordinate;
    and calculating the seventeenth position coordinate under the first coordinate system according to the twenty-fourth position coordinate.
  63. The method according to claim 61 or 62, wherein the generating a prediction block according to the color data of the reference block and the tenth position coordinate comprises:
    determining the corresponding relation between the tenth position coordinate, the seventeenth position coordinate, the eighteenth position coordinate and the nineteenth position coordinate and the position coordinates of four vertexes in a reference block respectively;
    generating the prediction block according to the correspondence and the color data of the reference block.
  64. The method of claim 63, wherein the color data comprises color data of each pixel in the reference block, and wherein the generating the prediction block according to the correspondence and the color data of the reference block comprises:
    generating a homography transformation formula according to the corresponding relation;
    determining the position coordinates of each pixel in the prediction image frame according to the homography transformation formula and the position coordinates of each pixel in the reference image frame;
    and generating the prediction block according to the color data of each pixel and the position coordinate of each pixel in the prediction image frame.
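As a non-limiting illustration of claims 63-64, once the four vertex correspondences fix a homography for the block, the prediction block can be painted by inverse-mapping each predicted-frame pixel into the reference image frame and copying its color; the nearest-neighbor sampling and bounding-box loop below are assumptions of this sketch.

```python
import numpy as np

def warp_block(ref_frame, h_ref_to_pred, pred_bbox):
    """Fill a prediction block: for every pixel inside pred_bbox
    (row0, row1, col0, col1) in the predicted image frame, invert the
    block's homography, sample the reference image frame, and copy
    the color data."""
    h_inv = np.linalg.inv(h_ref_to_pred)
    r0, r1, c0, c1 = pred_bbox
    out = np.zeros((r1 - r0, c1 - c0, ref_frame.shape[2]), ref_frame.dtype)
    for r in range(r0, r1):
        for c in range(c0, c1):
            x, y, w = h_inv @ np.array([c, r, 1.0])   # (col, row, 1)
            sc, sr = int(round(x / w)), int(round(y / w))
            if 0 <= sr < ref_frame.shape[0] and 0 <= sc < ref_frame.shape[1]:
                out[r - r0, c - c0] = ref_frame[sr, sc]
    return out
```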
  65. The method of claim 63 or 64, wherein the reference block comprises a fifteenth vertex, a sixteenth vertex, a seventeenth vertex and an eighteenth vertex, and the determining of the corresponding relationship between the tenth position coordinate, the seventeenth position coordinate, the eighteenth position coordinate and the nineteenth position coordinate and the position coordinates of the four vertexes in the reference block includes:
    determining the tenth position coordinate and the fifteenth vertex position coordinate as a set of corresponding position coordinates;
    determining the seventeenth position coordinate and the sixteenth vertex position coordinate as a group of corresponding position coordinates;
    determining the eighteenth position coordinate and the seventeenth vertex position coordinate as a group of corresponding position coordinates;
    determining the nineteenth position coordinate and the eighteenth vertex position coordinate as a group of corresponding position coordinates;
    wherein the reference block is a quadrangle, and a position direction of the fifteenth vertex relative to a center point of the reference block is the same as a position direction of the tenth block relative to the fifteenth block; a positional direction of the sixteenth vertex with respect to the center point of the reference block is the same as a positional direction of the thirteenth block with respect to the seventeenth block; a positional direction of the seventeenth vertex with respect to the center point of the reference block is the same as a positional direction of the fifteenth block with respect to the tenth block; the position direction of the eighteenth vertex with respect to the center point of the reference block is the same as the position direction of the seventeenth block with respect to the thirteenth block.
  66. The method according to claim 60, wherein said calculating a sixteenth position coordinate in said second coordinate system based on said thirteenth position coordinate and said fifteenth position coordinate comprises:
    calculating a displacement vector from the thirteenth position coordinate to the fifteenth position coordinate to obtain a first displacement vector;
    the sixteenth position coordinate is calculated from the thirteenth position coordinate or the fifteenth position coordinate and the first displacement vector of the first proportion.
  67. The method of claim 66, wherein the first ratio is equal to a preset value.
  68. The method of claim 66, wherein the image frame in which the reference block is located is a reference image frame, the first image frame precedes the second image frame in the data stream, and before the sixteenth position coordinate is calculated, the method further comprises:
    determining that the first ratio is equal to a ratio of an absolute value of the first time difference to an absolute value of the second time difference;
    the first time difference value is equal to a difference between a point in time of the predicted image frame in the data stream and a point in time of the reference image frame in the data stream;
    the second time difference value is equal to a difference between a point in time of the first image frame in the data stream and a point in time of the second image frame in the data stream.
  69. The method of claim 66, wherein the image frame in which the reference block is located is a reference image frame, the first image frame precedes the second image frame in the data stream, and before the sixteenth position coordinate is calculated, the method further comprises:
    determining that the first ratio is equal to a ratio of the first value to the second value;
    the first value is equal to the first number plus one; the second value is equal to the second number plus one;
    the first number is equal to a number of image frames spaced between the reference image frame and the predicted image frame in the data stream; the second number is equal to a number of image frames spaced between the first image frame and the second image frame in the data stream.
  70. The method of claim 68 or 69, wherein said calculating the sixteenth position coordinate from the thirteenth position coordinate or the fifteenth position coordinate and a first proportion of the first displacement vector comprises:
    if the reference image frame is the first image frame, calculating the sixteenth position coordinate according to the thirteenth position coordinate and the first displacement vector of the first proportion;
    and if the reference image frame is the second image frame, calculating the sixteenth position coordinate according to the fifteenth position coordinate and the first displacement vector of the first proportion.
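As a non-limiting illustration of claims 67-69, the first proportion is either a preset constant, a ratio of timestamp differences, or a ratio of frame counts; the Python sketch below mirrors that arithmetic (the frame-count variant takes the second number as the count of frames between the first image frame and the second image frame, as in claim 69 above).

```python
def first_proportion_from_times(t_pred, t_ref, t_first, t_second):
    """Claim 68: ratio of the absolute timestamp differences."""
    return abs(t_pred - t_ref) / abs(t_first - t_second)

def first_proportion_from_counts(frames_ref_to_pred, frames_first_to_second):
    """Claim 69: each spacing count plus one, then their ratio."""
    return (frames_ref_to_pred + 1) / (frames_first_to_second + 1)
```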
  71. The method according to any of claims 59-70, wherein the image frame in which the reference block is located is a reference image frame, the first image frame is located before the second image frame in the data stream, and before the determining the tenth position coordinate of the eleventh vertex of the prediction block in the predicted image frame, the method further comprises:
    determining one of the first image frame and the second image frame as the reference image frame;
    determining another image frame of the first image frame and the second image frame except the reference image frame as a matching image frame;
    dividing the reference image frame into blocks to obtain the reference blocks;
    determining a block matched with the reference block in the matched image frame as a matched block according to the matching algorithm; wherein if the reference block is the tenth block, the matching block is the twelfth block; if the reference block is the twelfth block, the matching block is the tenth block.
  72. The method of claim 71, wherein said determining a block in the matched image frame that matches the reference block as a matched block according to the matching algorithm comprises:
    determining a block with the highest similarity to the reference block in a first region of the matched image frame as the matched block; the first region is a region of a preset shape of a first area centered on a reference position coordinate in the matching image frame, the reference position coordinate being a position coordinate of the reference block in the reference image frame.
  73. The method of claim 72, wherein prior to said determining a block in the first region of the matched image frame that is most similar to the reference block as the matched block, the method further comprises:
    and calculating the first area according to the reference depth value of the reference block, wherein the larger the reference depth value is, the smaller the calculated first area is.
  74. The method of claim 72, wherein prior to said determining a block in the first region of the matched image frame that is most similar to the reference block as the matched block, the method further comprises:
    if the reference depth value of the reference block is larger than or equal to a first threshold value, determining that the first area is equal to a first preset area;
    if the reference depth value is larger than a second threshold value and smaller than the first threshold value, calculating to obtain the first area according to the reference depth value, wherein the larger the reference depth value is, the smaller the first area is;
    if the reference depth value is smaller than or equal to the second threshold value, determining that the first area is equal to a second preset area;
    the first threshold is larger than the second threshold, and the second preset area is larger than the first preset area.
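As a non-limiting illustration of claims 72-74, block matching searches a first region around the reference position whose size shrinks as the reference depth grows (distant geometry moves less on screen); the sum-of-absolute-differences similarity, the linear radius schedule, and all constants below are assumptions of this sketch.

```python
import numpy as np

def search_radius(depth, second_threshold=1.0, first_threshold=10.0,
                  second_preset=32, first_preset=8):
    """Claim 74: clamp the search radius with two depth thresholds and
    shrink it monotonically in between (larger depth, smaller area)."""
    if depth >= first_threshold:
        return first_preset
    if depth <= second_threshold:
        return second_preset
    t = (depth - second_threshold) / (first_threshold - second_threshold)
    return int(round(second_preset + t * (first_preset - second_preset)))

def match_block(ref_block, match_frame, center_rc, radius):
    """Claim 72: return the top-left (row, col) of the block inside the
    first region that is most similar to the reference block."""
    bh, bw = ref_block.shape[:2]
    best_sad, best_rc = None, center_rc
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            r, c = center_rc[0] + dr, center_rc[1] + dc
            if (r < 0 or c < 0 or r + bh > match_frame.shape[0]
                    or c + bw > match_frame.shape[1]):
                continue
            cand = match_frame[r:r + bh, c:c + bw].astype(int)
            sad = np.abs(cand - ref_block.astype(int)).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_rc = sad, (r, c)
    return best_rc
```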
  75. An electronic device, characterized in that the electronic device comprises: one or more processors and memory; the memory coupled with the one or more processors, the memory to store computer program code, the computer program code including computer instructions, the one or more processors to invoke the computer instructions to cause the electronic device to perform:
    determining a tenth position coordinate of an eleventh vertex of the prediction block in the prediction image frame according to the depth values of the tenth block and the twelfth block and the position coordinates of the tenth block and the twelfth block in the image frame;
    wherein the tenth block is a block in the first image frame; the twelfth block is a block which is determined in the second image frame according to a matching algorithm and is matched with the tenth block;
    generating a prediction block according to the color data of a reference block and the tenth position coordinate, wherein the reference block is one of the tenth block or the twelfth block;
    generating the predicted image frame, the predicted image frame including the prediction block.
  76. The electronic device of claim 75, wherein the position coordinates in the image frame are position coordinates in a first coordinate system, the first coordinate system is a two-dimensional coordinate system, and in the determining the tenth position coordinate of the eleventh vertex of the prediction block in the predicted image frame, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform:
    according to a first depth value of the tenth block and a twelfth position coordinate of the tenth block in the first coordinate system, calculating to obtain a thirteenth position coordinate in a second coordinate system, wherein the second coordinate system is a three-dimensional coordinate system;
    calculating to obtain a fifteenth position coordinate under the second coordinate system according to the second depth value of the twelfth block and a fourteenth position coordinate of the twelfth block under the first coordinate system;
    calculating a sixteenth position coordinate under the second coordinate system according to the thirteenth position coordinate and the fifteenth position coordinate;
    and calculating the tenth position coordinate in the first coordinate system according to the sixteenth position coordinate.
  77. The electronic device of claim 75 or 76, wherein prior to the generating the prediction block, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
    determining a seventeenth position coordinate of a twelfth vertex of the prediction block in the prediction image frame according to the depth values of the thirteenth block and the fourteenth block and the position coordinates of the thirteenth block and the fourteenth block in the image frame;
    wherein the thirteenth block is a block adjacent to the tenth block, the fourteenth block is a block adjacent to the twelfth block, and the thirteenth block and the fourteenth block are mutually matched blocks determined according to the matching algorithm;
    determining an eighteenth position coordinate of a thirteenth vertex of the prediction block in the prediction image frame according to the depth values of the fifteenth block and the sixteenth block and the position coordinates of the fifteenth block and the sixteenth block in the image frame;
    wherein the fifteenth block is a block adjacent to the thirteenth block, the sixteenth block is a block adjacent to the fourteenth block, and the fifteenth block and the sixteenth block are mutually matched blocks determined according to the matching algorithm;
    determining a nineteenth positional coordinate of a fourteenth vertex of the prediction block in the prediction image frame according to depth values of the seventeenth block and the eighteenth block, and positional coordinates of the seventeenth block and the eighteenth block in the image frame;
    wherein the seventeenth block is a block adjacent to both the tenth block and the fifteenth block, the eighteenth block is a block adjacent to both the twelfth block and the sixteenth block, and the seventeenth block and the eighteenth block are mutually matched blocks determined according to the matching algorithm.
  78. The electronic device of claim 77, wherein the position coordinates in the image frame are position coordinates in a first coordinate system, and in the determining a seventeenth position coordinate of a twelfth vertex of the prediction block in the predicted image frame according to the depth values of the thirteenth block and the fourteenth block and the position coordinates of the thirteenth block and the fourteenth block in the image frame, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform:
    calculating to obtain a twenty-first position coordinate under a second coordinate system according to a third depth value of the thirteenth block and a twentieth position coordinate of the thirteenth block under the first coordinate system, wherein the second coordinate system is a three-dimensional coordinate system;
    calculating to obtain a twenty-third position coordinate under the second coordinate system according to a fourth depth value of the fourteenth block and a twenty-second position coordinate of the fourteenth block under the first coordinate system;
    calculating to obtain a twenty-fourth position coordinate under the second coordinate system according to the twenty-first position coordinate and the twenty-third position coordinate;
    and calculating the seventeenth position coordinate under the first coordinate system according to the twenty-fourth position coordinate.
  79. The electronic device of claim 77 or 78, wherein, in the generating a prediction block according to the color data of a reference block and the tenth position coordinate, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform:
    determining the corresponding relation between the tenth position coordinate, the seventeenth position coordinate, the eighteenth position coordinate and the nineteenth position coordinate and the position coordinates of four vertexes in a reference block respectively;
    Generating the prediction block according to the correspondence and the color data of the reference block.
  80. The electronic device of claim 79, wherein the color data includes color data of each pixel in the reference block, and in the generating the prediction block according to the correspondence and the color data of the reference block, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform:
    generating a homography transformation formula according to the corresponding relation;
    determining the position coordinates of each pixel in the prediction image frame according to the homography transformation formula and the position coordinates of each pixel in the reference image frame;
    and generating the prediction block according to the color data of each pixel and the position coordinate of each pixel in the prediction image frame.
  81. The electronic device of claim 79 or 80, wherein the reference block comprises a fifteenth vertex, a sixteenth vertex, a seventeenth vertex and an eighteenth vertex, and in the determining the correspondence of the tenth position coordinate, the seventeenth position coordinate, the eighteenth position coordinate and the nineteenth position coordinate to the position coordinates of the four vertexes in the reference block, the one or more processors are configured to invoke the computer instructions to cause the electronic device to perform:
    determining the tenth position coordinate and the fifteenth vertex position coordinate as a group of corresponding position coordinates;
    determining the seventeenth position coordinate and the sixteenth vertex position coordinate as a group of corresponding position coordinates;
    determining the eighteenth position coordinate and the seventeenth vertex position coordinate as a group of corresponding position coordinates;
    determining the nineteenth position coordinate and the eighteenth vertex position coordinate as a set of corresponding position coordinates;
    wherein the reference block is a quadrangle, and a position direction of the fifteenth vertex relative to a center point of the reference block is the same as a position direction of the tenth block relative to the fifteenth block; a positional direction of the sixteenth vertex with respect to the center point of the reference block is the same as a positional direction of the thirteenth block with respect to the seventeenth block; a positional direction of the seventeenth vertex with respect to the center point of the reference block is the same as a positional direction of the fifteenth block with respect to the tenth block; the position direction of the eighteenth vertex with respect to the center point of the reference block is the same as the position direction of the seventeenth block with respect to the thirteenth block.
  82. The electronic device according to claim 76, wherein, in the calculating a sixteenth position coordinate in the second coordinate system according to the thirteenth position coordinate and the fifteenth position coordinate, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
    calculating a displacement vector from the thirteenth position coordinate to the fifteenth position coordinate to obtain a first displacement vector;
    the sixteenth position coordinate is calculated from the thirteenth position coordinate or the fifteenth position coordinate and the first displacement vector of the first proportion.
  83. The electronic device of claim 82, wherein the first ratio is equal to a preset value.
  84. The electronic device of claim 82, wherein the image frame in which the reference block is located is a reference image frame, wherein the first image frame is located before the second image frame in the data stream, and wherein before the sixteenth position coordinate is obtained through the calculating, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
    determining that the first ratio is equal to a ratio of an absolute value of the first time difference to an absolute value of the second time difference;
    the first time difference value is equal to a difference between a point in time of the predicted image frame in the data stream and a point in time of the reference image frame in the data stream;
    the second time difference value is equal to a difference between a point in time of the first image frame in the data stream and a point in time of the second image frame in the data stream.
  85. The electronic device of claim 82, wherein the image frame in which the reference block is located is a reference image frame, wherein the first image frame is located before the second image frame in the data stream, and wherein before the sixteenth position coordinate is obtained through the calculating, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
    determining that the first ratio is equal to a ratio of a first value to a second value;
    the first value is equal to the first number plus one; the second value is equal to the second number plus one;
    the first number is equal to a number of image frames spaced between the reference image frame and the predicted image frame in the data stream; the second number is equal to a number of image frames spaced between the first image frame and the second image frame in the data stream.
  86. The electronic device of claim 84 or 85, wherein, in the calculating the sixteenth position coordinate according to the thirteenth position coordinate or the fifteenth position coordinate and the first displacement vector of the first proportion, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform:
    if the reference image frame is the first image frame, calculating the sixteenth position coordinate according to the thirteenth position coordinate and the first displacement vector of the first proportion;
    and if the reference image frame is the second image frame, calculating the sixteenth position coordinate according to the fifteenth position coordinate and the first displacement vector of the first proportion.
  87. A method for generating image frames, which is applied to an electronic device, the method comprising:
    in a first drawing cycle, according to a drawing instruction of a target application program, when the drawing instruction meets a first condition, storing a drawing result into a seventh storage space as a first drawing result, and when the drawing instruction meets a second condition, storing the drawing result into an eighth storage space as a second drawing result;
    generating a seventh image frame according to the first drawing result and the second drawing result;
    in a second drawing cycle, according to a drawing instruction of the target application program, when the drawing instruction meets the first condition, storing a drawing result into the seventh storage space to serve as a third drawing result, and when the drawing instruction meets the second condition, not drawing;
    and generating an eighth image frame according to the third drawing result and the second drawing result.
  88. The method of claim 87, further comprising:
    in a third drawing cycle, according to a drawing instruction of the target application program, when the drawing instruction meets the first condition, storing a drawing result into the seventh storage space as a fourth drawing result, and when the drawing instruction meets the second condition, storing the drawing result into the eighth storage space as a fifth drawing result, wherein the third drawing cycle is located after the first drawing cycle;
    generating a ninth image frame according to the fourth drawing result and the fifth drawing result;
    in a fourth drawing cycle, generating a sixth drawing result according to the first drawing result and the fourth drawing result;
    generating a tenth image frame according to the sixth drawing result and a seventh drawing result; the seventh drawing result is a drawing result obtained when the drawing instruction of the target application program satisfies the second condition in a fifth drawing cycle, and the fifth drawing cycle is before the fourth drawing cycle.
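As a non-limiting illustration of claims 87-88, the drawing cycles alternate between fully drawing both passes and reusing or predicting a cached result; in the Python sketch below, draw_pass, compose, and extrapolate are placeholders for the application's draw-instruction replay, frame composition, and first-condition prediction, and the even/odd scheduling is an assumption of this sketch.

```python
def run_cycles(draw_pass, compose, extrapolate, n_cycles):
    """Alternate drawing cycles: even cycles draw both the
    first-condition and second-condition passes and cache the results
    (claim 87's first cycle, claim 88's third cycle); odd cycles skip
    the second-condition pass and, once two first-condition results
    are stored, predict the next one instead of drawing it (claim 88's
    fourth cycle)."""
    seventh_space = []        # history of first-condition drawing results
    eighth_space = None       # latest second-condition drawing result
    frames = []
    for cycle in range(n_cycles):
        if cycle % 2 == 0:
            seventh_space.append(draw_pass("first"))
            eighth_space = draw_pass("second")
        elif len(seventh_space) >= 2:
            seventh_space.append(
                extrapolate(seventh_space[-2], seventh_space[-1]))
        else:
            seventh_space.append(draw_pass("first"))
        frames.append(compose(seventh_space[-1], eighth_space))
    return frames
```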
  89. The method of claim 88, wherein the generating a seventh image frame according to the first drawing result and the second drawing result comprises:
    storing the first drawing result and the second drawing result into a ninth storage space;
    generating the seventh image frame according to the first drawing result and the second drawing result in the ninth storage space.
  90. The method of claim 89, wherein the generating a ninth image frame according to the fourth drawing result and the fifth drawing result comprises:
    storing the fourth drawing result and the fifth drawing result in the ninth storage space;
    generating the ninth image frame according to the fourth drawing result and the fifth drawing result in the ninth storage space.
  91. The method of claim 88 or 89, wherein the generating a tenth image frame according to the sixth drawing result and the seventh drawing result comprises:
    storing the sixth drawing result and the seventh drawing result into the ninth storage space;
    generating the tenth image frame from the sixth drawing result and the seventh drawing result in the ninth storage space.
  92. The method according to any of claims 88-91, wherein the seventh storage space is comprised of a tenth storage space and an eleventh storage space, the first drawing result is stored in the tenth storage space, the fourth drawing result is stored in the eleventh storage space, and the generating a sixth drawing result according to the first drawing result and the fourth drawing result comprises:
    storing the first drawing result into the eleventh storage space;
    and generating the sixth drawing result according to the first drawing result and the fourth drawing result in the eleventh storage space.
  93. The method of any of claims 88-91, wherein the seventh storage space is comprised of at least three storage spaces, the at least three storage spaces including a tenth storage space, an eleventh storage space, and a twelfth storage space; the first drawing result is stored in a tenth storage space, the fourth drawing result is stored in an eleventh storage space, and the generating a sixth drawing result according to the first drawing result and the fourth drawing result includes:
    storing the first drawing result and the fourth drawing result into the twelfth storage space;
    and generating the sixth drawing result according to the first drawing result and the fourth drawing result in the twelfth storage space.
  94. The method of any of claims 88-93, wherein the third drawing cycle is adjacent to the fourth drawing cycle; the fifth drawing cycle and the third drawing cycle are the same drawing cycle, and the fifth drawing result and the seventh drawing result are the same drawing result.
  95. The method of claim 87, wherein the generating a seventh image frame according to the first drawing result and the second drawing result comprises:
    storing the second drawing result into the seventh storage space;
    generating the seventh image frame according to the first drawing result and the second drawing result in the seventh storage space.
  96. The method of any one of claims 88-95, wherein the electronic device comprises a counting unit; the initialized value of the counting unit is a first value, and the value of the counting unit is switched between the first value and a second value once every time the value of the counting unit is updated; the counting unit updates at the moment when the drawing cycle starts;
    In a single rendering period of time,
    if the updated numerical value of the counting unit is a second numerical value, emptying the eighth storage space at the moment when the drawing cycle starts; when the drawing instruction meets the second condition, storing the drawing result into the eighth storage space;
    if the updated numerical value of the counting unit is the first numerical value, the eighth storage space is not emptied at the moment when the drawing cycle starts; and when the drawing instruction meets the second condition, not executing drawing.
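As a non-limiting illustration of claim 96, the counting unit can be sketched as a two-state toggle advanced at the start of every drawing cycle; the class name and return convention are assumptions of this sketch.

```python
class CountingUnit:
    """Toggle between a first and a second value once per update; the
    eighth storage space is cleared and redrawn only when the updated
    value is the second value (claim 96)."""

    def __init__(self, first=0, second=1):
        self.first, self.second = first, second
        self.value = first                 # initialized to the first value

    def start_cycle(self):
        """Update at the moment a drawing cycle starts; return True
        when the second-condition pass should clear and redraw."""
        self.value = self.second if self.value == self.first else self.first
        return self.value == self.second
```

Driving a render loop with start_cycle() makes the second-condition pass run on every other cycle; the three-value counter of claim 98 extends the same idea so that the pass runs once every three cycles.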
  97. The method of claim 87, further comprising:
    in a third drawing cycle, generating a fourth drawing result according to the first drawing result and the third drawing result;
    and generating a ninth image frame according to the second drawing result and the fourth drawing result.
  98. The method of claim 97, wherein the electronic device comprises a counting unit, wherein the initialized value of the counting unit is a first value, wherein the counting unit repeatedly updates and switches among three values according to the sequence of the first value, the second value and the third value, wherein the counting unit updates at the beginning of the drawing cycle,
    In a single rendering period of time,
    if the updated numerical value of the counting unit is a second numerical value, emptying the eighth storage space at the moment when the drawing cycle starts; when the drawing instruction meets the second condition, storing the drawing result into the eighth storage space;
    if the updated numerical value of the counting unit is the first numerical value or the third numerical value, the eighth storage space is not emptied at the moment when the drawing cycle starts; and when the drawing instruction meets the second condition, not executing drawing.
  99. The method of any of claims 87 to 98, wherein the ninth storage space is comprised of at least two storage spaces that rotate between a first state and a second state when the target application is running according to the first instructions of the target application; at the same time point, only one of the at least two storage spaces is in the first state, and the rest of the storage spaces are in the second state; when a storage space of the at least two storage spaces is in the first state, the image frame in that storage space can be transmitted to a display device for displaying; when a storage space is in the second state, the electronic device can draw in that storage space; two image frames generated by two adjacent drawing cycles are respectively stored in two different storage spaces in the ninth storage space.
  100. The method of any one of claims 87-99, wherein
    the first condition is: the drawing instruction comprises an instruction for starting the depth test; and
    the second condition is: the drawing instruction comprises an instruction for closing the depth test.
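As a non-limiting illustration of claim 100, the two conditions can be checked by watching for depth-test enable and disable instructions in the intercepted command stream; the GL-style string commands below are assumptions of this sketch, not the instruction encoding of the claims.

```python
def classify_draw_calls(commands):
    """Split draw calls into a first-condition pass (depth test
    enabled) and a second-condition pass (depth test disabled),
    following the state set by the most recent enable/disable
    instruction (claim 100)."""
    depth_test_on = False
    first_pass, second_pass = [], []
    for cmd in commands:
        if cmd == "glEnable(GL_DEPTH_TEST)":
            depth_test_on = True
        elif cmd == "glDisable(GL_DEPTH_TEST)":
            depth_test_on = False
        elif cmd.startswith("glDraw"):
            (first_pass if depth_test_on else second_pass).append(cmd)
    return first_pass, second_pass
```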
  101. The method of any one of claims 87-100, wherein the time nodes of any two adjacent drawing cycles are points in time at which a second instruction is invoked for the target application.
  102. An electronic device, characterized in that the electronic device comprises: one or more processors and memory;
    the memory coupled with the one or more processors, the memory to store computer program code, the computer program code including computer instructions, the one or more processors to invoke the computer instructions to cause the electronic device to perform:
    in a first drawing period, according to a drawing instruction of a target application program, when the drawing instruction meets a first condition, storing a drawing result into a seventh storage space to serve as a first drawing result, and when the drawing instruction meets a second condition, storing the drawing result into an eighth storage space to serve as a second drawing result;
    generating a seventh image frame according to the first drawing result and the second drawing result;
    in a second drawing cycle, according to a drawing instruction of the target application program, when the drawing instruction meets the first condition, storing a drawing result into the seventh storage space to serve as a third drawing result, and when the drawing instruction meets the second condition, not drawing;
    and generating an eighth image frame according to the third drawing result and the second drawing result.
  103. The electronic device of claim 102, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
    in a third drawing cycle, according to a drawing instruction of the target application program, when the drawing instruction meets the first condition, storing a drawing result into the seventh storage space as a fourth drawing result, and when the drawing instruction meets the second condition, storing the drawing result into the eighth storage space as a fifth drawing result, wherein the third drawing cycle is located after the first drawing cycle;
    generating a ninth image frame according to the fourth drawing result and the fifth drawing result;
    in a fourth drawing cycle, generating a sixth drawing result according to the first drawing result and the fourth drawing result;
    generating a tenth image frame according to the sixth drawing result and a seventh drawing result; the seventh drawing result is a drawing result obtained when the drawing instruction of the target application program satisfies the second condition in a fifth drawing cycle, and the fifth drawing cycle is before the fourth drawing cycle.
  104. The electronic device of claim 103, wherein, in the generating a seventh image frame according to the first drawing result and the second drawing result, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
    storing the first drawing result and the second drawing result into a ninth storage space;
    generating the seventh image frame according to the first drawing result and the second drawing result in the ninth storage space.
  105. The electronic device of claim 104, wherein, in the generating a ninth image frame according to the fourth drawing result and the fifth drawing result, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
    storing the fourth drawing result and the fifth drawing result into the ninth storage space;
    generating the ninth image frame according to the fourth drawing result and the fifth drawing result in the ninth storage space.
  106. The electronic device of claim 104 or 105, wherein, in said generating a tenth image frame according to the sixth drawing result and the seventh drawing result, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform:
    storing the sixth drawing result and the seventh drawing result in the ninth storage space;
    generating the tenth image frame according to the sixth drawing result and the seventh drawing result in the ninth storage space.
  107. The electronic device according to any one of claims 103-106, wherein the seventh storage space consists of a tenth storage space and an eleventh storage space, the first drawing result is stored in the tenth storage space, and the fourth drawing result is stored in the eleventh storage space; in said generating a sixth drawing result according to the first drawing result and the fourth drawing result, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
    storing the first drawing result into the eleventh storage space;
    and generating the sixth drawing result according to the first drawing result and the fourth drawing result in the eleventh storage space.
  108. The electronic device of any one of claims 103-106, wherein the seventh storage space consists of at least three storage spaces, the at least three storage spaces comprising a tenth storage space, an eleventh storage space, and a twelfth storage space; the first drawing result is stored in the tenth storage space, and the fourth drawing result is stored in the eleventh storage space; in said generating a sixth drawing result according to the first drawing result and the fourth drawing result, the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform:
    storing the first drawing result and the fourth drawing result into a twelfth storage space;
    and generating the sixth drawing result according to the first drawing result and the fourth drawing result in the twelfth storage space.
  109. The electronic device of any one of claims 103-108, wherein the third drawing cycle is adjacent to the fourth drawing cycle; the fifth drawing cycle and the third drawing cycle are the same drawing cycle, and the fifth drawing result and the seventh drawing result are the same drawing result.
  110. The electronic device of claim 102, wherein, in said generating a seventh image frame according to the first drawing result and the second drawing result, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
    storing the second drawing result into the seventh storage space;
    generating the seventh image frame according to the first drawing result and the second drawing result in the seventh storage space.
  111. The electronic device of any one of claims 103-110, wherein the electronic device comprises a counting unit; the initialized value of the counting unit is a first value, each update toggles the value of the counting unit between the first value and a second value, and the counting unit is updated at the moment a drawing cycle starts;
    in a single drawing cycle,
    if the updated value of the counting unit is the second value, the eighth storage space is emptied at the moment the drawing cycle starts, and when the drawing instruction meets the second condition, the drawing result is stored into the eighth storage space;
    if the updated value of the counting unit is the first value, the eighth storage space is not emptied at the moment the drawing cycle starts, and when the drawing instruction meets the second condition, drawing is not executed.
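A small sketch of the counting unit in claim 111, reusing buf8 from the sketch under claim 102; the values 0 and 1 standing in for the first and second values are assumptions:

```python
FIRST, SECOND = 0, 1

class CountingUnit:
    """Toggles between the first and second values once per update."""
    def __init__(self):
        self.value = FIRST                # initialized to the first value

    def update(self):
        self.value = SECOND if self.value == FIRST else FIRST
        return self.value

counter = CountingUnit()

def on_cycle_start():
    """Called at the moment a drawing cycle starts."""
    if counter.update() == SECOND:
        buf8.fill(0)                      # empty the eighth storage space
        return True                       # this cycle, second-condition draws run
    return False                          # keep buf8 as-is; skip those draw calls
```

The net effect is that second-condition content is drawn on every other cycle and reused in between.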
  112. The electronic device of claim 102, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
    in a third drawing cycle, generating a fourth drawing result according to the first drawing result and the third drawing result;
    and generating a ninth image frame according to the second drawing result and the fourth drawing result.
  113. The electronic device of any one of claims 102-112,
    the first condition is: the drawing instruction comprises an instruction for enabling the depth test;
    the second condition is: the drawing instruction comprises an instruction for disabling the depth test.
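One way to detect the two conditions is to track depth-test state in an intercepted GL command stream, as sketched below; on_gl_command() and the readback helper current_draw_result() are hypothetical names assumed for this sketch, and route_draw() is from the sketch under claim 102:

```python
depth_test_enabled = False

def on_gl_command(name, *args):
    """Classify intercepted GL commands: a draw issued while the depth test is
    enabled meets the first condition, otherwise the second condition."""
    global depth_test_enabled
    if name == "glEnable" and "GL_DEPTH_TEST" in args:
        depth_test_enabled = True
    elif name == "glDisable" and "GL_DEPTH_TEST" in args:
        depth_test_enabled = False
    elif name.startswith("glDraw"):       # e.g. glDrawElements, glDrawArrays
        route_draw(current_draw_result(), depth_test_enabled)
```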
  114. An image frame prediction method, applied to an electronic device, characterized by comprising:
    when drawing a twenty-first drawing frame of a first application, the electronic device draws the drawing instruction of the twenty-first drawing frame according to a first drawing range to obtain a twenty-first drawing result, wherein the size of the first drawing range is larger than the size of the twenty-first drawing frame of the first application;
    when drawing a twenty-second drawing frame of the first application, the electronic device draws the drawing instruction of the twenty-second drawing frame according to a second drawing range to obtain a twenty-second drawing result, wherein the size of the second drawing range is larger than the size of the twenty-second drawing frame, and the size of the twenty-first drawing frame is the same as the size of the twenty-second drawing frame;
    and the electronic device predicts and generates a twenty-third predicted frame of the first application according to the twenty-first drawing result and the twenty-second drawing result, wherein the size of the twenty-third predicted frame is the same as the size of the twenty-first drawing frame.
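Read together with claims 123-126, the method amounts to the pipeline sketched below; draw_frame_enlarged() is hypothetical, estimate_motion() and warp() are sketched under claims 124-126 further down, and the K3/K4 values and centred crop are assumptions:

```python
def predict_frame_23(frame_w, frame_h, k3=1.2, k4=1.2):
    """Draw two frames into enlarged drawing ranges, estimate per-block motion,
    extrapolate a third result, then clip back to the application's frame size."""
    rw, rh = int(frame_w * k3), int(frame_h * k4)
    r21 = draw_frame_enlarged(21, rw, rh)     # twenty-first drawing result
    r22 = draw_frame_enlarged(22, rw, rh)     # twenty-second drawing result
    mv = estimate_motion(r21, r22)            # first motion vector field
    r23 = warp(r22, mv)                       # twenty-third drawing result
    y0, x0 = (rh - frame_h) // 2, (rw - frame_w) // 2
    return r23[y0:y0 + frame_h, x0:x0 + frame_w]   # clip to the frame size
```

Drawing into a range larger than the visible frame gives the motion search valid pixels beyond the frame edge, which is what lets the predicted frame avoid blank borders.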
  115. The method of claim 114, wherein the electronic device draws the drawing instruction of the twenty-first drawing frame according to a first drawing range to obtain a twenty-first drawing result, and specifically comprises:
    the electronic device modifies a first parameter in a first drawing instruction of the twenty-first drawing frame issued by the first application into the first drawing range, wherein the first parameter is used for setting the drawing range size of the twenty-first drawing frame;
    and the electronic device draws the modified drawing instruction of the twenty-first drawing frame according to the first drawing range to obtain the twenty-first drawing result.
  116. The method of claim 115, wherein the size of the first drawing range being larger than the size of the twenty-first drawing frame of the first application specifically comprises:
    the width of the first drawing range is K3 times the width of the twenty-first drawing frame, the height of the first drawing range is K4 times the height of the twenty-first drawing frame, and both K3 and K4 are greater than 1.
  117. The method of claim 116, wherein K3 and K4 are determined by a fixed value configured by the system of the electronic device, or are determined by the electronic device according to a drawing parameter contained in the drawing instruction of the twenty-first drawing frame.
  118. The method of claim 117, wherein the electronic device draws the modified drawing instruction of the twenty-first drawing frame according to the first drawing range to obtain the twenty-first drawing result, specifically comprising:
    the electronic device generates a first conversion matrix according to K3 and K4, adjusts the size of the drawing content in the modified drawing instruction of the twenty-first drawing frame according to the first conversion matrix, and draws the drawing content within the first drawing range to obtain the twenty-first drawing result.
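One plausible form of the first conversion matrix is a homogeneous scale by 1/K3 x 1/K4, so that the original drawing content lands centred in the enlarged K3 x K4 drawing range; the claims do not fix the exact matrix:

```python
import numpy as np

def first_conversion_matrix(k3, k4):
    """Scale clip-space geometry down so it occupies 1/K3 x 1/K4 of the
    enlarged viewport, i.e. the original frame's share of the drawing range."""
    return np.array([
        [1.0 / k3, 0.0,      0.0, 0.0],
        [0.0,      1.0 / k4, 0.0, 0.0],
        [0.0,      0.0,      1.0, 0.0],
        [0.0,      0.0,      0.0, 1.0],
    ], dtype=np.float32)

# Applied as an extra left multiplication on each clip-space vertex position,
# pos_scaled = M @ pos, before rasterizing into the enlarged drawing range.
```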
  119. The method of claim 114, wherein the electronic device draws the drawing instruction of the twenty-second drawing frame according to a second drawing range to obtain a twenty-second drawing result, specifically comprising:
    the electronic device modifies a second parameter in a second drawing instruction of the twenty-second drawing frame issued by the first application into the second drawing range, wherein the second parameter is used for setting the drawing range size of the twenty-second drawing frame;
    and the electronic device draws the modified drawing instruction of the twenty-second drawing frame according to the second drawing range to obtain the twenty-second drawing result.
  120. The method according to claim 119, wherein the size of the second drawing range being larger than the size of the twenty-second drawing frame of the first application specifically comprises:
    the width of the second drawing range is K5 times the width of the twenty-second drawing frame, the height of the second drawing range is K6 times the height of the twenty-second drawing frame, and both K5 and K6 are greater than 1.
  121. The method of claim 120, wherein K5 and K6 are determined by a fixed value configured by the system of the electronic device, or are determined by the electronic device according to a drawing parameter contained in the drawing instruction of the twenty-second drawing frame.
  122. The method according to claim 121, wherein the electronic device draws the modified drawing instruction of the twenty-second drawing frame according to the second drawing range to obtain the twenty-second drawing result, specifically comprising:
    the electronic device generates a second conversion matrix according to K5 and K6, adjusts the size of the drawing content in the modified drawing instruction of the twenty-second drawing frame according to the second conversion matrix, and draws the drawing content within the second drawing range to obtain the twenty-second drawing result.
  123. The method according to claim 114, wherein the electronic device predicts, according to the twenty-first drawing result and the twenty-second drawing result, a twenty-third predicted frame of the first application, and specifically comprises:
    the electronic device predicts a twenty-third drawing result of the twenty-third predicted frame according to the twenty-first drawing result and the twenty-second drawing result;
    and the electronic device clips the twenty-third drawing result to obtain the twenty-third predicted frame.
  124. The method of claim 123, wherein the electronic device predicts the twenty-third drawing result of the twenty-third predicted frame according to the twenty-first drawing result and the twenty-second drawing result, specifically comprising:
    the electronic device determines a first motion vector of the twenty-second drawing result according to the twenty-first drawing result and the twenty-second drawing result;
    and the electronic device predicts and generates the twenty-third drawing result of the twenty-third predicted frame according to the twenty-second drawing result and the first motion vector.
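A sketch of the prediction step in claim 124, assuming per-block translation only; the 16-pixel block size and the (rows, cols, 2) motion-vector layout are assumptions:

```python
import numpy as np

def warp(result, mv, bs=16):
    """Translate each bs x bs pixel block of the twenty-second drawing result
    by its motion vector to predict the twenty-third drawing result."""
    out = np.zeros_like(result)
    h, w = result.shape[:2]
    for by in range(0, h - bs + 1, bs):
        for bx in range(0, w - bs + 1, bs):
            dy, dx = mv[by // bs, bx // bs]   # this block's (dy, dx) displacement
            ty, tx = by + dy, bx + dx
            if 0 <= ty <= h - bs and 0 <= tx <= w - bs:
                out[ty:ty + bs, tx:tx + bs] = result[by:by + bs, bx:bx + bs]
    return out
```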
  125. The method of claim 124, wherein the electronic device determines the first motion vector of the twenty-second drawing result according to the twenty-first drawing result and the twenty-second drawing result, specifically comprising:
    the electronic device divides the twenty-second drawing result into Q pixel blocks and selects a first pixel block from the Q pixel blocks of the twenty-second drawing result;
    the electronic device determines a second pixel block matching the first pixel block in the twenty-first drawing result;
    the electronic device obtains a motion vector of the first pixel block according to the displacement from the second pixel block to the first pixel block;
    and the electronic device determines the first motion vector of the twenty-second drawing result from the motion vector of the first pixel block.
  126. The method of claim 125, wherein the electronic device determines a second pixel block matching the first pixel block in the twenty-first drawing result, specifically comprising:
    the electronic device determines a plurality of candidate pixel blocks in the twenty-first drawing result by using a first pixel point in the first pixel block;
    the electronic device calculates the difference between the color values of each candidate pixel block and the first pixel block;
    and the electronic device determines the second pixel block matching the first pixel block according to the differences between the color values of the candidate pixel blocks and the first pixel block, wherein the second pixel block is the candidate pixel block whose color-value difference from the first pixel block is the smallest among the candidate pixel blocks.
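A sketch of the block matching in claims 125-126 using a minimum sum of absolute colour differences; note that it does an exhaustive window search, whereas claim 126 derives the candidate blocks from a first pixel point in the block, and the block size and search radius are assumptions:

```python
import numpy as np

def match_block(r21, r22, by, bx, bs=16, search=8):
    """Return the motion vector (dy, dx) of the first pixel block at (by, bx)
    in r22: the displacement from its best match in r21 to the block itself."""
    block = r22[by:by + bs, bx:bx + bs].astype(np.int32)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if 0 <= y <= r21.shape[0] - bs and 0 <= x <= r21.shape[1] - bs:
                cand = r21[y:y + bs, x:x + bs].astype(np.int32)
                sad = int(np.abs(cand - block).sum())
                if best_sad is None or sad < best_sad:
                    best_sad, best_mv = sad, (-dy, -dx)
    return best_mv

def estimate_motion(r21, r22, bs=16):
    """Per-block motion field of the twenty-second drawing result (claim 125)."""
    h, w = r22.shape[:2]
    mv = np.zeros((h // bs, w // bs, 2), dtype=np.int32)
    for by in range(0, h - bs + 1, bs):
        for bx in range(0, w - bs + 1, bs):
            mv[by // bs, bx // bs] = match_block(r21, r22, by, bx, bs)
    return mv
```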
  127. The method according to any one of claims 114 to 126, wherein when drawing the twenty-first drawing frame of the first application, the electronic device draws the drawing instruction of the twenty-first drawing frame according to the first drawing range to obtain the twenty-first drawing result, specifically comprising:
    when drawing the twenty-first drawing frame of the first application, the electronic device draws the drawing instruction of the twenty-first drawing frame in a twenty-first memory space according to the first drawing range to obtain the twenty-first drawing result, wherein the size of the twenty-first memory space is greater than or equal to the size of the first drawing range.
  128. The method according to any one of claims 114 to 127, wherein, when drawing a twenty-second drawing frame of the first application, the electronic device draws the drawing instruction of the twenty-second drawing frame according to a second drawing range, and obtains a twenty-second drawing result, specifically comprising:
    when drawing a twenty-second drawing frame of the first application, the electronic device draws the drawing instruction of the twenty-second drawing frame in a twenty-second memory space according to a second drawing range to obtain a twenty-second drawing result, wherein the size of the twenty-second memory space is greater than or equal to the size of the second drawing range.
  129. The method of claim 125, wherein the electronic device predicts and generates the twenty-third drawing result of the twenty-third predicted frame according to the twenty-second drawing result and the first motion vector, specifically comprising:
    the electronic device predicts and generates the twenty-third drawing result within a third drawing range according to the twenty-second drawing result and the first motion vector, wherein the size of the third drawing range is larger than the size of the twenty-third predicted frame.
  130. The method of any one of claims 114-129, wherein after the electronic device draws the drawing instruction of the twenty-first drawing frame according to the first drawing range and obtains the twenty-first drawing result, the method further comprises:
    the electronic device clips the twenty-first drawing result to obtain the twenty-first drawing frame.
  131. The method of any one of claims 114-130, wherein when drawing a twenty-second drawing frame of the first application, the electronic device draws the drawing instruction of the twenty-second drawing frame according to a second drawing range, and after obtaining a twenty-second drawing result, the method further comprises:
    the electronic device clips the twenty-second drawing result to obtain the twenty-second drawing frame.
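The clipping in claims 123, 130, and 131 can be as simple as a crop from the enlarged result back to the frame size; the centred offset below is an assumption, since the claims do not fix it:

```python
def clip_to_frame(result, frame_w, frame_h):
    """Clip an enlarged drawing result back to the application's frame size."""
    h, w = result.shape[:2]
    y0, x0 = (h - frame_h) // 2, (w - frame_w) // 2
    return result[y0:y0 + frame_h, x0:x0 + frame_w]
```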
  132. An electronic device, characterized in that the electronic device comprises: one or more processors and memory; the memory is coupled with the one or more processors and is configured to store computer program code, the computer program code comprising computer instructions; the one or more processors invoke the computer instructions to cause the electronic device to perform the method of any one of claims 1-26, 39-49, 59-70, 87-101, 114-131.
  133. A computer program product comprising instructions which, when run on an electronic device, cause the electronic device to perform the method of any of claims 1 to 26, 39 to 49, 59 to 70, 87 to 101, 114 to 131.
  134. A computer-readable storage medium comprising instructions that, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-26, 39-49, 59-70, 87-101, 114-131.
CN202180026284.XA 2020-09-30 2021-07-16 Image frame prediction method and electronic equipment Pending CN115398907A (en)

Applications Claiming Priority (15)

Application Number Priority Date Filing Date Title
CN202011063375 2020-09-30
CN2020110633754 2020-09-30
CN202011069443 2020-09-30
CN2020110694438 2020-09-30
CN202011197968X 2020-10-31
CN202011197968 2020-10-31
CN202011377449 2020-11-30
CN2020113774491 2020-11-30
CN202011377306 2020-11-30
CN2020113773060 2020-11-30
CN202011493948.7A CN114708289A (en) 2020-12-16 2020-12-16 Image frame prediction method and electronic equipment
CN2020114939487 2020-12-16
CN2020116291712 2020-12-30
CN202011629171 2020-12-30
PCT/CN2021/106928 WO2022068326A1 (en) 2020-09-30 2021-07-16 Image frame prediction method and electronic device

Publications (1)

Publication Number Publication Date
CN115398907A true CN115398907A (en) 2022-11-25

Family

ID=80949634

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180026284.XA Pending CN115398907A (en) 2020-09-30 2021-07-16 Image frame prediction method and electronic equipment

Country Status (2)

Country Link
CN (1) CN115398907A (en)
WO (1) WO2022068326A1 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116095221B (en) * 2022-08-10 2023-11-21 荣耀终端有限公司 Frame rate adjusting method in game and related device
CN117292039B (en) * 2023-11-27 2024-02-13 芯瞳半导体技术(山东)有限公司 Vertex coordinate generation method, vertex coordinate generation device, electronic equipment and computer storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101116112B (en) * 2005-01-04 2010-12-29 新世代株式会社 Plotting device and plotting method
CN102111613B (en) * 2009-12-28 2012-11-28 中国移动通信集团公司 Image processing method and device
CN102970527B (en) * 2012-10-18 2015-04-08 北京航空航天大学 Video object extraction method based on hexagon search under five-frame-background aligned dynamic background
WO2020007093A1 (en) * 2018-07-02 2020-01-09 华为技术有限公司 Image prediction method and apparatus
CN110378259A (en) * 2019-07-05 2019-10-25 桂林电子科技大学 A kind of multiple target Activity recognition method and system towards monitor video

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116433464A (en) * 2023-06-14 2023-07-14 北京象帝先计算技术有限公司 Storage address offset calculating device and method, electronic component and electronic equipment
CN116433464B (en) * 2023-06-14 2023-11-17 北京象帝先计算技术有限公司 Storage address offset calculating device and method, electronic component and electronic equipment

Also Published As

Publication number Publication date
WO2022068326A1 (en) 2022-04-07

Similar Documents

Publication Publication Date Title
WO2020119444A1 (en) Game image rendering method and device, terminal, and storage medium
WO2022068326A1 (en) Image frame prediction method and electronic device
US11044398B2 (en) Panoramic light field capture, processing, and display
WO2022052620A1 (en) Image generation method and electronic device
CN112287852B (en) Face image processing method, face image display method, face image processing device and face image display equipment
CN109413399B (en) Apparatus for synthesizing object using depth map and method thereof
JP7205386B2 (en) IMAGING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM
CN112004041B (en) Video recording method, device, terminal and storage medium
CN110673944B (en) Method and device for executing task
CN116055857B (en) Photographing method and electronic equipment
CN115689963B (en) Image processing method and electronic equipment
CN116166259A (en) Interface generation method and electronic equipment
CN114708289A (en) Image frame prediction method and electronic equipment
CN116166256A (en) Interface generation method and electronic equipment
CN114979457B (en) Image processing method and related device
CN115908120B (en) Image processing method and electronic device
CN113160270A (en) Visual map generation method, device, terminal and storage medium
US10701286B2 (en) Image processing device, image processing system, and non-transitory storage medium
CN116263971A (en) Image frame prediction method, electronic device, and computer-readable storage medium
CN116166255A (en) Interface generation method and electronic equipment
CN115150542A (en) Video anti-shake method and related equipment
CN113767649A (en) Generating an audio output signal
CN114693538A (en) Image processing method and device
CN116708931B (en) Image processing method and electronic equipment
WO2024002164A1 (en) Display method and related apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination