CN117710548A - Image rendering method and related equipment thereof - Google Patents

Image rendering method and related equipment thereof

Info

Publication number
CN117710548A
Authority
CN
China
Prior art keywords
rendering
rendering result
frame
electronic device
dynamic
Prior art date
Legal status
Pending
Application number
CN202310947840.8A
Other languages
Chinese (zh)
Inventor
孙慧嵩
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202310947840.8A priority Critical patent/CN117710548A/en
Publication of CN117710548A publication Critical patent/CN117710548A/en
Pending legal-status Critical Current

Landscapes

  • Image Generation (AREA)

Abstract

The application provides an image rendering method and related equipment, relating to the field of image processing. In the method, an application program issues a first rendering instruction stream instructing an electronic device to perform a rendering operation for a first frame image; the application program issues a second rendering instruction stream instructing the electronic device to perform a rendering operation for a second frame image; the electronic device determines a third rendering result according to a first rendering result and a second rendering result, where the first rendering result corresponds to a first main scene and the second rendering result corresponds to a second main scene; and the electronic device blends the third rendering result with a fourth rendering result to determine a predicted frame image, where the fourth rendering result corresponds to a first dynamic semitransparent object. Because the dynamic semitransparent object in each frame is first rendered separately from the main scene and then blended as needed, the jitter problem can be avoided.

Description

Image rendering method and related equipment thereof
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image rendering method and related devices.
Background
With the development of electronic devices, the content of displayed images is becoming richer and richer. Some images may include translucent objects such as water columns, bubbles, glass, and transparent shells.
However, when a translucent object is in motion, the rendering effect of the corresponding image frames is often unsatisfactory. For example, when the semitransparent object is in a motion state and its motion is inconsistent with the motion trend (or motion direction) of the object it overlaps, processing according to the related art causes the picture corresponding to the semitransparent object to jitter.
Thus, there is a need for an image rendering method capable of improving the rendering effect of a semitransparent dynamic object.
Disclosure of Invention
The application provides an image rendering method and related equipment. In the method, a semitransparent object in a motion state (or dynamic state) is rendered separately, so that the corresponding picture quality can be effectively improved without jitter.
In a first aspect, the present application provides an image rendering method applied to an electronic device having an application installed thereon, the method may include:
the application program issues a first rendering instruction stream, where the first rendering instruction stream is used for instructing the electronic device to perform a rendering operation of a first frame image, and the first frame image comprises a first main scene and a first dynamic semitransparent object;
the application program issues a second rendering instruction stream, where the second rendering instruction stream is used for instructing the electronic device to perform a rendering operation of a second frame image, and the second frame image comprises a second main scene and a second dynamic semitransparent object;
the electronic device determines a third rendering result according to a first rendering result and a second rendering result, where the first rendering result is a rendering result corresponding to the first main scene, and the second rendering result is a rendering result corresponding to the second main scene;
the electronic device mixes the third rendering result and a fourth rendering result and determines a predicted frame image, where the fourth rendering result is a rendering result corresponding to the first dynamic semitransparent object.
According to the image rendering method provided by the embodiments of the application, for the rendering instruction stream corresponding to each frame image, an independently drawn dynamic semitransparent object texture and a main scene pipeline texture can be obtained; a main scene rendering result of the predicted frame is then determined based on the rendering results of the two frames' main scene pipelines, a dynamic semitransparent object rendering result of the predicted frame is determined based on the rendering result of one frame's dynamic semitransparent object, and the two are combined to obtain the predicted frame. Since the dynamic semitransparent object is drawn independently and is not affected by the objects it overlaps, the drawing quality of the dynamic semitransparent object and its display effect in the predicted frame can be improved.
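By way of illustration only, the following C++ sketch outlines this data flow. The helper names interpolateMainScene and blendTranslucent are assumptions introduced purely to show how the separately obtained textures could be combined; the concrete interpolation and blending algorithms are not limited here, and the sketch is not the disclosure's reference implementation.
#include <GLES3/gl3.h>
struct FrameTextures {
    GLuint mainScene;          // rendering result of the main scene pipeline
    GLuint dynamicTranslucent; // separately drawn dynamic translucent objects
};
// Assumed helpers: any interpolation scheme (for example, motion-vector based)
// and any alpha compositing pass could be used here.
GLuint interpolateMainScene(GLuint mainSceneN, GLuint mainSceneN2);
GLuint blendTranslucent(GLuint predictedMainScene, GLuint translucentTex);
// Build predicted frame N+1 from the textures of real frames N and N+2.
GLuint buildPredictedFrame(const FrameTextures& frameN, const FrameTextures& frameN2) {
    GLuint predictedMain = interpolateMainScene(frameN.mainScene, frameN2.mainScene);
    return blendTranslucent(predictedMain, frameN.dynamicTranslucent);
}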
In some possible implementations, the first rendering instruction stream includes a first instruction stream for instructing the electronic device to render the first main scene to obtain the first rendering result and a second instruction stream for instructing the electronic device to render the first dynamic semitransparent object to obtain the fourth rendering result;
before the electronic device mixes the third rendering result and the fourth rendering result, the method further includes:
the electronic equipment performs rendering according to the first instruction stream to obtain the first rendering result, and stores the first rendering result in a first frame buffer, wherein the first instruction stream comprises an instruction pointing to the first frame buffer;
the electronic device performs rendering according to the second instruction stream to obtain the fourth rendering result, and stores the fourth rendering result in a second frame buffer, wherein the second instruction stream comprises an instruction pointing to the second frame buffer, and the second frame buffer is different from the first frame buffer.
In this implementation, a moving semitransparent object is first identified; after it is identified, the frame buffer is switched, and the semitransparent object is rendered independently using the switched frame buffer to obtain a separately drawn semitransparent object texture. The separately rendered semitransparent object texture is then blended with the main scene texture to obtain the final image frame. Because the moving semitransparent object is drawn on its own in the newly created frame buffer, the overlapped objects cannot influence its motion trend during drawing, which avoids picture jitter while the semitransparent object moves and thus improves the picture quality corresponding to the semitransparent object.
In some possible implementations, before the electronic device performs rendering according to the second instruction stream to obtain the fourth rendering result, the method further includes: the electronic device creates the second frame buffer.
In an implementation, the render target is changed between a dynamic translucent object and a non-dynamic-translucent object, so that each is rendered using a different frame buffer; for example, the dynamic translucent object is rendered using the second frame buffer.
In some possible implementations, when the electronic device performs rendering, it determines that a semitransparent object is being drawn according to a preset first instruction and a preset second instruction in the second instruction stream.
Illustratively, the preset first instruction includes at least one of: a glEnable instruction. The preset second instruction includes at least one of: a glDisable instruction, a glDiscardFramebufferEXT() instruction. Based on this scheme, examples of specific first and second instructions are provided. In this implementation, the determination of the semitransparent object can be implemented.
In some possible implementations, when the electronic device performs rendering, it determines that the dynamic object is drawn according to the third instruction, the fourth instruction, and the fifth instruction included in the second instruction stream.
Illustratively, the third instruction may be a glBindBuffer instruction; the fourth instruction may be a glBufferSubData instruction; and the fifth instruction may be a glBindBufferRange instruction.
In this implementation, the determination of the dynamic object can be implemented; combined with the determination of the semitransparent object, a dynamic semitransparent object can be identified.
In some possible implementations, the method further includes:
the electronic equipment mixes the third rendering result and the fifth rendering result and determines the predicted frame image; and the fifth rendering result is a rendering result corresponding to the second dynamic semitransparent object.
In this implementation, the predicted frame images may be mixed in another way.
In some possible implementations, after determining the third rendering result, the method further includes:
the electronic equipment determines a seventh rendering result according to the fifth rendering result and the sixth rendering result; the fifth rendering result is a rendering result corresponding to the first dynamic semitransparent object, and the sixth rendering result is a rendering result corresponding to the second dynamic semitransparent object;
the electronic device mixes the third rendering result and the seventh rendering result and determines the predicted frame image.
In this implementation, the predicted frame images may be mixed in another way.
In some possible implementations, the first frame image is an Nth frame image, the second frame image is an (N+2)th frame image, and the predicted frame image is an (N+1)th frame image; or,
the first frame image is an Nth frame image, the second frame image is an (N+1)th frame image, and the predicted frame image is an (N+2)th frame image.
In this implementation, an intermediate frame can be obtained as the predicted frame image from a preceding frame and a following frame; alternatively, a third frame can be obtained as the predicted frame image from two consecutive frames.
In a second aspect, the present application provides an image rendering apparatus comprising means for performing the method of the first aspect described above. The apparatus may correspond to performing the method described in the first aspect, and the relevant descriptions of the units in the apparatus are referred to the description of the first aspect, which is omitted herein for brevity.
The method described in the first aspect may be implemented by hardware, or may be implemented by executing corresponding software by hardware. The hardware or software includes one or more modules or units corresponding to the functions described above.
In a third aspect, the present application provides an electronic device, comprising: one or more processors and memory; the memory is coupled to the one or more processors and is configured to store computer program code comprising computer instructions that are invoked by the one or more processors to cause the electronic device to perform the method described in the first aspect or any implementation of the first aspect.
In a fourth aspect, the present application provides a computer storage medium storing a computer program which, when executed by an electronic device, causes the electronic device to perform a method as referred to in the first aspect or any implementation of the first aspect.
In a fifth aspect, embodiments of the present application provide a chip system, which may be applied to an electronic device, the chip system comprising one or more processors for invoking computer instructions to cause the electronic device to perform a method as described in the first aspect or any implementation of the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product comprising: computer program code which, when run by an electronic device, causes the electronic device to perform any of the methods of the first aspect.
It will be appreciated that the advantages of the second to sixth aspects may be found in the relevant description of the first aspect, and are not described here again.
Drawings
FIG. 1 is a schematic diagram of an image frame including a semitransparent object according to an embodiment of the present application;
FIG. 2 is a schematic diagram of another image frame including a semitransparent object according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a software structure of an electronic device according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a rendering process according to an embodiment of the present application;
FIG. 6 is a rendering instruction stream provided in an embodiment of the present application;
FIG. 7 is a diagram of data relating to world coordinates in RenderDoc provided in an embodiment of the present application;
FIG. 8 is a schematic flowchart of an image rendering method according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a rendering pipeline sequence provided by an embodiment of the present application;
FIG. 10 is another schematic flowchart of an image rendering method according to an embodiment of the present application;
FIG. 11 is a schematic flowchart of another image rendering method according to an embodiment of the present application;
FIG. 12 is a schematic flowchart of another image rendering method according to an embodiment of the present application;
FIG. 13 is a schematic flowchart of another image rendering method according to an embodiment of the present application;
FIG. 14 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. The text "and/or" merely describes an association relation of the associated objects and indicates that three relations may exist; for example, A and/or B may indicate the three cases where A exists alone, A and B exist together, and B exists alone. In addition, in the description of the embodiments of the present application, "plural" means two or more.
It should be understood that the terms first, second, and the like in the description and in the claims and drawings of the present application are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly understand that the embodiments described herein may be combined with other embodiments.
For ease of understanding, the terms referred to in the embodiments of the present application will be first described.
1. Rendering (render): a process of generating an image from a model in software. A model is a description of a three-dimensional object in a well-defined language or data structure, which includes geometric, viewpoint, texture, and illumination information. The model in the three-dimensional scene is projected into a two-dimensional digital image according to the set environment, lighting, materials, and rendering parameters.
2. Game scene: represents a running stage in a gaming application, such as a lobby scene, a sports scene, a small-change scene, a perspective scene, etc.
3. Frame prediction technique (also known as a frame interpolation algorithm): the image information of another frame may be predicted using the image information of two frames; for example, the image information of the next frame may be predicted using the image information of two adjacent frames, or the image information of an intermediate frame may be predicted using the image information of two frames at an interval. For example, in the embodiments of the present application, the image information of the (N+1)th frame may be predicted using the image information of the Nth frame and the (N+2)th frame.
4. Semitransparent object: may appear as a semi-transparent state in the image. For example, taking an image displayed by an electronic device as an image in a game scene, in a shooting game, the image may include a translucent object such as smoke, gun fire, or spray. The electronic equipment can increase the reality of scene display and promote user experience by increasing the rendering special effect of the semitransparent object in the image. In embodiments of the present application, translucency may include all transparency levels except opacity, including being completely transparent.
5. Engine (Engine): the core component of a program or system developed on an electronic platform. With an engine, a developer can quickly build and lay out the functions required by a program, or assist the operation of the program. Generally, an engine is the supporting part of a program or a set of systems. Common program engines include game engines, search engines, and the like.
6. World space (world space): absolute coordinate space of the world (scene) simulated by the electronic device. World coordinates (world position) are used to indicate the location in the simulated world.
With the improvement of the performance of the hardware platform of the electronic device, the display contents such as games and videos gradually develop towards the directions of high frame rate and high image quality. High frame rates can bring a smoother experience to the user, but high frame rates also bring high power consumption problems. To reduce rendering overhead and power consumption, frame prediction techniques have evolved.
Taking a game scene as an example, in the frame prediction technology, a history frame can be processed by using a frame interpolation algorithm to determine a predicted frame; the determined predicted frame is then used in place of the original real frame for rendering, so that the purpose of reducing power consumption can be achieved. It will be appreciated that much of the image information between adjacent frames can be reused, and the overhead of rendering such reusable image information can be reduced through frame prediction.
However, when the display content includes a translucent object, the predicted frame generated using the interpolation algorithm does not necessarily achieve the same image quality as the real frame; the rendering effect of the predicted frame is less satisfactory, especially when the translucent object is in motion.
Analysis shows that the semitransparent object does not have depth, and the color and depth finally rendered are related to the overlapped object, so that the influence of the overlapped object is large, and therefore, the frame prediction technology provided by the related art usually processes the semitransparent object together with the overlapped object (or called a background object) when processing. However, when the semitransparent object is in a motion state (or called dynamic state) and is inconsistent with the motion trend of the superimposed object, if the processing is continued by using the frame prediction technology provided by the related technology, the frame portion corresponding to the semitransparent object will generate a shaking phenomenon, so that the visual experience of the user is affected. It should be noted that, the dynamic semitransparent object has a shaking phenomenon, and the static semitransparent object does not have a shaking phenomenon.
Illustratively, fig. 1 shows a schematic image frame of a frame including a semitransparent object according to an embodiment of the present application. As shown in fig. 1, in this game scene, when a game character throws against a certain game object, a throwing line (having a certain width and height) of the game object is displayed on a screen, where the throwing line may be a semitransparent object, and the content covered by the throwing line is an overlapped object corresponding to the semitransparent object.
When the viewing angle corresponding to the game character moves leftwards, the throwing line moves with the change of the game character's viewing angle, for example leftwards, while the overlapped object moves rightwards. It can be seen that the translucent object has a motion trend different from that of the overlapped object, and the two can even be considered completely opposite. At this time, if the frame prediction technology provided by the related art is used for processing, the semitransparent objects in the predicted frames may be influenced by the overlapped objects and present a rightward motion trend, while the semitransparent objects in the non-predicted frames present a leftward motion trend; when the two kinds of frames are displayed in frame order, the motion trend of the semitransparent object alternates continuously between left and right, and for the user, the translucent object (the throwing line) that is seen will jitter during game play.
Illustratively, fig. 2 shows a schematic image frame diagram of another frame provided in an embodiment of the present application including a translucent object. As shown in fig. 2, in the game scene, a game character drives a "wave car" to move, and a shell of the "wave car" displayed in a picture is a semitransparent object, and the content covered by the shell is an overlapped object corresponding to the semitransparent object.
When the game character drives the "wave car" forward, the car shell moves along the driving direction of the game character, for example inward (perpendicular to the display screen), while the overlapped objects move outward. It can be seen that the translucent object has a motion trend different from that of the overlapped objects, and the two can even be considered completely opposite. At this time, if the frame prediction technology provided by the related art is used for processing, the semitransparent objects in the predicted frames may be influenced by the overlapped objects and present an outward motion trend, while the semitransparent objects in the non-predicted frames present an inward motion trend; when the two kinds of frames are displayed in frame order, the motion trend of the semitransparent object alternates continuously between inward and outward, and for the user, the translucent object (the car shell) that is seen will jitter during game play.
In view of this, the present application provides a new image rendering method: for a rendering instruction stream, a moving semitransparent object is first identified; after it is identified, the frame buffer is switched, and the semitransparent object is rendered independently using the switched frame buffer to obtain a separately drawn semitransparent object texture; the separately rendered semitransparent object texture is then blended with the main scene texture to obtain the final image frame. Because the moving semitransparent object is drawn on its own in the newly created frame buffer, the overlapped objects cannot influence its motion trend during drawing, which avoids picture jitter while the semitransparent object moves and thus improves the picture quality corresponding to the semitransparent object.
The following describes the schemes provided in the embodiments of the present application in detail with reference to the accompanying drawings.
It should be noted that, the image rendering method provided in the embodiment of the present application may be applied to an electronic device having a display function. The electronic device may be an electronic device such as a mobile phone, a tablet computer, or the like, and may also be a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook, a cellular phone, a personal digital assistant (personal digital assistant, PDA), an augmented reality (augmented reality, AR) device, a Virtual Reality (VR) device, an artificial intelligence (artificial intelligence, AI) device, a wearable device (such as a smart watch, a smart bracelet, or the like), a vehicle-mounted device, a smart home device (such as a smart television, a large screen, or the like), and/or a smart city device.
Illustratively, in some embodiments, from a hardware composition perspective, as shown in fig. 3, an electronic device according to an embodiment of the present application may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be noted that the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device. In other embodiments of the present application, the electronic device may include more or less components than illustrated, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware. The interface connection relation between the modules illustrated in the embodiment of the present invention is only illustrated schematically, and does not limit the structure of the electronic device.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. A memory may also be provided in the processor 110 for storing instructions and data.
The electronic device implements display functions through the GPU, the display screen 194, the application processor, and the like. The GPU is a microprocessor for image processing and is connected to the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information. The display screen 194 is used to display images, videos, and the like. In some embodiments, the electronic device may include 1 or N display screens 194, N being a positive integer greater than 1.
In the embodiment of the present application, the function of displaying the game interface by the electronic device may be implemented by the GPU, the display screen 194, the application processor, and the like.
The electronic device may implement a photographing function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like. The camera 193 is used to capture still images or video. The ISP is used to process the data fed back by the camera 193. The light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and conversion into an image visible to the naked eye. The electronic device may include 1 or N cameras 193, N being a positive integer greater than 1. Video codecs are used to compress or decompress digital video. The electronic device may support one or more video codecs. In this way, the electronic device may play or record video in a variety of encoding formats, such as: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The internal memory 121 may include one or more random access memories (random access memory, RAM) and one or more non-volatile memories (NVM). The random access memory may be read directly from and written to by the processor 110, may be used to store executable programs (e.g., machine instructions) for an operating system or other on-the-fly programs, may also be used to store data for users and applications, and the like. The nonvolatile memory may store executable programs, store data of users and applications, and the like, and may be loaded into the random access memory in advance for the processor 110 to directly read and write.
The external memory interface 120 may be used to connect external non-volatile memory to enable expansion of the memory capabilities of the electronic device.
The electronic device may implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, the application processor, and the like.
In other embodiments, the electronic device according to the embodiments of the present application may also be described in terms of its software partitioning. Take an electronic device running the Android operating system as an example. In the Android operating system, the software may be partitioned into layers.
Fig. 4 is a schematic software structure of an electronic device according to an embodiment of the present application.
As shown in fig. 4, the electronic device may include: an application (APP) layer, an application framework (Framework) layer, Android Runtime and system libraries, and a kernel layer.
As shown in fig. 4, the application layer may include a series of application packages. The application packages may include camera, gallery, calendar, phone, map, navigation, WLAN, Bluetooth, music, video, short message, and other applications.
In embodiments of the present application, the application package may also include an application that needs to present an image or video to a user by rendering the image. Video is understood to mean, among other things, the continuous play of a plurality of frame images, which may include a moving translucent object. By way of example, the application may include a game-like application.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions. By way of example, the application framework layer may include a window manager, a content provider, a view system, a resource manager, a notification manager, an activity manager, an input manager, and the like.
The window manager is for providing a window management service (window manager service, WMS). The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows the application to display notification information in a status bar, can be used to communicate notification type messages, can automatically disappear after a short dwell, and does not require user interaction. Such as notification manager is used to inform that the download is complete, message alerts, etc. The notification manager may also be a notification in the form of a chart or scroll bar text that appears on the system top status bar, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, a text message is prompted in a status bar, a prompt tone is emitted, the terminal equipment vibrates, and an indicator light blinks.
The activity manager may provide activity management services (activity manager service, AMS), which may be used for system component (e.g., activity, service, content provider, broadcast receiver) start-up, switching, and scheduling, as well as application process management and scheduling tasks.
The input manager may provide input management services (input manager service, IMS), which may be used to manage inputs to the system, such as touch screen inputs, key inputs, sensor inputs, and the like. The input manager fetches events from the device node and distributes the events to the appropriate windows through interactions with the WMSs.
In the embodiment of the application, one or more functional modules may be disposed in an application framework layer, so as to implement the image rendering method provided in the application.
By way of example, the application framework layer may be provided with an identification module, an interception module, a frame prediction module, a creation module, a mixing module, and the like. The identification module is used for offline pipeline analysis, for example, to determine the main scene pipeline through offline analysis of main scene features, and to analyze the identification features of dynamic semitransparent objects.
The interception module may be configured to intercept real-time related instructions, for example, the related instructions may refer to instructions issued by an application program for instructing the rendering of a semitransparent object.
The frame prediction module may be configured to perform a correlation algorithm process to predict image information of another frame using image information of two frames, for example, image information of an n+1th frame may be predicted based on image information of an N-th frame and an n+2th frame.
The creation module may be used to create new memory space. For example, a new frame buffer (frame buffer, FB) is created, together with the texture (texture) corresponding to that frame buffer, etc.
The mixing module may be configured to instruct the electronic device to synthesize a rendering result of the semitransparent object stored in the newly created frame buffer with other rendering results, so that a rendering result corresponding to the frame image may be obtained. Wherein the other rendering results may include static semi-transparent objects and/or non-semi-transparent objects, etc.
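As an illustrative assumption of how such mixing might be expressed with OpenGL ES calls (the exact compositing pass is not limited here and is not specified by the disclosure), the separately rendered translucent texture could be blended over the main scene result roughly as follows; blitProg, texLoc, and drawFullScreenQuad are hypothetical helpers.
#include <GLES3/gl3.h>
void drawFullScreenQuad();   // hypothetical helper: issues a screen-covering quad
void compositeTranslucentOverMain(GLuint mainFB, GLuint translucentTex,
                                  GLuint blitProg, GLint texLoc) {
    glBindFramebuffer(GL_FRAMEBUFFER, mainFB);         // write into the main scene rendering result
    glEnable(GL_BLEND);                                // enable colour mixing for the compositing pass
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // assumed blend factors
    glUseProgram(blitProg);                            // hypothetical textured blit shader
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, translucentTex);      // separately drawn translucent texture
    glUniform1i(texLoc, 0);                            // sampler uniform location of blitProg
    drawFullScreenQuad();
    glDisable(GL_BLEND);
}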
The system library may comprise a graphics library. In different implementations, the graphics library may include at least one of: an open graphics library (open graphics library, OpenGL), an open graphics library for embedded systems (open graphics library for embedded systems, OpenGL ES), and a 2D graphics engine (e.g., a Skia graphics library (skia graphics library, SGL)), etc. In some embodiments, other modules may also be included in the system library, for example: a surface manager (surface manager), media libraries (Media Libraries), etc.
Wherein the surface manager is configured to manage the display subsystem and provide a fusion of two-dimensional (2D) and three-dimensional (3D) layers for the plurality of applications. The media library supports playback and recording of multiple audio formats, playback and recording of multiple video formats, and still image files. The media library may support a variety of audio video encoding formats, such as: moving picture experts group 4 (moving pictures experts group, MPEG 4), h.264, moving picture experts group audio layer 3 (moving picture experts group audio layer III, MP 3), advanced audio coding (advanced audio coding, AAC), adaptive multi-rate (AMR), joint picture experts group (joint photographic experts group, JPG), and portable network graphics (portable network graphics, PNG), among others. The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like. The 2D graphics engine is a drawing engine for 2D drawing.
As shown in fig. 4, the electronic device may also include a hardware layer. The hardware layer may include a central processing unit (central processing unit, CPU), a graphics processor (graphic processing unit, GPU), and a memory having a storage function. In some implementations, the CPU may be configured to control each module in the framework layer to implement its respective function, and the GPU may be configured to perform a corresponding rendering process according to an API in a graphics library (e.g., openGL ES) called by an instruction processed by each module in the application framework layer.
For easy understanding, the scheme provided in the embodiments of the present application will be described in detail below with reference to the software division shown in fig. 4, taking an application program as an example of a game application.
For the electronic device, in order to acquire image data for display, the electronic device may perform image rendering according to a rendering instruction stream issued by an application program (such as a game application), so as to acquire image data including a moving semitransparent object for display.
Illustratively, FIG. 5 is a schematic diagram of a rendering process. With reference to FIG. 4 and FIG. 5, when rendering a frame image, the game application may issue a rendering instruction stream; the CPU may call interfaces in the graphics library according to the rendering instruction stream so as to instruct the GPU to execute the corresponding rendering operations; the rendering results produced by the GPU may be stored in the electronic device, and after the rendering corresponding to the subsequent rendering instruction streams is completed, the data to be sent for display can be obtained. The electronic device may then display the frame image on the display screen based on that data.
A moving semi-transparent object may be included in the frame image, and then the rendering instruction stream issued by the game application may include an instruction segment for instructing to render the main scene, an instruction segment for instructing to render the semi-transparent object, and so on. It will be appreciated that the translucent objects are drawn in the main scene.
The main scene may correspond to the scene with the highest rendering load in the rendering process of the current frame image. Illustratively, in some embodiments, the main scene may correspond to the rendering pipeline (render pass) with the greatest number of rendering commands (draw calls). The rendering of one frame image may include multiple render passes. The rendering result of each render pass may be stored in a frame buffer. The rendering of each render pass may include multiple draw calls; the more draw calls are executed, the richer the content of the map obtained after the corresponding render pass finishes. In other embodiments, the main scene may include a plurality of rendering targets (color buffers); alternatively, the main scene may be a scene whose number of rendering commands is greater than a preset threshold.
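The draw-call-count heuristic can be illustrated with a small bookkeeping sketch; the per-framebuffer counters and the way calls are tallied below are implementation assumptions for illustration only, not a fixed part of the method.
#include <GLES3/gl3.h>
#include <unordered_map>
static std::unordered_map<GLuint, int> g_drawCallsPerFbo; // draw calls issued per bound FBO
static GLuint g_currentFbo = 0;
void onBindFramebuffer(GLuint fbo) { g_currentFbo = fbo; }
void onDrawCall()                  { ++g_drawCallsPerFbo[g_currentFbo]; }
// After the frame finishes, the FBO with the most draw calls is treated as the
// main scene pipeline (alternatively, compare against a preset threshold).
GLuint pickMainSceneFbo() {
    GLuint best = 0;
    int bestCount = -1;
    for (auto& [fbo, count] : g_drawCallsPerFbo) {
        if (count > bestCount) { best = fbo; bestCount = count; }
    }
    return best;
}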
In the case of drawing a plurality of objects including a semitransparent object in the main scene, because the blending equation involved in the subsequent synthesis is related to the drawing order, the objects generally need to be sorted when drawn. For example, opaque objects are drawn first, and translucent objects are then drawn over the opaque objects in back-to-front order; otherwise serious distortion occurs. During this drawing process, the game application will typically start drawing a translucent object with a specific start instruction (e.g., a glEnable instruction) and end with a specific end instruction (e.g., a glDisable instruction).
It will be appreciated that when drawing an object on the screen, the degree of transparency of the object at a given pixel is described by an alpha value. Therefore, in order to render different levels of transparency (i.e., different alpha values) and to distinguish opaque objects from objects of different degrees of transparency, the color mixing state needs to be enabled at rendering time. In the embodiment of the application, the game application may enable the color mixing state through a glEnable instruction, that is, instruct the electronic device to start rendering of the semitransparent object through the glEnable instruction.
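For illustration, a typical application-side translucent draw in the instruction stream might look as follows; the specific blend factors and the single draw call are examples only and vary from game to game.
#include <GLES3/gl3.h>
void drawTranslucentObject(GLsizei indexCount) {
    glEnable(GL_BLEND);                                   // marks the start of translucent drawing
    glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                        GL_ONE, GL_ONE_MINUS_SRC_ALPHA);  // set blend factors (example values)
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, nullptr); // translucent draw call(s)
    glDisable(GL_BLEND);                                  // marks the end of translucent drawing
}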
Illustratively, FIG. 6 illustrates a rendering instruction stream provided by an embodiment of the present application. As shown in fig. 6, the instruction with ID 1567 issued by the game application is glEnable; that is, the glEnable instruction may instruct the electronic device to enable the color mixing state. Then, in the subsequent instructions, the game application may instruct the electronic device to perform the corresponding operations of semitransparent object rendering through different instructions. For example, the game application may set blend factors and the like via the glBlendFuncSeparate instruction with ID 1568.
It should be further understood that if, before the main scene rendering process ends, only the glEnable instruction has been issued to indicate the start of translucent-object rendering, and the glDisable instruction has not yet been issued to indicate its end, then the object currently being drawn is a translucent object; that is, from the glEnable instruction onward until the glDisable instruction, the objects drawn are translucent objects.
In addition, each object in the game scene, whether static or dynamic, corresponds to a world coordinate. Analysis of the implementation characteristics of a game engine (such as an Unreal engine) shows that a dynamic object needs to continuously update its world coordinates while each frame image is rendered, whereas a static object does not need to update or change its world coordinates. The continuously updated world coordinates of an object in a motion state are stored in a uniform buffer object (uniform buffer object, UBO), so when such an object is rendered, glBufferSubData first needs to be called for each frame image to update the UBO data; in contrast, an object in a static state only needs to call glBufferData once, because its world coordinates do not change. Therefore, the UBO storing world coordinates can be identified by matching the buffer size, and whether an object is a dynamic object can be determined according to whether the currently drawn object uses the corresponding UBO data.
The buffer size indicates the size, in bytes, of the UBO holding world coordinate information. Illustratively, the UBO describing world coordinate information may be referred to as VB1, with a corresponding byte range of 0-352. Therefore, during recognition, whether a buffer is the UBO (VB1) storing world coordinates can be judged by checking whether its buffer size is 352.
It will be appreciated that a UBO is a buffer object storing uniform-type variables in GLSL, through which data sharing among different shaders can be achieved.
Illustratively, FIG. 7 shows data related to world coordinates in RenderDoc provided by an embodiment of the present application. FIG. 7 shows the information of Vertex UBO1 captured from RenderDoc. The Primitive_ActorWorldPosition entry in VB1 represents the position of the object in world space; that is, the world coordinate corresponding to the object is (53326.00, 54811.00, 1429.21997). When the object is in motion, this value changes, so the corresponding bound buffer completes the update of the corresponding data before the object is drawn in each frame.
From the above analysis, the main scene, the semitransparent object and the motion state can be identified based on the corresponding conditions. The following is a description of the solution of the present application in connection with this analysis.
Fig. 8 is a schematic flow chart of an image rendering method according to an embodiment of the present application.
As shown in fig. 8, the image rendering method provided in the embodiment of the present application may include a first stage, which may include the following S10 to S20, a second stage, which may include the following S30 to S60, and a third stage, which may include the following S70 and S80. The first stage may also be referred to as a pre-stage and the third stage may also be referred to as an output stage.
The first stage:
s10, the identification module determines whether the scene is a main scene; if yes, executing S20; if not, drawing normally.
Optionally, the identification module may determine whether the current rendering pipeline is the main scene pipeline (main pass) by identifying whether the main scene is being drawn.
The main scene pipeline is the pass used to draw the main scene. Main scene features can be obtained through offline analysis; then, when the current frame is drawn in the game, whether a pass is the main scene pipeline is determined by checking whether it matches the analyzed main scene features.
The main scene feature refers to, as described above, having the largest number of rendering commands, including a plurality of rendering targets or a plurality of color buffers, or having a number of rendering commands greater than a preset threshold. Of course, the main scene feature may also be another feature, which is not limited in this application. It can be understood that translucent objects are drawn in the main pass.
Illustratively, as shown in FIG. 9, taking a game application as an example, a typical rendering may include a depth pipeline (depth pass), a main scene pipeline, and a UI pipeline (UI pass). When the main scene pipeline is executed, it may draw in conjunction with the depth information provided by the depth pipeline; the UI pipeline may draw to generate a separate UI texture. Since there are multiple pipelines during rendering, the current rendering pipeline needs to be identified; only in the case that the current rendering pipeline is the main scene pipeline is the following scheme of the present application continued.
Alternatively, the process of determining the main scene pipeline may be performed before starting the rendering of the current frame image.
S20, the creation module creates a frame buffer and a map corresponding to the semitransparent object.
It will be appreciated that the main scene may include semi-transparent objects, and that the present application may require that new frame buffers and maps be created first for use with the semi-transparent objects in order to render the semi-transparent objects individually.
Illustratively, the following is an instruction stream for creating a frame buffer and a map corresponding to a semitransparent object provided in an embodiment of the present application. Such as:
unsigned int blendFB;
glGenFramebuffers(1, &blendFB);                    // create a frame buffer (blendFB)
glBindFramebuffer(GL_FRAMEBUFFER, blendFB);        // bind the frame buffer
unsigned int textureColorbuffer;
glGenTextures(1, &textureColorbuffer);             // create a texture map (textureColorbuffer)
glBindTexture(GL_TEXTURE_2D, textureColorbuffer);  // bind the texture map
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCR_WIDTH, SCR_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureColorbuffer, 0); // attach textureColorbuffer to the frame buffer (blendFB)
It should be appreciated that based on the above instructions, a frame buffer (blendFB) and a map (attachment texture) corresponding to the translucent object may be created. The creation of the frame buffer and map corresponding to the semi-transparent object is performed before starting the rendering of the current frame image.
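As an optional follow-up step (an assumption beyond the quoted instruction stream, not part of the original sequence), the completeness of the newly created blendFB could be verified before it is used for translucent drawing, for example:
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the error, e.g. fall back to drawing into the main frame buffer
}
glBindFramebuffer(GL_FRAMEBUFFER, 0); // restore the default binding until S50 switches targets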
And a second stage:
s30, intercepting the rendering instruction stream by the interception module, determining whether the drawn object is a semitransparent object, and if so, continuing to execute the S40; if not, S60 is performed.
Optionally, the game application issues a rendering instruction stream, and the interception module intercepts a first instruction stream to determine to render the semi-transparent object, where the first instruction stream may be an instruction that instructs the electronic device to render the semi-transparent object.
For example, during the rendering process of the Nth frame image, the interception module may monitor whether a preset beginning instruction appears in the rendering instruction stream issued by the game application. For example, the beginning instruction may be a glEnable() instruction. The interception module may determine that drawing of the translucent object has started upon detecting a glEnable() instruction. The interception module may also monitor whether a preset ending instruction appears in the rendering instruction stream issued by the game. For example, the ending instruction may be a glDisable() instruction. The interception module may determine that drawing of the semitransparent object has stopped after detecting the glDisable() instruction, and may then stop intercepting instructions; the intercepted instructions may be referred to as the first instruction stream.
Optionally, the first instruction stream may include a preset beginning instruction and an ending instruction, and other instructions before the beginning instruction and the ending instruction; alternatively, the first instruction stream may include only the preset beginning instruction and other instructions after the beginning instruction. The start instruction may also be referred to as a first instruction and the end instruction may also be referred to as a second instruction.
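One possible way for the interception module to detect this begin/end pair is to wrap the corresponding graphics-library entry points. The sketch below (simple hook functions and a global flag) is an illustrative assumption; the actual hooking mechanism, such as a GL layer or a modified dispatch table, is not specified here.
#include <GLES3/gl3.h>
static bool g_drawingTranslucent = false;
void hooked_glEnable(GLenum cap) {
    if (cap == GL_BLEND) g_drawingTranslucent = true;   // preset beginning instruction detected
    glEnable(cap);                                      // forward to the real driver
}
void hooked_glDisable(GLenum cap) {
    if (cap == GL_BLEND) g_drawingTranslucent = false;  // preset ending instruction detected
    glDisable(cap);
}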
S40, the interception module intercepts the rendering instruction stream, determines whether the semitransparent object moves, and if so, continues to execute S50; if not, S60 is performed.
Alternatively, the interception module may determine whether the buffer bound to the semitransparent object is updated by intercepting the second instruction stream, and further determine whether the semitransparent object is moving, and if updated, may consider the semitransparent object to be moving.
The second instruction stream may be an instruction that instructs the electronic device to render using the updated UBO data.
As an implementation, as shown in fig. 10, the above S40 may include the following S401 to S403.
S401, the interception module intercepts a third instruction and determines the Identity (ID) of the UBO currently bound.
For example, the third instruction may be glBindBuffer; the glBindBuffer function binds a buffer with a given ID to a target. Therefore, the interception module can determine the ID of the currently bound UBO by intercepting glBindBuffer and identifying the target GL_UNIFORM_BUFFER. For example, if the ID is 1, it may be determined that the currently bound UBO is UBO1.
S402, the interception module intercepts a fourth instruction, and stores the UBO to the corresponding Map when the current bound UBO is determined to store world coordinates.
For example, the fourth instruction may be glBufferSubData; the glBufferSubData function updates data in an already bound buffer. The interception module determines whether the currently bound UBO1 is VB1 by intercepting glBufferSubData and checking the buffer size; for example, when the buffer size is equal to a preset threshold (the difference between the maximum value and the minimum value of the byte range), e.g., the preset threshold is 352 and the buffer size is 352, the two are the same, so the buffer can be determined to be VB1. VB1 refers to the structure storing information about the world coordinates of objects. The index value of VB1 is 1.
At this time, if the currently bound UBO is determined to be VB1, it indicates that updated world coordinate information is stored therein. After it is determined to be VB1, the currently bound UBO (UBO1) needs to be stored in the corresponding Map, such as the VB1 Map. If it is determined not to be VB1, the object drawn by the current command is not considered dynamic, and the UBO is not stored.
S403, the interception module intercepts the fifth instruction and determines whether the drawn object is a dynamic object.
For example, the fifth instruction may be glBindBufferRange; the glBindBufferRange function is used to bind a range of a buffer to the binding point specified by the target and the index. When the interception module intercepts glBindBufferRange and identifies that the index value is 1 (that is, VB1 is identified), it checks whether the uniform buffer bound at index 1 is in the VB1 Map; if so, the current command draws a dynamic object; if not, the current command does not draw a dynamic object. It should be appreciated that since multiple buffers may be bound to index 1, and a given buffer may not exist in the VB1 Map, a query and comparison in the VB1 Map is required.
Optionally, the second instruction stream may include the third instruction, the fourth instruction, and the fifth instruction described above.
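For illustration, the following is a sketch of S401 to S403 under the assumptions above; the 352-byte threshold and binding index 1 follow the example in the description, while the container and variable names are illustrative.

```cpp
#include <GLES3/gl3.h>
#include <unordered_set>

// Sketch of S401 to S403: deciding whether the current draw targets a dynamic object.
static std::unordered_set<GLuint> g_vb1Map;   // the "VB1 Map": UBOs holding world coordinates
static GLuint g_currentUbo    = 0;            // UBO ID noted at S401
static bool   g_drawIsDynamic = false;        // result consumed at S40

void hook_glBindBuffer(GLenum target, GLuint buffer) {            // third instruction
    if (target == GL_UNIFORM_BUFFER) g_currentUbo = buffer;       // record currently bound UBO
    glBindBuffer(target, buffer);
}

void hook_glBufferSubData(GLenum target, GLintptr offset,         // fourth instruction
                          GLsizeiptr size, const void* data) {
    const GLsizeiptr kWorldCoordSize = 352;                       // preset threshold from the example
    if (target == GL_UNIFORM_BUFFER && size == kWorldCoordSize) {
        g_vb1Map.insert(g_currentUbo);                            // UBO carries updated world coordinates
    }
    glBufferSubData(target, offset, size, data);
}

void hook_glBindBufferRange(GLenum target, GLuint index, GLuint buffer,   // fifth instruction
                            GLintptr offset, GLsizeiptr size) {
    if (target == GL_UNIFORM_BUFFER && index == 1) {
        g_drawIsDynamic = (g_vb1Map.count(buffer) != 0);          // in VB1 Map => dynamic object
    }
    glBindBufferRange(target, index, buffer, offset, size);
}
```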
It should be understood that, in combination with the above-described determination procedure that the motion state is dynamic, if the object is a semitransparent object, it may be determined that the dynamic semitransparent object is currently drawn.
Optionally, the order of S30 and S40 may be swapped, or they may be performed simultaneously, which is not limited in this application, so long as the interception module intercepts the rendering instruction stream, identifies from the main pass the rendering instructions for drawing the dynamic semitransparent object, and determines that a dynamic semitransparent object is to be drawn.
S50, the binding is switched to the frame buffer (blendFB) corresponding to the semitransparent object, and drawing is performed.
Optionally, when it is determined from the rendering instruction stream that a dynamic semitransparent object in the main scene pipeline is to be drawn, the rendering target (render target) may first be changed by executing glBindFramebuffer(GL_FRAMEBUFFER, blendFB), so that the semitransparent object is rebound to a newly created frame buffer and drawn there. In this way, the rendering result of the dynamic semitransparent object may be stored in the frame buffer (blendFB).
S60, the binding is switched to the frame buffer (mainFB) corresponding to the main scene pipeline, and drawing is performed.
It should be appreciated that after the binding is changed to blendFB, if objects other than the dynamic semitransparent object are found to be drawn, such as non-dynamic semitransparent objects or dynamic non-transparent objects, the render target needs to be changed again by executing glBindFramebuffer(GL_FRAMEBUFFER, mainFB), so that the binding is changed from blendFB to mainFB; drawing is then performed on mainFB. In this way, the rendering results of objects other than the dynamic semitransparent object may be stored in the frame buffer (mainFB).
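For illustration, the render-target switching of S50 and S60 might be sketched as follows, assuming the two frame buffers have already been created (for example with glGenFramebuffers) and that the interception layer knows, from checks such as those sketched above, whether the current draw call targets a dynamic semitransparent object; the function and variable names are illustrative.

```cpp
#include <GLES3/gl3.h>

// Sketch of S50/S60: redirecting draw calls between blendFB and mainFB.
static GLuint g_blendFB = 0;   // frame buffer for the dynamic semitransparent object
static GLuint g_mainFB  = 0;   // frame buffer for the rest of the main scene pipeline
static GLuint g_boundFB = 0;   // frame buffer the interception layer last bound

// Called before each intercepted draw call.
static void bindTargetForDraw(bool drawsDynamicTranslucent) {
    GLuint wanted = drawsDynamicTranslucent ? g_blendFB : g_mainFB;   // S50 vs S60
    if (wanted != g_boundFB) {                 // only switch the render target when needed
        glBindFramebuffer(GL_FRAMEBUFFER, wanted);
        g_boundFB = wanted;
    }
}
```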
S70, obtaining the dynamic semitransparent object textures which are drawn independently.
It should be appreciated that the dynamic semitransparent object is drawn on blendFB, such that a separately drawn dynamic semitransparent object texture may be obtained.
S80, obtaining the texture of the main scene pipeline.
It should be appreciated that drawing on main fb other objects than the dynamically translucent object may result in a main scene pipeline texture that includes other objects than the dynamically translucent object.
It should be appreciated that when it is determined after S30 that the object is not a semitransparent object, or when it is determined after S40 that the object is not a dynamic object, if the binding has not been changed before, drawing is performed on mainFB; if the binding has been changed before, the binding is changed from blendFB back to mainFB, and drawing is then performed.
In the embodiment of the application, whether the drawn semitransparent object is dynamic can be analyzed according to the preset conditions, the render target is switched between dynamic semitransparent objects and other objects, and drawing is performed with the corresponding frame buffers until the main pass drawing is finished; finally, a separately drawn dynamic semitransparent object texture and a main scene pipeline texture can be obtained. It should be appreciated that the main scene pipeline texture does not include the dynamic semitransparent object texture. Here, since the dynamic semitransparent object is drawn independently and is not affected by other objects superimposed on it, the drawing quality of the dynamic semitransparent object can be improved, and its display effect in the corresponding frame can be improved.
If the binding were not switched, other objects would be superimposed while the dynamic semitransparent object is drawn, the problems described in the background would occur, and the drawing effect would be poor.
It should be appreciated that the second and third phases described above may be processes for a certain image frame.
In combination with the above method and frame prediction technique, fig. 11 shows a flowchart of an image rendering method provided in an embodiment of the present application. As shown in fig. 11, the image rendering method provided in the embodiment of the present application may include the following S110 to S160, which are described below.
S110, for the Nth image frame, a corresponding first dynamic semitransparent object texture is determined.
Wherein N is an integer greater than or equal to 1.
S120, determining a first main scene pipeline texture for an Nth image frame.
Optionally, whether the rendering instruction stream corresponding to the Nth frame image includes a semitransparent object, and whether the world coordinates are updated, may be determined by the methods shown in fig. 8 and fig. 10. When the conditions are met, the frame buffer corresponding to the semitransparent object is bound and drawing is performed, so that a separately drawn dynamic semitransparent object texture of the frame is obtained; when the conditions are not met, the frame buffer corresponding to the main scene is bound and drawing is performed, so that a main scene pipeline texture of the frame is obtained. In this way, by switching the bound frame buffer for the Nth frame, the dynamic semitransparent object texture and the other textures can be drawn separately, which improves the drawing quality of the dynamic semitransparent object included in the Nth frame.
S130, for the (N+2) th image frame, determining a corresponding second dynamic semitransparent object texture.
And S140, determining a second main scene pipeline texture for the (N+2)th image frame.
Optionally, whether the rendering instruction stream corresponding to the (N+2)th frame image includes a semitransparent object, and whether the world coordinates are updated, may be determined by the methods shown in fig. 8 and fig. 10. When the conditions are met, the frame buffer corresponding to the semitransparent object is bound and drawing is performed, so that a separately drawn dynamic semitransparent object texture of the frame is obtained; when the conditions are not met, the frame buffer corresponding to the main scene is bound and drawing is performed, so that a main scene pipeline texture of the frame is obtained. In this way, by switching the bound frame buffer for the (N+2)th frame, the dynamic semitransparent object texture and the other textures can be drawn separately, which improves the drawing quality of the dynamic semitransparent object included in the (N+2)th frame.
S150, for the (N+1)th image frame, the frame prediction module determines a predicted-frame main scene pipeline texture (or referred to as a third main scene pipeline texture) from the first main scene pipeline texture and the second main scene pipeline texture.
Here, the predicted frame is the n+1st frame image frame.
It should be appreciated that the main scenes of different frame images may be the same or different. For example, the main scene of the Nth frame image may be referred to as a first main scene, and the main scene of the (N+2)th frame image may be referred to as a second main scene; the first main scene and the second main scene may be the same or different. The frame buffer mainFB corresponding to the first main scene is different from the frame buffer mainFB corresponding to the second main scene.
It should also be appreciated that the dynamic semitransparent objects of the different frame images may be the same or different, for example, the dynamic semitransparent object in the nth frame image may be a first dynamic semitransparent object, and the dynamic semitransparent object in the n+2th frame image may be referred to as a second dynamic semitransparent object, and the first dynamic semitransparent object may be the same as or different from the second dynamic semitransparent object. The frame buffer blendFB corresponding to the first dynamic semitransparent object is different from the frame buffer blendFB corresponding to the second dynamic semitransparent object.
The frame prediction module may generate a new frame texture according to the correlation of the input textures by using a prediction method provided by the related art, that is, determine the predicted-frame main scene pipeline texture, which is not limited in the embodiments of the present application. It should be appreciated that since the data input to the frame prediction module does not include the dynamic semitransparent object, the data generated by prediction does not include the dynamic semitransparent object either.
S160, for the (N+1)th image frame, the mixing module mixes the predicted-frame main scene pipeline texture and the first dynamic semitransparent object texture to obtain the predicted frame. The predicted frame is the (N+1)th image frame.
Here, this is equivalent to multiplexing the first dynamic semitransparent object texture corresponding to the Nth frame: when determining the (N+1)th frame, the dynamic semitransparent object texture corresponding to the Nth frame is directly mixed with the main scene pipeline texture determined for the (N+1)th frame, which reduces power consumption and improves processing efficiency while ensuring the drawing quality of the dynamic semitransparent object.
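For illustration, the mixing step of S160 could be sketched as a simple two-pass composite, assuming a helper that draws a screen-aligned quad sampling a given texture; standard "over" alpha blending is used here as one possible mixing mode, which the description does not mandate, and the helper and parameter names are illustrative.

```cpp
#include <GLES3/gl3.h>

// Assumed helper: draws a screen-aligned quad sampling 'texture' with a trivial shader.
void drawFullScreenQuad(GLuint texture);

// Sketch of S160: compositing the predicted-frame main scene pipeline texture with the
// multiplexed dynamic semitransparent object texture into the predicted frame.
void blendPredictedFrame(GLuint predictedFB,            // frame buffer of the predicted frame
                         GLuint predictedMainSceneTex,  // third rendering result (main scene)
                         GLuint dynamicTranslucentTex)  // reused rendering result from blendFB
{
    glBindFramebuffer(GL_FRAMEBUFFER, predictedFB);

    // 1. Write the predicted main scene into the predicted frame without blending.
    glDisable(GL_BLEND);
    drawFullScreenQuad(predictedMainSceneTex);

    // 2. Blend the separately drawn dynamic semitransparent texture on top using
    //    standard "over" alpha blending.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    drawFullScreenQuad(dynamicTranslucentTex);

    glDisable(GL_BLEND);
}
```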
Optionally, as another possible implementation manner, for the (N+1)th image frame, the mixing module may also mix the predicted-frame main scene pipeline texture and the second dynamic semitransparent object texture to obtain the predicted frame.
Here, the frame buffer blendFB corresponding to the first dynamic semitransparent object may be the same as the frame buffer blendFB corresponding to the second dynamic semitransparent object, so that when the second dynamic semitransparent object is drawn, the data stored in the frame buffer blendFB is replaced, and the first dynamic semitransparent object is replaced by the second dynamic semitransparent object.
Optionally, as a further possible implementation manner, the frame prediction module may determine a third dynamic semitransparent object texture corresponding to the predicted frame according to the first dynamic semitransparent object texture and the second dynamic semitransparent object texture; then, the mixing module mixes the third main scene pipeline texture and the third dynamic semitransparent object texture to obtain a predicted frame.
In the method provided by the embodiment of the application, based on the main scene pipeline texture drawn by the nth frame and the main scene pipeline texture drawn by the (n+2) th frame, the main scene pipeline texture corresponding to the (n+1) th frame can be determined; the dynamic semitransparent object texture drawn by the nth frame is multiplexed to be used as the dynamic semitransparent object texture drawn by the (n+1) th frame, and then the multiplexed dynamic semitransparent object texture is mixed with the main scene pipeline texture corresponding to the (n+1) th frame, so that the (n+1) th frame can be obtained. In the process, the dynamic semitransparent object textures are independently drawn, so that the dynamic semitransparent object textures are not influenced by overlapped objects, the drawing quality is relatively high, the picture quality of a predicted frame can be improved, shaking is avoided, and the use experience of a user can be improved.
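Putting the above together, a high-level sketch of this N/(N+2) to (N+1) flow might look as follows; predictMainScene() stands in for whatever prediction method the related art provides and is not specified here, blendPredictedFrame() is the compositing helper sketched above, and the structure and function names are illustrative.

```cpp
#include <GLES3/gl3.h>

// Illustrative outline of S110 to S160 (frames N and N+2 are real, N+1 is predicted).
GLuint predictMainScene(GLuint mainSceneTexN, GLuint mainSceneTexN2);   // related-art prediction
void   blendPredictedFrame(GLuint predictedFB, GLuint mainSceneTex, GLuint translucentTex);

struct FrameTextures {
    GLuint mainSceneTex;           // main scene pipeline texture (drawn into mainFB)
    GLuint dynamicTranslucentTex;  // dynamic semitransparent object texture (drawn into blendFB)
};

void renderPredictedFrame(const FrameTextures& frameN,
                          const FrameTextures& frameN2,
                          GLuint predictedFB)
{
    // Third main scene pipeline texture, predicted from the two real frames.
    GLuint predictedMain = predictMainScene(frameN.mainSceneTex, frameN2.mainSceneTex);

    // Reuse frame N's dynamic semitransparent texture and composite it on top.
    blendPredictedFrame(predictedFB, predictedMain, frameN.dynamicTranslucentTex);
}
```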
The above describes the frame prediction process of predicting the (N+1)th frame from the Nth frame and the (N+2)th frame; the process of predicting the (N+2)th frame from the Nth frame and the (N+1)th frame is similar, and is illustrated below with reference to fig. 12.
As shown in fig. 12, the image rendering method provided in the embodiment of the present application may include the following S210 to S260, which are described below.
S210, for an N-th image frame, determining a corresponding first dynamic semitransparent object texture.
S220, determining a first main scene pipeline texture for an Nth image frame.
S230, for the (n+1) th image frame, determining a corresponding second dynamic semitransparent object texture.
S240, determining a second main scene pipeline texture for the (N+1) th image frame.
S250, for the (N+2)th image frame, the frame prediction module determines a predicted-frame main scene pipeline texture (or referred to as a third main scene pipeline texture) from the first main scene pipeline texture and the second main scene pipeline texture.
And S260, for the (N+2)th image frame, the mixing module mixes the predicted-frame main scene pipeline texture and the second dynamic semitransparent object texture to obtain the predicted frame. The predicted frame is the (N+2)th image frame.
Optionally, as another possible implementation, the mixing module may mix the predicted-frame main scene pipeline texture and the first dynamic semitransparent object texture to obtain the predicted frame.
Optionally, as a further possible implementation manner, the frame prediction module may determine a third dynamic semitransparent object texture corresponding to the predicted frame according to the first dynamic semitransparent object texture and the second dynamic semitransparent object texture; then, the mixing module mixes the third main scene pipeline texture and the third dynamic semitransparent object texture to obtain a predicted frame.
In the method provided by the embodiment of the application, based on the main scene pipeline texture drawn by the nth frame and the main scene pipeline texture drawn by the (n+1) th frame, the main scene pipeline texture corresponding to the (n+2) th frame can be determined; the dynamic semitransparent object texture drawn by the (N+1) th frame is multiplexed to be used as the dynamic semitransparent object texture drawn by the (N+2) th frame, and then the multiplexed dynamic semitransparent object texture is mixed with the main scene pipeline texture corresponding to the (N+2) th frame, so that the (N+2) th frame can be obtained. In the process, the dynamic semitransparent object textures are independently drawn, so that the dynamic semitransparent object textures are not influenced by overlapped objects, the drawing quality is relatively high, the picture quality of a predicted frame can be improved, shaking is avoided, and the use experience of a user can be improved.
It should be appreciated that in the above-described several ways of determining the predicted frame, the UI textures rendered based on the UI pipeline may also be mixed to obtain the predicted frame.
Fig. 13 illustrates yet another image rendering method provided in an embodiment of the present application.
As shown in fig. 13, the image rendering method provided in the embodiment of the present application may include the following S310 to S340, which are applied to an electronic device with an application installed therein, and are described below.
S310, an application program issues a first rendering instruction stream, wherein the first rendering instruction stream is used for instructing the electronic device to execute a rendering operation of a first frame image, and the first frame image comprises a first main scene and a first dynamic semitransparent object.
The first rendering instruction stream may indicate a rendering instruction stream corresponding to an nth frame image described in fig. 11, the first frame image indicating the nth frame image; the first rendering instruction stream may also indicate a rendering instruction stream corresponding to an nth frame image described in fig. 12, the first frame image indicating the nth frame image.
S320, the application program issues a second rendering instruction stream, wherein the second rendering instruction stream is used for instructing the electronic device to execute a rendering operation of a second frame image, and the second frame image comprises a second main scene and a second dynamic semitransparent object.
The second rendering instruction stream may indicate a rendering instruction stream corresponding to the n+2th frame image described in fig. 11, and the second frame image indicates the n+2th frame image; the second rendering instruction stream may also indicate a rendering instruction stream corresponding to the n+1st frame image described in fig. 12, and the second frame image indicates the n+1st frame image.
S330, the electronic equipment determines a third rendering result according to the first rendering result and the second rendering result; the first rendering result is a rendering result corresponding to the first main scene, and the second rendering result is a rendering result corresponding to the second main scene.
The third rendering result may indicate a rendering result corresponding to the third main scene described in fig. 11 or fig. 12.
S340, the electronic equipment mixes the third rendering result and the fourth rendering result to determine a predicted frame image; the fourth rendering result is a rendering result corresponding to the first dynamic semitransparent object.
Aiming at the rendering instruction stream corresponding to each frame of image, the method can obtain a dynamic semitransparent object texture which is drawn independently and a main scene pipeline texture; and then determining a main scene rendering result of the predicted frame based on rendering results corresponding to the two frames of main scene pipelines, determining a dynamic semitransparent object rendering result of the predicted frame based on the rendering result of one frame of dynamic semitransparent object, and combining the two to obtain the predicted frame. Here, since the dynamic semitransparent object is drawn independently and is not affected by other objects which are superimposed, the drawing quality of the dynamic semitransparent object can be improved and the display effect of the dynamic semitransparent object in the predicted frame can be improved.
The image rendering method provided in the embodiment of the present application is described in detail above with reference to fig. 1 to 13; the electronic device of the present application will be described in detail below with reference to fig. 14. It should be understood that, the electronic device in the embodiments of the present application may perform the foregoing methods in the embodiments of the present application, that is, specific working processes of the following various products may refer to corresponding processes in the foregoing method embodiments.
Fig. 14 shows a schematic structural diagram of an electronic device provided in the present application. The dashed line in fig. 14 indicates that the unit or the module is optional. The electronic device 200 may be used to implement the methods described in the method embodiments described above.
The electronic device 200 includes one or more processors 201, which one or more processors 201 may support the electronic device 200 to implement the image rendering method in the method embodiments. The processor 201 may be a general purpose processor or a special purpose processor. For example, the processor 201 may be a central processing unit (central processing unit, CPU), digital signal processor (digital signal processor, DSP), application specific integrated circuit (application specific integrated circuit, ASIC), field programmable gate array (field programmable gate array, FPGA), or other programmable logic device such as discrete gates, transistor logic, or discrete hardware components.
The processor 201 may be used to control the electronic device 200, execute software programs, and process data of the software programs. The electronic device 200 may further comprise a communication unit 205 for enabling input (reception) and output (transmission) of signals.
For example, the electronic device 200 may be a chip, the communication unit 205 may be an input and/or output circuit of the chip, or the communication unit 205 may be a communication interface of the chip, which may be an integral part of a terminal device or other electronic device.
For another example, the electronic device 200 may be a terminal device, the communication unit 205 may be a transceiver of the terminal device, or the communication unit 205 may be a transceiver circuit of the terminal device.
The electronic device 200 may include one or more memories 202 having a program 204 stored thereon, the program 204 being executable by the processor 201 to generate instructions 203 such that the processor 201 performs the methods described in the method embodiments described above in accordance with the instructions 203.
Optionally, the memory 202 may also have data stored therein. Alternatively, processor 201 may also read data stored in memory 202, which may be stored at the same memory address as program 204, or which may be stored at a different memory address than program 204.
The processor 201 and the memory 202 may be provided separately or may be integrated together, for example, on a System On Chip (SOC) of the terminal device.
Illustratively, the memory 202 may be used to store a related program 204 of the image rendering method provided in the embodiments of the present application, and the processor 201 may be used to call the related program 204 of the image rendering method stored in the memory 202 at the time of video processing, to execute the image rendering method of the embodiments of the present application; for example, the application program issues a first rendering instruction stream for instructing the processor to perform a rendering operation of a first frame image, the first frame image including a first main scene and a first dynamic semitransparent object; the application program issues a second rendering instruction stream, and the second rendering instruction stream is used for instructing the processor to execute rendering operation of a second frame image, wherein the second frame image comprises a second main scene and a second dynamic semitransparent object; the processor determines a third rendering result according to the first rendering result and the second rendering result; the first rendering result is a rendering result corresponding to the first main scene, and the second rendering result is a rendering result corresponding to the second main scene; the processor mixes the third rendering result and the fourth rendering result to determine a predicted frame image; the fourth rendering result is a rendering result corresponding to the first dynamic semitransparent object.
The present application also provides a computer program product which, when executed by the processor 201, implements the method described in any of the method embodiments of the present application.
The computer program product may be stored in the memory 202, for example, the program 204, and the program 204 is finally converted into an executable object file capable of being executed by the processor 201 through preprocessing, compiling, assembling, and linking processes.
The present application also provides a computer readable storage medium having stored thereon a computer program which, when executed by a computer, implements a method according to any of the method embodiments of the present application. The computer program may be a high-level language program or an executable object program.
The computer readable storage medium may be, for example, the memory 202. The memory 202 may be volatile memory or nonvolatile memory, or the memory 202 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be random access memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working processes and technical effects of the apparatus and device described above may refer to corresponding processes and technical effects in the foregoing method embodiments, which are not described in detail herein.
In several embodiments provided in the present application, the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, some features of the method embodiments described above may be omitted, or not performed. The above-described apparatus embodiments are merely illustrative, the division of units is merely a logical function division, and there may be additional divisions in actual implementation, and multiple units or components may be combined or integrated into another system. In addition, the coupling between the elements or the coupling between the elements may be direct or indirect, including electrical, mechanical, or other forms of connection.
It should be understood that, in various embodiments of the present application, the sequence numbers of the above processes do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation processes of the embodiments of the present application.
In addition, the terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may represent: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
In summary, the foregoing description is only a preferred embodiment of the technical solution of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.

Claims (11)

1. An image rendering method applied to an electronic device installed with an application program, the method comprising:
the application program issues a first rendering instruction stream, wherein the first rendering instruction stream is used for instructing the electronic equipment to execute a rendering operation of a first frame image, and the first frame image comprises a first main scene and a first dynamic semitransparent object;
the application program issues a second rendering instruction stream, and the second rendering instruction stream is used for instructing the electronic equipment to execute rendering operation of a second frame image, wherein the second frame image comprises a second main scene and a second dynamic semitransparent object;
The electronic equipment determines a third rendering result according to the first rendering result and the second rendering result; the first rendering result is a rendering result corresponding to the first main scene, and the second rendering result is a rendering result corresponding to the second main scene;
the electronic equipment mixes the third rendering result and the fourth rendering result and determines a predicted frame image; and the fourth rendering result is a rendering result corresponding to the first dynamic semitransparent object.
2. The method of claim 1, wherein the first stream of rendering instructions comprises a first stream of instructions for instructing the electronic device to render the first master scene to obtain the first rendering result and a second stream of instructions for instructing the electronic device to render the first dynamically translucent object to obtain the fourth rendering result;
before the electronic device mixes the third rendering result and the fourth rendering result, the method further includes:
the electronic equipment performs rendering according to the first instruction stream to obtain the first rendering result, and stores the first rendering result in a first frame buffer, wherein the first instruction stream comprises an instruction pointing to the first frame buffer;
The electronic device performs rendering according to the second instruction stream to obtain the fourth rendering result, and stores the fourth rendering result in a second frame buffer, wherein the second instruction stream comprises an instruction pointing to the second frame buffer, and the second frame buffer is different from the first frame buffer.
3. The method of claim 2, wherein prior to the electronic device rendering according to the second instruction stream to obtain the fourth rendering result, the method further comprises: the electronic device creates the second frame buffer.
4. A method according to claim 2 or 3, wherein the electronic device determines that the semitransparent object is drawn according to the preset first instruction and second instruction in the second instruction stream when the electronic device performs rendering.
5. The method according to any one of claims 2 to 4, wherein the electronic device determines that the dynamic object is drawn based on a third instruction, a fourth instruction, and a fifth instruction included in the second instruction stream when rendering.
6. The method according to any one of claims 1 to 5, further comprising:
The electronic equipment mixes the third rendering result and the fifth rendering result and determines the predicted frame image; and the fifth rendering result is a rendering result corresponding to the second dynamic semitransparent object.
7. The method of any of claims 1 to 6, wherein after determining the third rendering result, the method further comprises:
the electronic equipment determines a seventh rendering result according to the fifth rendering result and the sixth rendering result; the fifth rendering result is a rendering result corresponding to the first dynamic semitransparent object, and the sixth rendering result is a rendering result corresponding to the second dynamic semitransparent object;
the electronic device mixes the third rendering result and the seventh rendering result and determines the predicted frame image.
8. The method according to any one of claims 1 to 7, wherein the first frame image is an nth frame image, the second frame image is an (n+2)th frame image, and the predicted frame image is an (n+1)th frame image; or
the first frame image is an nth frame image, the second frame image is an (n+1) th frame image, and the predicted frame image is an (n+2) th frame image.
9. An electronic device, comprising:
one or more processors and memory;
the memory is coupled with the one or more processors, the memory for storing computer program code comprising computer instructions that the one or more processors invoke to cause the electronic device to perform the method of any of claims 1-8.
10. A chip system for application to an electronic device, the chip system comprising one or more processors for invoking computer instructions to cause the electronic device to perform the method of any of claims 1 to 8.
11. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program, which when executed by an electronic device, causes the electronic device to perform the method of any one of claims 1 to 8.
CN202310947840.8A 2023-07-28 2023-07-28 Image rendering method and related equipment thereof Pending CN117710548A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310947840.8A CN117710548A (en) 2023-07-28 2023-07-28 Image rendering method and related equipment thereof

Publications (1)

Publication Number Publication Date
CN117710548A true CN117710548A (en) 2024-03-15

Family

ID=90155840

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310947840.8A Pending CN117710548A (en) 2023-07-28 2023-07-28 Image rendering method and related equipment thereof

Country Status (1)

Country Link
CN (1) CN117710548A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140092109A1 (en) * 2012-09-28 2014-04-03 Nvidia Corporation Computer system and method for gpu driver-generated interpolated frames
CN112051995A (en) * 2020-10-09 2020-12-08 腾讯科技(深圳)有限公司 Image rendering method, related device, equipment and storage medium
CN114210055A (en) * 2022-02-22 2022-03-22 荣耀终端有限公司 Image rendering method and electronic equipment
CN114299219A (en) * 2021-12-29 2022-04-08 杭州群核信息技术有限公司 Rendering graph scene dynamic switching method and device, electronic equipment and medium
CN115920370A (en) * 2021-05-12 2023-04-07 华为云计算技术有限公司 Image rendering method, device and equipment
WO2023102275A1 (en) * 2022-11-04 2023-06-08 Innopeak Technology, Inc. Multi-pipeline and jittered rendering methods for mobile
CN116246004A (en) * 2021-12-07 2023-06-09 脸萌有限公司 Scene rendering method and device

Similar Documents

Publication Publication Date Title
KR101980990B1 (en) Exploiting frame to frame coherency in a sort-middle architecture
EP3657327A1 (en) Method for rendering game, and method, apparatus and device for generating game resource file
KR101239029B1 (en) Multi-buffer support for off-screen surfaces in a graphics processing system
CN114210055B (en) Image rendering method and electronic equipment
CN113244614B (en) Image picture display method, device, equipment and storage medium
KR20100004119A (en) Post-render graphics overlays
US11587280B2 (en) Augmented reality-based display method and device, and storage medium
CN108986013B (en) Method and program storage device for executing tasks on a graphics processor
KR20220088924A (en) Display method and apparatus based on augmented reality, and storage medium
CN114708369B (en) Image rendering method and electronic equipment
US20130127849A1 (en) Common Rendering Framework and Common Event Model for Video, 2D, and 3D Content
CN116166259A (en) Interface generation method and electronic equipment
US20140161173A1 (en) System and method for controlling video encoding using content information
CN114570020A (en) Data processing method and system
CN115018692A (en) Image rendering method and electronic equipment
CN116166256A (en) Interface generation method and electronic equipment
CN116091329B (en) Image processing method, device, equipment and storage medium
CN116958344A (en) Animation generation method and device for virtual image, computer equipment and storage medium
CN117710548A (en) Image rendering method and related equipment thereof
CN114780012A (en) Display method and related device for screen locking wallpaper of electronic equipment
CN116166255A (en) Interface generation method and electronic equipment
CN112231029A (en) Frame animation processing method applied to theme
WO2024051471A1 (en) Image processing method and electronic device
CN116672707B (en) Method and electronic device for generating game prediction frame
RU2810701C2 (en) Hybrid rendering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination