CN116672702A - Image rendering method and electronic equipment - Google Patents

Image rendering method and electronic equipment

Info

Publication number
CN116672702A
CN116672702A (application CN202210713538.1A)
Authority
CN
China
Prior art keywords
rendering
electronic device
instruction
frame image
frame buffer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210713538.1A
Other languages
Chinese (zh)
Inventor
高巍伟
刘智超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202210713538.1A
Publication of CN116672702A
Legal status: Pending

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 - Special adaptations for executing a specific game genre or game mode
    • A63F13/837 - Shooting of targets
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 - Arrangements for executing specific programs
    • G06F9/451 - Execution arrangements for user interfaces
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Image Generation (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The embodiment of the application discloses an image rendering method and an electronic device, which relate to the field of image processing and can reasonably multiplex semi-transparent particle rendering results, thereby effectively reducing the redundant overhead of semi-transparent particle rendering during multi-frame image rendering and, in turn, the power consumption and wasted computing power caused by that overhead. The specific scheme is as follows: an application program issues a first instruction stream, where the first instruction stream is used to instruct the electronic device to perform a rendering operation of a first frame image, and the first frame image includes a first main scene and first semi-transparent particles. The electronic device synthesizes a first rendering result and a second rendering result to obtain the first frame image. The first rendering result is the rendering result of the first main scene, the second rendering result is the rendering result of the first semi-transparent particles, and the second rendering result is stored in a first frame buffer of the electronic device.

Description

Image rendering method and electronic equipment
The present application is a divisional application of application No. 202210159851.5, filed on February 22, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The embodiment of the application relates to the field of image processing, in particular to an image rendering method and electronic equipment.
Background
With the development of electronic devices, displayed images are becoming increasingly rich in content. Some images may include semi-transparent particles. Rendering the semi-transparent particles in consecutive multi-frame images generates a large rendering overhead for the electronic device, which manifests as high power consumption, severe heating, and the like, thereby affecting the user experience.
Disclosure of Invention
The embodiment of the application provides an image rendering method and an electronic device, which can reasonably multiplex semi-transparent particle rendering results, thereby effectively reducing the redundant overhead of semi-transparent particle rendering during multi-frame image rendering and, in turn, the power consumption and wasted computing power caused by that overhead.
In order to achieve the above purpose, the embodiment of the application adopts the following technical scheme:
In a first aspect, an image rendering method is provided and applied to an electronic device in which an application program is installed. The method includes: the application program issues a first instruction stream, where the first instruction stream is used to instruct the electronic device to perform a rendering operation of a first frame image, and the first frame image includes a first main scene and first semi-transparent particles. The electronic device synthesizes a first rendering result and a second rendering result to obtain the first frame image. The first rendering result is the rendering result of the first main scene, the second rendering result is the rendering result of the first semi-transparent particles, and the second rendering result is stored in a first frame buffer of the electronic device.
Based on this scheme, the electronic device can render the first frame image without performing the rendering process of the semi-transparent particles in the image. Instead, the electronic device may read the rendering result of the semi-transparent particles from a corresponding storage space, such as the first frame buffer. In this way, the overhead of rendering the semi-transparent particles of the current frame image is saved, while the display of the semi-transparent particles in the first frame image is not affected.
In one possible design, the first instruction stream includes a first instruction segment and a second instruction segment, where the first instruction segment is used to instruct the electronic device to render the first main scene to obtain the first rendering result, and the second instruction segment is used to instruct the electronic device to render the first semi-transparent particles. Before the electronic device synthesizes the first rendering result and the second rendering result, the method further includes: the electronic device performs rendering according to the first instruction segment to obtain the first rendering result, and the electronic device obtains the second rendering result from the first frame buffer. Based on this scheme, a way of acquiring the main scene rendering result and the semi-transparent particle rendering result in the first frame image is provided. For example, the electronic device may perform rendering based on the corresponding instruction stream issued for the current frame image and obtain the corresponding main scene rendering result. As another example, the electronic device may directly read the rendering result corresponding to the semi-transparent particles from the first frame buffer. In this example, the first frame image may be a frame image that multiplexes the semi-transparent particle rendering result.
In one possible design, the second rendering result is obtained by the electronic device when rendering a second frame image that precedes the first frame image, and is stored in the first frame buffer. Based on this scheme, when rendering a frame image that precedes the first frame image, such as the second frame image, the electronic device can store the rendering result in the first frame buffer so that subsequent frame images can multiplex it.
In one possible design, before the application issues the first instruction stream, the method further includes: the application program issues a second instruction stream, where the second instruction stream is used to instruct the electronic device to perform a rendering operation of a second frame image, and the second frame image includes a second main scene and the first semi-transparent particles. Based on this scheme, the application program can issue instructions to render the second frame image before the multiplexing of the semi-transparent particles is performed. The instruction stream of the second frame image may include a corresponding main scene rendering instruction and a rendering instruction of the semi-transparent particles. In the display of multiple consecutive frame images, each frame image may include semi-transparent particles, so the scheme provided by the embodiment of the application can be used to multiplex the semi-transparent particle rendering results.
In one possible design, the second instruction stream includes a third instruction segment and a fourth instruction segment, where the third instruction segment is used to instruct the electronic device to render the second main scene to obtain a third rendering result, and the fourth instruction segment is used to instruct the electronic device to render the first semi-transparent particles in the second frame image. After the application issues the second instruction stream, the method further includes: the electronic device performs rendering according to the third instruction segment to obtain the third rendering result, and the electronic device acquires the fourth rendering result according to the fourth instruction segment. Similar to the instruction stream corresponding to the first frame image, the instruction streams of other frame images may also include rendering instructions for the main scene and the semi-transparent particles. In this example, the second frame image may be a frame image for which no semi-transparent particle multiplexing is performed. For example, the second frame image may be the 1st frame image after the application program starts to run; at that point, no other frame image has been rendered, and therefore there is no semi-transparent particle rendering result that can be multiplexed. Alternatively, the second frame image may be a frame image that does not meet the requirement of the preset rule. For example, compared with the frame image corresponding to the stored rendering result of the semi-transparent particles, the position of the semi-transparent particles in the second frame image may differ significantly, so that good multiplexing cannot be performed. Then, the semi-transparent particles can be rendered again in the second frame image, so that a new rendering result is obtained for multiplexing by subsequent frame images.
In one possible design, the method further includes: the electronic device creates the first frame buffer. The electronic device obtaining the fourth rendering result according to the fourth instruction segment includes: the electronic device replaces the frame buffer indicated by the fourth instruction segment with the first frame buffer to obtain a fifth instruction segment; the electronic device performs the rendering operation of the fifth instruction segment to obtain the second rendering result of the first semi-transparent particles, and stores the second rendering result in the first frame buffer. Based on this scheme, a specific scheme for backing up and storing the semi-transparent particle rendering result is provided. In the second frame image, the semi-transparent particle rendering result may be stored in the corresponding first frame buffer by replacing the frame buffer ID. The first frame buffer may be newly created based on the present scheme, and the electronic device may then multiplex the data in the newly created frame buffer during subsequent rendering.
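As a concrete illustration of the frame buffer creation and replacement described above, the following is a minimal sketch assuming an OpenGL ES 3.0 environment; all names (ParticleFbo, createParticleFbo, redirectBind) are illustrative and not part of the patent.

```cpp
#include <GLES3/gl3.h>

struct ParticleFbo {
    GLuint fbo = 0;    // the "first frame buffer" that caches the particle result
    GLuint color = 0;  // its color attachment
};

// Create the dedicated frame buffer (the role of the creation module).
ParticleFbo createParticleFbo(GLsizei width, GLsizei height) {
    ParticleFbo p;
    glGenTextures(1, &p.color);
    glBindTexture(GL_TEXTURE_2D, p.color);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &p.fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, p.fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, p.color, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return p;
}

// Redirect an intercepted bind (the role of the replacement module): while the
// particle instruction segment is being replayed, bind the new frame buffer
// instead of the one the application asked for.
void redirectBind(GLuint requestedFbo, const ParticleFbo& p, bool insideParticleSegment) {
    glBindFramebuffer(GL_FRAMEBUFFER, insideParticleSegment ? p.fbo : requestedFbo);
}
```

In this sketch, applying redirectBind() only to the binds inside the intercepted fourth instruction segment corresponds to obtaining the fifth instruction segment pointing to the first frame buffer.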
In one possible design, the electronic device determines the second instruction segment according to a preset beginning instruction and ending instruction in the first instruction stream. Based on this scheme, a specific way of determining the rendering instruction stream of the semi-transparent particles is provided. For example, the beginning instruction may be a glEnable() or glEnablei() instruction. As another example, the ending instruction may be a glDisable() or glDisablei() instruction. By identifying the second instruction segment, the electronic device can identify the rendering instruction stream of the semi-transparent particles. In other implementations of the application, the ending instruction may also be glDiscardFramebufferEXT().
In one possible design, the electronic device determines the fourth instruction segment according to a preset beginning instruction and ending instruction in the second instruction stream. Based on this scheme, a scheme for determining the rendering instruction stream of the semi-transparent particles is provided for the second instruction stream. For example, the beginning instruction may be a glEnable() or glEnablei() instruction. As another example, the ending instruction may be a glDisable() or glDisablei() instruction. In other implementations of the application, the ending instruction may also be glDiscardFramebufferEXT().
In one possible design, the electronic device is provided with an interception module, a creation module, and a replacement module, and the method includes: the interception module is used for intercepting the fourth instruction segment. The creation module is configured to create the first frame buffer. The replacing module is configured to replace a frame buffer ID in a fourth instruction segment according to an Identification (ID) of the first frame buffer and the intercepted fourth instruction segment, so as to obtain a fifth instruction segment pointing to the first frame buffer. A Graphics Processor (GPU) of the electronic device performs rendering of the first semi-transparent particles according to the fifth instruction segment and stores the acquired second rendering result in the first frame buffer. Based on the scheme, a specific software division in the electronic equipment is provided, and the multiplexing scheme of the semitransparent particles is realized through interaction of the modules.
In one possible design, the electronic device further includes a merging module, and the method further includes: the merging module is used to instruct the GPU to merge the second rendering result and the third rendering result so as to obtain the rendering result of the second frame image. Based on this scheme, in the present application, the rendering result of the semi-transparent particles and the rendering result of the main scene can be obtained by separate rendering, and the GPU can then be instructed, through the instruction issued by the merging module, to merge the two rendering results, thereby obtaining the complete rendering result.
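The merging step can be pictured with the sketch below, which assumes the main scene result is in mainSceneFbo and the cached semi-transparent particle result is available as a colour texture (e.g. the attachment created in the sketch above); drawTexturedQuad() is a hypothetical helper for a full-screen textured draw and is not an OpenGL ES function.

```cpp
#include <GLES3/gl3.h>

void drawTexturedQuad();   // assumed helper: draws a full-screen quad sampling texture unit 0

void composeFrame(GLuint mainSceneFbo, GLuint particleColorTex) {
    glBindFramebuffer(GL_FRAMEBUFFER, mainSceneFbo);    // merge onto the main scene result
    glEnable(GL_BLEND);                                  // keep the translucency of the particles
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, particleColorTex);      // cached particle rendering result
    drawTexturedQuad();
    glDisable(GL_BLEND);
}
```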
In one possible design, the method further includes: determining the frame buffer ID of the main scene according to the rendering process of a third frame image, where the frame buffer of the main scene is the frame buffer with the largest number of drawing commands (drawcalls) in the rendering process of the third frame image. Based on this scheme, a way of determining the main scene is provided. After the frame buffer ID of the main scene is determined, subsequent frame images may use this frame buffer for rendering their main scenes, so that the electronic device can identify the instruction stream that renders the main scene in the subsequent frame images and can obtain the complete rendering result of the corresponding frame image by combining the data in the frame buffer corresponding to the main scene with the rendering result of the semi-transparent particles.
In one possible design, a counter is provided in the electronic device, and the counter is incremented by 1 every time the electronic device performs the rendering of a frame image. Before the electronic device synthesizes the first rendering result and the second rendering result to obtain the first frame image, the method further includes: the electronic device determines that, when the first frame image is rendered, the value of the counter meets a preset rule. Based on this scheme, a multiplexing scheme for the semi-transparent particles is provided. In the rendering process of multiple frame images, the multiplexing of the semi-transparent particles may be performed for some of the frame images, for example, multiplexing once every other frame. In this way, the overhead of semi-transparent particle rendering in part of the frame images can be saved, and at the same time the semi-transparent particles can be updated in time through the scheme of multiplexing once every other frame, so that the multiplexing effect is accurate and reasonable.
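The counter-based decision can be illustrated by the hypothetical sketch below; the class name is illustrative, and the even-value rule follows the preset rule discussed in the designs that follow.

```cpp
struct FrameCounter {
    unsigned value = 0;

    // Called once per frame: increments the counter and reports whether the
    // cached particle result should be multiplexed for this frame.
    bool shouldMultiplexParticles() {
        ++value;
        return (value % 2) == 0;   // preset rule: multiplex on even counter values
    }
};
```

Under this sketch, the 1st frame drives the counter to 1 (odd, no multiplexing) and the 2nd frame to 2 (even, multiplexing), matching the every-other-frame behaviour described above.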
In one possible design, if, when the electronic device determines that the first frame image is rendered, the value of the counter does not meet the preset rule, the method further includes: the electronic device creates the first frame buffer, replaces the frame buffer pointed to by the instruction segment in the first instruction stream for instructing the rendering of the first semi-transparent particles with the first frame buffer, performs the rendering of the first semi-transparent particles, and stores the rendering result in the first frame buffer. Based on this scheme, for frame images that do not need to be multiplexed, the electronic device can perform normal rendering according to the native logic of the instructions, thereby updating the rendering result of the semi-transparent particles along with the rendering of the frame image.
In one possible design, the preset rule is that the value of the counter is even. Based on this, a scheme of the preset rule is provided, thereby achieving the effect of multiplexing every other frame. For example, multiplexing is performed starting from the 2nd frame. In addition, since odd frames are not multiplexed under this setting, the 1st frame image is not multiplexed, which avoids a multiplexing failure caused by the absence of any previously rendered semi-transparent particle result.
In one possible design, before the electronic device synthesizes the first rendering result and the second rendering result to obtain the first frame image, the method further includes: the electronic device determines that the change in viewing angle when the first frame image is rendered is smaller than a preset viewing-angle threshold. Based on this scheme, the multiplexing effect can be made more accurate through an additional judging mechanism before multiplexing is performed. For example, when the change in viewing angle is small, it indicates that the positions of the semi-transparent particles in the two frame images are relatively close, thereby ensuring the accuracy of the subsequent multiplexing effect.
In one possible design, the electronic device determines the change in viewing angle based on the model view projection (MVP) matrix of the first frame image and the MVP matrix of a second frame image that is rendered earlier than the first frame image. Based on this scheme, a specific scheme for determining the change in viewing angle is provided. The MVP matrix of the current frame image (e.g., the first frame image) may be determined from the instruction stream issued by the application program. The MVP matrix of the second frame image may be cached in the electronic device during the rendering of the second frame image. In the present application, when updating the semi-transparent particle rendering, the electronic device can also update the cached MVP matrix of the corresponding frame image at the same time.
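The patent does not fix the metric used to compare the two MVP matrices, so the sketch below is only one possible interpretation: an element-wise comparison against a threshold, returning true when the particles should be re-rendered instead of multiplexed.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstddef>

using Mat4 = std::array<float, 16>;   // a 4x4 MVP matrix stored as 16 floats

// Returns true when the view change between the two frames is too large to
// multiplex the cached particle result.
bool viewChangeExceedsThreshold(const Mat4& mvpCurrent, const Mat4& mvpPrevious,
                                float threshold) {
    float maxDiff = 0.0f;
    for (std::size_t i = 0; i < 16; ++i) {
        maxDiff = std::max(maxDiff, std::fabs(mvpCurrent[i] - mvpPrevious[i]));
    }
    return maxDiff > threshold;
}
```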
In one possible design, in the case where the change in viewing angle when the first frame image is rendered is greater than the preset viewing-angle threshold, the method further includes: the electronic device creates the first frame buffer, replaces the frame buffer pointed to by the instruction segment in the first instruction stream for instructing the rendering of the first semi-transparent particles with the first frame buffer, performs the rendering of the first semi-transparent particles, and stores the rendering result in the first frame buffer. Based on this scheme, when the change in viewing angle is large, the multiplexing of the semi-transparent particles may not be performed; instead, the semi-transparent particles are rendered directly.
In one possible design, the method further includes: the electronic device combines the first rendering result and the rendering result in the first frame buffer to obtain the rendering result of the first frame image. Based on this scheme, the rendering result of the corresponding frame image can be obtained through a combining instruction.
In a second aspect, an electronic device is provided, the electronic device comprising one or more processors and one or more memories; one or more memories coupled to the one or more processors, the one or more memories storing computer instructions; the computer instructions, when executed by one or more processors, cause the electronic device to perform the image rendering method of the first aspect and any of the various possible designs described above.
In a third aspect, a chip system is provided, the chip system comprising an interface circuit and a processor; the interface circuit and the processor are interconnected through a circuit; the interface circuit is used for receiving signals from the memory and sending signals to the processor, and the signals comprise computer instructions stored in the memory; when the processor executes the computer instructions, the chip system performs the image rendering method as described above in the first aspect and any of various possible designs.
In a fourth aspect, there is provided a computer readable storage medium comprising computer instructions which, when executed, perform the image rendering method of the first aspect and any of the various possible designs described above.
In a fifth aspect, a computer program product is provided, the computer program product including instructions that, when run on a computer, enable the computer to perform the image rendering method according to the first aspect and any of the various possible designs described above.
It should be appreciated that the technical features of the technical solutions provided in the second aspect, the third aspect, the fourth aspect, and the fifth aspect may all correspond to the image rendering method provided in the first aspect and the possible designs thereof, so that the advantages that can be achieved are similar, and are not repeated herein.
Drawings
FIG. 1 is a schematic illustration of a translucent particle;
FIG. 2 is a schematic diagram of a rendering process;
FIG. 3 is a schematic view of semitransparent particle rendering of a multi-frame image;
FIG. 4 is a schematic view of semitransparent particle rendering of a multi-frame image according to an embodiment of the present application;
fig. 5 is a schematic software division diagram of an electronic device according to an embodiment of the present application;
FIG. 6 is a schematic diagram of module interaction of image rendering according to an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating module interaction for still another image rendering according to an embodiment of the present application;
FIG. 8 is a schematic diagram illustrating module interaction for still another image rendering according to an embodiment of the present application;
fig. 9 is a flowchart of an image rendering method according to an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating another module interaction for image rendering according to an embodiment of the present application;
FIG. 11 is a schematic diagram illustrating module interaction for still another image rendering according to an embodiment of the present application;
FIG. 12 is a flowchart of another image rendering method according to an embodiment of the present application;
fig. 13 is a flowchart of another image rendering method according to an embodiment of the present application;
FIG. 14 is a flowchart of another image rendering method according to an embodiment of the present application;
FIG. 15 is a schematic diagram of a coordinate transformation;
FIG. 16 is a schematic view of a reference line of sight provided by an embodiment of the present application;
FIG. 17 is a flowchart of another image rendering method according to an embodiment of the present application;
fig. 18 is a schematic diagram of an electronic device according to an embodiment of the present application;
fig. 19 is a schematic diagram of a system-on-chip according to an embodiment of the present application.
Detailed Description
The electronic device may present images to the user through a display screen provided on it. In some scenarios, semi-transparent particles may be included in an image, where the semi-transparent particles appear translucent in the image. For example, take an image displayed by the electronic device in a game scene. In shooting-type games, the images may include semi-transparent particles such as smoke, fire, and spray. By adding the rendering special effect of semi-transparent particles to the image, the electronic device can increase the realism of the displayed scene and improve the user experience. For example, as shown in fig. 1, region A may include the rendering effect of semi-transparent particles corresponding to spray. It can be seen that the spray in region A appears translucent. Therefore, in the frame image, the user can see the semi-transparent spray and can also see the scenery behind it through the spray, achieving the effect of simulating a real visual experience. As another example, region B may include the rendering effect of semi-transparent particles corresponding to smoke. It can be seen that the smoke in region B appears translucent, and the transparency of the smoke may be lower than that of the spray in region A. Therefore, by displaying the frame image, the user obtains a simulated real visual experience.
In order to acquire image data for display, before the electronic device displays the image, the electronic device may perform image rendering according to a rendering instruction stream issued by an application program (such as a game application), so as to acquire the image data for display.
In connection with fig. 2, the game application may issue a rendering instruction stream when rendering a frame image. The central processing unit (Central Processing Unit, CPU) may call an interface in the graphics library according to the rendering instruction stream, so as to instruct the graphics processing unit (Graphics Processing Unit, GPU) to perform the corresponding rendering operation. The rendering result produced by the GPU performing the rendering operation may be stored in the electronic device, and after the rendering corresponding to the subsequent rendering instruction stream is completed, the send-display data may be obtained. The electronic device may display the frame image on a display screen according to the send-display data.
In some scenarios, if semi-transparent particles are included in the current frame image, the instruction stream issued by the game application may include an instruction stream A indicating that semi-transparent particle rendering is to be performed. Correspondingly, the electronic device may also implement rendering of the corresponding semi-transparent particles through the flow shown in fig. 2.
It should be appreciated that semi-transparent particles do not exist in isolation in a certain frame image; rather, the same or similar semi-transparent particles exist in adjacent frame images, thereby achieving continuity of the semi-transparent particle display. That is, during the rendering of adjacent multi-frame images, the game application will issue instruction streams similar to instruction stream A for rendering the same or similar semi-transparent particles. For example, as shown in fig. 3, during the rendering of the 1st frame image, the game application issues an instruction stream A instructing the electronic device to render semi-transparent particles (e.g., the spray in region A shown in fig. 1). Correspondingly, the CPU, the graphics library, and the GPU perform the rendering of the spray in region A according to the flow shown in fig. 2, so as to obtain the rendering result corresponding to the spray shown in region A in fig. 1. The next, 2nd frame image also includes spray similar to that of the 1st frame image. Thus, the rendering instruction stream of the 2nd frame image may include instruction stream A (or an instruction stream similar to instruction stream A) so as to instruct the electronic device to render the spray. Correspondingly, the CPU, the graphics library, and the GPU still perform the rendering of the spray according to the flow shown in fig. 2.
It can be seen that the rendering process of the semi-transparent particles (e.g., the spray) corresponding to instruction stream A is repeatedly performed by the CPU, the graphics library, and the GPU across multiple frame images, and the results obtained are substantially the same. The rendering process of semi-transparent particles is relatively complex. This causes redundant overhead for semi-transparent particle rendering during multi-frame image rendering, which in turn leads to power consumption and wasted computing power during image rendering, as well as heating of the electronic device and stuttering and frame loss in its display.
In order to solve the above problems, an embodiment of the present application provides a rendering method for semi-transparent particles in an image, which can reasonably multiplex the rendering results of the semi-transparent particles. This effectively reduces the redundant overhead of semi-transparent particle rendering during multi-frame image rendering, and further reduces the power consumption and wasted computing power caused by that overhead.
For example, in conjunction with fig. 4, according to the scheme provided by the embodiment of the present application, the rendering result of the semi-transparent particles may be stored in a preset position during the rendering of the previous frame image (such as the 1st frame image). In this way, in the process of rendering the next frame image (such as the 2nd frame image), after receiving the instruction stream A issued by the game application for rendering the semi-transparent particles, the CPU can directly return the instruction stream. That is, in the rendering process of the 2nd frame image, the semi-transparent particles do not need to be rendered again; when the rendering result of the semi-transparent particles needs to be used, the electronic device can multiplex the rendering result of the semi-transparent particles from the 1st frame image, for example by reading it from the preset position. In this way, repeated execution of the same or similar semi-transparent particle rendering process across multiple frame images is avoided, and the rendering overhead for semi-transparent particles is reduced.
The following describes the scheme provided by the embodiment of the application in detail with reference to the accompanying drawings.
It should be noted that the image rendering method provided by the embodiment of the application can be applied to an electronic device of a user. For example, the electronic device may be a mobile phone, a tablet computer, a personal digital assistant (personal digital assistant, PDA), an augmented reality (augmented reality, AR) device, a virtual reality (virtual reality, VR) device, a media player, or the like, or a wearable electronic device, such as a smart watch, that can provide display capabilities. The embodiment of the application does not limit the specific form of the device.
By way of example, in some embodiments, from a hardware component perspective, an electronic device according to embodiments of the present application may include a processor, an external memory interface, an internal memory, a universal serial bus (universal serial bus, USB) interface, a charge management module, a power management module, a battery, an antenna 1, an antenna 2, a mobile communication module, a wireless communication module, an audio module, a speaker, a receiver, a microphone, an earphone interface, a sensor module, a key, a motor, an indicator, a camera, a display screen, a subscriber identity module (subscriber identification module, SIM) card interface, and the like. The sensor module may include, among other things, a pressure sensor, a gyroscope sensor, a barometric sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc.
The above hardware components do not constitute a specific limitation on the electronic device. In other embodiments, the electronic device may include more or fewer components, or certain components may be combined, or certain components may be split, or different arrangements of components.
In other embodiments, the electronic device according to the embodiments of the present application may also have a software partition. Take, as an example, an electronic device running the Android operating system. The Android operating system may have a layered software partition.
Fig. 5 is a schematic diagram of software components of an electronic device according to an embodiment of the present application. As shown in fig. 5, the electronic device may include an Application (APP) layer, a Framework (Framework) layer, a system library, and a HardWare (HardWare) layer, etc.
The application layer may also be referred to as an application program layer. In some implementations, the application layer can include a series of application packages. The application packages may include applications such as camera, gallery, calendar, calls, map, navigation, WLAN, Bluetooth, music, video, and short messages. In embodiments of the present application, the application packages may also include applications that need to present images or video to a user by rendering images. Video can be understood as the continuous playback of multiple frame images, which may include frame images with semi-transparent particles. Such applications may include, for example, game applications.
The framework layer may also be referred to as an application framework layer. The framework layer may provide an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The framework layer includes some predefined functions. By way of example, the framework layer may include a window manager, a content provider, a view system, a resource manager, a notification manager, an activity manager, an input manager, and the like. The window manager provides window management services (Window Manager Service, WMS) that may be used for window management, window animation management, surface management, and as a transfer station to the input system. The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc. The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture. The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like. The notification manager allows the application to display notification information in a status bar, can be used to communicate notification type messages, can automatically disappear after a short dwell, and does not require user interaction. Such as notification manager is used to inform that the download is complete, message alerts, etc. The notification manager may also be a notification in the form of a chart or scroll bar text that appears on the system top status bar, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, a text message is prompted in a status bar, a prompt tone is emitted, the electronic device vibrates, and an indicator light blinks, etc. The activity manager may provide activity management services (Activity Manager Service, AMS) that may be used for system component (e.g., activity, service, content provider, broadcast receiver) start-up, handoff, scheduling, and application process management and scheduling tasks. The input manager may provide input management services (Input Manager Service, IMS), which may be used to manage inputs to the system, such as touch screen inputs, key inputs, sensor inputs, and the like. The IMS retrieves events from the input device node and distributes the events to the appropriate windows through interactions with the WMS.
In the embodiment of the present application, one or more functional modules may be disposed in the framework layer so as to implement the solution provided in the embodiment of the present application. Illustratively, the framework layer may include an interception module, a creation module, a replacement module, a synthesis module, and the like. The interception module can be used for intercepting related instructions. The creation module may be configured to create a new frame buffer (Frame Buffer, FB), where the new frame buffer may correspond to the preset position shown in fig. 4. The replacement module may be configured to replace the frame buffer bound in the original instruction stream with the newly created frame buffer, so that the rendering result of the semi-transparent particles can be stored in the newly created frame buffer for subsequent multiplexing. The synthesis module may be configured to combine the semi-transparent particles stored in the newly created frame buffer with the main scene, thereby obtaining a complete rendering result.
The system library may comprise a graphics library. In different implementations, the graphics library may include at least one of: open graphics library (Open Graphics Library, openGL), open graphics library of embedded system (OpenGL for Embedded Systems, openGL ES), vulkan, etc. In some embodiments, other modules may also be included in the system library. For example: surface manager (surface manager), media Framework (Media Framework), standard C library (Standard C library, libc), SQLite, webkit, etc.
Wherein the surface manager is configured to manage the display subsystem and provide a fusion of two-dimensional (2D) and three-dimensional (3D) layers for the plurality of applications. Media frames support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio video encoding formats, such as: motion picture expert group 4 (Moving Pictures Experts Group, MPEG 4), h.264, motion picture expert compression standard audio layer3 (Moving Picture Experts Group Audio Layer, MP 3), advanced audio coding (Advanced Audio Coding, AAC), adaptive Multi-Rate (AMR), joint photographic expert group (Joint Photographic Experts Group, JPEG, or JPG), portable network graphics (Portable Network Graphics, PNG), and the like. OpenGL ES and/or Vulkan provide drawing and manipulation of 2D graphics and 3D graphics in applications. SQLite provides a lightweight relational database for applications of the electronic device 400.
In the example of fig. 5, a hardware layer may also be included in the electronic device. The hardware layer may include a CPU, a GPU, and a memory with a memory function. In some implementations, the CPU may be configured to control each module in the framework layer to implement its respective function, and the GPU may be configured to perform a corresponding rendering process according to an API in a graphics library (e.g., openGL ES) called by an instruction processed by each module in the framework layer.
In the following description, a scheme provided by the embodiment of the present application will be described in detail with reference to software partitioning as shown in fig. 5.
According to the rendering scheme provided by the embodiment of the application, the rendering result of the semi-transparent particles can be pre-stored in the newly created frame buffer during the rendering of the Nth frame image. In the process of rendering subsequent frame images, the electronic device can directly return the rendering instruction stream of the semi-transparent particles issued by the application program (such as the game application) without executing it again, and directly multiplex the rendering result of the semi-transparent particles in the newly created frame buffer.
In the following description, the n+1st frame image is taken as an example of multiplexing the semi-transparent particles rendered in the nth frame image. In the present application, the n+1st frame image may correspond to the first frame image and to the first instruction stream. The nth frame image may correspond to the second frame image and to the second instruction stream.
For easy understanding, the following first takes a game application as the application program and briefly describes the composition of the instruction stream during game running and the related concepts (such as the main scene) involved in the scheme provided by the embodiment of the present application.
It will be appreciated that, after the game application is run, an instruction stream including a plurality of instructions is issued to the electronic device when an nth frame image needs to be displayed. The instruction stream may include an instruction stream 12 for instructing the electronic device to render the semi-transparent particles; an instruction stream 11 for instructing the electronic device to render the main scene; an instruction stream 14 for instructing the electronic device to merge the semi-transparent particles with the main scene of the nth frame onto the same map; and the like. The instruction stream 12 may correspond to the fourth instruction segment of the second instruction stream, and the instruction stream 11 may correspond to the third instruction segment of the second instruction stream.
Similar to the nth frame image, the instruction stream issued by the game application may also include a plurality of instructions in the rendering of the other images. Take the n+1st frame image as an example. The instruction stream issued by the game application may include: an instruction stream 22 for instructing the electronic device to render the semi-transparent particles, an instruction stream 21 for instructing the electronic device to render the main scene, and an instruction stream 24 for instructing the electronic device to merge the semi-transparent particles with the main scene of the n+1st frame (e.g. main scene 21) onto the same map, etc. Wherein the instruction stream 22 may correspond to a second instruction segment of the first instruction stream. The instruction stream 21 may correspond to a first instruction segment of a first instruction stream.
The main scene may correspond to the render pass (renderpass) with the largest number of drawing instructions (drawcalls) in the rendering process of the current frame image. The rendering of one frame image may include multiple renderpasses. The rendering result of each renderpass may be stored in a frame buffer. Each renderpass may include multiple drawcalls. The more drawcalls are executed, the richer the content of the map obtained after the corresponding renderpass is completed.
In general, the main scene of different frame images may be different. For example, the main scene of the nth frame image may be the main scene 11, and the main scene in the nth frame image may also be referred to as a second main scene. The main scene of the n+1st frame image may be the main scene 21. The main scene in the n+1st frame image may also be referred to as a first main scene.
It should be noted that, for a fixed game scene, the renderpass corresponding to the main scene (i.e., the frame buffer corresponding to the main scene) generally does not change. That is, the frame buffer of the renderpass that renders the main scene is the same for several consecutive frame images. Therefore, in the embodiment of the application, the main scene of the next several frame images can be determined according to the number of drawcalls included in each renderpass during the rendering of the N-1th frame image. For example, after the rendering of the N-1th frame image is completed, the electronic device may determine that the number of drawcalls performed on frame buffer FB0 (i.e., the frame buffer with ID 0) is the largest. The frame buffer corresponding to the main scene of the subsequent frame images may then be determined to be FB0. In the present application, FB0 may also be referred to as frame buffer 11.
In other implementations of the application, the confirmation of the main scene may also be performed in real time. For example, after executing all the rendering instructions of the nth frame image, the electronic device may take the frame buffer with the largest number of drawcalls in the rendering process of the nth frame image as the frame buffer of the main scene. Similarly, for other frame images, such as the n+1st frame image, the electronic device may also confirm and update the main scene frame buffer in real time.
In the present application, the process of determining the main scene may be completed before performing the rendering of the nth frame image. In some embodiments, after the game starts to run and finishes loading, the electronic device may determine a main scene in the subsequent frame image rendering process according to a rendering instruction of the first frame image issued by the game application, or determine a main scene in the subsequent frame image rendering process according to a preset rendering instruction of the mth frame image. Wherein, the mth frame image may be a frame image before the nth frame image. In some implementations, M may be greater than 1, which may ensure that the main scene is determined after the game is running steadily.
In addition, the step of determining the main scene may be performed only once during the running of the game, and the subsequent frame images may each be rendered based on the determination result. In other embodiments, the step of determining the main scene may be performed cyclically according to a preset period to update the frame buffer information (e.g., the frame buffer ID) of the main scene. In other embodiments, the step of determining the main scene may be triggered and executed according to the real-time load condition of the electronic device. For example, for game applications, the load of the electronic device may change significantly when the main scene is switched. Then, when the load change of the electronic device exceeds a preset load threshold, the step of determining the main scene is triggered, and the buffer information of the main scene is updated. In the subsequent frame image rendering process, related operations may be performed according to the updated buffer information of the main scene.
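A possible, purely illustrative realisation of the main-scene determination described above is to count, per frame buffer, the draw calls replayed during one frame and pick the frame buffer with the largest count; the class and callback names below are assumptions, not part of the patent.

```cpp
#include <GLES3/gl3.h>
#include <cstddef>
#include <unordered_map>

class MainSceneDetector {
public:
    void onBindFramebuffer(GLuint fbo) { current_ = fbo; }   // called for each intercepted bind
    void onDrawCall() { ++drawcallsPerFbo_[current_]; }      // called for each intercepted drawcall

    // Called once the frame is finished: the frame buffer with the most
    // drawcalls (e.g. FB0 in the example above) is treated as the main scene.
    GLuint mainSceneFbo() const {
        GLuint best = 0;
        std::size_t bestCount = 0;
        for (const auto& entry : drawcallsPerFbo_) {
            if (entry.second > bestCount) { bestCount = entry.second; best = entry.first; }
        }
        return best;
    }

private:
    GLuint current_ = 0;
    std::unordered_map<GLuint, std::size_t> drawcallsPerFbo_;
};
```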
In the embodiment of the present application, taking an example in the rendering process of the nth frame image, the electronic device may respond to the instruction stream 11 issued by the game application to perform rendering of the main scene.
For example, in connection with fig. 6, the game application may issue an instruction stream 11 for instructing the electronic device to render the main scene. In this example, the interception module may be configured to intercept the instruction stream related to the semi-transparent particles. For other instruction streams, the interception module may call these instruction streams back to the graphics library to instruct the GPU to perform the corresponding rendering operations, thereby avoiding errors in the rendering process. The scheme by which the interception module identifies whether an instruction stream from the game application is related to semi-transparent particle rendering will be set forth in detail in the description of fig. 7 below. For example, the instruction stream related to semi-transparent particle rendering may begin with a fixed instruction and end with a fixed instruction. The interception module may intercept the instruction stream related to the semi-transparent particles by identifying the fixed beginning instruction and the fixed ending instruction.
As shown in fig. 6, for the instruction stream 11, the interception module may directly call back the instruction stream 11 to the graphics library, thereby calling the corresponding API and instructing the GPU to perform the rendering of the main scene. It will be appreciated that, before the rendering of the nth frame image starts, the electronic device may determine, according to the foregoing scheme, that the frame buffer corresponding to the main scene is frame buffer 11 (e.g., FB0). Thus, after the rendering on FB0 is completed, the rendering of the main scene in the nth frame image is completed. The electronic device can thus acquire the rendering result of the main scene in the nth frame image, that is, the main scene 11 stored in frame buffer 11.
In connection with the foregoing description of the instruction stream of the nth frame image, the game application may also issue an instruction stream 12 indicating that semi-transparent particle rendering is to be performed.
Fig. 7 illustrates an exemplary rendering process of semitransparent particles in an nth frame image according to an embodiment of the present application.
As shown in fig. 7, during the rendering of the nth frame image, the game application may issue an instruction stream 12 instructing the electronic device to render the semi-transparent particles. In the present application, the instruction stream 12 may be the instruction segment, within the instruction stream issued by the game application, that starts with a glEnable instruction and ends with a glDisable instruction. The glEnable instruction may correspond to the beginning instruction in the foregoing description, and the glDisable instruction may correspond to the ending instruction in the foregoing description.
It should be understood that, in the rendering process of the nth frame image, OpenGL is taken as an example of the rendering environment. Since OpenGL is a state machine, the corresponding rendering state needs to be changed when rendering semi-transparent particles. In addition, in order to render different levels of transparency, the color blending state needs to be enabled. In this example, the game application may enable the color blending state via a glEnable instruction, i.e., instruct the electronic device to begin rendering the semi-transparent particles via the glEnable instruction.
Illustratively, table 1 below shows one illustration of the beginning of instruction stream 12 in this example.
TABLE 1
Instruction ID (EID)    Instruction content (Event)
3245                    glEnablei(GL_BLEND, 0)
3246                    glBlendFuncSeparate(GL_DST_COLOR, GL_NONE, GL_NONE, GL_LINES)
3247                    glViewport(0, 0, 1480, 720)
In the example of Table 1, the game application instructs the electronic device to enable the color blending state through the issued instruction with ID 3245, glEnablei(GL_BLEND, 0), i.e., a glEnablei instruction. Then, in the subsequent instructions, the game application may instruct the electronic device to perform the operations corresponding to semi-transparent particle rendering through different instructions. For example, the game application may set the blending factors via the glBlendFuncSeparate instruction with ID 3246, and may set the viewport parameters via the glViewport instruction with ID 3247.
In the example of Table 1, the enabling instruction is a glEnablei instruction. In various implementations of the application, the glEnablei instruction functions similarly to the glEnable instruction. The difference is that, when the instruction issued by the game application addresses data in an indexed manner, an index parameter (e.g., the 0 carried in the glEnablei instruction in Table 1) is added to the instruction, and the glEnablei instruction is used for enabling. In contrast, when the instruction issued by the game application does not address data in an indexed manner, no index needs to be carried, and the glEnable instruction is used for enabling. In the following examples, enabling using the glEnable instruction is taken as an example.
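For contrast, the two enabling variants mentioned above can be written side by side; this is an illustrative sketch assuming an OpenGL ES 3.2 (or desktop OpenGL) context in which the indexed forms are available.

```cpp
#include <GLES3/gl32.h>

void enableBlendNonIndexed() { glEnable(GL_BLEND); }       // no index parameter
void enableBlendIndexed()    { glEnablei(GL_BLEND, 0); }   // index 0, as in Table 1
void disableBlendIndexed()   { glDisablei(GL_BLEND, 0); }  // matching indexed disable
```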
One or more specific instructions may follow each instruction in Table 1. For example, multiple drawing instructions (DrawElements) may be included. By way of example, Table 2 below shows specific DrawElements included in the instruction stream.
TABLE 2
6101 glDrawElements(1020)
6142 glDrawElements(231)
6161 glDrawElements(231)
6162 glDiscardFramebufferEXT(Framebuffer 557)
As shown in Table 2, the game application may instruct the electronic device to perform the corresponding drawing operations through the instructions with IDs 6101, 6142, and 6161 in order. After all the DrawElements have been executed, the complete rendering of the semi-transparent particles is finished.
In the example of table 2, after the rendering instructions for all semi-transparent particles have been issued, the game application may call the glDiscardFramebufferEXT() function. For example, the gaming application may issue glDiscardFramebufferEXT(Framebuffer 557) with ID 6162, as shown in table 2, after the last DrawElements is issued. Under the Tile-Based Deferred Rendering (TBDR) architecture used at the bottom layer on mobile devices, this interface notifies the graphics driver that the contents of the specified frame buffer of the current frame are no longer needed by the next frame, thereby reducing the bandwidth consumed by synchronizing the tile (Tile) data of the current frame with the video memory and synchronizing it again for the next frame. In this example, the glDiscardFramebufferEXT() function may be used to indicate that the rendering instructions of the semi-transparent particles have been completely issued.
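For reference, a minimal sketch of such a call under the EXT_discard_framebuffer extension is shown below; the attachment list is illustrative rather than taken from the game. On OpenGL ES 3.0 and later, glInvalidateFramebuffer provides the equivalent core functionality.

#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

// Tell the driver that the color and depth contents of the currently bound
// framebuffer are no longer needed, so tiles need not be written back to memory.
void discardCurrentFramebuffer() {
    const GLenum attachments[] = { GL_COLOR_ATTACHMENT0, GL_DEPTH_ATTACHMENT };
    glDiscardFramebufferEXT(GL_FRAMEBUFFER, 2, attachments);
}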
After completing the issuing of glDiscardFramebufferEXT(), and before issuing the next DrawElements for other objects (i.e., objects other than the translucent particles), the gaming application may instruct the electronic device to close the current color mixing operation for the translucent particles by issuing a glDisable instruction.
Illustratively, table 3 below shows one illustration of the ending portion of instruction stream 12 in this example.
TABLE 3
Instruction ID (EID) Instruction content (Event)
>6167 glBindBuffer(GL_UNIFORM_BUFFER,Buffer 15245)
>6168 glBufferSubData(Buffer 15254,(96bytes))
>6169 glBindBuffer(GL_UNIFORM_BUFFER,Buffer 21484)
>6170 glBufferSubData(Buffer 21484,(48bytes))
>6171 glDisablei(GL_BLEND,0)
In the instruction stream example shown in Table 3, the game application can bind the buffer with ID 15245 through the glBindBuffer() instruction with ID 6167 and pass data into Buffer 15254 through the glBufferSubData() instruction with ID 6168. The game application may also bind the buffer with ID 21484 through the glBindBuffer() instruction with ID 6169 and pass data into Buffer 21484 through the glBufferSubData() instruction with ID 6170. With this, the issuing of the rendering instructions of the semitransparent particles of this frame image is completed. The game application may then issue the glDisablei() instruction with ID 6171, indicating that the issuing of rendering instructions for the translucent particles is complete, and closing the color mixing operation.
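The pattern in Table 3 corresponds roughly to the following calls; the buffer handle, offset, and size parameters are placeholders for illustration only, not values taken from the patent.

#include <GLES3/gl32.h>

// Upload per-draw uniform data and then close the blending state, mirroring
// instructions 6167-6171 in Table 3 (handles, offset and size are illustrative).
void finishParticlePass(GLuint ubo, const void* data, GLsizeiptr size) {
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferSubData(GL_UNIFORM_BUFFER, 0, size, data);
    glDisablei(GL_BLEND, 0);   // end of the semi-transparent particle instruction segment
}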
Similar to the previous description of the relationship between the glEnablei instruction and the glEnable instruction, when the color mixing operation is turned off, the electronic device may use the glDisablei() instruction or the glDisable() instruction depending on the data addressing mode employed. In the following examples, the electronic device turns off color mixing using the glDisable() instruction.
From the above description, it can be seen that, in the rendering instruction stream of the current frame image (e.g., the nth frame image), the instruction stream 12 starting with the glEnable () instruction and ending with the glDisable () instruction may include all the rendering instructions of the semi-transparent particles.
In the above analysis, the instruction stream 12 is identified by the glEnable () instruction as the beginning and the glDisable () instruction as the end, which is only one example. In other implementation environments (e.g., rendering environments other than OpenGL), the beginning instructions and/or ending instructions of the instruction stream 12 may also be different.
With continued reference to FIG. 7, in an embodiment of the application, the interception module may intercept the instruction stream 12 so that other modules can perform the corresponding operations. For example, the interception module may begin interception when it identifies a glEnable() instruction in the instruction stream of the Nth frame image, and continue until it identifies a glDisable() instruction in the instruction stream, thereby completing the interception of the instruction stream 12.
For instructions outside the instruction stream 12, the interception module may call them back directly to the graphics library to perform the corresponding operations. For example, as shown in the main scene rendering process of fig. 6, the interception module may directly call back the instruction stream 11 to the graphics library, so as to implement the rendering of the main scene.
In this example, the interception module may transmit the intercepted instruction stream 12 to the creation module.
Illustratively, after receiving the instruction stream 12, the creation module may create a frame buffer in the memory of the electronic device for storing the rendering result of the semitransparent particles of the Nth frame image. For example, the creation module may create a frame buffer 12 for storing the rendering result of the semi-transparent particles. The frame buffer 12 may also be referred to as a first frame buffer.
It should be appreciated that in the prior art, the instruction stream 12 issued by the game application may instruct the GPU to render the semi-transparent particles by invoking an interface in the graphics library and store the same in a corresponding frame buffer (e.g., the original frame buffer). The data stored in the original frame buffer is generally invisible to the CPU and therefore multiplexing of the content in the original frame buffer during the subsequent frame image rendering process cannot be achieved. In the embodiment of the application, by creating a new frame buffer, such as the frame buffer 12, the data (such as the rendering result of the semitransparent particles) stored in the frame buffer 12 can be continuously called by the electronic device, so that multiplexing of the data is realized.
In some embodiments of the application, the creation module may also create other frame buffers. For example, the creation module may create the frame buffer 13. The frame buffer 13 may be used to perform other rendering operations. For example, after the GPU completes rendering the main scene and the semitransparent particles, a synthesizing operation of the main scene and the semitransparent particles may be performed on the frame buffer 13.
In the example shown in fig. 7, the creation module creates the frame buffer 12 and the frame buffer 13 after receiving the instruction stream 12 sent by the interception module. In other examples of the application, the timing of creating frame buffer 12 and/or frame buffer 13 may also be different. For example, the creation module may create the frame buffer 12 and/or the frame buffer 13 in advance when the nth frame image starts rendering. The creation module may record the ID of the pre-created frame buffer 12 and/or frame buffer 13 for subsequent direct use.
It can be seen that the creation module creates the frame buffer 12 and the frame buffer 13 for performing data storage in the subsequent rendering process. The purpose of not directly using the original frame buffer is to facilitate subsequent repeated calls. Thus, in other embodiments of the present application, if there is an available frame buffer that can be invoked by the CPU, the creation module may not create a new frame buffer any more, but rather directly use the already created frame buffer.
As an example, consider the case where the Nth frame image is the 1st frame image after game loading is completed. Since the current frame image is the 1st frame image, there is generally no idle frame buffer that has already been created and can be invoked by the CPU. The creation module can therefore create the frame buffer 12 and the frame buffer 13 after receiving the instruction stream 12 sent by the interception module, according to the mechanism shown in fig. 7.
In the embodiment of the present application, in order to ensure the subsequent normal use of the frame buffer 12 and the frame buffer 13, the creation module may also perform the related processing in addition to creating the frame buffer 12 and the frame buffer 13. Such as creating a corresponding map, binding a map, etc.
The process by which the creation module creates a new frame buffer is illustrated below.
For example, take the ID of frame buffer 12 as alpha_A as an example. Then frame buffer 12 may be identified as FB (alpha_A). The creation module may implement the creation of FB (alpha_a) and related processing by the following procedure.
1. The texture map Texture_A for saving the semi-transparent particle rendering result is created by glGenTextures(1, &Texture_A).
2. FB(alpha_A) is created by glGenFramebuffers(1, &alpha_A).
3. The frame buffer is bound by glBindFramebuffer(GL_FRAMEBUFFER, alpha_A).
4. Texture_A is bound to FB(alpha_A) by glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, Texture_A, 0).
In this way, rendering of the image can be performed on the newly created FB(alpha_A). For example, the data to be rendered is rendered onto the map Texture_A and stored in the storage space corresponding to FB(alpha_A).
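A consolidated sketch of steps 1-4 is shown below, assuming an OpenGL ES 3.x context. The texture size and format are assumptions, since the patent does not specify them; they would normally match the resolution used for the semi-transparent particle pass.

#include <GLES3/gl3.h>

// Create FB(alpha_A) with Texture_A as its color attachment, following steps 1-4 above.
GLuint createParticleFramebuffer(GLuint* outTexture, GLsizei width, GLsizei height) {
    GLuint texture_A = 0, alpha_A = 0;

    glGenTextures(1, &texture_A);                        // step 1: texture map for the particle result
    glBindTexture(GL_TEXTURE_2D, texture_A);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &alpha_A);                      // step 2: create FB(alpha_A)
    glBindFramebuffer(GL_FRAMEBUFFER, alpha_A);          // step 3: bind the frame buffer
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, texture_A, 0); // step 4: attach Texture_A

    *outTexture = texture_A;
    return alpha_A;
}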
Similarly, the creation module may also perform the creation of the frame buffer 13 and the related processing. Take the ID of the frame buffer 13 as alpha_B as an example. Then the frame buffer 13 may be identified as FB(alpha_B). The creation module may implement the creation of FB(alpha_B) and the related processing by the following procedure.
1. The texture map Texture_B for saving the semi-transparent particle rendering result is created by glGenTextures(1, &Texture_B).
2. FB(alpha_B) is created by glGenFramebuffers(1, &alpha_B).
3. The frame buffer is bound by glBindFramebuffer(GL_FRAMEBUFFER, alpha_B).
4. Texture_B is bound to FB(alpha_B) by glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, Texture_B, 0).
In this way, rendering of the image can be performed on the newly created FB(alpha_B). For example, the data to be rendered is rendered onto the map Texture_B and stored in the storage space corresponding to FB(alpha_B).
In an embodiment of the application, the creation module may also be used to transmit the instruction stream 12 and the IDs of the newly created frame buffers to the replacement module. For example, continuing with the example in which the newly created frame buffer 12 is FB(alpha_A) and the frame buffer 13 is FB(alpha_B), the creation module may send the instruction stream 12, the ID of FB(alpha_A) (e.g., alpha_A), and the ID of FB(alpha_B) (e.g., alpha_B) to the replacement module.
The replacing module may be configured to replace the ID of the original frame buffer in the instruction stream 12 issued by the game application according to the ID of the newly created frame buffer, so that the subsequent relevant rendering operation may be performed on the newly created frame buffer that the CPU can call.
Illustratively, the replacement module may replace the frame buffer ID used for semi-transparent particle rendering in the instruction stream 12 with alpha_A. This allows the rendering result to be stored on the map Texture_A of FB(alpha_A) during the subsequent rendering of the translucent particles.
Take the frame buffer ID indicated in the instruction stream 12 for the rendering of the semi-transparent particles to be beta_A as an example. The replacement module may replace the instruction binding frame buffer ID beta_A in the instruction stream 12 with an instruction binding frame buffer ID alpha_A. The replacement module may also replace other instructions bound to beta_A in the instruction stream 12 with instructions bound to alpha_A, thereby obtaining an instruction stream 13 directed to FB(alpha_A). The instruction stream 13 after this replacement may also be referred to as a fifth instruction segment.
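A minimal sketch of such a replacement at the interception layer might look as follows; the hook function name, the global variables, and the way the original stream is represented are hypothetical illustrations, not part of the patent.

#include <GLES3/gl3.h>

// Hypothetical hook installed on glBindFramebuffer: whenever the game binds the
// original particle frame buffer (beta_A), redirect the binding to the newly
// created FB(alpha_A) so that subsequent draws land on the CPU-accessible buffer.
static GLuint g_beta_A  = 0;   // original frame buffer ID used by the game
static GLuint g_alpha_A = 0;   // newly created frame buffer 12

void hooked_glBindFramebuffer(GLenum target, GLuint framebuffer) {
    if (framebuffer == g_beta_A) {
        framebuffer = g_alpha_A;   // replacement performed by the replacement module
    }
    glBindFramebuffer(target, framebuffer);
}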
As shown in fig. 7, the replacement module may transmit the instruction stream 13 to the graphics library, so that the graphics library invokes the corresponding interfaces, instructing the GPU to perform the rendering of the semitransparent particles in the Nth frame image. Since the frame buffer bound in the instruction stream 13 points to the frame buffer 12 (FB(alpha_A) in the above example), the rendering result of the GPU for the semitransparent particles in the Nth frame image can be saved on the map Texture_A of FB(alpha_A).
Rendering of the semitransparent particles in the nth frame image is thus completed, and the rendering result may be stored in the frame buffer 12.
It should be noted that, in different implementations, the rendering process of the main scene as shown in fig. 6 and the rendering process of the semitransparent particles as shown in fig. 7 may be different in the sequence of the two processes. For example, in some implementations, the gaming application may first instruct the electronic device via instruction stream 11 to perform a primary scene rendering as shown in FIG. 6. Thereafter, the gaming application may instruct the electronic device via instruction stream 12 to perform semi-transparent particle rendering as shown in FIG. 7.
In the embodiment of the application, after the main scene rendering and the semitransparent particle rendering are completed, since the two rendering results are stored on different maps, the main scene and the semitransparent particle need to be combined on one map through a combining operation.
Illustratively, the main scene rendering result (e.g., main scene 11) of the nth frame image is stored in the frame buffer 11, and the rendering result of the semitransparent particles is stored in the frame buffer 12.
The composition module in the electronic device may instruct the GPU to perform a composition action of the two rendering results after rendering of the main scene and the semi-transparent particles is completed.
As a possible implementation, take the case where the electronic device first completes the rendering of the main scene as shown in fig. 6, and then completes the rendering of the semitransparent particles as shown in fig. 7. In connection with fig. 7, after the replacement module issues the instruction stream 13 to the graphics library, the GPU may sequentially complete the rendering of the semi-transparent particles. As shown in fig. 8, the replacement module may send a semitransparent particle rendering completion indication to the synthesis module after issuing the instruction stream 13 to the graphics library, so that the synthesis module knows that the semitransparent particle rendering is complete. Next, the composition module may send an instruction stream 14 to the GPU, instructing the GPU to perform the composition of the main scene rendering result and the semi-transparent particle rendering result.
It will be appreciated that in this example, when the synthesis module sends the instruction stream 14, even though the GPU has not completely completed the rendering operation of the semitransparent particles, since the instruction stream 14 arrives at the GPU later than the instruction stream 13, the execution of the instruction stream 14 will be after the instruction stream 13 in the instruction queue of the GPU, so that it can be ensured that both the main scene rendering result and the semitransparent particle rendering result are already stored in the corresponding frame buffer when the synthesis operation is performed.
In other implementations of the application, the trigger mechanism by which the synthesis module issues the instruction stream 14 may also be different from that of FIG. 8. For example, the GPU may return a semitransparent particle rendering completion indication to the composition module after completing execution of instruction stream 13. Then, in response to the semi-transparent particle rendering completion indication, the composition module may issue instruction stream 14 to instruct the GPU to perform a composition of the primary scene rendering result and the semi-transparent particle rendering result.
In response to the instruction stream 14 issued by the composition module, the GPU may read the stored data from the frame buffer 11 and from the frame buffer 12. It will be appreciated that, during the rendering of the Nth frame image, the data in the frame buffer 11 is the rendering result of the main scene of the Nth frame image (i.e. the main scene 11), and the data in the frame buffer 12 is the rendering result of the semitransparent particles of the Nth frame image. The GPU may perform this compositing operation on the frame buffer 13 created by the creation module. Illustratively, the GPU may copy the main scene 11 onto the map of the frame buffer 13, and then synthesize the rendering result of the semitransparent particles in the frame buffer 12 onto the map of the frame buffer 13, thereby completing the synthesizing operation.
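As an illustration only (the patent does not give the exact call sequence), the composition on the frame buffer 13 could be issued along the following lines, drawing a full-screen quad that samples the main-scene map and the particle map with the blending shader shown below. The function, program, and VAO names are assumptions; program and quad creation are assumed to have been done elsewhere.

#include <GLES3/gl3.h>

// Hypothetical composition pass: bind FB(alpha_B) (frame buffer 13), sample the
// main-scene map (FB0_main) and the particle map (Texture_A), and draw a quad.
void composeFrame(GLuint fbo13, GLuint program, GLuint fullscreenVao,
                  GLuint mainSceneTex, GLuint particleTex) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo13);
    glUseProgram(program);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, mainSceneTex);
    glUniform1i(glGetUniformLocation(program, "FB0_main"), 0);

    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, particleTex);
    glUniform1i(glGetUniformLocation(program, "Texture_A"), 1);

    glBindVertexArray(fullscreenVao);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);   // full-screen quad
}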
As an example, an example of an algorithm for a composition operation provided by the present application is given below.
"highp vec4 color1=texture(FB0_main,TexCoord);\n"
"highp vec4 color2=texture(Texture_A,TexCoord);\n"
"if(color1.a<0.001)\n"
"outColor.rgb=color1.rgb+color2.rgb;\n"
"else\n"
"outColor.rgb=color1.a*color1.rgb+(1.0f-color1.a)*color2.rgb;\n"
Thus, a rendering result including the main scene and the translucent particles can be obtained on the frame buffer 13. The electronic device may then continue processing the data on the frame buffer 13 according to the other instructions of the Nth frame image issued by the gaming application. For example, user interface (UI) rendering may be performed on the map of the frame buffer 13, after which the data to be sent for display is acquired. According to this display data, the Nth frame image can be displayed on the display screen.
Thus, the rendering process of the Nth frame image can be completed. It can be seen that in the embodiment of the present application, in the rendering process of the nth frame image, the rendering result of the semitransparent particles may be stored in the newly built frame buffer, so as to implement the multiplexing function of the subsequent frame image.
In order to more clearly describe the scheme provided by the embodiment of the present application, the following is a description of the rendering process of the nth frame image with reference to the interactive flowchart shown in fig. 9. As shown in fig. 9, the scheme may include:
S901, after the game application issues the instruction stream 11, the interception module calls back the instruction stream 11 to the graphics library.
Wherein the instruction stream 11 is used to instruct the electronic device to perform rendering of the main scene. With reference to the foregoing description, the interception module may perform interception of the corresponding instruction stream according to a preset beginning instruction and an ending instruction. For other instruction streams, the instruction stream can be directly recalled to the graphics library. For example, the instruction stream 11 may be directly recalled to the graphics library, that is, S901 is executed.
S902, the graphic library instructs the GPU to execute corresponding rendering operation.
Illustratively, the graphics library invokes a corresponding API according to instruction stream 11, instructing the GPU to perform a master scene dependent rendering operation. Rendering results (e.g., the main scene 11) may be stored in a frame buffer 11 indicated by the instruction stream 11.
S903, the GPU renders and acquires the main scene 11 data.
S904, the GPU stores the main scene 11 data in the frame buffer 11.
Thereby, rendering of the main scene is completed, and the obtained map corresponding to the main scene is stored in the frame buffer 11.
The specific implementation process of S901 to S904 may refer to the description of fig. 6, and will not be repeated here.
S905, after the game application issues the instruction stream 12, the interception module intercepts the instruction stream 12.
Wherein the instruction stream 12 may be instructions that instruct the electronic device to perform a rendering of the semi-transparent particles.
For example, the interception module may monitor whether a preset beginning instruction appears in the instruction stream issued by the game application. For example, the beginning instruction may be a glEnable() instruction. The interception module may start intercepting instructions after detecting the glEnable() instruction. The interception module may also monitor whether a preset ending instruction appears in the instruction stream issued by the game application. For example, the ending instruction may be a glDisable() instruction. The interception module may stop intercepting instructions after detecting the glDisable() instruction. The instructions intercepted in this way may constitute the instruction stream 12.
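A sketch of the begin/end matching performed by the interception module is given below; the hook entry point, the string-based instruction representation, and the buffering container are assumptions for illustration, not the patent's actual data structures.

#include <string>
#include <vector>

// Hypothetical interception state machine: instructions between the preset
// beginning instruction (glEnable) and ending instruction (glDisable) are
// buffered as instruction stream 12; everything else is called back directly.
struct Interceptor {
    bool capturing = false;
    std::vector<std::string> instructionStream12;

    // Returns true if the instruction was intercepted (i.e. not forwarded
    // to the graphics library immediately).
    bool onInstruction(const std::string& name, const std::string& fullCall) {
        if (!capturing && name == "glEnable") {
            capturing = true;                       // start of instruction stream 12
        }
        if (capturing) {
            instructionStream12.push_back(fullCall);
            if (name == "glDisable") {
                capturing = false;                  // end of instruction stream 12
            }
            return true;                            // hand off to the creation module
        }
        return false;                               // call back to the graphics library
    }
};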
S906, the interception module sends the instruction stream 12 to the creation module.
S907, the creation module creates the frame buffer 12 and the frame buffer 13.
S908, the creation module sends the instruction stream 12 and the new frame buffer ID to the replacement module.
Illustratively, the newly created frame buffer ID may include the IDs of frame buffer 12 and frame buffer 13. Wherein, the timing of transmitting the ID of the frame buffer 12 may be before the replacement module performs the following S909. The timing of transmitting the ID of the frame buffer 13 may be before the composition operation is performed. For example, the ID of the frame buffer 13 is transmitted before the following S914 is performed. In different implementations, the ID of frame buffer 12 and the ID of frame buffer 13 may be sent together or separately.
S909, the replacing module replaces the frame buffer bound in the instruction stream 12 with the frame buffer 12.
Wherein the instruction stream 12 is used to indicate the rendering of the semi-transparent particles, then the frame buffer bound in the instruction stream 12 is used to store the rendering results of the semi-transparent particles. In this example, the replacement module may replace the frame buffer bound in the instruction stream 12 with the newly created frame buffer 12 to store the rendering result of the semi-transparent particles in the frame buffer 12.
After the replacement module completes the operation of S909, the corresponding instruction stream 13 can be acquired. The instruction stream 13 is used to instruct the electronic device to render the semitransparent particles as the instruction stream 12, except that the frame buffer holding the semitransparent particles is replaced with the frame buffer 12.
S910, the replacement module sends an instruction stream 13 to the graphics library.
S911, the graphic library instructs the GPU to execute corresponding rendering operation.
Illustratively, the graphics library invokes a corresponding API according to instruction stream 13, instructing the GPU to perform semi-transparent particle related rendering operations. The rendering result of such semi-transparent particles may be stored in a frame buffer 12 indicated by the instruction stream 13.
S912, GPU rendering obtains a rendering result of the semitransparent particles.
S913, the GPU stores the rendering result of the semitransparent particles in the frame buffer 12.
The specific implementation process of S905-S913 may refer to the description of fig. 7, and will not be repeated here.
S914, the replacing module sends a semitransparent particle rendering completion instruction to the synthesizing module.
Illustratively, the replacement module may perform this S914 after the instruction stream 13 is sent. In some embodiments, the replacement module may send the ID of the frame buffer 13 to the composition module when performing this S914, in order to perform the composition operation subsequently on the frame buffer 13.
S915, the synthesis module sends the instruction stream 14 to the GPU. Wherein the instruction stream 14 is used to instruct the GPU to perform a composition operation on the frame buffer 13.
S916, the GPU reads the main scene 11 data from the frame buffer 11.
S917, the GPU reads the rendering result of the semitransparent particles from the frame buffer 12.
In some embodiments, since the GPU has completed rendering the main scene and the semi-transparent particles, it is able to know their storage locations. The GPU may then execute S916 and S917 after receiving the instruction stream 14. In other embodiments, when S915 is executed, the instruction stream 14 sent by the synthesizing module to the GPU may also carry the IDs of the frame buffers to be synthesized; for example, the IDs of the frame buffer 11 and the frame buffer 12 may be carried in the instruction stream 14, so that the GPU executes the subsequent S916 and S917 according to the frame buffers indicated by the instruction stream 14. As a possible implementation, the frame buffer IDs to be synthesized carried in the instruction stream 14 by the synthesis module may be sent to the synthesis module by the replacement module in S914.
S918, the GPU synthesizes the main scene 11 data and the semitransparent particle data.
S919, the GPU stores the result of the synthesis in the frame buffer 13. For the specific execution of S914-S919, reference may be made to the description of fig. 8, which is not repeated here.
In the example shown in fig. 9, a map carrying the synthesized rendering result of the main scene and the translucent particles is stored in the frame buffer 13. Accordingly, during the rendering of the current frame image, the electronic device may also replace the frame buffer pointed to by other rendering instructions that build on the rendering result including the main scene and the semitransparent particles (such as an instruction instructing the electronic device to perform UI rendering) with the frame buffer 13. The electronic device can then continue to render elements such as the UI on the map of the frame buffer 13 according to the subsequent rendering instructions, thereby obtaining the complete rendering result of the current frame image.
Thus, the rendering of the nth frame image can be completed through the descriptions of fig. 6 to 9. The rendering result of the semitransparent particles of the nth frame image may be stored in the frame buffer 12.
The following describes a scheme in which the N+1st frame image multiplexes the rendering result of the translucent particles of the Nth frame image.
For example, please refer to fig. 10. During the rendering of the N+1st frame image, the gaming application may issue an instruction stream 21 for instructing the electronic device to render the main scene of the N+1st frame image.
Similar to the interception mechanism in the nth frame image, the interception module may call the instruction stream 21 directly back to the graphics library. Correspondingly, the graphics library may call an API corresponding to instruction stream 21 to instruct the GPU to perform rendering operations. The GPU may implement rendering operations for the n+1st frame main scene according to the instruction of the instruction stream 21. In the case where the main scene is unchanged, the instruction stream 21 instructs the GPU to store the rendering result of the main scene of the n+1st frame image (e.g., the main scene 21) in the frame buffer 11. Correspondingly, the GPU may perform rendering operations of the main scene 21 on the map of the frame buffer 11 (e.g., FB 0).
Thus, after the rendering flow shown in fig. 10 is completed, the rendering result of the main scene, which is the n+1st frame image, can be updated in the frame buffer 11. For example, the map of FB0 may store data corresponding to the main scene 21.
In this example, the main scene rendering process of the n+1st frame image shown in fig. 10 is similar to the main scene rendering process of the N frame image shown in fig. 6, and specific execution processes thereof may be referred to each other and will not be described herein.
In this example, in the rendering of the n+1st frame image, a rendering process of the semi-transparent particles may be further included.
For example, please refer to fig. 11. The gaming application may issue an instruction stream 22 for instructing the electronic device to render the translucent particles. Similar to the instruction stream 12 of the Nth frame image, the beginning and ending instructions of the instruction stream 22 may be relatively fixed. For example, the beginning instruction of the instruction stream 22 may be a glEnable() instruction. As another example, the ending instruction of the instruction stream 22 may be a glDisable() instruction. The instructions issued between the glEnable() instruction and the glDisable() instruction are the rendering instructions of the semitransparent particles that the game application instructs the electronic device to execute in the N+1st frame image.
In this example, upon identifying the instruction stream 22, the interception module no longer passes the instruction stream 22 to other modules. That is, the interception module may monitor the instruction stream issued by the game application, and when the glEnable() instruction is detected, directly return all subsequent instructions up to and including the glDisable() instruction, without issuing them to the GPU or other modules for response processing.
Thus, in the rendering process of the n+1st frame image, although the rendering instruction of the semitransparent particles is issued by the game application, the electronic device does not actually perform the rendering process of the semitransparent particles. Thereby saving the rendering overhead in the n+1st frame image rendering process.
As shown in fig. 11, the interception module may also send a composition trigger indication to the composition module after receiving the instruction stream 22. For example, in some embodiments, the interception module may send the composition trigger indication to the composition module after detecting the glEnable() instruction. In other embodiments, the interception module may send the composition trigger indication to the composition module after detecting the glDisable() instruction. The composition trigger indication may be used to instruct the composition module to trigger a composition instruction.
It will be appreciated that at the start of rendering of the n+1th frame image, the rendering result of the semitransparent particles stored during rendering of the N-th frame image may be stored in the frame buffer 12.
In performing the rendering of the n+1st frame image, the data stored in the frame buffer 11 may be updated to the main scene 21 of the n+1st frame image based on the main scene rendering scheme as shown in fig. 10. For the frame buffer 12, since the interception module returns the instruction stream 22, the electronic device does not perform the rendering of the corresponding semitransparent particles, and thus the rendering result of the semitransparent particles of the nth frame image may still be stored in the frame buffer 12.
In this example, the electronic device may synthesize the rendering result of the main scene of the N+1st frame image (e.g., the main scene 21) with the stored rendering result of the semitransparent particles of the Nth frame image to obtain the rendering result of the N+1st frame image.
By way of example, reference is continued to FIG. 11. The composition module may send an instruction stream 23 to the GPU after receiving the composition trigger indication, for instructing the GPU to perform composition of rendering results in the frame buffer 11 and the frame buffer 12. Illustratively, the GPU may read the main scene 21 from the frame buffer 11, read the rendering results of the semi-transparent particles from the frame buffer 12, and synthesize the rendering results of the main scene 21 and the semi-transparent particles onto the frame buffer 13 in response to the instruction stream 23. Thus, the rendering result of the n+1st frame image can be acquired in the frame buffer 13. It should be noted that, in this example, the instruction stream 23 is similar to the instruction stream 14 shown in fig. 8 or fig. 9, the synthesis operation performed by the GPU in response to the instruction stream 23 is similar to the synthesis operation performed by the GPU in response to the instruction stream 14 shown in fig. 8 or fig. 9, and the execution processes thereof are referred to each other, which is not repeated herein.
In this way, in the rendering process of the n+1th frame image, the rendering result of the semitransparent particles of the N frame image is multiplexed, thereby reducing the rendering overhead of the n+1th frame image.
In order to more clearly describe the scheme provided by the embodiment of the present application, the following description is continued on the rendering process of the n+1st frame image in conjunction with the interactive flowchart shown in fig. 12. As shown in fig. 12, the scheme may include:
S1201, after the game application issues the instruction stream 21, the interception module calls back the instruction stream 21 to the graphics library.
Wherein the instruction stream 21 may be used to instruct the electronic device to perform rendering of the main scene of the n+1st frame image.
S1202, the graphic library instructs the GPU to execute corresponding rendering operation.
S1203, GPU rendering obtains the main scene 21 data.
S1204, the GPU stores the main scene 21 data in the frame buffer 11.
In this example, the rendering process of the main scene in the n+1st frame image is similar to that of the N frame image, and the execution process of S1201 to S1204 may correspond to the explanation as shown in fig. 10. It should be understood that, in some embodiments, the execution of S1201-S1204 may refer to S901-S904 shown in fig. 9, and specific implementation processes may refer to each other, which is not described herein. Through the S1201-S1204, the rendering result of the main scene of the n+1st frame image, such as the main scene 21, can be acquired in the frame buffer 11.
S1205, after the game application issues the instruction stream 22, the interception module returns the instruction stream 22 and sends a composition trigger instruction to the composition module. Wherein the instruction stream 22 may be used to instruct the electronic device to render the translucent particles of the n+1st frame image.
S1206, the synthesis module sends an instruction stream 23 to the GPU. The instruction stream 23 may be used to instruct the GPU to perform a merge operation.
Through the operations of S1205-S1206, the electronic device can realize the return of the instruction stream 22 and the effect of instructing the GPU to perform multiplexing of the semitransparent particle rendering results. For its specific implementation reference may be made to the description as in fig. 11.
S1207, the GPU reads the main scene 21 data from the frame buffer 11.
S1208, the GPU reads the rendering result of the semitransparent particles from the frame buffer 12.
S1209, the GPU synthesizes the main scene 21 data and the semitransparent particle data.
S1210, the GPU stores the result of the synthesis in the frame buffer 13.
The merging process of S1207 to S1210 may refer to S914 to S919 as shown in fig. 9, whereby the rendering result of the n+1st frame image is obtained by multiplexing the rendering results of the translucent particles in the nth frame image. Similar to the foregoing description of fig. 9, the electronic device may replace the frame buffer pointed by the instruction based on the rendering result of the main scene and the semitransparent particles in the subsequent n+1st frame rendering into the frame buffer 13, thereby implementing a complete rendering process and obtaining the rendering result of the complete n+1st frame image.
Through the above description as shown in fig. 6 to 12, the electronic device can store the rendering result of the semitransparent particles in the newly created frame buffer during the rendering of the nth frame image. So that the rendering of the semitransparent particles is not performed in the n+1st frame image any more, but the rendering result stored in the newly built frame buffer is multiplexed and combined with the main scene of the n+1st frame image, so that the rendering result of the n+1st frame image can be obtained. Thereby saving at least the rendering overhead of the semitransparent particles of the n+1st frame image.
In the above examples, the scheme provided by the embodiment of the application is described from the aspect of interaction between modules. The following will continue to describe the scheme provided by the embodiment of the present application from the perspective of the electronic device.
For example, please refer to fig. 13, which is a schematic diagram illustrating a flow chart of image rendering according to an embodiment of the present application. As shown in fig. 13, the scheme may include:
S1301, the electronic device determines the frame buffer 11 corresponding to the main scene.
In connection with the foregoing description, the electronic device may determine the frame buffer 11 corresponding to the main scene before the processing of the Nth frame image starts. For example, among the frame images that have already been rendered, the frame buffer corresponding to the render pass with the largest number of drawcalls may be determined to be the frame buffer of the main scene.
S1302, the electronic device performs main scene rendering of the Nth frame image on the frame buffer 11 to obtain a main scene 11.
Beginning at S1302, the electronic device can perform rendering of an nth frame image according to an instruction stream issued by the gaming application.
S1303, the electronic apparatus stores the rendering result of the semitransparent particles of the nth frame image on the newly created frame buffer 12.
S1304, the electronic device determines a rendering result of the nth frame image according to the rendering result of the main scene 11 and the semitransparent particles.
The execution of S1302-S1304 may correspond to the scheme illustrated in fig. 6-9, and the implementation may be referred to each other.
S1305, the electronic device performs main scene rendering of the n+1st frame image on the frame buffer 11 to obtain a main scene 21.
S1306, the electronic device determines a rendering result of the n+1st frame image from the main scene 21 and the semitransparent particle data.
The execution of S1305-S1306 may correspond to the rendering of the n+1st frame image by the electronic device, and the specific implementation may refer to the descriptions of fig. 10-12.
It should be understood that the above descriptions as shown in fig. 6 to 13 are each described by taking the semitransparent particle rendering result of multiplexing the N-th frame image with the n+1-th frame image as an example. The nth frame image may be any frame image after the game starts to run. For the n+1st frame image, the semitransparent particle rendering results of the N frame image may be further multiplexed in some embodiments, and the semitransparent particle rendering results of other frame images may be multiplexed in other embodiments, or the semitransparent particle rendering of the current frame image may be re-executed, so as to update the semitransparent particle rendering results and obtain more accurate rendering results.
In the embodiment of the application, a corresponding strategy can be preset in the electronic device, and the strategy is used to determine which frame images require rendering of the semitransparent particles and which frame images multiplex the semitransparent particle rendering result.
For example, the electronic device may determine whether to perform multiplexing of the semitransparent particles according to characteristics of a frame image currently being rendered (e.g., what frame image is a frame image after a game starts to run, etc.).
As an example, a counter may be provided in the electronic device, and the counter is incremented by 1 when each frame image starts rendering. For example, when the 1st frame image starts rendering after the game starts running, the counter is incremented by 1 and its value becomes 1, which identifies the current frame image as the 1st frame image. For another example, when the Nth frame image starts to be rendered, the counter is incremented by 1 and its value becomes N, which identifies the current frame image as the Nth frame image. In this way, the electronic device can determine whether to perform the rendering or the multiplexing of the semitransparent particles for the current frame image according to the value of the counter in combination with a preset rule.
When the 1st frame image is rendered after the game starts to run, no semi-transparent particle rendering result has been stored yet, and therefore multiplexing of the semi-transparent particle rendering result cannot be performed. The preset rule should cover this case. For example, the preset rule may be: if the value of the counter is odd, rendering of the semi-transparent particles is performed; if the value of the counter is even, multiplexing of the semi-transparent particle rendering result is performed. Then, when rendering is performed on the 1st frame image, since the value of the counter is 1, that is, odd, the rendering of the translucent particles can be performed on the newly created frame buffer. Correspondingly, when the rendering of the 2nd frame image is performed, the counter is incremented by 1 and its value becomes 2, i.e., an even number. Multiplexing of the translucent particle rendering result can thus be performed.
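A minimal sketch of this counter-based decision is given below; the function and variable names are illustrative only.

// Frame counter incremented at the start of every frame: odd frames render the
// semi-transparent particles into the new frame buffer, even frames reuse the
// stored result, matching the preset rule described above.
static int g_frameCounter = 0;

bool shouldReuseParticleResult() {
    ++g_frameCounter;                   // counter is incremented by 1 when the frame starts rendering
    // Even value -> multiplex the stored semi-transparent particle result;
    // odd value (including frame 1) -> render the particles into frame buffer 12.
    return (g_frameCounter % 2) == 0;
}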
For example, referring to fig. 14 in combination with the flowchart of fig. 13, a flowchart of still another image rendering according to an embodiment of the present application is shown. As shown in fig. 14, the scheme may include:
S1401, determining the frame buffer 11 corresponding to the main scene.
In connection with the foregoing description, the process of determining the main scene may be performed before starting the rendering of the current frame image.
S1402, when rendering of the current frame image is started, the counter is incremented by 1.
In this example, the counter is incremented by 1 so that the value of the counter can be used to identify the characteristics of the current frame image. In this way, different frame images can have different characteristics, and correspondingly, different frame images can be distinguished through the value of the counter.
S1403, performing main scene rendering of the current frame image on the frame buffer 11 to obtain a main scene 11.
For example, the process may refer to the main scene rendering process for the nth frame image or the n+1st frame image in the foregoing example.
S1404, judging whether the value of the counter accords with a preset rule.
In this example, whether to perform rendering or multiplexing of the translucent particles may be determined according to a preset rule. For example, the preset rules are as follows: whether the value of the counter is even. If the value of the counter is even, multiplexing of the semitransparent particles is performed, that is, S1407 is performed. On the contrary, if the value of the counter is not even, i.e., is odd, the rendering of the semitransparent particles is performed, i.e., the following S1405 to S1406 are performed.
S1405, performing the rendering of the semitransparent particles of the current frame image on the newly created frame buffer 12 to obtain the rendering result of the semitransparent particles.
S1406, determining the rendering result of the current frame image according to the rendering result of the main scene 11 and the semitransparent particles.
This process may refer to the rendering process of the nth frame image in the foregoing example. Thereby saving the rendering result of the corresponding semi-transparent particles on the frame buffer 12 while achieving the rendering result of the current frame image. So that other frame images multiplex the rendering results of the semi-transparent particles.
S1407, determining a rendering result of the current frame image from the main scene 11 data and the semitransparent particle rendering result stored in the frame buffer 12.
This process may refer to the rendering process of the n+1st frame image in the foregoing example. Multiplexing of the current frame map to the semitransparent particle rendering result is thereby achieved.
It will be appreciated that the flow chart shown in fig. 14 is a possible implementation of the present application and may be applied to a rendering process including an nth frame and an n+1st frame image, thereby supporting implementation of the scheme as shown in fig. 6-13.
In the above example, in the process of executing the rendering of the current frame image, whether to trigger multiplexing of the rendering result of the existing semitransparent particles may be determined according to a preset rule. In other embodiments of the present application, the electronic device may further determine, in combination with other determination conditions, whether to trigger multiplexing of the rendering results of the semi-transparent particles, so that multiplexing of the rendering results of the semi-transparent particles is more strict, and thus more accurate rendering results are obtained.
For example, the electronic device may determine whether rendering results of the semi-transparent particles in the two frame images can be multiplexed in combination with a change in positions of the semi-transparent particles in the current frame image and the semi-transparent particles in the previous frame image.
It will be appreciated that the translucent particles belong to high-frequency signals (i.e., they correspond to strongly varying detail in the image). The human eye is sensitive to high-frequency signals, so when the viewing angle of the game character changes sharply, the semitransparent particles need to be updated in real time.
In this example, whether the character's viewing angle has changed sharply can be determined from the change in the Model-View-Projection (MVP) matrix.
The MVP matrix is briefly described below in connection with fig. 15. In performing image rendering, the electronic device needs to determine vertex positions of one or more objects included in the current frame image. For example, the vertex coordinates of the object may be included in the rendering command issued by the application. In some implementations, the vertex coordinates included in the rendering command may be coordinates based on the local coordinate system of the object itself. In the present application, a distribution Space of an object based on a Local coordinate system may be referred to as a Local Space (Local Space). In order for the electronic device to be able to determine the coordinates of the respective vertices of the object on the display screen, a matrix transformation may be performed based on the coordinates of the object in the local space. The coordinates of the object in a Screen-based Space (e.g., screen Space) coordinate system are thus obtained.
As one example, the electronic device may convert local coordinates of respective vertices of an object under the local Space into coordinates under the Screen Space through a matrix transformation process of the local Space to World Space (World Space) to View Space (View Space) to Clip Space (Clip Space) to Screen Space (Screen Space).
Illustratively, as shown in FIG. 15, a logical process schematic of a matrix transformation of coordinates from local space to world space to viewing space to crop space is shown. In this example, the rendering of object 1 may be included in the rendering command issued by the game application. As shown in fig. 15, in the local space, the coordinate system may be based on the object 1. For example, the origin of the coordinate system in the local space may be a position set at the center of the object 1, or a vertex may be located, or the like. The game application may carry the coordinates of the respective vertices of the object 1, i.e. the local coordinates, in the coordinate system of the local space in issuing the rendering command to the object 1. The electronic device may convert coordinates in local space to coordinates in world space through an M matrix issued by the gaming application. Wherein world space may be a larger area relative to local space. For example, a rendering command issued by a game application is used to render a game image. The local space may correspond to a smaller area that is able to cover a certain object, such as object 1. While world space may correspond to a map area of the game that includes object 1 as well as other objects, such as object 2. The electronic device may perform M-matrix transformation on the local coordinates in the local space in combination with the M-matrix, thereby obtaining coordinates of the object 1 in the world space. Similarly, in case the game application issues a rendering command for the object 2 in the frame image, the electronic device may also acquire coordinates of the object 2 in world space through the above-described M matrix transformation.
After acquiring coordinates of vertices of respective objects in the world space in the current frame image, the electronic device may convert the coordinates in the world space into coordinates in the observation space according to the V matrix issued by the game application. It is understood that the coordinates in world space may be coordinates in three-dimensional space. While the electronic device displays the frame image to the user, each object (such as object 1, object 2, etc.) is displayed on a two-dimensional display screen. When objects in world space are viewed using different viewing angles, different two-dimensional pictures are seen. The viewing angle may be related to the position of the camera (or observer) arranged in world space. In this example, the coordinate space corresponding to the camera position may be referred to as the viewing space. Illustratively, the positive y-axis direction in which the camera is disposed in world space is taken as an example. Then the coordinates of the respective vertices of the object 1 and the object 2 in the viewing space corresponding to the camera position can be obtained based on the transformation of the V matrix. As shown in fig. 15, since the camera is located in the y-axis forward direction, shooting is performed downward, and thus the object 1 and the object 2 corresponding to the observation space can be presented as a top view effect.
After the electronic device acquires the coordinates of the respective objects in the viewing space, they may be projected to the clipping coordinates. The coordinate space to which the clipping coordinates correspond may be referred to as a clipping space. It will be appreciated that in doing the V-matrix transformation, there may be a transformation of a larger area in world space, and thus the acquired image range may be relatively large. And because of the limited size of the electronic device display, it may not be possible to display all objects in the viewing space simultaneously. In this example, the electronic device may project the coordinates of the various objects in the viewing space into the crop space. After projection into the crop space, the coordinates of the objects that can be displayed on the display screen may be in the range of-1.0 to 1.0. And the coordinates for the part of the object that cannot be displayed on the display screen may be outside the range of-1.0 to 1.0. Thus, the electronic device can perform corresponding display according to the vertex coordinates with coordinates in the range of-1.0 to 1.0. For example, the electronic device may perform P-matrix transformation on each coordinate in the observation space according to the P-matrix issued by the game application, so as to obtain a clipping coordinate in the clipping space corresponding to each coordinate.
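The chain of transformations described above can be summarized by the usual vertex-shader form below. This is a generic illustration with assumed uniform and attribute names, not shader code taken from the patent.

// Generic GLSL ES vertex shader illustrating the local -> world -> view -> clip
// transformation chain (uniform names are illustrative).
const char* kMvpVertexShader = R"(
    #version 300 es
    uniform mat4 uModel;        // M matrix: local space -> world space
    uniform mat4 uView;         // V matrix: world space -> viewing space
    uniform mat4 uProjection;   // P matrix: viewing space -> clip space
    layout(location = 0) in vec4 aLocalPos;
    void main() {
        gl_Position = uProjection * uView * uModel * aLocalPos;
    }
)";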
Thus, when the MVP matrix changes significantly, it indicates that the viewing angle of the character in the game has deflected significantly. In that case, the position of the semitransparent particles from the previous frame is obviously no longer applicable to the current frame image. In this example, the electronic device may therefore also determine whether to perform multiplexing of the semi-transparent particles according to whether the change in viewing angle is within a preset viewing angle threshold. In some embodiments, to determine the change in viewing angle between different frame images, a reference camera based on the viewing space can be constructed, as shown in fig. 16. The reference line-of-sight direction of the reference camera in world space is then obtained by conversion based on the MVP matrix of the current frame image. Similarly, the reference line-of-sight direction of the corresponding frame image may be obtained by conversion based on the MVP matrix of the previous frame image. The electronic device determines the change in viewing angle by comparing the two reference line-of-sight directions. In some embodiments, the viewing angle change may be the angle between the reference line-of-sight directions of the different frame images.
As a possible implementation, a camera viewing direction that can serve as a reference, for example represented by a matrix (10,0,0,0), may be constructed in the electronic device; this direction may be based on the viewing space. When performing the rendering of the current frame image, the electronic device can determine the MVP matrix of the current frame image according to the instructions issued by the game application. For example, the electronic device may acquire the data of the MVP matrix from the uniform matrix data passed by instructions such as glBufferSubData() issued by the game application. Thus, the electronic device may obtain the P matrix (e.g., denoted as P_N), the VP inverse matrix (e.g., denoted as VP_INV_N), and the M matrix (e.g., denoted as M_N) of the Nth frame image. Similarly, the electronic device may obtain the MVP matrices of other frame images. For example, the electronic device may obtain the P matrix (e.g., denoted as P_N+1), the VP inverse matrix (e.g., denoted as VP_INV_N+1), and the M matrix (e.g., denoted as M_N+1) of the N+1st frame image during its rendering.
In this way, in the process of rendering the n+1st frame image, the electronic device can determine whether the rendering result of the semitransparent particles of the n+1st frame image can be multiplexed according to the change condition of the viewing angles of the n+1st frame image and the N frame image.
For example, the electronic device may determine the change in viewing angle according to the following calculation method:
CameraToWorld = (10,0,0,0) * P_N * VP_INV_N;  // position of the camera in world coordinates in the Nth frame image
PreCameraToWorld = (10,0,0,0) * P_N+1 * VP_INV_N+1;  // position of the camera in world coordinates in the N+1st frame image
alpha = CameraToWorld - M_N;  // camera direction matrix in the Nth frame image
beta = PreCameraToWorld - M_N+1;  // camera direction matrix in the N+1st frame image
aProductb = alpha[0]*beta[0] + alpha[1]*beta[1] + alpha[2]*beta[2];
aMode = std::sqrt(alpha[0]*alpha[0] + alpha[1]*alpha[1] + alpha[2]*alpha[2]);
bMode = std::sqrt(beta[0]*beta[0] + beta[1]*beta[1] + beta[2]*beta[2]);
cosRes = aProductb / (aMode*bMode);
turnTheta = (std::acos(cosRes)*180) / PI;
The finally acquired turnTheta angle may be taken as the change in viewing angle between the N+1st frame image and the Nth frame image.
The electronic device may determine whether the rendering result of the semitransparent particles of the Nth frame image can be multiplexed according to the size relationship between turnTheta and a preset angle threshold. For example, when turnTheta is smaller than the preset angle threshold, the difference between the two frame images is small, and the rendering result of the semitransparent particles can be multiplexed. Correspondingly, when turnTheta is larger than the preset angle threshold, the difference between the two frame images is large, and the rendering result of the semitransparent particles cannot be multiplexed.
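A self-contained sketch of the angle computation and threshold check is shown below. It mirrors the pseudocode above, with the two camera direction quantities passed in as 3-component vectors; the function names and the vector representation are assumptions, not the patent's literal implementation.

#include <array>
#include <cmath>

// Compute the angle (in degrees) between the camera direction vectors of two
// frames; this corresponds to turnTheta in the pseudocode above.
double viewAngleChange(const std::array<double, 3>& alpha,
                       const std::array<double, 3>& beta) {
    const double kPi = 3.14159265358979323846;
    double dot  = alpha[0] * beta[0] + alpha[1] * beta[1] + alpha[2] * beta[2];
    double aLen = std::sqrt(alpha[0] * alpha[0] + alpha[1] * alpha[1] + alpha[2] * alpha[2]);
    double bLen = std::sqrt(beta[0] * beta[0] + beta[1] * beta[1] + beta[2] * beta[2]);
    return std::acos(dot / (aLen * bLen)) * 180.0 / kPi;
}

// Decide whether the stored semi-transparent particle result may be multiplexed.
bool canMultiplexParticles(const std::array<double, 3>& alpha,
                           const std::array<double, 3>& beta,
                           double angleThresholdDegrees) {
    return viewAngleChange(alpha, beta) < angleThresholdDegrees;
}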
In view of this, please refer to fig. 17, which is a flowchart of another image rendering method according to an embodiment of the present application. This scheme adds a judgment step for the change in viewing angle on the basis of fig. 14, thereby obtaining a more accurate rendering result. As shown in fig. 17, the scheme in this example differs from the scheme of fig. 14 in that after S1404 is performed, if it is determined that the preset rule is met, S1701 is performed to continue the determination, i.e., determining whether the change in viewing angle is less than a viewing angle threshold. Prior to the determination of S1701, S1702 may be performed, i.e., the viewing angle change is determined according to the MVP matrix of the current frame image and the backed-up MVP matrix. Specific implementations may refer to the determination of the change in viewing angle in the above examples. In the judgment of S1701, when the change in viewing angle is smaller than the viewing angle threshold, multiplexing of the translucent particles is possible, that is, S1407 is performed. Correspondingly, when the viewing angle change is greater than the viewing angle threshold, multiplexing of the semitransparent particles is not possible, and the process returns to S1405.
In this way, before multiplexing is performed, whether the semi-transparent particles can be multiplexed is further determined according to the change in viewing angle, which improves the multiplexing accuracy of the semi-transparent particles and thereby the quality of the finally obtained image.
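Building on the sketch above, the following hedged example shows how the frame-parity check and the view-angle check might be combined before deciding to multiplex the semi-transparent particle result. The function names, the even-frame rule, and the step labels in the comments are illustrative assumptions; viewAngleChange() and the Vec4/Mat4 types are taken from the sketch above.
// Decide whether the semi-transparent particle result backed up in the first
// frame buffer may be multiplexed for the current frame.
bool shouldMultiplexParticles(long frameCounter, float angleThresholdDeg,
                              const Vec4& refDir,
                              const Mat4& pPrev, const Mat4& vpInvPrev, const Vec4& mPrev,
                              const Mat4& pCur,  const Mat4& vpInvCur,  const Vec4& mCur) {
    // Preset rule (e.g., S1404): only consider multiplexing on even frames.
    if (frameCounter % 2 != 0) {
        return false;   // render the particles normally instead (e.g., S1405)
    }
    // View-angle change from the backed-up and current MVP matrices (e.g., S1702).
    float turnTheta = viewAngleChange(refDir, pPrev, vpInvPrev, mPrev,
                                      pCur, vpInvCur, mCur);
    // Multiplex only when the change stays below the threshold (e.g., S1701/S1407).
    return turnTheta < angleThresholdDeg;
}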
The above description mainly describes the solution provided by the embodiments of the application from the perspective of each service module. To implement the above functions, the electronic device includes corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as a combination of hardware and computer software. Whether a function is implemented as hardware or as computer-software-driven hardware depends on the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application. It should be noted that, in the embodiments of the present application, the division of the modules is schematic and is merely a logical function division; other division manners may be used in actual implementation.
Fig. 18 shows a schematic diagram of the composition of an electronic device 1800. As shown in fig. 18, the electronic device 1800 may include: a processor 1801 and a memory 1802. The memory 1802 is used for storing computer-executable instructions. For example, in some embodiments, the processor 1801, when executing instructions stored in the memory 1802, can cause the electronic device 1800 to perform the image rendering methods shown in any of the above embodiments.
It should be noted that, for all relevant details of the steps in the above method embodiments, reference may be made to the functional descriptions of the corresponding functional modules, which are not repeated here.
Fig. 19 shows a schematic diagram of the composition of a chip system 1900. The chip system 1900 may include: a processor 1901 and a communication interface 1902, configured to support the relevant device in implementing the functions referred to in the above embodiments. In one possible design, the chip system further includes a memory for holding the program instructions and data necessary for the terminal. The chip system may be composed of a chip, or may include a chip and other discrete devices. It should be noted that, in some implementations of the application, the communication interface 1902 may also be referred to as an interface circuit.
It should be noted that, for all relevant details of the steps in the above method embodiments, reference may be made to the functional descriptions of the corresponding functional modules, which are not repeated here.
The functions, acts, operations, steps, and the like in the embodiments described above may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented using a software program, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), or the like.
Although the application has been described in connection with specific features and embodiments thereof, it will be apparent that various modifications and combinations can be made without departing from the spirit and scope of the application. Accordingly, the specification and drawings are merely exemplary illustrations of the present application as defined in the appended claims and are considered to cover any and all modifications, variations, combinations, or equivalents that fall within the scope of the application. It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (19)

1. An image rendering method, characterized by being applied to an electronic device in which an application program is installed, comprising:
the application program issues a first instruction stream, wherein the first instruction stream is used for instructing the electronic equipment to execute rendering operation of a first frame image, and the first frame image comprises a first main scene and first semi-transparent particles;
the electronic equipment synthesizes a first rendering result and a second rendering result to obtain the first frame image; the first rendering result is a rendering result of the first main scene, the second rendering result is a rendering result of the first semi-transparent particles, and the second rendering result is stored in a first frame buffer of the electronic device.
2. The method of claim 1, wherein the first instruction stream includes a first instruction segment and a second instruction segment, the first instruction segment being configured to instruct the electronic device to render the first main scene to obtain the first rendering result, and the second instruction segment being configured to instruct the electronic device to render the first semi-transparent particle;
before the electronic device synthesizes the first rendering result and the second rendering result, the method further includes:
the electronic equipment performs rendering according to the first instruction segment to obtain the first rendering result;
the electronic device obtains the second rendering result from the first frame buffer.
3. The method according to claim 1 or 2, wherein the second rendering result is obtained by the electronic device when rendering a second frame image and is stored in the first frame buffer, the second frame image being rendered before the first frame image.
4. The method of claim 1, wherein prior to the application issuing the first instruction stream, the method further comprises:
the application program issues a second instruction stream, and the second instruction stream is used for instructing the electronic device to execute a rendering operation of a second frame image, wherein the second frame image comprises a second main scene and the first semi-transparent particles.
5. The method of claim 4, wherein the second instruction stream includes a third instruction segment and a fourth instruction segment, the third instruction segment being configured to instruct the electronic device to render the second main scene to obtain a third rendering result, the fourth instruction segment being configured to instruct the electronic device to render the first semi-transparent particles in the second frame image;
after the application issues the second instruction stream, the method further comprises:
the electronic equipment performs rendering according to the third instruction segment to obtain a third rendering result;
and the electronic equipment acquires a fourth rendering result according to the fourth instruction segment.
6. The method of claim 5, wherein the method further comprises:
the electronic device creating the first frame buffer;
the electronic device obtaining the fourth rendering result according to the fourth instruction segment, including:
the electronic equipment replaces the frame buffer indicated by the fourth instruction segment with the first frame buffer to obtain a fifth instruction segment;
the electronic device performs a rendering operation of the fifth instruction segment to obtain a second rendering result of the first semi-transparent particles, and stores the second rendering result in the first frame buffer.
7. The method of claim 2, wherein
the electronic device determines the second instruction segment according to a preset beginning instruction and a preset ending instruction in the first instruction stream.
8. The method of claim 6, wherein
the electronic device determines the fourth instruction segment according to a preset beginning instruction and a preset ending instruction in the second instruction stream.
9. The method according to any one of claims 5-8, wherein an interception module, a creation module and a replacement module are provided in the electronic device, the method comprising:
the interception module is used for intercepting a fourth instruction segment;
the creation module is used for creating the first frame buffer;
the replacing module is used for replacing the frame buffer ID in the fourth instruction segment according to the identification ID of the first frame buffer and the intercepted fourth instruction segment so as to obtain a fifth instruction segment pointing to the first frame buffer;
and the GPU of the electronic device executes the rendering of the first semi-transparent particles according to the fifth instruction segment and stores the acquired second rendering result in the first frame buffer.
10. The method of claim 9, wherein a merge module is further provided in the electronic device, the method further comprising:
the merging module is used for indicating the GPU to merge the second rendering result and the third rendering result so as to obtain the rendering result of the second frame image.
11. The method according to claim 1, wherein the method further comprises:
the electronic device determines the frame buffer ID of the main scene according to the rendering process of the third frame image, wherein the frame buffer of the main scene is the frame buffer with the largest number of draw commands (Drawcall) in the rendering process of the third frame image.
12. The method according to any one of claims 1 or 2 or 4-8, wherein a counter is provided in the electronic device, the counter being incremented by 1 for each rendering of a frame image performed by the electronic device;
before the electronic device synthesizes the first rendering result and the second rendering result and obtains the first frame image, the method further includes:
the electronic device determines that the value of the counter meets a preset rule when the first frame image is rendered.
13. The method of claim 12, wherein, in a case where the electronic device determines that the value of the counter does not meet the preset rule when the first frame image is rendered, the method further comprises:
the electronic device creates the first frame buffer, and replaces the frame buffer pointed to by the instruction segment in the first instruction stream that instructs rendering of the first semi-transparent particles with the first frame buffer;
the electronic device performs rendering of the first semi-transparent particles and stores the rendering result in the first frame buffer.
14. The method of claim 13, wherein the preset rule is: the value of the counter is even.
15. The method according to claim 1 or 13 or 14, wherein,
before the electronic device synthesizes the first rendering result and the second rendering result and obtains the first frame image, the method further includes:
the electronic device determines that the change in viewing angle when the first frame image is rendered is smaller than a preset viewing angle threshold.
16. The method of claim 15, wherein the electronic device determines the change in viewing angle according to a model-view-projection (MVP) matrix of the first frame image and an MVP matrix of a second frame image, the second frame image being rendered earlier than the first frame image.
17. The method according to claim 15, wherein, in a case where the change in viewing angle when the first frame image is rendered is greater than the preset viewing angle threshold, the method further comprises:
the electronic device creates the first frame buffer, and replaces the frame buffer pointed to by the instruction segment in the first instruction stream that instructs rendering of the first semi-transparent particles with the first frame buffer;
the electronic device performs rendering of the first semi-transparent particles and stores the rendering result in the first frame buffer.
18. An electronic device comprising one or more processors and one or more memories; the one or more memories coupled to the one or more processors, the one or more memories storing computer instructions;
the computer instructions, when executed by the one or more processors, cause the electronic device to perform the image rendering method of any one of claims 1-17.
19. A computer readable storage medium, characterized in that the computer readable storage medium comprises computer instructions which, when run, perform the image rendering method of any one of claims 1-17.