WO2024027231A1 - An image rendering method and electronic device - Google Patents
An image rendering method and electronic device
- Publication number: WO2024027231A1
- Application: PCT/CN2023/091006
- Authority
- WO
- WIPO (PCT)
- Prior art keywords: rendering, shadow, normal, result, instruction
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
- G06T15/04—Texture mapping
- G06T15/50—Lighting effects
- G06T15/60—Shadow generation
Definitions
- Embodiments of the present application relate to the field of image processing, and in particular, to an image rendering method and electronic device.
- some images displayed by the electronic device may include shadow effects.
- the shadow effect can be displayed based on the shadow rendering results obtained by shadow rendering.
- the graphics processor needs to read depth information, normal information, etc. from the memory as input.
- the amount of data such as normal information that the GPU needs to read from the memory has also increased significantly. This places higher requirements on the read and write bandwidth between the GPU and memory.
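To make the bandwidth pressure concrete, the following back-of-envelope estimate counts the read traffic caused by fetching a full-resolution normal buffer from memory every frame. The resolution, texel format, and frame rate are illustrative assumptions, not values stated in this application:

```python
# Back-of-envelope estimate of the read traffic caused by fetching a
# full-resolution normal buffer from memory every frame. Resolution,
# format, and frame rate are illustrative assumptions.
WIDTH, HEIGHT = 1920, 1080          # assumed render resolution
BYTES_PER_TEXEL = 8                 # RGBA16F: 4 channels x 2 bytes
FPS = 60                            # assumed frame rate

bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_TEXEL
mb_per_second = bytes_per_frame * FPS / (1024 ** 2)
print(f"normal buffer: {bytes_per_frame / 2**20:.1f} MiB/frame, "
      f"about {mb_per_second:.0f} MiB/s of read traffic")
```

Under these assumptions, a single full-screen G-buffer attachment alone costs nearly a gigabyte per second of read bandwidth, before any of the other attachments are counted.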
- Embodiments of the present application provide an image rendering method and electronic device, which enable shadow rendering to be executed efficiently through on-chip storage. This avoids large amounts of data reading and writing between the GPU and memory during shadow rendering, thereby reducing the pressure on data reading and writing between the GPU and memory.
- In a first aspect, an image rendering method is provided, which is applied to an electronic device.
- a first application program is running in the electronic device.
- the first application program instructs the electronic device to execute the processing of the first frame image by issuing a rendering instruction stream.
- The first frame image obtained through the rendering process includes a shadow area.
- the rendering instruction stream includes a first rendering instruction and a second rendering instruction, and the method includes: rendering and obtaining a depth rendering result of the first frame image according to the first rendering instruction.
- the depth rendering results are stored in the memory of the electronic device.
- the normal rendering result is stored in an on-chip storage area of the graphics processing module of the electronic device.
- the shadow area is the area displayed as a shadow effect in the first frame of the image.
- the shadow rendering result matching the shadow area can be understood as the display of the shadow area can be performed according to the shadow rendering result.
- the read and write overhead between the GPU and memory can be avoided.
- the GPU does not need to write the normal rendering results into memory after performing normal rendering.
- the GPU performs shadow rendering, it does not need to read the normal rendering results from memory.
- the rendering instruction stream also includes a third rendering instruction, which is used to instruct the electronic device to create a first frame buffer in the memory.
- the first frame buffer is used to store the depth rendering result.
- the method further includes: creating a first frame buffer in the memory according to the third rendering instruction.
- The depth rendering result being stored in the memory of the electronic device includes: the depth rendering result being stored in the first frame buffer. In this way, in response to the third rendering instruction, the electronic device can create a frame buffer in memory for depth rendering.
- the rendering instruction stream also includes a fourth rendering instruction, which is used to instruct the electronic device to create a second frame buffer.
- the second frame buffer is used to store the normal rendering result.
- the method further includes: creating the second frame buffer in the on-chip storage area of the graphics processing module according to the fourth rendering instruction.
- the normal rendering result is stored in an on-chip storage area of the graphics processing module of the electronic device, including: the normal rendering result is stored in the second frame buffer.
- the electronic device can perform a normal rendering process in the on-chip storage area, and store the normal rendering result in the on-chip storage.
- the writing overhead of the GPU to the memory during normal rendering can be saved.
- the GPU does not need to read it from the memory.
- the rendering instruction stream also includes a fifth rendering instruction, which is used to instruct the electronic device to perform the rendering operation on the shadow information.
- Obtaining the shadow rendering result according to the depth rendering result and the normal rendering result includes: in response to the fifth rendering instruction, reading the depth rendering result from the memory and obtaining the normal rendering result from the on-chip storage area of the graphics processing module.
- the shadow rendering result is obtained.
- the electronic device can perform shadow rendering. It can be understood that shadow rendering can take depth information and normal information as input. Since the rendering process of normal information is performed on-chip, the GPU does not need to interact with the memory to obtain the normal information.
- obtaining a shadow rendering result based on the depth rendering result and the normal rendering result includes: triggering an instruction to the graphics processing module to perform a shadow rendering operation when the normal rendering operation is completed.
- the shadow rendering operation includes: reading the depth rendering result from the memory, and obtaining the normal rendering result from the on-chip storage area of the graphics processing module.
- the shadow rendering result is obtained.
- Before triggering the graphics processing module to perform the shadow rendering operation, the method further includes: when the normal rendering operation is completed, generating a first message, the first message being used to indicate that the normal rendering operation is completed.
- Triggering the graphics processing module to perform the shadow rendering operation includes: when the first message is generated, triggering the graphics processing module to perform the shadow rendering operation.
- In this way, a scheme is given for the electronic device to determine that normal rendering has been completed, which then triggers the shadow rendering process in the electronic device.
- Rendering and obtaining the normal rendering result of the first frame image according to the second rendering instruction includes: issuing a sixth rendering instruction to the graphics processing module according to the second rendering instruction, where the sixth rendering instruction is used to instruct the graphics processing module to execute the normal rendering operation of the first frame image on the first deferred rendering pipeline SubPass; the graphics processing module then executes the sixth rendering instruction on the first SubPass to obtain the normal rendering result.
- the sixth rendering instruction may correspond to the second rendering instruction.
- the sixth rendering instruction may have the same function as the second rendering instruction, such as instructing the GPU to perform normal rendering through SubPass.
- the sixth rendering instruction may be a variant based on the second rendering instruction. For example, when the second rendering instruction instructs the electronic device to perform normal rendering, the sixth rendering instruction obtained by the GPU can be used to instruct that normal rendering be performed on a SubPass.
- obtaining a shadow rendering result based on the depth rendering result and the normal rendering result includes: creating a second SubPass in an on-chip cache of the graphics processing module, and the second SubPass is used to perform a shadow rendering operation.
- the rendering result of the first SubPass includes the normal rendering result.
- the depth rendering result is read from the memory and input into the second SubPass, and the normal rendering result and the depth rendering result are processed according to the preset ray tracing algorithm to obtain the shadow rendering result.
- SubPass-based shadow rendering operations can be implemented on-chip. It is understandable that SubPass provides the ability to directly obtain the rendering results of the previous SubPass. Then, the shadow rendering process performed on the second SubPass can directly obtain the normal rendering result performed on the previous SubPass. This improves the efficiency of obtaining normal rendering results and saves the reading and writing overhead between the GPU and memory.
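The SubPass chaining described above can be sketched with a toy Python model in which the second pass consumes the first pass's output directly from on-chip tile storage rather than from system memory. All names (`tile_buffer`, `system_memory`, and so on) are illustrative and do not correspond to a real GPU API:

```python
# Toy model of two chained subpasses sharing an on-chip tile buffer.
# "system_memory" stands in for DRAM-backed framebuffers; "tile_buffer"
# stands in for GPU on-chip storage. Names are illustrative only.
system_memory = {}   # DRAM framebuffers (e.g. the depth buffer)
tile_buffer = {}     # on-chip storage shared between subpasses

def subpass_normals(geometry):
    # First subpass: render normals, keep the result on-chip only.
    tile_buffer["normals"] = [f"normal({g})" for g in geometry]

def subpass_shadows():
    # Second subpass: read normals directly from the tile buffer and
    # depth from memory; no DRAM round trip for the normals.
    normals = tile_buffer["normals"]
    depth = system_memory["depth"]
    return [f"shadow({n},{d})" for n, d in zip(normals, depth)]

system_memory["depth"] = ["d0", "d1"]   # depth pass wrote to DRAM
subpass_normals(["tri0", "tri1"])       # normals stay on-chip
result = subpass_shadows()
print(result)   # the normals were never written to system_memory
```

The point of the sketch is structural: the shadow pass reads its normal input from the same on-chip store the previous pass wrote, which is exactly the traffic the method avoids.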
- the shadow rendering result includes: first normal information, second normal information, shadow information, and distance information.
- the first normal information and the second normal information may be normal information corresponding to different directions in the normal information.
- the first normal information may be x-direction normal information
- the second normal information may be y-direction normal information.
- the method also includes: outputting the shadow rendering result to a third frame buffer on the memory, the third frame buffer including a texture in a first format, where the texture in the first format includes at least four channels.
- In this way, the shadow rendering results, which include multiple sets of data, can be stored in the same location, such as on the same texture.
- the full shadow rendering result can be obtained through only one data read. This saves unnecessary reading and writing overhead.
- Outputting the shadow rendering result to the third frame buffer on the memory includes: outputting the first normal information, the second normal information, the shadow information, and the distance information respectively to different channels of the texture in the first format.
- the first format is RGBA16F.
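Packing the four outputs into a single RGBA16F texel can be illustrated with Python's stdlib half-float packing (`struct` format code `e`); the channel assignment follows the scheme above, while the sample values are invented:

```python
import struct

# Pack the four shadow-pass outputs (normal.x, normal.y, shadow factor,
# distance) into one RGBA16F texel: 4 channels x 16-bit float = 8 bytes.
# The channel order mirrors the scheme above; sample values are made up.
def pack_rgba16f(nx, ny, shadow, dist):
    return struct.pack("<4e", nx, ny, shadow, dist)

def unpack_rgba16f(texel):
    return struct.unpack("<4e", texel)

texel = pack_rgba16f(0.5, -0.25, 1.0, 12.0)
assert len(texel) == 8          # one 8-byte read returns all four values
nx, ny, shadow, dist = unpack_rgba16f(texel)
print(nx, ny, shadow, dist)
```

Because all four quantities live in one texel, a single texture fetch recovers the complete shadow rendering result, which is the read-saving effect described above.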
- the graphics processing module is a graphics processor GPU.
- the function of the graphics processing module can also be implemented by other components or circuits with image rendering capabilities.
- In a second aspect, an electronic device includes one or more processors and one or more memories; the one or more memories are coupled to the one or more processors and store computer instructions; when the one or more processors execute the computer instructions, the electronic device is caused to execute the image rendering method of any one of the above first aspect and its various possible designs.
- In a third aspect, a chip system includes an interface circuit and a processor interconnected through lines; the interface circuit is used to receive signals from the memory and send them to the processor, the signals including computer instructions stored in the memory; when the processor executes the computer instructions, the chip system executes the image rendering method of any one of the above first aspect and its various possible designs.
- In a fourth aspect, a computer-readable storage medium includes computer instructions; when the computer instructions are run, the image rendering method of any one of the above first aspect and its various possible designs is executed.
- In a fifth aspect, a computer program product includes instructions; when the computer program product is run on a computer, the computer can execute the image rendering method of any one of the above first aspect and its various possible designs according to the instructions.
- Figure 1 is a logical schematic diagram of image rendering
- Figure 2 is a logical schematic diagram of depth information rendering
- Figure 3 is a logical schematic diagram of normal information rendering
- Figure 4 is a schematic diagram of a shadow
- Figure 5 is a schematic diagram of shadow rendering through ray tracing in the image rendering process
- Figure 6 is a logical schematic diagram of shadow rendering
- Figure 7 is a logical schematic diagram of shadow rendering provided by an embodiment of the present application.
- Figure 8 is a schematic diagram of the software composition of an electronic device provided by an embodiment of the present application.
- Figure 9 is a schematic diagram of module interaction of an image rendering method provided by an embodiment of the present application.
- Figure 10 is a schematic diagram of module interaction of yet another image rendering method provided by an embodiment of the present application.
- Figure 11 is a schematic diagram of module interaction of yet another image rendering method provided by an embodiment of the present application.
- Figure 12 is a schematic diagram of module interaction of yet another image rendering method provided by an embodiment of the present application.
- Figure 13 is a schematic diagram of a storage scheme for shadow rendering results provided by an embodiment of the present application.
- Figure 14 is a schematic diagram of module interaction of yet another image rendering method provided by an embodiment of the present application.
- Figure 15 is a schematic flowchart of yet another image rendering method provided by an embodiment of the present application.
- Figure 16 is a schematic diagram of the composition of an electronic device provided by an embodiment of the present application.
- Figure 17 is a schematic diagram of the composition of a chip system provided by an embodiment of the present application.
- an application may be installed in the electronic device.
- the application can send an instruction to the electronic device, so that the electronic device renders the corresponding image according to the instruction, and then displays the image obtained by rendering through the display screen of the electronic device.
- the electronic device may be provided with a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphic Processing Unit, GPU), a memory, etc.
- the CPU can be used for instruction processing and control.
- the GPU can render images under the control of the CPU.
- Memory can be used to provide storage functions, such as storing rendering results obtained by GPU rendering.
- the application can issue rendering instructions to instruct the electronic device to render a frame of image.
- This rendering instruction can correspond to a drawing command (i.e., a Drawcall).
- the CPU can receive the rendering instruction and call the corresponding graphics rendering application programming interface (Application Programming Interface, API) to instruct the GPU to perform the rendering operation corresponding to the rendering instruction.
- the GPU can execute rendering instructions and store the rendering results in memory.
- the application program can control the electronic device to render the depth information, normal information, etc. of the frame image through rendering instructions, thereby obtaining complete frame image information.
- the application is a game application. It is understandable that the game application can display video footage to the user through the electronic device.
- the video picture may be composed of multiple continuously played frame images.
- the game application can issue a rendering instruction 21 to instruct the electronic device to render the depth information of the current frame image.
- the CPU can call the corresponding API interface according to the rendering instruction 21 to instruct the GPU to perform a rendering operation corresponding to the depth information.
- the GPU can perform this rendering operation and store the rendering results (i.e., the depth rendering results) in memory.
- the memory may include multiple pre-created frame buffers (FrameBuffer, FB), such as frame buffer 21, frame buffer 22, frame buffer 23, etc. Different framebuffers can be used to store different information during image rendering.
- the GPU may store the depth rendering results in frame buffer 21.
- the game application can issue a rendering instruction 22 to instruct the electronic device to render the normal information of the current frame image.
- the CPU can call the corresponding API interface according to the rendering instruction 22 to instruct the GPU to perform a rendering operation corresponding to the normal information.
- the GPU can perform this rendering operation and store the rendering results (i.e. the normal rendering results) in memory.
- the GPU may store normal rendering results in frame buffer 22.
- the scene may include object 41.
- the object 41 can cast a shadow on the ground.
- the game application can also instruct the electronic device to render the shadow of the object in the current frame image, so that the displayed frame image can include the shadow of the object, which is more realistic.
- the electronic device can render the shadow through a ray tracing algorithm, and obtain the display information of the frame image including the shadow (i.e., the rendering result).
- the GPU can use the ray tracing algorithm, which splits the rendering task of a scene into the contributions to the scene of several rays starting from the camera (the view rays shown in Figure 5).
- Each view ray is intersected with the scene in parallel; the material, texture, and other information of the scene object to be displayed are obtained from the intersection position, and the lighting is calculated in combination with the light source information.
- the light source can illuminate the object to form a shadow (such as through a shadow ray as shown in Figure 5).
- The positions on the image of the pixels corresponding to the object's shadow, and the related information, can also be determined. In this way, the display information of both objects and shadows can be obtained on the image.
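The shadow-ray test described above can be illustrated with a minimal ray–sphere occlusion check: a surface point is in shadow if the segment from the point to the light source hits an occluder. The scene values below are invented for the example:

```python
import math

# Minimal shadow-ray test: a surface point is in shadow if the segment
# from the point to the light intersects an occluding sphere.
# Scene values are invented for illustration.
def in_shadow(point, light, center, radius):
    # Direction and length of the shadow ray.
    d = [l - p for p, l in zip(point, light)]
    dist = math.sqrt(sum(c * c for c in d))
    d = [c / dist for c in d]
    # Ray-sphere intersection: solve t^2 + b*t + c = 0 along the ray.
    oc = [p - ctr for p, ctr in zip(point, center)]
    b = 2.0 * sum(dc * occ for dc, occ in zip(d, oc))
    c = sum(occ * occ for occ in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return False                    # ray misses the occluder
    t = (-b - math.sqrt(disc)) / 2.0
    return 0.0 < t < dist               # hit between point and light

light = (0.0, 10.0, 0.0)
sphere = ((0.0, 5.0, 0.0), 1.0)         # occluder between light and origin
assert in_shadow((0.0, 0.0, 0.0), light, *sphere)       # blocked
assert not in_shadow((5.0, 0.0, 0.0), light, *sphere)   # clear path
```

A production ray tracer traverses an acceleration structure over many primitives, but the per-ray visibility question is exactly this intersection test.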
- the game application can issue a rendering instruction 23 to instruct the electronic device to render the shadow information of the current frame image.
- the CPU can call the corresponding API interface according to the rendering instruction 23 to instruct the GPU to perform the rendering operation corresponding to the shadow information.
- the rendering of shadows needs to combine the depth information and normal information of the current frame image.
- the GPU can read the depth rendering result from the frame buffer 21 and the normal rendering result from the frame buffer 22, and based on these, render and obtain the shadow information (that is, the shadow rendering result) through the ray tracing algorithm.
- the GPU can store the shadow rendering results in a frame buffer 23 in memory.
- the shadow rendering process corresponding to this ray tracing algorithm can be executed in the forward rendering pipeline.
- In the forward rendering pipeline, the geometric information of objects in the scene is obtained by drawing each object in the scene individually. In actual implementations, to balance the overhead of the rendering process, the number of draw calls for each object needs to be minimized. Therefore, the geometric information obtained during the rendering of each object is very limited.
- the rendering process of shadows (such as the acquisition of shadow rendering results, and the optimization of noise reduction for shadow rendering results, etc.) depends on the geometric information of each object. Then, the limited geometric information will cause the quality of shadow rendering to decrease.
- ray tracing can also be implemented through the deferred rendering (Deferred Rendering) mechanism. That is, the shadow rendering process is performed on the deferred rendering pipeline.
- the geometric information of the object can be processed first, and then the shadow calculation for the pixels covered by each light source is performed based on the geometric information, thereby obtaining the shadow rendering results.
- the electronic device can obtain the geometric information of the object according to the solution shown in Figure 2 and Figure 3.
- geometric information may include depth information and normal information.
- the depth information can be obtained from the depth rendering result
- the normal information can be obtained from the normal rendering result.
- the GPU of the electronic device can perform the method shown in Figure 6, such as reading the depth rendering result from the frame buffer 21 set in the memory, and reading the normal rendering result from the frame buffer 22.
- the GPU can execute a ray tracing algorithm based on the depth rendering result and the normal rendering result, and obtain the shadow rendering result and store it in the frame buffer 23 of the memory.
- This ray tracing based on the deferred rendering pipeline can separate the object geometry data from the shadow calculation process, thereby obtaining richer object geometric information. This avoids the defects in shadow rendering results caused by the limited geometric information in the forward rendering pipeline.
- For the GPU (i.e., the computing body), this means it first needs to write the depth rendering results and normal rendering results to the memory, and then read them back from the memory.
- In addition, the GPU needs to write the calculated shadow rendering results into the memory.
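The savings can be sketched by counting per-frame DRAM transfers of the normal buffer in the baseline deferred path (written after the normal pass, read back before the shadow pass) against the on-chip variant, in which neither transfer occurs. The 1080p RGBA16F buffer size is an assumption for illustration only:

```python
# Per-frame DRAM transfers touching full-screen buffers in the two
# schemes. Depth and shadow buffers go through memory in both cases,
# so only the normal buffer differs. Buffer size is an assumption
# (1080p, RGBA16F: 4 channels x 2 bytes).
BUF = 1920 * 1080 * 8   # bytes per full-screen buffer

baseline = {"depth_write": BUF, "depth_read": BUF,
            "normal_write": BUF, "normal_read": BUF,
            "shadow_write": BUF}
onchip = {"depth_write": BUF, "depth_read": BUF,
          "shadow_write": BUF}   # normals never leave the GPU

saved = sum(baseline.values()) - sum(onchip.values())
print(f"keeping normals on-chip saves {saved / 2**20:.1f} MiB per frame")
```

Under these assumptions, eliminating the normal buffer's write and read-back removes roughly two full-screen buffers' worth of DRAM traffic per frame.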
- embodiments of the present application provide an image rendering method that can reduce data reading and writing between the GPU and the memory and improve the efficiency of shadow rendering during the process of shadow rendering based on the deferred rendering pipeline.
- the CPU can call the corresponding API in response to the rendering instruction 23 to instruct the GPU to perform shadow rendering.
- the GPU can read the already rendered normal rendering results from the new frame buffer G1 set in its on-chip storage space.
- the GPU does not need to obtain the normal rendering results through read/write interaction with the memory, thereby saving time and read/write bandwidth overhead. For example, shadow rendering may be performed on a newly created frame buffer G2 in the GPU's on-chip storage.
- the GPU can also read depth rendering results from the framebuffer 21 in memory.
- the GPU can render and obtain shadow rendering results based on the obtained depth rendering results and normal rendering results based on the ray tracing algorithm.
- the normal information, shadow information, and distance information included in the shadow rendering result can be stored in different channels of the texture included in a frame buffer in the memory. That is, the shadow rendering results can be saved to a single texture. This achieves the effect of streamlining the storage overhead of shadow rendering results.
- the normal rendering process may also be performed on the newly created frame buffer G1 of the GPU. Compared with the existing normal rendering process (shown in Figure 3), after normal rendering is completed, the normal rendering result can be stored directly in the new frame buffer G1 on the GPU without being written to the frame buffer 22 in memory. Therefore, the storage of normal rendering results also saves the latency and read/write bandwidth overhead between the GPU and memory.
- the image rendering method provided by the embodiment of the present application can be applied in the user's electronic device.
- the electronic device can be a portable mobile device such as a mobile phone, a tablet computer, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, a media player, etc.
- the electronic device may also be a wearable electronic device capable of providing display capabilities, such as a smart watch.
- the embodiments of the present application do not place any special restrictions on the specific form of the device.
- the electronic device may have different compositions.
- the electronic device involved in the embodiments of the present application may include a processor, an external memory interface, an internal memory, a universal serial bus (USB) interface, a charging management module, a power management module, a battery, antenna 1, antenna 2, a mobile communication module, a wireless communication module, an audio module, a speaker, a receiver, a microphone, a headphone interface, a sensor module, buttons, a motor, an indicator, a camera, a display, a subscriber identification module (SIM) card interface, etc.
- the sensor module can include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc.
- the processor may include multiple processors such as a CPU and a GPU.
- the GPU can be configured with on-chip storage space. During the operation of the GPU, data in its on-chip storage space can be accessed quickly.
- the frame buffer set in the GPU on-chip storage space can also be called TileBuffer.
- the above hardware composition does not constitute a specific limitation on electronic equipment.
- the electronic device may include more or fewer components, some components may be combined, some components may be separated, or different components may be arranged.
- the electronic device involved in the embodiments of the present application may also have software partitioning.
- Taking the Android operating system running in the electronic device as an example, the software of the electronic device may be divided into layers.
- FIG. 8 is a schematic diagram of the software composition of an electronic device provided by an embodiment of the present application.
- the electronic device may include an application (Application, APP) layer, a framework (Framework) layer, a system library, a hardware (HardWare) layer, etc.
- the application layer can also be called the application layer.
- the application layer may include a series of application packages.
- Application packages can include camera, gallery, calendar, calling, map, navigation, WLAN, Bluetooth, music, video, SMS and other applications.
- the application package may also include applications that need to display images or videos to users by rendering images.
- video can be understood as the continuous playback of multiple frames of images.
- the applications that need to render images may include game applications, etc.
- the framework layer can also be called the application framework layer.
- the framework layer can provide an application programming interface (API) and a programming framework for applications in the application layer.
- the framework layer includes some predefined functions.
- the framework layer may include a window manager, a content provider, a view system, a resource manager, a notification manager, an activity manager, an input manager, etc.
- the window manager provides window management service (Window Manager Service, WMS).
- WMS can be used for window management, window animation management, surface management, and as a transfer station for the input system.
- Content providers are used to store and retrieve data and make this data accessible to applications. This data can include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
- the view system includes visual controls, such as controls that display text, controls that display pictures, etc.
- a view system can be used to build applications.
- the display interface can be composed of one or more views.
- a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
- the resource manager provides various resources to applications, such as localized strings, icons, pictures, layout files, video files, etc.
- the notification manager allows applications to display notification information in the status bar, which can be used to convey notification-type messages and can automatically disappear after a short stay without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc.
- the notification manager can also display notifications in the status bar at the top of the system in the form of charts or scroll-bar text, such as notifications for applications running in the background, or notifications that appear on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a beep sounds, the electronic device vibrates, or the indicator light flashes.
- the Activity Manager can provide Activity Management Service (AMS), which can be used for the startup, switching, and scheduling of system components (such as activities, services, content providers, and broadcast receivers) as well as the management and scheduling of application processes.
- the input manager can provide input management service (Input Manager Service, IMS), and IMS can be used to manage system input, such as touch screen input, key input, sensor input, etc. IMS takes out events from the input device node and distributes the events to appropriate windows through interaction with WMS.
- one or more functional modules may be provided in the framework layer to implement the solution provided by the embodiment of the present application.
- the framework layer may be provided with a creation module, a processing module, a shadow rendering module, etc.
- the creation module can be used to create frame buffers in memory and GPU on-chip storage space. For example, create a framebuffer in memory to store depth rendering results. Another example is creating a TileBuffer on the GPU for normal rendering and shadow rendering.
- the processing module can be used to process the rendering commands issued by the application and call the corresponding API to instruct the GPU to perform rendering operations. For example, when the application issues a rendering command instructing depth rendering, the processing module can control the GPU to render the depth information of the current frame image, and store the depth rendering result in the memory. For another example, when the application issues a rendering command instructing normal rendering, the processing module can control the GPU to render the normal information of the current frame image, and store the normal rendering results in the TileBuffer of the GPU.
- the processing module can control the GPU to obtain the depth rendering result from the memory and the normal rendering result from the TileBuffer, so as to perform the rendering operation according to the ray tracing algorithm and obtain the corresponding shadow rendering result.
- the creation module and processing module can respond to the rendering commands issued by the application.
- an interception module may also be provided at the framework layer.
- the interception module can be used to receive the rendering commands issued by the application program and, according to the information indicated by each rendering command, send the command to the corresponding module for processing.
- the interception module may send a command indicating the creation of the frame buffer to the creation module for processing.
- the command for instructing to create a frame buffer may include: glCreateFrameBuffer function.
- the interception module may also send a command for instructing a rendering operation to the processing module for processing.
- the command for instructing to perform a rendering operation may include a command to instruct a rendering operation to perform depth information, a command to instruct a rendering operation to perform normal information, and a command to instruct a rendering operation to perform shadow rendering.
- the interception module can determine the content indicated by the rendering command based on the instructions carried in the rendering command.
- a command instructing to perform a rendering operation of depth information may include the keyword depthMap.
- a command instructing to perform a rendering operation of normal information may include the keyword Vertex or Vertex and Normal. It can be understood that the normal information can be included in the vertex (Vertex) information.
- the relevant data of the normal vector is included in the Vertex command and is identified by Normal.
- commands instructing shadow rendering may include the keyword shadow.
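The keyword-based classification described above can be sketched in Python as follows. This is an illustrative model only: the function name, the returned module labels, and the matching order are assumptions, not the framework layer's actual implementation; the keywords themselves (glCreateFrameBuffer, depthMap, Vertex/Normal, shadow) follow the description above.

```python
# Hypothetical sketch of the interception module's keyword-based dispatch.
# The keywords follow the patent text; everything else is invented for
# illustration.

def classify_command(command: str) -> str:
    """Return the module/path that should handle a rendering command."""
    if "glCreateFrameBuffer" in command:
        return "creation_module"           # frame buffer creation
    if "depthMap" in command:
        return "processing_module:depth"   # depth rendering
    if "Vertex" in command or "Normal" in command:
        return "processing_module:normal"  # normal (geometry) rendering
    if "shadow" in command.lower():
        return "processing_module:shadow"  # shadow rendering
    return "passthrough"                   # forward unchanged

print(classify_command("glCreateFrameBuffer(1)"))  # creation_module
```

A real interception layer would inspect the actual API call and its arguments rather than a string, but the dispatch logic has the same shape.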
- a shadow rendering module may also be provided in the framework layer.
- the shadow rendering module can instruct the GPU to render the shadow information after the GPU completes the rendering of normal information and obtains the normal rendering results.
- the rendering pipeline of normal information can be set in the TileBuffer of the GPU.
- the rendering pipeline that performs normal information may be based on the SubPass system.
- the SubPass system, as a rendering pipeline mechanism provided by most current rendering platforms, differs from the traditional MultiPass system in that the next SubPass can directly obtain the rendering result of the current SubPass during its execution.
- in the MultiPass system, by contrast, the rendering results of the current pipeline need to be stored in the memory, and the next pipeline needs to read them from the memory to obtain the rendering results of the current pipeline.
- the shadow rendering pipeline executed by the GPU instructed by the shadow rendering module can also be based on the SubPass system.
- the shadow rendering pipeline may be set in the TileBuffer of the GPU, and the shadow rendering pipeline may be correspondingly indicated by the next rendering command of the normal rendering pipeline.
- Drawcall A instructs the GPU to perform SubPass-based normal rendering.
- the next Drawcall B can be a rendering command issued by the shadow rendering module, instructing the GPU to perform SubPass-based shadow rendering.
- since Drawcall B is also based on the SubPass system, when the GPU performs the shadow rendering instructed by Drawcall B, it can directly obtain the rendering result of the previous SubPass (that is, of the rendering operation corresponding to Drawcall A, the normal rendering pipeline), namely the normal rendering result. In this way, the GPU can obtain the normal rendering result without read/write interaction with the memory. In addition, the GPU can read the depth rendering result from the memory according to Drawcall B, and then perform the rendering operation according to the ray tracing algorithm to obtain the shadow rendering result of the current frame image.
- the shadow rendering module can directly instruct the GPU to perform shadow rendering after the normal rendering operation is completed. That is to say, in this example, the electronic device can complete shadow rendering by itself without receiving a rendering instruction issued by an application program for instructing shadow rendering. After the application issues the rendering instruction for instructing shadow rendering, the electronic device can directly call back the shadow rendering result executed by the GPU and feed it back to the application.
- the application program may also sequentially perform SubPass-based shadow rendering operations after instructing the electronic device to perform normal rendering. Then, the GPU of the electronic device can also directly obtain the rendering result of the previous SubPass during the shadow rendering process, that is, directly obtain the normal rendering result. In this example, the shadow rendering module can no longer be provided in the electronic device.
- the rendering command issuance mechanism of different applications can achieve the effect of directly obtaining the normal rendering results through the shadow rendering process of SubPass. This can save the data reading and writing overhead between the GPU and memory when obtaining normal rendering results during the shadow rendering process.
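The read/write saving described above can be illustrated with a toy accounting model. This is not GPU code; the class and counter names are invented for illustration. It simply counts transactions in a MultiPass-style flow (the normal rendering result round-trips through memory) versus a SubPass-style flow (the normal result stays in the on-chip TileBuffer, and only the depth result is read from memory):

```python
# Illustrative model (not real GPU code) of the memory traffic saved by the
# SubPass mechanism for the shadow rendering pass described above.

class MemoryTrafficModel:
    def __init__(self):
        self.memory_ops = 0  # reads/writes against system memory
        self.tile_ops = 0    # accesses to the GPU on-chip TileBuffer

    def multipass_shadow(self):
        self.memory_ops += 1  # normal pass writes its result to memory
        self.memory_ops += 1  # shadow pass reads the normal result back
        self.memory_ops += 1  # shadow pass reads the depth result

    def subpass_shadow(self):
        self.tile_ops += 1    # shadow SubPass reads the normal result on-chip
        self.memory_ops += 1  # only the depth result is read from memory

multi, sub = MemoryTrafficModel(), MemoryTrafficModel()
multi.multipass_shadow()
sub.subpass_shadow()
print(multi.memory_ops, sub.memory_ops)  # 3 1
```

The point of the sketch is the single remaining memory read in the SubPass flow, matching the "one data read interaction" claim made later in the text.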
- the electronic device may also be provided with a system library including a graphics library.
- the graphics library may include at least one of the following: Open Graphics Library (Open GL), Open GL for Embedded Systems (OpenGL ES), Vulkan, etc.
- other modules may also be included in the system library. For example: surface manager (surface manager), media framework (Media Framework), standard C library (Standard C library, libc), SQLite, Webkit, etc.
- the surface manager is used to manage the display subsystem and provides the integration of two-dimensional (2D) and three-dimensional (3D) layers for multiple applications.
- the media framework supports playback and recording of a variety of commonly used audio and video formats, as well as static image files, etc.
- the media library can support a variety of audio and video encoding formats, such as: Moving Pictures Experts Group 4 (MPEG4), H.264, Moving Pictures Experts Group Audio Layer 3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR), Joint Photographic Experts Group (JPEG, or JPG), Portable Network Graphics (PNG), etc.
- OpenGL ES and/or Vulkan provide the drawing and manipulation of 2D graphics and 3D graphics in applications. SQLite provides a lightweight relational database for electronic device 400 applications.
- each module in the framework layer can call the corresponding API in the graphics library to instruct the GPU to perform the corresponding rendering operation.
- a hardware layer may also be included in the electronic device.
- This hardware layer can include CPU, GPU, and memory with storage function (such as memory).
- the CPU can be used to control each module in the framework layer to implement their respective functions.
- the GPU can be used to execute the corresponding rendering processing through the API in the graphics library (such as OpenGL ES) called according to the instructions processed by each module in the framework layer.
- the solutions provided by the embodiments of this application can be applied to the electronic device as shown in FIG. 8 . It should be noted that the example in Figure 8 does not constitute a restriction on electronic equipment. In other embodiments, the electronic device may include more or fewer components. The embodiments of this application do not limit the specific composition of the electronic device.
- take as an example that the application program is a game application (corresponding to the first application program), the game application issues a rendering instruction stream to instruct the electronic device to render the first frame image (or the Nth frame image), and the first frame image may include a shadow area.
- the shadow rendering scheme involved in the scheme provided by the embodiment of the present application will be described in detail.
- the shadow rendering result obtained through the shadow rendering scheme provided by the embodiment of the present application can be used to display the shadow area in the first frame image.
- FIG. 9 is a schematic diagram of module interaction of an image rendering method provided by an embodiment of the present application.
- the solution shown in Figure 9 can be used to create a frame buffer.
- the rendering command 901 can be issued.
- the rendering command 901 may include at least one glCreateFrameBuffer function, which is used to instruct the electronic device to create a frame buffer required for subsequent image rendering.
- the rendering command 901 may include a third rendering instruction for instructing to create a first frame buffer in the memory of the electronic device.
- the first frame buffer may be used to store the depth rendering result.
- the first frame buffer may correspond to the frame buffer 93 in the subsequent description.
- the rendering command 901 may include a fourth rendering instruction for instructing to create a second frame buffer on the on-chip storage space of the GPU of the electronic device.
- the second frame buffer may be used to store normal rendering results.
- the second frame buffer may correspond to the frame buffer 91 in the subsequent description.
- the interception module provided in the frame layer of the electronic device can intercept the rendering command 901, and determine according to the glCreateFrameBuffer function carried therein that the rendering command 901 is used to instruct the electronic device to create a frame buffer. Then, the interception module can transmit the rendering command 901 to the creation module for subsequent processing.
- the creation module may create a corresponding frame buffer in response to the rendering command 901.
- the creation module can create multiple frame buffers simultaneously or in batches.
- these frame buffers may include frame buffers set in the memory and frame buffers set in the GPU on-chip storage space, that is, the TileBuffer in the aforementioned description.
- the creation module may create a frame buffer 91 on the GPU's cache.
- the creation module can also create a frame buffer 92 on the GPU's on-chip cache.
- the frame buffer 91 and the frame buffer 92 can be caches set on the GPU chip, and are used to perform normal rendering and shadow rendering in the subsequent rendering process.
- the frame buffer 91 used for normal rendering may also be called G-Buffer.
- the creation module can also create framebuffer 93 and framebuffer 94 in memory.
- the frame buffer set in memory can be used for rendering storage of depth information. For example, depth rendering can be performed on the frame buffer 93.
- the GPU can also store the shadow rendering results in the memory.
- the frame buffer 94 may include a texture map, which may be in RGBA16F format and used to store shadow rendering results in different channels.
- the corresponding created frame buffer can be called through the IDs of the above frame buffers 91 to 94 to perform corresponding rendering operations.
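The frame buffer layout established in this section can be summarized with a small sketch; the dictionary structure and field names are assumptions for illustration, while the buffer numbers 91 to 94 and their roles follow the description above:

```python
# Hypothetical summary of the frame buffers the creation module sets up:
# 91/92 live in the GPU on-chip TileBuffer, 93/94 in system memory.

def create_frame_buffers():
    return {
        91: {"location": "tile",   "use": "normal rendering (G-Buffer)"},
        92: {"location": "tile",   "use": "shadow rendering"},
        93: {"location": "memory", "use": "depth rendering result"},
        94: {"location": "memory", "use": "shadow rendering result (RGBA16F)"},
    }

buffers = create_frame_buffers()
print(sorted(b for b, v in buffers.items() if v["location"] == "memory"))  # [93, 94]
```

Later rendering commands refer to these buffers by ID, which is why the creation module forwards the IDs to the processing module.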
- the game application can instruct the electronic device to render the depth information in the Nth frame image by issuing a rendering command.
- FIG. 10 it is a module interaction diagram of yet another image rendering method provided by an embodiment of the present application.
- the solution shown in Figure 10 can be used to render depth information, that is, perform depth rendering to obtain depth information (or depth rendering result).
- the game application when it needs to render the Nth frame image, it can send a rendering command 902 (corresponding to the first rendering command) to the electronic device.
- the rendering command 902 may include the keyword depthMap, which is used to instruct the electronic device to render the depth information of the current frame image (ie, the Nth frame image).
- the rendering command 902 may also be called the first rendering command.
- the interception module may determine based on the carried keyword depthMap that the rendering command 902 is used to instruct rendering of depth information. Then, the interception module can transmit the rendering command 902 to the processing module for subsequent processing.
- the processing module may instruct the GPU to render depth information according to the rendering command 902. It should be understood that, in conjunction with the foregoing description of the system library, in some embodiments, the processing module can instruct the GPU to render the depth information of the Nth frame image by calling the API corresponding to the rendering command 902 in the system library.
- the processing module when the processing module instructs the GPU to perform depth rendering, it can also instruct the GPU to store the depth rendering result in the memory.
- the frame buffer ID of the frame buffer 93 may be carried, so that the GPU can store the depth information obtained by rendering in the frame buffer 93 .
- the frame buffer ID of the frame buffer 93 may be sent to the processing module after the creation module completes creating the frame buffer.
- the GPU can perform a depth rendering operation and store the obtained depth rendering results in the frame buffer 93 in the memory.
- the game application can also instruct the electronic device to render the normal information in the Nth frame image by issuing a rendering command.
- FIG. 11 is a module interaction diagram of yet another image rendering method provided by an embodiment of the present application.
- the solution shown in Figure 11 can be used to render normal information, that is, perform normal rendering to obtain normal information (or normal rendering results).
- the game application when it needs to render the Nth frame image, it can send a rendering command 903 (corresponding to the second rendering command) to the electronic device.
- the rendering command 903 may include the keyword Vertex, which is used to instruct the electronic device to render geometric information, including normal information, of the current frame image (ie, the Nth frame image).
- the rendering command 903 may also be called the second rendering command.
- normal information can be rendered and obtained together with other geometric information of the model.
- the rendering process of this geometric information can be performed based on the vertex data (Vertex) issued by the game application.
- the interception module can determine based on the carried keyword Vertex that the rendering command 903 is used to instruct rendering of normal information. Then, the interception module can transmit the rendering command 903 to the processing module for subsequent processing.
- the processing module may instruct the GPU to render the normal information according to the rendering command 903. It should be understood that, in conjunction with the foregoing description of the system library, in some embodiments, the processing module can instruct the GPU to render the normal information of the Nth frame image by calling the API corresponding to the rendering command 903 in the system library.
- the normal rendering pipeline can be set on the TileBuffer of the GPU.
- the normal rendering pipeline executed on the TileBuffer may be indicated based on the second rendering instruction.
- the sixth rendering instruction generated according to the second rendering instruction received by the GPU may be the same as or similar to the second rendering instruction.
- the second rendering instruction may instruct the normal rendering pipeline to be executed on memory.
- the processing module can generate a sixth rendering instruction according to the second rendering instruction, and the sixth rendering instruction can instruct the GPU to perform the operation of the normal rendering pipeline on the TileBuffer.
- the rendering pipeline that performs normal rendering operations on the TileBuffer can also be called the first SubPass.
- the processing module may also instruct the normal rendering process to be bound to the frame buffer 91 .
- the frame buffer ID of frame buffer 91 can be sent to the processing module by the creation module after it completes creating the frame buffer. Then, the GPU can execute the normal rendering pipeline on the frame buffer 91.
- the normal rendering pipeline that the processing module instructs the GPU to execute may be a SubPass-based rendering pipeline. So that the subsequent SubPass pipeline can directly obtain the normal rendering results.
- the obtained normal rendering results may be temporarily stored on the frame buffer 91 .
- the subsequent SubPass pipeline can read data from the GPU on-chip cache (ie, frame buffer 91), and quickly obtain the normal rendering results.
- the normal rendering result can be stored in the frame buffer 91.
- the GPU can execute a callback to the upper layer so that the upper layer knows that the normal rendering has been completed. For example, after completing the normal rendering operation and obtaining the normal rendering result, the GPU may call back a message (such as the first message) indicating that the normal rendering has been completed to the processing module. In this way, each module in the framework layer of the electronic device can know the current rendering progress.
- the normal rendering pipeline executed on the frame buffer 91 may be based on SubPass.
- the electronic device can control the GPU to execute the SubPass-based shadow rendering after knowing that the normal rendering has been completed, so that the SubPass-based shadow rendering pipeline can directly and quickly obtain the normal rendering result.
- the electronic device can complete SubPass-G rendering and obtain the normal rendering result according to the scheme shown in Figure 11.
- the electronic device can then control the GPU to perform shadow rendering operations.
- the electronic device may instruct the GPU to perform a rendering operation of SubPass-Shadow after determining that the GPU has completed rendering of SubPass-G.
- since SubPass-Shadow is a SubPass executed sequentially after SubPass-G is completed, when the GPU performs shadow rendering in SubPass-Shadow, it can directly obtain the rendering result of SubPass-G, that is, the normal rendering result.
- this example uses the characteristics of SubPass to save the reading and writing overhead of the GPU reading the normal rendering results from the memory.
- the electronic device can spontaneously perform shadow rendering after the GPU completes normal rendering; or, the electronic device can perform shadow rendering according to the rendering command issued by the game application after the GPU completes normal rendering.
- FIG. 12 is a module interaction diagram of yet another image rendering method provided by an embodiment of the present application.
- the solution shown in Figure 12 can be used for shadow rendering.
- the electronic device spontaneously performs shadow rendering after the GPU completes normal rendering.
- after the GPU completes normal rendering, the processing module can inform the shadow rendering module that the current rendering progress is: normal rendering completed.
- the processing module may determine that the current GPU has completed normal rendering according to the completed normal rendering message called back by the GPU.
- the shadow rendering module can issue shadow rendering instructions to the GPU.
- This command can bind the TileBuffer of the GPU so that the GPU can perform the shadow rendering operation in the TileBuffer.
- the instruction for performing shadow rendering can be bound to the frame buffer 92 so that the GPU runs the shadow rendering pipeline on the frame buffer 92 and performs the shadow rendering operation.
- the instruction for performing shadow rendering may also carry a frame buffer ID that stores depth rendering results, a frame buffer ID that stores normal rendering results, and a frame buffer ID that stores shadow rendering results. These frame buffer IDs may be obtained from the creation module via the processing module, or these frame buffer IDs may be obtained directly from the creation module by the shadow rendering module.
- the instruction to perform shadow rendering may also instruct the GPU, and the shadow rendering pipeline may be based on the SubPass system.
- the shadow rendering module may deliver the frame buffer ID of frame buffer 91 , the frame buffer ID of frame buffer 93 , and the frame buffer ID of frame buffer 92 to the GPU. This is to facilitate the GPU to obtain the input data required for the shadow rendering process from the frame buffer 91 and the frame buffer 93 .
- the GPU may run a SubPass-based shadow rendering pipeline.
- the GPU can obtain normal rendering results, read depth rendering results, and perform shadow rendering operations.
- the GPU can obtain the normal rendering result from the frame buffer 91 .
- the shadow rendering pipeline (such as SubPass-Shadow) can be SubPass after SubPass-G, so the normal rendering results can be obtained directly.
- since SubPass-G is executed on the frame buffer 91, it can also be considered that SubPass-Shadow obtains the normal rendering result from the frame buffer 91.
- the GPU can read depth rendering results from the framebuffer 93 in memory. This allows the GPU to perform shadow rendering operations in SubPass-Shadow on framebuffer 92.
- the shadow rendering operation performed in SubPass-Shadow may be performed according to a ray tracing algorithm preset in the electronic device.
- the GPU only needs one data read interaction with the memory to perform shadow rendering operations.
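The embodiments do not fix a particular ray tracing algorithm. As a hedged illustration of the per-pixel occlusion test that a ray-traced shadow pass performs, the following sketch casts a single shadow ray from a surface point toward the light and checks it against one sphere occluder; the geometry, function name, and the choice of a sphere are all assumptions for illustration:

```python
# Minimal illustrative shadow-ray test; not the patent's actual algorithm.
import math

def shadow_ray_occluded(point, light, center, radius):
    """True if the segment from `point` to `light` hits the sphere occluder."""
    d = [l - p for p, l in zip(point, light)]
    length = math.sqrt(sum(c * c for c in d))
    d = [c / length for c in d]                      # normalized ray direction
    oc = [p - c for p, c in zip(point, center)]
    b = sum(o * di for o, di in zip(oc, d))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c                                 # ray/sphere discriminant
    if disc < 0:
        return False                                 # ray misses the sphere
    t = -b - math.sqrt(disc)                         # nearest intersection
    return 0.0 < t < length                          # hit between point and light

# Occluder sits between the surface point and the light -> in shadow.
print(shadow_ray_occluded((0, 0, 0), (0, 10, 0), (0, 5, 0), 1.0))  # True
```

In the pipeline described above, `point` would be reconstructed from the depth rendering result read from memory, and the surface normal (from the TileBuffer) would be used to offset the ray origin and attenuate lighting.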
- the GPU can store the shadow rendering results in memory after completing the shadow rendering operation for call by other pipelines.
- in order to obtain a better shadow effect, the electronic device can perform a noise reduction (denoising) operation on the shadow rendering results.
- the GPU can store the shadow rendering result in the frame buffer 94 of the memory after completing the shadow rendering operation.
- the shadow rendering result may include normal information of the shadow, shadow information of each pixel (ShadowMask), distance information of the shadow (Distance), etc.
- the normal information may include normal information in both x and y directions. That is, the normal information may include two parts: normal information (x) and normal information (y).
- the normal information (x) may also be called Normal(x), and the normal information (y) may also be called Normal(y).
- the pre-formatted texture (such as the RGBA16F texture on the frame buffer 94) can include at least 4 channels. Two of the channels can be used to store normal information, another channel can be used to store shadow information, and the remaining channel can be used to store distance information.
- the shadow rendering pipeline can output the shadow rendering result to the RGBA16F format map on the frame buffer 94 .
- the normal information (x) (ie, Normal(x)) can be output and stored in the R channel of the RGBA16F format on the frame buffer 94; the normal information (y) (ie, Normal(y)) can be output and stored in the G channel of the RGBA16F format on the frame buffer 94; the shadow information (ShadowMask) can be output and stored in the B channel; and the distance information (Distance) can be output and stored in the A channel.
- the normal information (x) may also be called the first normal information.
- the normal information (y) may also be called second normal information.
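The channel assignment described above (R = Normal(x), G = Normal(y), B = ShadowMask, A = Distance) can be sketched with half-precision packing; Python's struct `'e'` (IEEE 754 half float) format stands in for one RGBA16F texel, and the function names are assumptions for illustration:

```python
# Sketch of packing one pixel of the shadow rendering result into an
# RGBA16F texel: four 16-bit float channels, 8 bytes per pixel.
import struct

def pack_rgba16f(normal_x, normal_y, shadow_mask, distance):
    # R, G, B, A in order, each a little-endian half-precision float.
    return struct.pack("<4e", normal_x, normal_y, shadow_mask, distance)

def unpack_rgba16f(texel):
    return struct.unpack("<4e", texel)

texel = pack_rgba16f(0.5, -0.25, 1.0, 12.0)
print(len(texel))             # 8 bytes: 4 channels x 16 bits
print(unpack_rgba16f(texel))  # (0.5, -0.25, 1.0, 12.0)
```

Packing all four outputs into one texture is what lets later pipelines (e.g. denoising) fetch the whole shadow rendering result with a single texture read.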
- the solution provided in this example can not only save memory storage overhead, but also make it more convenient for other pipelines to call the shadow rendering results.
- as in the above example, the electronic device can trigger shadow rendering by itself after the GPU completes normal rendering, and store the shadow rendering result in the memory. Then, the game application may still instruct the electronic device to perform the shadow rendering operation in its subsequent rendering command stream.
- the game application may issue a rendering command 904 (corresponding to the fifth rendering command) to instruct the electronic device to perform shadow rendering of the current frame image.
- the rendering command 904 may include the keyword Shadow.
- the interception module can intercept the rendering command 904 according to the keyword Shadow, and send the rendering command 904 to the processing module. After receiving the rendering command 904, the processing module can call back the frame buffer ID of the frame buffer 94 to the game application.
- the shadow rendering result may already be stored in the frame buffer 94 .
- the processing module can directly call back the frame buffer ID of the frame buffer 94 storing the shadow rendering result to the game application, so that the game application can know and use the shadow rendering result.
- the rendering command 904 may also be called the fifth rendering command.
- in the above example, the electronic device spontaneously performs shadow rendering after the GPU completes normal rendering.
- the shadow rendering process may also be performed under the instructions of the game application.
- the internal mechanism is similar to the logic shown in Figure 12: after instructing the electronic device to perform SubPass-based normal rendering, the game application can issue a rendering instruction to instruct the electronic device to continue performing the SubPass-based shadow rendering operation.
- when the electronic device uses the SubPass-Shadow pipeline for shadow rendering, it can also directly obtain the rendering result of the previous SubPass (that is, the normal rendering result of SubPass-G), thereby achieving an effect similar to the solution shown in Figure 12.
- FIG. 14 is a module interaction diagram of yet another image rendering method provided by an embodiment of the present application.
- the solution shown in Figure 14 can be used for shadow rendering.
- the electronic device executes the SubPass-based normal rendering instruction and the SubPass-based shadow rendering instruction according to the sequence of rendering commands issued by the game application.
- the game application can issue a rendering command 904 to instruct the electronic device to perform shadow rendering of the current frame image.
- the rendering command 904 may include the keyword Shadow.
- the interception module can intercept the rendering command 904 according to the keyword Shadow, and send the rendering command 904 to the processing module.
- the processing module may instruct the GPU to perform shadow rendering according to the rendering command 904.
- the instruction to instruct the GPU to perform shadow rendering may indicate that the shadow rendering pipeline may be based on the SubPass system.
- the instruction instructing the GPU to perform shadow rendering may also include the frame buffer ID of the frame buffer 91 , the frame buffer ID of the frame buffer 93 , and the frame buffer ID of the frame buffer 92 .
- the instruction instructing the GPU to perform shadow rendering may also include a frame buffer ID of the frame buffer 92 used to perform shadow rendering, and a frame buffer ID of the frame buffer 94 used to store the shadow rendering results.
- the GPU can obtain the normal rendering result from the frame buffer 91 and read the depth rendering result from the frame buffer 93 .
- the GPU can run the shadow rendering pipeline on the frame buffer 92 according to the ray tracing algorithm, and store the obtained shadow rendering results in the frame buffer 94.
- the storage mechanism of the shadow rendering results on the frame buffer 94 can refer to the example in FIG. 13 and will not be described again here.
- the electronic device can perform SubPass-based normal rendering in the TileBuffer of the GPU. There is no need to store the normal rendering results in memory, thus saving the reading and writing overhead of the process.
- the electronic device can also perform SubPass-based shadow rendering in the GPU's TileBuffer, thereby directly obtaining the normal rendering results without reading them from memory, thus saving the reading and writing overhead of the process.
- the electronic device can also store all shadow rendering results in a pre-formatted texture in the memory, thereby saving memory storage overhead.
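The bandwidth saving claimed above can be illustrated with a toy accounting model. This is a sketch of my own, not from the patent: it assumes a hypothetical 1080p buffer with 8 bytes per pixel and simply counts the off-chip traffic that disappears when the normal result stays in the TileBuffer instead of round-tripping through memory.

```python
# Illustrative sketch (not from the patent): toy accounting of GPU<->memory
# traffic for the deferred shadow flow, comparing a MultiPass-style flow
# (normal result written to and read back from memory) with the SubPass/
# TileBuffer flow described above (normal result kept on-chip).
# All sizes are hypothetical: a 1080p buffer with 8 bytes per pixel.

def traffic_bytes(width, height, bytes_per_pixel, on_chip_normals):
    """Return total off-chip bytes moved for the normal + shadow passes."""
    plane = width * height * bytes_per_pixel
    depth_write = plane          # depth result -> memory (both flows)
    depth_read = plane           # shadow pass reads depth (both flows)
    shadow_write = plane         # shadow result -> memory (both flows)
    if on_chip_normals:
        normal_traffic = 0       # normals stay in the TileBuffer
    else:
        normal_traffic = 2 * plane  # write normals out, then read them back
    return depth_write + depth_read + shadow_write + normal_traffic

multipass = traffic_bytes(1920, 1080, 8, on_chip_normals=False)
subpass = traffic_bytes(1920, 1080, 8, on_chip_normals=True)
print(multipass - subpass)  # bytes saved: two full planes per frame
```

Under these made-up numbers, keeping normals on-chip removes one full write and one full read of the normal plane per frame; the actual saving depends on the real formats and resolution.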
- Figures 9 to 14 illustrate the rendering method provided by the embodiment of the present application from the perspective of interaction between modules.
- the solution provided by the embodiment of the present application will be described below with reference to the module interaction flow chart shown in Figure 15 .
- the process may include:
- the rendering command 901 may include at least one glCreateFrameBuffer function, which is used to instruct the electronic device to create the frame buffers required for subsequent image rendering.
- the interception module intercepts the rendering command 901 and determines that the rendering command 901 instructs frame buffer creation.
- the interception module may determine, according to the glCreateFrameBuffer function included in the rendering command 901, that the rendering command 901 instructs frame buffer creation.
- the interception module sends the rendering command 901 to the creation module.
- the creation module creates frame buffer 91 and frame buffer 92 on the cache of the GPU.
- the frame buffer 91 and the frame buffer 92 can be TileBuffers in the GPU on-chip storage space.
- the frame buffer 91 can be used for normal rendering
- the frame buffer 92 can be used for shadow rendering.
- the creation module creates frame buffer 93 and frame buffer 94 in the memory.
- the frame buffer 93 and the frame buffer 94 can be frame buffers in memory.
- the frame buffer 93 can be used for depth rendering
- the frame buffer 94 can be used for storing shadow rendering results.
- the execution order of S1504-S1505 and S1506-S1507 is not limited. For example, in some embodiments, S1504-S1505 may be performed earlier than S1506-S1507. In other embodiments, S1504-S1505 may be executed later than S1506-S1507. In some embodiments, S1504-S1505 may be performed simultaneously with S1506-S1507.
- the creation module sends the frame buffer ID of the newly created frame buffer to the processing module.
- the newly created frame buffer may include frame buffer 91 - frame buffer 94 .
- the frame buffer ID of the newly created frame buffer may include the frame buffer IDs of frame buffer 91 to frame buffer 94 .
- in this way, the creation of the frame buffers can be completed, so that they can be called at any time during the subsequent rendering process of frame images.
- the game application issues a rendering command 902.
- the rendering command 902 may include the keyword depthMap, which is used to instruct the electronic device to render the depth information of the current frame image (such as the Nth frame image).
- the interception module intercepts the rendering command 902 and determines that the rendering command 902 indicates depth rendering.
- the interception module can determine that the rendering command 902 indicates depth rendering according to the keyword depthMap.
- the interception module sends the rendering command 902 to the processing module.
- the processing module sends depth rendering instructions to the GPU.
- the processing module may generate the depth rendering instruction according to the rendering command 902.
- the depth rendering instruction may include the frame buffer ID of the frame buffer 93 to instruct the GPU to store the depth rendering result in the frame buffer 93 .
- the specific implementation of the depth rendering instruction may be that the processing module calls the API in the graphics library through the depth rendering instruction to instruct the GPU to perform the corresponding depth rendering operation.
- the GPU performs a depth rendering operation according to the depth rendering instruction.
- the GPU sends the depth rendering result to the memory.
- the memory stores the depth rendering result in the frame buffer 93.
- the rendering command 903 may include the keyword Vertex, which is used to instruct the electronic device to render geometric information, including depth information, of the current frame image (i.e., the Nth frame image).
- the interception module intercepts the rendering command 903 and determines that the rendering command 903 instructs normal rendering.
- the interception module can determine that the rendering command 903 indicates normal rendering according to the keyword Vertex.
- the interception module sends the rendering command 903 to the processing module.
- the processing module sends a normal rendering instruction to the GPU.
- the processing module may generate the normal rendering instruction according to the rendering command 903.
- the normal rendering instruction may include the frame buffer ID of the frame buffer 91 to instruct the GPU to perform normal rendering on the frame buffer 91 .
- the normal rendering instruction may also include a first identifier for instructing the GPU to perform a SubPass-based rendering operation.
- the specific implementation of the normal rendering instruction can be that the processing module calls the API corresponding to SubPass in the graphics library through the normal rendering instruction and, based on the Vertex data carried in the rendering command 903, instructs the GPU to perform the corresponding geometry rendering operation, including normals.
- the GPU performs normal rendering operations according to the normal rendering instructions.
- the GPU can run SubPass-G on the frame buffer 91 according to the normal rendering instruction, so as to perform the normal rendering operation.
- the normal rendering result can be obtained on the frame buffer 91.
- the process from completing the normal rendering to obtaining the normal rendering result in the on-chip cache can be as shown in S1521-S1522.
- the GPU sends the normal rendering result to the GPU on-chip cache.
- the GPU can also provide feedback that the normal rendering is complete after completing the normal rendering operation. For example, as shown in S1523.
- the GPU sends a normal rendering completion instruction to the processing module.
- the processing module sends a normal rendering completion instruction to the shadow rendering module.
- the GPU can directly feed back the normal rendering completion indication to the shadow rendering module, so as to trigger the shadow rendering module to control the GPU to perform shadow rendering on its own.
- the shadow rendering module generates shadow rendering instructions.
- the shadow rendering instruction can be used to instruct the GPU to perform shadow rendering.
- the shadow rendering instruction may also carry a first identifier for instructing the GPU to perform a SubPass-based rendering operation.
- the specific implementation of the shadow rendering instruction may be that the shadow rendering module calls the API corresponding to SubPass in the graphics library through the shadow rendering instruction to instruct the GPU to perform the corresponding shadow rendering operation.
- the shadow rendering module sends a shadow rendering instruction to the GPU.
- the GPU obtains the normal rendering results from the GPU on-chip cache.
- the GPU can obtain the normal rendering result from the frame buffer 91 . It can be understood that since the shadow rendering pipeline (such as SubPass-Shadow) is a SubPass after SubPass-G, the rendering result of SubPass-G, that is, the normal rendering result, can be directly obtained.
- the GPU reads the depth rendering result from the memory.
- the GPU may read the depth rendering results from the frame buffer 93 .
- the GPU performs a shadow rendering operation.
- the GPU can calculate and obtain shadow rendering results based on the preset ray tracing algorithm and the obtained normal rendering results and depth rendering results.
- the GPU sends the shadow rendering results to the memory.
- the method of storing the shadow rendering results on the frame buffer 94 can refer to the solution shown in FIG. 13 , which will not be described again here.
- the electronic device can directly call back the shadow rendering results to the game application.
- the game application can issue a rendering command 904 to instruct the electronic device to perform shadow rendering of the current frame image.
- the rendering command 904 may include the keyword Shadow.
- the interception module may determine according to the keyword Shadow that the rendering command 904 indicates shadow rendering.
- the interception module may send the rendering command 904 to the processing module (such as executing S1534).
- the processing module can directly send the frame buffer ID of the frame buffer 94 to the game application (such as executing S1535), so that the game application can directly obtain the shadow rendering result in the frame buffer 94.
- the electronic device may also perform shadow rendering of SubPass-Shadow after performing normal rendering of SubPass-G according to the rendering command issued by the game application.
- Figure 16 shows a schematic diagram of the composition of an electronic device 1600.
- the electronic device 1600 may include: a processor 1601 and a memory 1602.
- the memory 1602 is used to store computer execution instructions.
- the processor 1601 executes instructions stored in the memory 1602
- the electronic device 1600 can be caused to execute the image rendering method shown in any of the above embodiments.
- FIG 17 shows a schematic diagram of the composition of a chip system 1700.
- the chip system 1700 may include: a processor 1701 and a communication interface 1702, used to support related devices to implement the functions involved in the above embodiments.
- the chip system also includes a memory for saving necessary program instructions and data for the terminal.
- the chip system may be composed of chips, or may include chips and other discrete devices.
- the communication interface 1702 may also be called an interface circuit.
- the functions, actions, operations, steps, etc. in the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
- when implemented using a software program, it may be implemented in whole or in part in the form of a computer program product.
- the computer program product includes one or more computer instructions. When computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are generated in whole or in part.
- the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
- the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center through wired (such as coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (such as infrared, radio, microwave) means.
- the computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media.
- the available media may be magnetic media (eg, floppy disk, hard disk, magnetic tape), optical media (eg, DVD), or semiconductor media (eg, solid state disk (SSD)), etc.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Image Generation (AREA)
Abstract
The embodiments of this application disclose an image rendering method and an electronic device, relating to the field of image processing. Through on-chip storage, shadow rendering is executed efficiently, avoiding large amounts of data reading and writing between the GPU and the memory during shadow rendering, thereby reducing the data read/write pressure between the GPU and the memory. The specific solution is: a depth rendering result of a first frame image is obtained by rendering according to a first rendering instruction, the depth rendering result being stored in the memory of the electronic device; a normal rendering result of the first frame image is obtained by rendering according to a second rendering instruction, the normal rendering result being stored in the on-chip storage area of the graphics processing module of the electronic device; and a shadow rendering result matching the shadow area is obtained according to the depth rendering result and the normal rendering result. The shadow area is the area of the first frame image displayed with a shadow effect, and a shadow rendering result matching the shadow area means that the shadow area can be displayed according to the shadow rendering result.
Description
This application claims priority to the Chinese patent application No. 202210929017.X, entitled "Image Rendering Method and Electronic Device", filed with the China National Intellectual Property Administration on August 3, 2022, the entire contents of which are incorporated herein by reference.
The embodiments of this application relate to the field of image processing, and in particular to an image rendering method and an electronic device.
To provide users with a more realistic display effect, some images displayed by an electronic device may include shadow effects. A shadow effect may be displayed based on a shadow rendering result obtained through shadow rendering.
In current rendering mechanisms, during shadow rendering the graphics processor needs to read depth information, normal information, and the like from the memory as input. As the frame rate and image quality of displayed images keep increasing, the amount of data, such as normal information, that the GPU needs to read from the memory also rises sharply. This places high demands on the read/write bandwidth between the GPU and the memory.
If the read/write bandwidth cannot meet the demand, shadow rendering may be delayed, and the shadow effect in the image may be displayed poorly.
Summary of the Invention
The embodiments of this application provide an image rendering method and an electronic device, which use on-chip storage to execute shadow rendering efficiently, avoiding large amounts of data reading and writing between the GPU and the memory during shadow rendering and thereby reducing the data read/write pressure between the GPU and the memory.
To achieve the above purpose, the embodiments of this application adopt the following technical solutions:
In a first aspect, an image rendering method is provided, applied to an electronic device. A first application runs on the electronic device, and the first application issues a rendering instruction stream to instruct the electronic device to perform rendering processing of a first frame image, the first frame image including a shadow area. The rendering instruction stream includes a first rendering instruction and a second rendering instruction. The method includes: obtaining a depth rendering result of the first frame image by rendering according to the first rendering instruction, the depth rendering result being stored in the memory of the electronic device; obtaining a normal rendering result of the first frame image by rendering according to the second rendering instruction, the normal rendering result being stored in the on-chip storage area of the graphics processing module of the electronic device; and obtaining a shadow rendering result matching the shadow area according to the depth rendering result and the normal rendering result. The shadow area is the area of the first frame image displayed with a shadow effect, and a shadow rendering result matching the shadow area means that the shadow area can be displayed according to the shadow rendering result. In this way, by performing normal and shadow rendering in on-chip storage, read/write overhead between the GPU and the memory can be avoided. For example, after performing normal rendering the GPU does not need to write the normal rendering result into the memory; likewise, when performing shadow rendering the GPU does not need to read the normal rendering result from the memory.
Optionally, the rendering instruction stream further includes a third rendering instruction, which instructs the electronic device to create a first frame buffer in the memory. The first frame buffer is used to store the depth rendering result. Before obtaining the depth rendering result of the first frame image by rendering according to the first rendering instruction, the method further includes: creating the first frame buffer in the memory according to the third rendering instruction. Storing the depth rendering result in the memory of the electronic device includes: storing the depth rendering result in the first frame buffer. In this way, in response to the third rendering instruction, the electronic device can create in the memory the frame buffer used for depth rendering.
Optionally, the rendering instruction stream further includes a fourth rendering instruction, which instructs the electronic device to create a second frame buffer. The second frame buffer is used to store the normal rendering result. Before obtaining the normal rendering result of the first frame image by rendering according to the second rendering instruction, the method further includes: creating the second frame buffer in the on-chip storage area of the graphics processing module according to the fourth rendering instruction. Storing the normal rendering result in the on-chip storage area of the graphics processing module of the electronic device includes: storing the normal rendering result in the second frame buffer. In this way, based on the fourth rendering instruction, the electronic device can perform the normal rendering process in the on-chip storage area and store the normal rendering result in on-chip storage. This saves the GPU's write overhead to the memory during normal rendering; meanwhile, if the normal rendering result needs to be used in subsequent processes, the GPU does not need to read it from the memory.
Optionally, the rendering instruction stream further includes a fifth rendering instruction, which instructs the electronic device to perform the rendering operation for the shadow information. Obtaining the shadow rendering result according to the depth rendering result and the normal rendering result includes: in response to the fifth rendering instruction, reading the depth rendering result from the memory, obtaining the normal rendering result from the on-chip storage area of the graphics processing module, and processing them according to a preset ray tracing algorithm to obtain the shadow rendering result. In this way, based on the fifth instruction, the electronic device can perform shadow rendering. It can be understood that shadow rendering can take depth information and normal information as input; since the normal information is rendered on-chip, the GPU can obtain it without interacting with the memory.
Optionally, obtaining the shadow rendering result according to the depth rendering result and the normal rendering result includes: when the normal rendering operation is completed, triggering the graphics processing module to perform a shadow rendering operation. The shadow rendering operation includes: reading the depth rendering result from the memory, obtaining the normal rendering result from the on-chip storage area of the graphics processing module, and processing them according to a preset ray tracing algorithm to obtain the shadow rendering result. This provides an example in which the electronic device performs shadow rendering spontaneously: the electronic device can complete shadow rendering without relying on a rendering instruction issued by an upper-layer application. In some implementations, after the upper-layer application issues a shadow rendering instruction, if the electronic device has already completed shadow rendering, it can directly feed back to the application the information (such as the address) where the shadow rendering result is stored.
Optionally, before triggering the graphics processing module to perform the shadow rendering operation, the method further includes: generating a first message when the normal rendering operation is completed, the first message indicating completion of the normal rendering operation. Triggering the graphics processing module to perform the shadow rendering operation includes: triggering the graphics processing module to perform the shadow rendering operation when the first message is generated. This gives an example of how the electronic device determines that normal rendering has been completed, which can then trigger the device-driven shadow rendering process.
Optionally, obtaining the normal rendering result of the first frame image by rendering according to the second rendering instruction includes: issuing, according to the second rendering instruction, a sixth rendering instruction to the graphics processing module, the sixth rendering instruction instructing the graphics processing module to perform the normal rendering operation of the first frame image on a first deferred rendering pipeline SubPass; the graphics processing module executes the sixth rendering instruction on the first SubPass to obtain the normal rendering result. The sixth rendering instruction may correspond to the second rendering instruction. In some implementations, the function of the sixth rendering instruction may be the same as that of the second rendering instruction, such as instructing the GPU to perform normal rendering through a SubPass. In other implementations, the sixth rendering instruction may be a variant based on the second rendering instruction. For example, when the second rendering instruction instructs the electronic device to perform normal rendering, the sixth rendering instruction obtained by the GPU may instruct that the normal rendering be performed on a SubPass.
Optionally, obtaining the shadow rendering result according to the depth rendering result and the normal rendering result includes: creating a second SubPass in the on-chip cache of the graphics processing module, the second SubPass being used to perform the shadow rendering operation; obtaining the rendering result of the first SubPass as input to the second SubPass, the rendering result of the first SubPass including the normal rendering result; reading the depth rendering result from the memory as input to the second SubPass; and processing the normal rendering result and the depth rendering result according to a preset ray tracing algorithm to obtain the shadow rendering result. In this way, a SubPass-based shadow rendering operation performed on-chip can be realized. It can be understood that SubPass provides the ability to directly obtain the rendering result of the previous SubPass; thus, the shadow rendering process executed on the second SubPass can directly obtain the normal rendering result produced on the previous SubPass, improving the efficiency of obtaining the normal rendering result while saving read/write overhead between the GPU and the memory.
Optionally, the shadow rendering result includes: first normal information, second normal information, shadow information, and distance information. The first normal information and the second normal information may be normal information corresponding to different directions; for example, the first normal information may be the normal information in the x direction, and the second normal information may be the normal information in the y direction.
Optionally, after obtaining the shadow rendering result, the method further includes: outputting the shadow rendering result to a third frame buffer in the memory, the third frame buffer including a texture of a first format, the texture of the first format including at least four channels. In this way, a shadow rendering result comprising multiple groups of data can be stored in the same location, such as on the same texture, so that in subsequent calls the full shadow rendering result can be obtained with a single data read, saving unnecessary read/write overhead.
Optionally, outputting the shadow rendering result to the third frame buffer in the memory includes: outputting the first normal information, the second normal information, the shadow information, and the distance information respectively to different channels of the texture of the first format. This provides a specific implementation of storing the shadow rendering result. Optionally, the first format is RGBA16F.
Optionally, the graphics processing module is a graphics processing unit (GPU). Of course, in some other implementations, the functions of the graphics processing module may also be implemented by other components or circuits with image rendering capabilities.
In a second aspect, an electronic device is provided. The electronic device includes one or more processors and one or more memories; the one or more memories are coupled to the one or more processors and store computer instructions; when the one or more processors execute the computer instructions, the electronic device is caused to perform the image rendering method according to the first aspect and any of its possible designs.
In a third aspect, a chip system is provided. The chip system includes an interface circuit and a processor, interconnected through lines; the interface circuit is configured to receive signals from a memory and send signals to the processor, the signals including computer instructions stored in the memory; when the processor executes the computer instructions, the chip system performs the image rendering method according to the first aspect and any of its possible designs.
In a fourth aspect, a computer-readable storage medium is provided, which includes computer instructions that, when run, perform the image rendering method according to the first aspect and any of its possible designs.
In a fifth aspect, a computer program product is provided, which includes instructions that, when the computer program product runs on a computer, cause the computer to perform, according to the instructions, the image rendering method according to the first aspect and any of its possible designs.
It should be understood that the technical features of the technical solutions provided in the second, third, fourth, and fifth aspects above all correspond to the image rendering method provided in the first aspect and its possible designs; therefore, the beneficial effects that can be achieved are similar and are not repeated here.
Figure 1 is a logical schematic diagram of image rendering;
Figure 2 is a logical schematic diagram of depth information rendering;
Figure 3 is a logical schematic diagram of normal information rendering;
Figure 4 is a schematic diagram of a shadow;
Figure 5 is a schematic diagram of shadow rendering through ray tracing during image rendering;
Figure 6 is a logical schematic diagram of shadow rendering;
Figure 7 is a logical schematic diagram of shadow rendering provided by an embodiment of this application;
Figure 8 is a schematic diagram of the software composition of an electronic device provided by an embodiment of this application;
Figure 9 is a module interaction schematic diagram of an image rendering method provided by an embodiment of this application;
Figure 10 is a module interaction schematic diagram of another image rendering method provided by an embodiment of this application;
Figure 11 is a module interaction schematic diagram of yet another image rendering method provided by an embodiment of this application;
Figure 12 is a module interaction schematic diagram of yet another image rendering method provided by an embodiment of this application;
Figure 13 is a schematic diagram of a storage solution for shadow rendering results provided by an embodiment of this application;
Figure 14 is a module interaction schematic diagram of yet another image rendering method provided by an embodiment of this application;
Figure 15 is a flow schematic diagram of yet another image rendering method provided by an embodiment of this application;
Figure 16 is a schematic diagram of the composition of an electronic device provided by an embodiment of this application;
Figure 17 is a schematic diagram of the composition of a chip system provided by an embodiment of this application.
At present, most electronic devices can provide users with an image display function.
For example, an application (APP) may be installed in the electronic device. When the application needs to display an image through the electronic device, it can send instructions to the electronic device so that the electronic device renders the corresponding image according to the instructions and then displays the rendered image on the display screen of the electronic device.
Figure 1 is a schematic flowchart of image rendering. In this example, the electronic device may be provided with a central processing unit (CPU), a graphics processing unit (GPU), a memory, and the like. The CPU can be used for instruction processing and control; the GPU can render images under the control of the CPU; the memory can provide a storage function, such as storing the rendering results obtained by the GPU.
As shown in Figure 1, the application can issue a rendering instruction to instruct the electronic device to render one frame image. One rendering instruction can correspond to one draw command (i.e., a Drawcall). The CPU can receive the rendering instruction and call the corresponding graphics drawing application programming interface (API) to instruct the GPU to perform the rendering operation corresponding to the rendering instruction. The GPU can execute the rendering instruction and store the obtained rendering result in the memory.
It should be noted that, in drawing one frame image, the application can control the electronic device through rendering instructions to render the depth information, normal information, and so on of the frame image, thereby obtaining the complete frame image information. In the following examples, the application is a game application. It can be understood that during its operation a game application can present a video picture to the user through the electronic device, and the video picture may be composed of multiple continuously played frame images.
Take drawing the depth information of one frame image as an example. With reference to Figure 1, as shown in Figure 2, the game application can issue a rendering instruction 21 to instruct the electronic device to render the depth information of the current frame image. According to the rendering instruction 21, the CPU can call the corresponding API to instruct the GPU to perform the rendering operation corresponding to the depth information. The GPU can perform the rendering operation and store the rendering result (i.e., the depth rendering result) in the memory. In this example, the memory may include multiple pre-created frame buffers (FrameBuffer, FB), such as frame buffer 21, frame buffer 22, and frame buffer 23. Different frame buffers can be used to store different information in the image rendering process. For example, in this example, the GPU can store the depth rendering result in frame buffer 21.
Take drawing the normal information of one frame image as an example. As shown in Figure 3, the game application can issue a rendering instruction 22 to instruct the electronic device to render the normal information of the current frame image. According to the rendering instruction 22, the CPU can call the corresponding API to instruct the GPU to perform the rendering operation corresponding to the normal information. The GPU can perform the rendering operation and store the rendering result (i.e., the normal rendering result) in the memory. For example, the GPU can store the normal rendering result in frame buffer 22.
It should be understood that in some scenes, due to changes in scene lighting, objects in the scene may have shadows. For example, as shown in Figure 4, the scene may include an object 41. When the light source is obliquely above the object 41, the object 41 can cast a shadow on the ground.
Then, in order to provide users with a more realistic visual experience, the game application can also instruct the electronic device to render shadows for objects in the current frame image, so that the displayed frame image can include the shadows of the objects and be more realistic.
For example, Figure 5 shows an example of a shadow rendering solution. In this example, the electronic device can render shadows through a ray tracing algorithm to obtain the display information (i.e., rendering results) of the frame image including the shadows.
As one implementation, based on the ray tracing algorithm, the GPU can split the rendering task of a scene into the effects on the scene of several rays starting from the camera (such as the view rays shown in Figure 5). Each view ray is intersected with the scene in parallel; according to the intersection position, the material, texture, and other information of the scene object to be displayed are obtained, and the lighting is calculated in combination with the light source information. In this way, by calculating the information of the view rays at each pixel on the image, the projection of the object onto the image can be determined. In addition, in the scene, the light source can illuminate the object to form a shadow (for example, a shadow is formed via the shadow ray shown in Figure 5). Then, through the above ray tracing algorithm, the positions and related information of the pixels on the image corresponding to the object's shadow can also be determined. Thus, the display information of the object and its shadow on the image can be obtained.
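The shadow-ray test described above can be reduced to a minimal sketch. This is illustrative only, not the patent's implementation: for a surface point, a ray is cast toward the light, and the point is in shadow if any occluder intersects the segment before the light. The sphere occluder and all coordinates are invented for the demo.

```python
# Illustrative sketch (not the patent's implementation): the core of the
# shadow-ray test. A point is in shadow if the segment from the point to the
# light is blocked by an occluder. The sphere occluder is a made-up example.
import math

def ray_hits_sphere(origin, direction, center, radius, max_t):
    """True if the ray origin + t*direction (unit direction) hits the sphere for 0 < t < max_t."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is unit length, so a == 1
    if disc < 0.0:
        return False
    t = (-b - math.sqrt(disc)) / 2.0
    return 0.0 < t < max_t

def in_shadow(point, light, occluders):
    """Shadow-ray test: is the segment from point to light blocked?"""
    seg = [l - p for l, p in zip(light, point)]
    dist = math.sqrt(sum(s * s for s in seg))
    direction = [s / dist for s in seg]
    return any(ray_hits_sphere(point, direction, c, r, dist) for c, r in occluders)

light = (0.0, 10.0, 0.0)
occluders = [((0.0, 5.0, 0.0), 1.0)]                  # one sphere between light and origin
print(in_shadow((0.0, 0.0, 0.0), light, occluders))   # True: blocked by the sphere
print(in_shadow((5.0, 0.0, 0.0), light, occluders))   # False: off to the side, lit
```

In the patent's flow this per-pixel test would run on the GPU against the full scene; the sketch only shows the geometric predicate that produces the ShadowMask values.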
With reference to the examples in Figures 1-3, take drawing the shadow information of one frame image as an example. As shown in Figure 6, the game application can issue a rendering instruction 23 to instruct the electronic device to render the shadow information of the current frame image. According to the rendering instruction 23, the CPU can call the corresponding API to instruct the GPU to perform the rendering operation corresponding to the shadow information. Generally speaking, shadow rendering needs to combine the depth information and normal information of the current frame image. In this example, the GPU can read the depth rendering result from frame buffer 21 and the normal rendering result from frame buffer 22, and based on these, obtain the shadow information (i.e., the shadow rendering result) through the ray tracing algorithm. The GPU can store the shadow rendering result in frame buffer 23 of the memory.
At present, ray tracing can be implemented through a forward rendering mechanism. That is, the shadow rendering process corresponding to the ray tracing algorithm can be executed in a forward rendering pipeline. In the forward rendering pipeline, the geometric information of the objects in the scene, including the objects themselves, is obtained by drawing each object in the scene separately. In actual implementation, in order to balance the overhead of the rendering process, the draw calls for each object need to be reduced as much as possible; therefore, the geometric information obtained during the rendering of each object is very limited. However, the shadow rendering process (such as obtaining the shadow rendering result and performing noise-reduction optimization on it) depends on the geometric information of each object, so limited geometric information causes a decline in shadow rendering quality.
In addition, ray tracing can also be implemented through a deferred rendering mechanism, that is, the shadow rendering process is executed on a deferred rendering pipeline. In the deferred rendering pipeline, the geometric information of the objects can be processed first, and then, according to this geometric information, the shadow calculation for the pixels covered by each light source is performed, thereby obtaining the shadow rendering result.
As an example, with reference to the foregoing description, the electronic device can obtain the geometric information of the objects according to the solutions shown in Figures 2 and 3. For example, the geometric information can include depth information and normal information, where the depth information can be obtained from the depth rendering result and the normal information from the normal rendering result. Then, the GPU of the electronic device can execute the method shown in Figure 6, such as reading the depth rendering result from frame buffer 21 set in the memory and the normal rendering result from frame buffer 22. According to the depth rendering result and the normal rendering result, the GPU can execute the ray tracing algorithm, obtain the shadow rendering result, and store it in frame buffer 23 of the memory.
Ray tracing based on the deferred rendering pipeline can separate the object geometry data from the shadow calculation process, thereby obtaining richer geometric information of the objects. This avoids the problem in the forward rendering pipeline of poor shadow rendering results caused by limited geometric information.
However, the deferred rendering pipeline places high demands on the data read/write bandwidth between the GPU (i.e., the computing subject) and the memory. For example, the GPU first needs to write the depth rendering result and the normal rendering result into the memory, then needs to read the depth rendering result and the normal rendering result back from the memory, and also needs to write the calculated shadow rendering result into the memory.
Then, when the data read/write bandwidth between the GPU and the memory is limited, shadow calculation will be delayed, prolonging the rendering time.
To solve the above problems, embodiments of this application provide an image rendering method that can reduce data reading and writing between the GPU and the memory during shadow rendering based on the deferred rendering pipeline, improving shadow rendering efficiency.
Based on the solution provided by the embodiments of this application, take the game application issuing a rendering instruction 23 to instruct the electronic device to perform shadow rendering as an example. As shown in Figure 7, in response to the rendering instruction 23, the CPU can call the corresponding API to instruct the GPU to perform shadow rendering. Correspondingly, the GPU can read the already-rendered normal rendering result from the newly created frame buffer G1 set in its on-chip storage space. Compared with the existing normal rendering result acquisition mechanism shown in Figure 6, the GPU does not need to obtain the normal rendering result through read/write interaction with the memory, thereby saving time and read/write bandwidth overhead. Take performing shadow rendering on the newly created frame buffer G2 on the GPU as an example. The GPU can also read the depth rendering result from frame buffer 21 of the memory. According to the obtained depth rendering result and normal rendering result, the GPU can render and obtain the shadow rendering result based on the ray tracing algorithm. In this example, the normal information, shadow information, and distance information included in the shadow rendering result can be stored respectively in different channels of a texture included in one frame buffer of the memory. That is to say, the shadow rendering result can be saved on one texture, thereby streamlining the storage overhead of the shadow rendering result.
In addition, in some embodiments, the normal rendering process can also be executed on the newly created frame buffer G1 of the GPU. Then, compared with the existing normal rendering process (as shown in Figure 3), after normal rendering is completed, the normal rendering result can be stored directly on the newly created frame buffer G1 of the GPU without being written into frame buffer 22 of the memory. Therefore, for the storage of the normal rendering result, the latency and read/write bandwidth overhead between the GPU and the memory can also be saved.
The solutions provided by the embodiments of this application are described in detail below with reference to the accompanying drawings.
It should be noted that the image rendering method provided by the embodiments of this application can be applied to a user's electronic device. For example, the electronic device may be a portable mobile device such as a mobile phone, a tablet computer, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, or a media player, or may be a wearable electronic device capable of providing display capabilities, such as a smart watch. The embodiments of this application do not specifically limit the specific form of the device.
In different embodiments, the electronic device may have different compositions.
For example, in some embodiments, from the perspective of hardware composition, the electronic device involved in the embodiments of this application may include a processor, an external memory interface, an internal memory, a universal serial bus (USB) interface, a charging management module, a power management module, a battery, antenna 1, antenna 2, a mobile communication module, a wireless communication module, an audio module, a speaker, a receiver, a microphone, a headphone jack, a sensor module, buttons, a motor, an indicator, a camera, a display screen, a subscriber identification module (SIM) card interface, and the like. The sensor module may include a pressure sensor, a gyroscope sensor, a barometric pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and so on. As a possible implementation, the processor may include multiple processors such as a CPU and a GPU, where the GPU may be provided with on-chip storage space. During operation, the GPU can quickly access the data in its on-chip storage space. A frame buffer set in the GPU's on-chip storage space may also be called a TileBuffer.
It should be noted that the above hardware composition does not constitute a specific limitation on the electronic device. In other embodiments, the electronic device may include more or fewer components, combine certain components, split certain components, or arrange the components differently.
In other embodiments, the electronic device involved in the embodiments of this application may also have a software division. Take the electronic device running an Android operating system as an example. The Android operating system may have a layered software division.
For example, Figure 8 is a schematic diagram of the software composition of an electronic device provided by an embodiment of this application. As shown in Figure 8, the electronic device may include an application (APP) layer, a framework layer, system libraries, a hardware layer, and so on.
The application layer may also be called the app layer. In some implementations, the application layer may include a series of application packages, which may include applications such as camera, gallery, calendar, phone, maps, navigation, WLAN, Bluetooth, music, video, and messaging. In the embodiments of this application, the application packages may also include applications that need to present images or videos to users through image rendering, where a video can be understood as the continuous playback of multiple frame images. The images to be rendered may include frame images with shadows. For example, the applications that need to render images may include game applications, etc.
The framework layer may also be called the application framework layer. The framework layer can provide application programming interfaces (APIs) and programming frameworks for applications in the application layer, and includes some predefined functions. For example, the framework layer may include a window manager, content providers, a view system, a resource manager, a notification manager, an activity manager, an input manager, and so on.
The window manager provides the window manager service (WMS), which can be used for window management, window animation management, surface management, and as a relay station for the input system. Content providers are used to store and retrieve data and make it accessible to applications; the data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, and so on. The view system includes visual controls, such as controls for displaying text and controls for displaying pictures, and can be used to build applications. A display interface may be composed of one or more views; for example, a display interface including a text-message notification icon may include a view displaying text and a view displaying a picture. The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files.
The notification manager enables applications to display notification information in the status bar; it can be used to convey informational messages, which can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify download completion, message reminders, and so on. Notifications can also appear in the system top status bar in the form of charts or scroll-bar text, such as notifications of applications running in the background, or appear on the screen in the form of dialog windows, for example prompting text information in the status bar, sounding a prompt tone, vibrating the electronic device, or flashing the indicator light.
The activity manager can provide the activity manager service (AMS), which can be used for starting, switching, and scheduling system components (such as activities, services, content providers, and broadcast receivers) and for managing and scheduling application processes. The input manager can provide the input manager service (IMS), which can be used to manage the system's input, such as touchscreen input, key input, and sensor input. The IMS takes events from input device nodes and, through interaction with the WMS, distributes the events to appropriate windows.
In the embodiments of this application, one or more functional modules may be set in the framework layer to implement the solutions provided by the embodiments of this application. For example, the framework layer may be provided with a creation module, a processing module, a shadow rendering module, and so on.
The creation module can be used to create frame buffers in the memory and in the GPU's on-chip storage space. For example, it can create in the memory a frame buffer used to store the depth rendering result; it can also create a TileBuffer on the GPU for performing normal rendering and shadow rendering.
The processing module can be used to process rendering commands issued by applications and call the corresponding APIs to instruct the GPU to perform rendering operations. For example, when an application issues a rendering command instructing depth rendering, the processing module can control the GPU to render the depth information of the current frame image and store the depth rendering result in the memory. When an application issues a rendering command instructing normal rendering, the processing module can control the GPU to render the normal information of the current frame image and store the normal rendering result in the GPU's TileBuffer. When an application issues a rendering command instructing shadow rendering, the processing module can control the GPU to obtain the depth rendering result from the memory and the normal rendering result from the TileBuffer, so as to perform the rendering operation according to the ray tracing algorithm and obtain the corresponding shadow rendering result.
It can be seen that the creation module and the processing module respond to rendering commands issued by applications. In the embodiments of this application, in order for the creation module and the processing module to smoothly obtain the rendering commands issued by applications, as shown in Figure 8, an interception module may also be set in the framework layer. In this application, the interception module can be used to receive the rendering commands issued by applications and, according to the information indicated by each rendering command, send the corresponding rendering command to the appropriate module for processing.
In some embodiments, the interception module can send the command instructing frame buffer creation to the creation module for processing. As a possible implementation, the command instructing frame buffer creation may include the glCreateFrameBuffer function.
In other embodiments, the interception module can also send commands instructing rendering operations to the processing module for processing. As a possible implementation, the commands instructing rendering operations may include a command instructing a depth information rendering operation, a command instructing a normal information rendering operation, and a command instructing a shadow rendering operation. Similar to the above interception mechanism for frame buffer creation commands, the interception module can determine the content indicated by a rendering command according to the instructions carried in it. For example, a command instructing a depth information rendering operation may include the keyword depthMap, and a command instructing a normal information rendering operation may include the keyword Vertex, or Vertex together with Normal. It can be understood that normal information can be included in the vertex (Vertex) information; in some implementations, the data related to normal vectors is included in the Vertex command and identified by Normal. In addition, a command instructing shadow rendering may include the keyword shadow.
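The keyword-based routing above can be sketched as a small dispatch function. This is an illustration of my own, not the patent's code: the string representation of commands and the module names are hypothetical stand-ins; only the keywords themselves (glCreateFrameBuffer, depthMap, Vertex/Normal, shadow) come from the text.

```python
# Illustrative sketch (not from the patent): how an interception module might
# route rendering commands by the keywords mentioned in the text. The command
# representation and module names here are hypothetical; only the keywords
# glCreateFrameBuffer, depthMap, Vertex/Normal, and shadow come from the text.

def route_command(command: str) -> str:
    """Return which module/pipeline should handle a raw command string."""
    if "glCreateFrameBuffer" in command:
        return "creation_module"            # frame buffer creation
    if "depthMap" in command:
        return "processing_module:depth"    # depth rendering
    if "Vertex" in command:                 # normals ride along in vertex data
        return "processing_module:normal"
    if "shadow" in command.lower():
        return "processing_module:shadow"
    return "passthrough"                    # anything else reaches the GPU untouched

print(route_command("glCreateFrameBuffer(1)"))    # creation_module
print(route_command("draw depthMap pass"))        # processing_module:depth
print(route_command("draw Vertex Normal data"))   # processing_module:normal
print(route_command("draw Shadow pass"))          # processing_module:shadow
```

A real interceptor would hook the graphics-library entry points rather than parse strings; the sketch only shows the keyword-to-module mapping.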
Continuing to refer to Figure 8, in the embodiments of this application, a shadow rendering module may also be set in the framework layer. As a possible implementation, the shadow rendering module can instruct the GPU to render the shadow information after the GPU completes the rendering of the normal information and obtains the normal rendering result.
With reference to the illustration of Figure 7, in the embodiments of this application, the rendering pipeline for normal information can be set in the GPU's TileBuffer. For example, the rendering pipeline performing normal information rendering can be based on the SubPass system. It should be understood that the SubPass system, a rendering pipeline mechanism provided by most current rendering platforms, differs from the traditional MultiPass system in that it enables the next SubPass to directly obtain the rendering result of the current SubPass during execution. In the MultiPass system, by contrast, after the current pipeline finishes rendering, the rendering result needs to be stored in the memory, and the next pipeline needs to read that result from the memory to obtain the rendering result produced by the current pipeline.
Then, in this example, the shadow rendering pipeline that the shadow rendering module instructs the GPU to execute can also be based on the SubPass system. The shadow rendering pipeline can be set in the GPU's TileBuffer, and can be the one indicated by the rendering command following that of the normal rendering pipeline. For example, Drawcall A instructs the GPU to perform SubPass-based normal rendering; after Drawcall A finishes, the immediately following Drawcall B can be a rendering command issued by the shadow rendering module instructing the GPU to perform SubPass-based shadow rendering.
In this way, after completing the rendering of Drawcall A and obtaining the normal rendering result, the GPU can execute Drawcall B. Since Drawcall B is also based on the SubPass system, when executing the shadow rendering indicated by Drawcall B, the GPU can directly obtain the rendering result of the previous SubPass (i.e., the rendering operation corresponding to Drawcall A, namely that of the normal rendering pipeline), that is, the normal rendering result. Thus, the GPU can obtain the normal rendering result without read/write interaction with the memory. In addition, according to Drawcall B, the GPU can read the depth rendering result from the memory and then perform the rendering operation according to the ray tracing algorithm to obtain the shadow rendering result of the current frame image.
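The Drawcall A → Drawcall B chaining above can be modeled with a toy data-flow sketch. This is illustrative only, assuming made-up stand-ins for the tile buffer, memory, and the shadow computation; it shows only that the shadow pass consumes the normal result on-chip while the depth result is the single value still fetched from memory.

```python
# Illustrative sketch (not from the patent): a toy model of SubPass chaining.
# Drawcall A (SubPass-G) leaves its normal result in on-chip tile storage;
# the immediately following Drawcall B (SubPass-Shadow) reads it from there,
# while only the depth result travels through "memory". All data structures
# and the stand-in "shadow" computation are hypothetical.

memory = {}        # off-chip frame buffers (e.g., frame buffer 93)
tile_buffer = {}   # GPU on-chip storage (e.g., frame buffer 91 / 92)
memory_reads = 0

def subpass_g():
    # Normal rendering: the result stays on-chip instead of being written out.
    tile_buffer["normals"] = [(0.0, 1.0, 0.0)] * 4

def subpass_shadow():
    global memory_reads
    normals = tile_buffer["normals"]      # direct on-chip fetch, no memory read
    memory_reads += 1
    depth = memory["depth"]               # the one read the flow still needs
    # Stand-in "shadow" computation combining both inputs:
    return [0.0 if d < 0.5 else 1.0 for d, _ in zip(depth, normals)]

memory["depth"] = [0.1, 0.9, 0.4, 0.6]   # produced earlier by the depth pass
subpass_g()
shadow_mask = subpass_shadow()
print(shadow_mask)      # [0.0, 1.0, 0.0, 1.0]
print(memory_reads)     # 1 -- only the depth read touches memory
```

The counter makes the point of the paragraph concrete: the whole normal-then-shadow sequence performs a single off-chip read.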
It should be noted that, in some embodiments of this application, as explained above for the shadow rendering module, the shadow rendering module can directly instruct the GPU to perform shadow rendering after the normal rendering operation ends. That is to say, in this example, the electronic device can complete shadow rendering by itself without receiving the rendering instruction for shadow rendering issued by the application. After the application issues the rendering instruction for shadow rendering, the electronic device can directly call back the shadow rendering result produced by the GPU and feed it back to the application.
In other embodiments of this application, the application may also, after instructing the electronic device to perform normal rendering, sequentially execute the SubPass-based shadow rendering operation. Then, during shadow rendering, the GPU of the electronic device can also directly obtain the rendering result of the previous SubPass, that is, directly obtain the normal rendering result. In this example, the shadow rendering module may no longer be set in the electronic device.
Therefore, in different implementations, based on the solutions provided by the embodiments of this application, for the rendering-command issuing mechanisms of different applications, the effect of directly obtaining the normal rendering result through the SubPass shadow rendering process can be achieved, thereby saving the data read/write overhead between the GPU and the memory when obtaining the normal rendering result during shadow rendering.
As shown in Figure 8, the electronic device may also be provided with system libraries including a graphics library. In different implementations, the graphics library may include at least one of the following: Open Graphics Library (OpenGL), OpenGL for Embedded Systems (OpenGL ES), Vulkan, and so on. In some embodiments, the system libraries may also include other modules, such as: surface manager, Media Framework, Standard C library (libc), SQLite, Webkit, and so on.
The surface manager is used to manage the display subsystem and provides the fusion of two-dimensional (2D) and three-dimensional (3D) layers for multiple applications. The media framework supports playback and recording of a variety of commonly used audio and video formats, as well as static image files. The media library can support a variety of audio and video encoding formats, such as: Moving Pictures Experts Group 4 (MPEG4), H.264, Moving Picture Experts Group Audio Layer 3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR), Joint Photographic Experts Group (JPEG, or JPG), Portable Network Graphics (PNG), and so on. OpenGL ES and/or Vulkan provide drawing and manipulation of 2D and 3D graphics in applications. SQLite provides a lightweight relational database for the applications of the electronic device 400.
After an application issues a rendering command, the modules in the framework layer can call the corresponding APIs in the graphics library to instruct the GPU to perform the corresponding rendering operations.
In the example of Figure 8, the electronic device may also include a hardware layer, which may include a CPU, a GPU, and a memory with storage functions (such as internal memory). In some implementations, the CPU can be used to control the modules in the framework layer to implement their respective functions, and the GPU can be used to perform the corresponding rendering processing according to the APIs in the graphics library (such as OpenGL ES) called through the instructions processed by the modules in the framework layer.
The solutions provided by the embodiments of this application can all be applied to the electronic device shown in Figure 8. It should be noted that the example of Figure 8 does not constitute a limitation on the electronic device. In other embodiments, the electronic device may also include more or fewer components; the embodiments of this application do not limit the specific composition of the electronic device.
In the following description, the shadow rendering solution involved in the solutions provided by the embodiments of this application is described in detail, taking as an example that the electronic device has the software division described in Figure 8, that the application is a game application (corresponding to the first application), that the game application issues a rendering instruction stream instructing the electronic device to perform rendering processing of a first frame image (or an Nth frame image), and that the first frame image may include a shadow area. The shadow rendering result obtained through the shadow rendering solution provided by the embodiments of this application can be used to display the shadow area in the first frame image. It should be understood that, based on the description of the graphics library in Figure 8, when the CPU needs to instruct the GPU to perform a rendering operation, it can convey the rendering instruction to the GPU by calling an API in the graphics library. In the following description, the process of calling APIs during rendering is not described in detail again.
As an example, please refer to Figure 9, a module interaction schematic diagram of an image rendering method provided by an embodiment of this application. The solution shown in Figure 9 can be used to create frame buffers.
For example, as shown in Figure 9, after the game application starts running, for example while the game application is in a loading-screen state, it can issue a rendering command 901. The rendering command 901 may include at least one glCreateFrameBuffer function, used to instruct the electronic device to create the frame buffers required in the subsequent image rendering process.
For example, the rendering command 901 may include a third rendering instruction for instructing creation of a first frame buffer in the memory of the electronic device (corresponding to the third rendering instruction), and the first frame buffer can be used to store the depth rendering result. The first frame buffer can correspond to frame buffer 93 in the following description.
As another example, the rendering command 901 may include a fourth rendering instruction for instructing creation of a second frame buffer in the on-chip storage space of the GPU of the electronic device, and the second frame buffer can be used to store the normal rendering result. The second frame buffer can correspond to frame buffer 91 in the following description.
The interception module set in the framework layer of the electronic device can intercept the rendering command 901 and, according to the glCreateFrameBuffer function carried in it, determine that the rendering command 901 is used to instruct the electronic device to create frame buffers. The interception module can then transmit the rendering command 901 to the creation module for subsequent processing.
In response to the rendering command 901, the creation module can create the corresponding frame buffers.
In the embodiments of this application, the creation module can create multiple frame buffers at the same time or in batches. These frame buffers can include frame buffers set in the memory, as well as frame buffers set in the GPU's on-chip storage space, i.e., the TileBuffers in the foregoing description.
As an example, as shown in Figure 9, the creation module can create frame buffer 91 on the GPU's cache, and can also create frame buffer 92 on the GPU's cache. Frame buffer 91 and frame buffer 92 can thus be caches set on the GPU chip, used to perform normal rendering and shadow rendering in the subsequent rendering process. In the following description, frame buffer 91, used for normal rendering, may also be called the G-Buffer.
The creation module can also create frame buffer 93 and frame buffer 94 in the memory. The frame buffers set in the memory can be used for the rendering and storage of depth information; for example, depth rendering can be performed on frame buffer 93. In order to make the shadow rendering result convenient for other pipelines to call, after shadow rendering is completed, the GPU can also store the shadow rendering result in the memory. For example, frame buffer 94 may include a texture, which may be in RGBA16F format, used to store the shadow rendering result in different channels.
In this way, in the subsequent rendering of each frame image, the corresponding already-created frame buffers can be called through the IDs of the above frame buffers 91-94 to perform the corresponding rendering operations.
For example, take rendering the Nth frame image as an example. The game application can issue a rendering command to instruct the electronic device to render the depth information in the Nth frame image.
Figure 10 is a module interaction schematic diagram of another image rendering method provided by an embodiment of this application. The solution shown in Figure 10 can be used to render depth information, i.e., to perform depth rendering and obtain the depth information (also called the depth rendering result).
As shown in Figure 10, when the game application needs to render the Nth frame image, it can send a rendering command 902 (corresponding to the first rendering instruction) to the electronic device. The rendering command 902 may include the keyword depthMap, used to instruct the electronic device to render the depth information of the current frame image (i.e., the Nth frame image). In this application, the rendering command 902 may also be called the first rendering instruction.
Correspondingly, the interception module can determine, according to the carried keyword depthMap, that the rendering command 902 is used to instruct depth information rendering. The interception module can then transmit the rendering command 902 to the processing module for subsequent processing.
According to the rendering command 902, the processing module can instruct the GPU to render the depth information. It should be understood that, with reference to the foregoing description of the system libraries, in some embodiments the processing module can instruct the GPU to render the depth information of the Nth frame image by calling the API in the system libraries corresponding to the rendering command 902.
It should be noted that, in the embodiments of this application, when instructing the GPU to perform depth rendering, the processing module can also instruct the GPU to store the depth rendering result in the memory. For example, the instruction can carry the frame buffer ID of frame buffer 93, so that the GPU can store the rendered depth information in frame buffer 93. The frame buffer ID of frame buffer 93 can be sent to the processing module by the creation module after completing the creation of the frame buffers.
Then, under the control of the processing module, the GPU can perform the depth rendering operation and store the obtained depth rendering result in frame buffer 93 in the memory.
In addition, the game application can also issue a rendering command to instruct the electronic device to render the normal information in the Nth frame image.
For example, please refer to Figure 11, a module interaction schematic diagram of yet another image rendering method provided by an embodiment of this application. The solution shown in Figure 11 can be used to render normal information, i.e., to perform normal rendering and obtain the normal information (also called the normal rendering result).
As shown in Figure 11, when the game application needs to render the Nth frame image, it can send a rendering command 903 (corresponding to the second rendering instruction) to the electronic device. The rendering command 903 may include the keyword Vertex, used to instruct the electronic device to render the geometric information, including the depth information, of the current frame image (i.e., the Nth frame image). In this application, the rendering command 903 may also be called the second rendering instruction.
It can be understood that, in some cases, the normal information can be rendered and obtained together with other geometric information of the model. The rendering of this geometric information can be performed according to the vertex data (Vertex) issued by the game application.
Correspondingly, the interception module can determine, according to the carried keyword Vertex, that the rendering command 903 is used to instruct normal information rendering. The interception module can then transmit the rendering command 903 to the processing module for subsequent processing.
According to the rendering command 903, the processing module can instruct the GPU to render the normal information. It should be understood that, with reference to the foregoing description of the system libraries, in some embodiments the processing module can instruct the GPU to render the normal information of the Nth frame image by calling the API in the system libraries corresponding to the rendering command 903.
It should be noted that, unlike the storage of the depth rendering result in the memory, in the embodiments of this application the normal rendering pipeline can be set on the GPU's TileBuffer. In some implementations, the normal rendering pipeline running on the TileBuffer may be indicated by the second rendering instruction; then the sixth rendering instruction received by the GPU, generated according to the second rendering instruction, can be the same as or similar to the second rendering instruction. In other implementations, the second rendering instruction may instruct that the normal rendering pipeline be executed on the memory; then, in this application, the processing module can generate a sixth rendering instruction according to the second rendering instruction, the sixth rendering instruction instructing the GPU to execute the operations of the normal rendering pipeline on the TileBuffer. The rendering pipeline performing the normal rendering operation on the TileBuffer may also be called the first SubPass.
For example, when instructing the GPU to perform normal rendering, the processing module can also instruct that the normal rendering process be bound (Bind) to frame buffer 91, where the frame buffer ID of frame buffer 91 can be sent to the processing module by the creation module after completing the creation of the frame buffers. Then, the GPU can execute the normal rendering pipeline on frame buffer 91. In some embodiments, the normal rendering pipeline that the processing module instructs the GPU to execute can be a SubPass-based rendering pipeline, so that subsequent SubPass pipelines can directly obtain the normal rendering result.
From some perspectives, after the SubPass-based normal rendering process executed on frame buffer 91 is performed, the obtained normal rendering result can be temporarily stored on frame buffer 91. Subsequent SubPass pipelines can then quickly obtain the normal rendering result by reading data from the GPU's on-chip cache (i.e., frame buffer 91).
Then, as shown in Figure 11, after the normal rendering operation is completed, the normal rendering result can be stored on frame buffer 91.
In some embodiments of this application, after completing normal rendering, the GPU can perform a callback to the upper layer so that the upper layer knows that the normals have been rendered. For example, after completing the normal rendering operation and obtaining the normal rendering result, the GPU can call back to the processing module a message indicating that normal rendering has been completed (such as the first message). In this way, the modules in the framework layer of the electronic device can know the current rendering progress. In this application, with reference to the above description of the characteristics of the SubPass-based rendering pipeline, the normal rendering pipeline executed on frame buffer 91 can be SubPass-based. Then, so that subsequent rendering processes that need to use the normal rendering result (such as the shadow rendering process) can quickly call the rendering result of this SubPass, the electronic device can, after learning that normal rendering has been completed, control the GPU to perform SubPass-based shadow rendering, so that the SubPass-based shadow rendering pipeline can directly and quickly obtain the normal rendering result.
As a first example, take the normal rendering pipeline being SubPass-G (i.e., the first SubPass) and the shadow rendering pipeline being SubPass-Shadow (also called the second SubPass) as an example. The electronic device can complete the rendering of SubPass-G according to the solution shown in Figure 11 and obtain the normal rendering result. Next, the electronic device can control the GPU to perform the shadow rendering operation. For example, after determining that the GPU has completed the rendering of SubPass-G, the electronic device can instruct the GPU to perform the rendering operation of SubPass-Shadow. Then, since SubPass-Shadow is the SubPass executed sequentially after SubPass-G completes, when executing the shadow rendering in SubPass-Shadow the GPU can directly obtain the rendering result of SubPass-G, i.e., the normal rendering result. With reference to the description of Figure 6, compared with the current solution of reading the normal rendering result from the memory, this example takes advantage of the characteristics of SubPass and can save the read/write overhead of the GPU reading the normal rendering result from the memory.
In different implementations of this application, the electronic device can spontaneously perform shadow rendering after the GPU completes normal rendering; alternatively, after the GPU completes normal rendering, the electronic device can perform shadow rendering according to the rendering command issued by the game application.
For example, please refer to Figure 12, a module interaction schematic diagram of yet another image rendering method provided by an embodiment of this application. The solution shown in Figure 12 can be used to perform shadow rendering. In this example, the electronic device spontaneously performs shadow rendering after the GPU completes normal rendering.
As shown in Figure 12, after the GPU completes normal rendering, the processing module can indicate to the shadow rendering module that the current rendering progress is: normal rendering completed. In some embodiments, with reference to Figure 11, the processing module can determine that the GPU has completed normal rendering according to the message called back by the GPU indicating that normal rendering has been completed.
Correspondingly, the shadow rendering module can issue to the GPU an instruction to perform shadow rendering. The instruction can be bound (Bind) to a TileBuffer of the GPU so that the GPU performs the shadow rendering operation on that TileBuffer. For example, the instruction to perform shadow rendering can be bound to frame buffer 92, so that the GPU runs the shadow rendering pipeline on frame buffer 92 and performs the shadow rendering operation.
The instruction to perform shadow rendering can also carry the frame buffer ID storing the depth rendering result, the frame buffer ID storing the normal rendering result, and the frame buffer ID for storing the shadow rendering result. These frame buffer IDs can be obtained from the creation module via the processing module, or the shadow rendering module can obtain them directly from the creation module.
In the embodiments of this application, the instruction to perform shadow rendering can also indicate to the GPU that the shadow rendering pipeline can be based on the SubPass system.
For example, the shadow rendering module can issue to the GPU the frame buffer ID of frame buffer 91, the frame buffer ID of frame buffer 93, and the frame buffer ID of frame buffer 92, so that the GPU obtains from frame buffer 91 and frame buffer 93 the input data required for the shadow rendering process.
In response to the instruction to perform shadow rendering, the GPU can run the SubPass-based shadow rendering pipeline. The GPU can obtain the normal rendering result, read the depth rendering result, and perform the shadow rendering operation.
For example, the GPU can obtain the normal rendering result from frame buffer 91. With reference to the foregoing description, the shadow rendering pipeline (such as SubPass-Shadow) can be the SubPass after SubPass-G, so it can directly obtain the normal rendering result. In this application, since SubPass-G is executed on frame buffer 91, it can also be considered that the normal rendering result is obtained by SubPass-Shadow from frame buffer 91. In addition, the GPU can also read the depth rendering result from frame buffer 93 in the memory. In this way, the GPU can perform the shadow rendering operation in SubPass-Shadow on frame buffer 92. In some implementations, the shadow rendering operation performed in SubPass-Shadow can be executed according to a ray tracing algorithm preset in the electronic device. Thus, the GPU only needs one data-read interaction with the memory to perform the shadow rendering operation.
In this application, after completing the shadow rendering operation, the GPU can store the shadow rendering result in the memory for other pipelines to call. For example, the electronic device can perform a noise-reduction (denoising) operation on the shadow rendering result in order to obtain a better shadow rendering result.
For example, as shown in Figure 12, after completing the shadow rendering operation, the GPU can store the shadow rendering result into frame buffer 94 in the memory.
As a possible implementation, the shadow rendering result can include the normal information of the shadow, the shadow information of each pixel (ShadowMask), the distance information of the shadow (Distance), and so on. The normal information can include the normal information in the two directions x and y; that is, the normal information can include two parts, normal information (x) and normal information (y). The normal information (x) may also be called Normal(x), and the normal information (y) may also be called Normal(y).
In this application, when storing the shadow rendering result into frame buffer 94 in the memory, the GPU can store all the shadow rendering results on one texture of a preset format on frame buffer 94. The texture of the preset format can include at least 4 channels, two of which can be used to store the normal information, another one to store the shadow information, and another one to store the distance information.
As a possible implementation, take the preset format being the RGBA16F format as an example. With reference to Figure 13, after the GPU completes the shadow rendering operation on frame buffer 92, the shadow rendering pipeline can output the shadow rendering result to the RGBA16F-format texture on frame buffer 94. For example, the normal information (x) (i.e., Normal(x)) can be output and stored in the R channel of the RGBA16F format on frame buffer 94; the normal information (y) (i.e., Normal(y)) can be output and stored in the G channel; the shadow information (ShadowMask) can be output and stored in the B channel; and the distance information (Distance) can be output and stored in the A channel. In this application, the normal information (x) may also be called the first normal information, and the normal information (y) may also be called the second normal information.
In this way, the purpose of saving the shadow rendering result on the same texture is achieved. Compared with storing the shadow rendering result on two or more textures, the solution provided by this example can, in addition to saving the storage overhead of the memory, make it more convenient for other pipelines to call the shadow rendering result.
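The channel mapping above can be sketched as a per-pixel pack/unpack pair. This is an illustration of my own, not the patent's code: the dictionary texel is a hypothetical stand-in for one RGBA16F texel, and the point is simply that one read recovers all four components of the shadow rendering result.

```python
# Illustrative sketch (not from the patent's code): packing the four components
# of the shadow rendering result -- Normal(x), Normal(y), ShadowMask, Distance --
# into the R, G, B, A channels of a single per-pixel texel, following the
# channel mapping described above. The dict texel stands in for an RGBA16F texel.

def pack_texel(normal_x, normal_y, shadow_mask, distance):
    """One RGBA texel holding the full shadow rendering result for a pixel."""
    return {"R": normal_x, "G": normal_y, "B": shadow_mask, "A": distance}

def unpack_texel(texel):
    """A later pipeline (e.g., denoising) recovers everything in one read."""
    return texel["R"], texel["G"], texel["B"], texel["A"]

texel = pack_texel(0.3, -0.7, 1.0, 12.5)
print(unpack_texel(texel))  # (0.3, -0.7, 1.0, 12.5)
```

On the GPU this packing would be done by the shadow pipeline's output declaration into the RGBA16F attachment; the sketch only shows the single-read property the text claims.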
It can be seen that, in the example of Figure 12, the electronic device can trigger shadow rendering by itself after the GPU completes normal rendering and store the result in the memory. Then, the game application can also instruct the electronic device to perform the shadow rendering operation in the subsequently issued rendering command stream. For example, the game application can issue a rendering command 904 (corresponding to the fifth rendering instruction) to instruct the electronic device to perform shadow rendering of the current frame image. The rendering command 904 may include the keyword Shadow. Correspondingly, the interception module can intercept the rendering command 904 according to the keyword Shadow and send it to the processing module. After receiving the rendering command 904, the processing module can call back the frame buffer ID of frame buffer 94 to the game application. It can be understood that, due to the mechanism shown in Figure 12 in which the electronic device performs shadow rendering by itself, the shadow rendering result may already be stored in frame buffer 94 before the game application issues the rendering command 904. Then, after receiving the rendering command 904, the processing module can directly call back to the game application the frame buffer ID of frame buffer 94 storing the shadow rendering result, so that the game application knows about and can use the shadow rendering result. In this application, the rendering command 904 may also be called the fifth rendering instruction.
It should be noted that the above example of Figure 12 is described taking the case where the electronic device performs shadow rendering by itself after the GPU completes normal rendering. In other embodiments of this application, the shadow rendering process can also be executed under the instruction of the game application.
For some game applications, their internal mechanism is similar to the logic shown in Figure 12: after instructing the electronic device to perform SubPass-based normal rendering, they can issue a rendering instruction instructing the electronic device to continue with the SubPass-based shadow rendering operation. In this way, when using the SubPass-Shadow pipeline for shadow rendering, the electronic device can also directly obtain the rendering result of the previous SubPass (i.e., the normal rendering result of SubPass-G), achieving an effect similar to the solution shown in Figure 12.
For example, please refer to Figure 14, a module interaction schematic diagram of yet another image rendering method provided by an embodiment of this application. The solution shown in Figure 14 can be used to perform shadow rendering. In this example, the electronic device executes, in order, the SubPass-based normal rendering instruction and the SubPass-based shadow rendering instruction issued by the game application.
As shown in Figure 14, the game application can issue a rendering command 904 to instruct the electronic device to perform shadow rendering of the current frame image. The rendering command 904 may include the keyword Shadow. The interception module can intercept the rendering command 904 according to the keyword Shadow and send it to the processing module. According to the rendering command 904, the processing module can instruct the GPU to perform shadow rendering. With reference to the example in Figure 12, the instruction instructing the GPU to perform shadow rendering can indicate that the shadow rendering pipeline can be based on the SubPass system. In addition, the instruction can also include the frame buffer ID of frame buffer 91, the frame buffer ID of frame buffer 93, and the frame buffer ID of frame buffer 92, so that the GPU obtains from frame buffer 91 and frame buffer 93 the input data required for the shadow rendering process. The instruction can also include the frame buffer ID of frame buffer 92, used to perform shadow rendering, and the frame buffer ID of frame buffer 94, used to store the shadow rendering result.
Correspondingly, the GPU may obtain the normal rendering result from framebuffer 91 and read the depth rendering result from framebuffer 93. The GPU may run the shadow rendering pipeline on framebuffer 92 and store the shadow rendering result obtained by rendering with the ray tracing algorithm in framebuffer 94. For the mechanism by which the shadow rendering result is stored in framebuffer 94, refer to the example of FIG. 13, which is not repeated here.
In this way, based on the descriptions of FIG. 9 to FIG. 14 above, the electronic device can perform SubPass-based normal rendering in the GPU's TileBuffer without storing the normal rendering result in memory, saving the read/write overhead of that process. The electronic device can also perform SubPass-based shadow rendering in the GPU's TileBuffer, directly obtaining the normal rendering result without reading it from memory, again saving read/write overhead. The electronic device can further store the entire shadow rendering result in a single texture of a preset format in memory, saving memory storage overhead.
The examples of FIG. 9 to FIG. 14 above describe the rendering method provided by the embodiments of this application from the perspective of inter-module interaction. The following continues to describe the solutions provided by the embodiments of this application with reference to the module interaction flowchart shown in FIG. 15, taking the case where the electronic device performs shadow rendering on its own after completing normal rendering as an example.
As shown in FIG. 15, the process may include:
S1501. After the game application starts running, it issues rendering command 901.
Exemplarily, rendering command 901 may include at least one glCreateFrameBuffer function, used to instruct the electronic device to create the framebuffers needed in the subsequent image rendering process.
S1502. The interception module intercepts rendering command 901 and determines that it instructs framebuffer creation.
The interception module may determine, based on the glCreateFrameBuffer function included in rendering command 901, that the command instructs framebuffer creation.
S1503. The interception module sends rendering command 901 to the creation module.
S1504. The creation module creates framebuffer 91 and framebuffer 92 in the GPU's cache.
S1505. Framebuffer 91 and framebuffer 92 are created in the GPU on-chip cache.
In this way, framebuffer 91 and framebuffer 92 may be TileBuffers in the GPU's on-chip storage space. Framebuffer 91 may be used for normal rendering, and framebuffer 92 may be used for shadow rendering.
S1506. The creation module creates framebuffer 93 and framebuffer 94 in memory.
S1507. Framebuffer 93 and framebuffer 94 are created in memory.
In this way, framebuffer 93 and framebuffer 94 may be framebuffers in memory. Framebuffer 93 may be used for depth rendering, and framebuffer 94 may be used to store the shadow rendering result.
It should be noted that the order in which S1504-S1505 and S1506-S1507 are executed is not limited. For example, in some embodiments, S1504-S1505 may be executed before S1506-S1507. In other embodiments, S1504-S1505 may be executed after S1506-S1507. In still other embodiments, S1504-S1505 and S1506-S1507 may be executed concurrently.
S1508. The creation module sends the framebuffer IDs of the newly created framebuffers to the processing module.
The newly created framebuffers may include framebuffers 91-94. Accordingly, the framebuffer IDs of the newly created framebuffers may include the IDs of framebuffers 91-94.
This completes the creation of the framebuffers, so that they can be invoked at any time during the subsequent rendering of frame images.
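The framebuffer creation flow in S1501-S1508 can be sketched as follows. All class and attribute names are illustrative assumptions; the sketch only models the two storage tiers (on-chip TileBuffer vs. main memory) and the ID callback, and starts numbering at 91 purely to mirror the example.

```python
import itertools


class CreationModule:
    """Toy model of the creation module: allocates framebuffer IDs and
    records where each framebuffer lives and what it is for."""

    def __init__(self):
        self._ids = itertools.count(91)   # start at 91 to mirror the text
        self.framebuffers = {}            # id -> {"location", "purpose"}

    def create(self, location, purpose):
        fb_id = next(self._ids)
        self.framebuffers[fb_id] = {"location": location, "purpose": purpose}
        return fb_id


creator = CreationModule()
fb91 = creator.create("tile_buffer", "normal rendering")    # S1504-S1505
fb92 = creator.create("tile_buffer", "shadow rendering")
fb93 = creator.create("memory", "depth rendering")          # S1506-S1507
fb94 = creator.create("memory", "shadow result storage")
new_ids = [fb91, fb92, fb93, fb94]                          # returned in S1508
```

The record of each framebuffer's location is what later lets the pipeline route the normal result to on-chip storage and the depth/shadow results to memory.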
S1509. The game application issues rendering command 902.
Exemplarily, rendering command 902 may include the keyword depthMap, used to instruct the electronic device to render the depth information of the current frame image (e.g., the Nth frame image).
S1510. The interception module intercepts rendering command 902 and determines that it instructs depth rendering.
The interception module may determine, based on the keyword depthMap, that rendering command 902 instructs depth rendering.
S1511. The interception module sends rendering command 902 to the processing module.
S1512. The processing module sends a depth rendering instruction to the GPU.
Exemplarily, the processing module may generate the depth rendering instruction according to rendering command 902. In some embodiments, the depth rendering instruction may include the framebuffer ID of framebuffer 93, so as to instruct the GPU to store the depth rendering result in framebuffer 93.
It should be noted that, similar to the foregoing description, the depth rendering instruction may be implemented by the processing module calling an API in the graphics library through the depth rendering instruction, thereby instructing the GPU to perform the corresponding depth rendering operation.
S1513. The GPU performs the depth rendering operation according to the depth rendering instruction.
S1514. The GPU sends the depth rendering result to memory.
S1515. The memory stores the depth rendering result in framebuffer 93.
This completes the depth rendering process.
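The keyword-based interception used in S1510 (and again in S1517 and S1533 below) can be sketched as a simple classifier. The keywords come from the text; the function name, the mapping structure, and the string-containment test are illustrative assumptions about how such an interception module might be built.

```python
# Map each command-stream keyword to the operation it instructs.
KEYWORD_TO_OP = {
    "depthMap": "depth_rendering",   # rendering command 902
    "Vertex": "normal_rendering",    # rendering command 903
    "Shadow": "shadow_rendering",    # rendering command 904
}


def classify_command(command_text):
    """Return the rendering operation a command instructs, or None if the
    command carries none of the known keywords."""
    for keyword, operation in KEYWORD_TO_OP.items():
        if keyword in command_text:
            return operation
    return None
```

A command that matches is forwarded to the processing module under its classified operation; anything else passes through the interception layer untouched.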
S1516. The game application issues rendering command 903.
Exemplarily, rendering command 903 may include the keyword Vertex, used to instruct the electronic device to render the geometry information, including depth information, of the current frame image (i.e., the Nth frame image).
S1517. The interception module intercepts rendering command 903 and determines that it instructs normal rendering.
The interception module may determine, based on the keyword Vertex, that rendering command 903 instructs normal rendering.
S1518. The interception module sends rendering command 903 to the processing module.
S1519. The processing module sends a normal rendering instruction to the GPU.
Exemplarily, the processing module may generate the normal rendering instruction according to rendering command 903. In some embodiments, the normal rendering instruction may include the framebuffer ID of framebuffer 91, so as to instruct the GPU to perform normal rendering on framebuffer 91. In addition, the normal rendering instruction may also include a first identifier, used to instruct the GPU to perform a SubPass-based rendering operation.
It should be noted that, similar to the foregoing description, the normal rendering instruction may be implemented by the processing module calling the SubPass-related API in the graphics library through the normal rendering instruction and, based on the Vertex data carried in rendering command 903, instructing the GPU to perform the corresponding geometry rendering operation including normals.
S1520. The GPU performs the normal rendering operation according to the normal rendering instruction.
Exemplarily, the GPU may run SubPass-G on framebuffer 91 according to the normal rendering instruction, so as to perform the normal rendering operation. Correspondingly, after the normal rendering in SubPass-G is completed, the normal rendering result can be obtained on framebuffer 91. The process from completing normal rendering to obtaining the normal rendering result in the on-chip cache may be as shown in S1521-S1522.
S1521. The GPU sends the normal rendering result to the GPU on-chip cache.
S1522. The normal rendering result is obtained in framebuffer 91.
In this way, the normal rendering result can be obtained in SubPass-G. This normal rendering result can be directly obtained by the next SubPass pipeline.
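The direct hand-off between SubPasses described above can be sketched as follows: the first pass leaves its result in a tile-local store and the next pass reads it there, with no round trip through memory. All class and function names here are illustrative assumptions, not the graphics library's actual API.

```python
class TileLocalStore:
    """Toy model of the GPU on-chip tile buffer shared between SubPasses."""

    def __init__(self):
        self.data = {}


class SubPass:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def run(self, tile):
        # Each pass reads whatever earlier passes left on the tile and
        # writes its own result back to the same tile-local store.
        tile.data[self.name] = self.fn(tile.data)


geometry = SubPass("SubPass-G", lambda d: "normal_result")
shadow = SubPass("SubPass-Shadow",
                 lambda d: f"shadow_from({d['SubPass-G']})")

tile = TileLocalStore()
geometry.run(tile)   # normal result stays on-chip
shadow.run(tile)     # consumed directly, no memory round trip
```

This is the property the text relies on: because SubPass-Shadow follows SubPass-G within the same tile pass, the normal result never needs to be written to, or read back from, memory.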
In this example, the GPU may also report that normal rendering is complete after finishing the normal rendering operation, for example as shown in S1523.
S1523. The GPU sends a normal-rendering-complete indication to the processing module.
S1524. The processing module sends the normal-rendering-complete indication to the shadow rendering module.
This triggers the shadow rendering module to control the GPU on its own to perform shadow rendering.
In other embodiments of this application, the GPU may directly report the normal-rendering-complete indication to the shadow rendering module, so as to trigger the shadow rendering module to control the GPU on its own to perform shadow rendering.
S1525. The shadow rendering module generates a shadow rendering instruction.
Exemplarily, the shadow rendering instruction may be used to instruct the GPU to perform shadow rendering. In this example, the shadow rendering instruction may also carry the first identifier, used to instruct the GPU to perform a SubPass-based rendering operation.
It should be noted that, similar to the foregoing description, the shadow rendering instruction may be implemented by the shadow rendering module calling the SubPass-related API in the graphics library through the shadow rendering instruction, thereby instructing the GPU to perform the corresponding shadow rendering operation.
S1526. The shadow rendering module sends the shadow rendering instruction to the GPU.
S1527. The GPU obtains the normal rendering result from the GPU on-chip cache.
Exemplarily, the GPU may obtain the normal rendering result from framebuffer 91. It can be understood that, since the shadow rendering pipeline (e.g., SubPass-Shadow) is a SubPass following SubPass-G, it can directly obtain the rendering result of SubPass-G, i.e., the normal rendering result.
S1528. The GPU reads the depth rendering result from memory.
Exemplarily, the GPU may read the depth rendering result from framebuffer 93.
S1529. The GPU performs the shadow rendering operation. For example, the GPU may compute the shadow rendering result based on the obtained normal rendering result and depth rendering result according to a preset ray tracing algorithm.
S1530. The GPU sends the shadow rendering result to memory.
S1531. The shadow rendering result is stored in framebuffer 94.
Exemplarily, the way the shadow rendering result is stored in framebuffer 94 may refer to the scheme shown in FIG. 13, which is not repeated here.
In this way, the electronic device can implement TileBuffer-based shadow rendering. Since the normal rendering result does not need to be stored in memory, it does not need to be read back during shadow rendering. This saves the corresponding read/write overhead and improves shadow rendering efficiency.
When the game application subsequently needs to use the shadow rendering result, the electronic device can directly call back the shadow rendering result to the game application.
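The data flow of S1527-S1529 can be sketched with a deliberately simplified stand-in for the shadow computation. The patent specifies a preset ray tracing algorithm; the Lambert-style facing test below is only an assumed placeholder that shows the inputs (normals from the on-chip tile, depth from memory) and the output (a per-pixel shadow mask). All names and the lighting math are assumptions.

```python
def shade_tile(normals, depths, light_dir):
    """Produce a per-pixel shadow mask: 1.0 = lit, 0.0 = shadowed.

    normals: per-pixel (nx, ny, nz) tuples, as read from the tile buffer.
    depths:  per-pixel depth values, as read from memory.
    light_dir: a single directional-light vector.
    """
    mask = []
    for (nx, ny, nz), depth in zip(normals, depths):
        facing = nx * light_dir[0] + ny * light_dir[1] + nz * light_dir[2]
        # Surfaces facing away from the light, or at the far plane, are dark.
        mask.append(1.0 if facing > 0.0 and depth < 1.0 else 0.0)
    return mask
```

A real implementation would replace the facing test with ray-traced visibility queries, but the input/output shape, and hence the storage layout discussed around FIG. 13, would be the same.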
Exemplarily, as shown in S1532, the game application may issue rendering command 904, instructing the electronic device to perform shadow rendering of the current frame image. Rendering command 904 may include the keyword Shadow. Next, in S1533, the interception module may determine, based on the keyword Shadow, that rendering command 904 instructs shadow rendering. The interception module may send rendering command 904 to the processing module (as in S1534). Correspondingly, the processing module may directly send the framebuffer ID of framebuffer 94 to the game application (as in S1535), so that the game application can directly obtain the shadow rendering result from framebuffer 94.
In other embodiments of this application, the electronic device may also, according to rendering commands issued by the game application, perform the SubPass-Shadow shadow rendering after performing the SubPass-G normal rendering. For this process, refer to the example of FIG. 14; its specific implementation is not repeated here.
The foregoing mainly describes the solutions provided by the embodiments of this application from the perspective of the individual service modules. To implement the above functions, the device includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art should readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, this application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of this application. It should be noted that the division of modules in the embodiments of this application is illustrative and is merely a logical function division; other division manners are possible in actual implementation.
FIG. 16 shows a schematic composition diagram of an electronic device 1600. As shown in FIG. 16, the electronic device 1600 may include a processor 1601 and a memory 1602. The memory 1602 is used to store computer-executable instructions. Exemplarily, in some embodiments, when the processor 1601 executes the instructions stored in the memory 1602, the electronic device 1600 can be caused to perform the image rendering method shown in any of the above embodiments.
It should be noted that all related content of the steps involved in the above method embodiments can be cited in the functional descriptions of the corresponding functional modules, and is not repeated here.
FIG. 17 shows a schematic composition diagram of a chip system 1700. The chip system 1700 may include a processor 1701 and a communication interface 1702, used to support the relevant device in implementing the functions involved in the above embodiments. In a possible design, the chip system further includes a memory for storing the program instructions and data necessary for the terminal. The chip system may consist of chips, or may include chips and other discrete components. It should be noted that, in some implementations of this application, the communication interface 1702 may also be referred to as an interface circuit.
It should be noted that all related content of the steps involved in the above method embodiments can be cited in the functional descriptions of the corresponding functional modules, and is not repeated here.
The functions, actions, operations, or steps in the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by a software program, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), etc.
Although this application has been described in conjunction with specific features and embodiments thereof, it is evident that various modifications and combinations may be made without departing from the spirit and scope of this application. Accordingly, the specification and drawings are merely exemplary illustrations of this application as defined by the appended claims, and are deemed to cover any and all modifications, variations, combinations, or equivalents within the scope of this application. It is evident that those skilled in the art can make various changes and variations to this application without departing from its spirit and scope. Thus, if these modifications and variations of this application fall within the scope of the claims of this application and their equivalent technologies, this application is also intended to include them.
Claims (16)
- An image rendering method, applied to an electronic device, wherein a first application runs on the electronic device, and the first application instructs the electronic device to perform rendering processing of a first frame image by issuing a rendering instruction stream, the first frame image including a shadow region; the rendering instruction stream includes a first rendering instruction and a second rendering instruction, and the method comprises: rendering according to the first rendering instruction to obtain a depth rendering result of the first frame image, the depth rendering result being stored in a memory of the electronic device; rendering according to the second rendering instruction to obtain a normal rendering result of the first frame image, the normal rendering result being stored in an on-chip storage area of a graphics processing module of the electronic device; and obtaining, according to the depth rendering result and the normal rendering result, a shadow rendering result matching the shadow region.
- The method according to claim 1, wherein the rendering instruction stream further includes a third rendering instruction, the third rendering instruction being used to instruct the electronic device to create a first framebuffer in the memory, the first framebuffer being used to store the depth rendering result; before rendering according to the first rendering instruction to obtain the depth rendering result of the first frame image, the method further comprises: creating the first framebuffer in the memory according to the third rendering instruction; and wherein storing the depth rendering result in the memory of the electronic device comprises: storing the depth rendering result in the first framebuffer.
- The method according to claim 1 or 2, wherein the rendering instruction stream further includes a fourth rendering instruction, the fourth rendering instruction being used to instruct the electronic device to create a second framebuffer, the second framebuffer being used to store the normal rendering result; before rendering according to the second rendering instruction to obtain the normal rendering result of the first frame image, the method further comprises: creating the second framebuffer in the on-chip storage area of the graphics processing module according to the fourth rendering instruction; and wherein storing the normal rendering result in the on-chip storage area of the graphics processing module of the electronic device comprises: storing the normal rendering result in the second framebuffer.
- The method according to any one of claims 1-3, wherein the rendering instruction stream further includes a fifth rendering instruction, the fifth rendering instruction being used to instruct the electronic device to perform the rendering operation on the shadow information; and obtaining the shadow rendering result according to the depth rendering result and the normal rendering result comprises: in response to the fifth rendering instruction, reading the depth rendering result from the memory and obtaining the normal rendering result from the on-chip storage area of the graphics processing module; and processing according to a preset ray tracing algorithm to obtain the shadow rendering result.
- The method according to any one of claims 1-3, wherein obtaining the shadow rendering result according to the depth rendering result and the normal rendering result comprises: when the normal rendering operation is completed, triggering an instruction to the graphics processing module to perform a shadow rendering operation; the shadow rendering operation comprising: reading the depth rendering result from the memory and obtaining the normal rendering result from the on-chip storage area of the graphics processing module; and processing according to a preset ray tracing algorithm to obtain the shadow rendering result.
- The method according to claim 5, wherein before triggering the instruction to the graphics processing module to perform the shadow rendering operation, the method further comprises: generating a first message when the normal rendering operation is completed, the first message being used to indicate that the normal rendering operation is complete; and triggering the instruction to the graphics processing module to perform the shadow rendering operation comprises: triggering the instruction to the graphics processing module to perform the shadow rendering operation when the first message is generated.
- The method according to any one of claims 1-6, wherein rendering according to the second rendering instruction to obtain the normal rendering result of the first frame image comprises: issuing, according to the second rendering instruction, a sixth rendering instruction to the graphics processing module, the sixth rendering instruction being used to instruct the graphics processing module to perform the normal rendering operation of the first frame image on a first deferred rendering pipeline SubPass, the graphics processing module executing the sixth rendering instruction on the first SubPass to obtain the normal rendering result.
- The method according to claim 7, wherein obtaining the shadow rendering result according to the depth rendering result and the normal rendering result comprises: creating a second SubPass in the on-chip cache of the graphics processing module, the second SubPass being used to perform a shadow rendering operation; obtaining the rendering result of the first SubPass as input to the second SubPass, the rendering result of the first SubPass including the normal rendering result; reading the depth rendering result from the memory as input to the second SubPass; and processing the normal rendering result and the depth rendering result according to a preset ray tracing algorithm to obtain the shadow rendering result.
- The method according to any one of claims 1-8, wherein the shadow rendering result comprises: first normal information, second normal information, shadow information, and distance information.
- The method according to claim 9, wherein after obtaining the shadow rendering result, the method further comprises: outputting the shadow rendering result to a third framebuffer in the memory, the third framebuffer including a texture of a first format, the texture of the first format including at least four channels.
- The method according to claim 10, wherein outputting the shadow rendering result to the third framebuffer in the memory comprises: outputting the first normal information, the second normal information, the shadow information, and the distance information to different channels of the texture of the first format respectively.
- The method according to claim 10 or 11, wherein the first format is RGBA16F.
- The method according to any one of claims 1-12, wherein the graphics processing module is a graphics processing unit (GPU).
- An electronic device, comprising one or more processors and one or more memories; the one or more memories are coupled to the one or more processors and store computer instructions; when the one or more processors execute the computer instructions, the electronic device is caused to perform the image rendering method according to any one of claims 1-13.
- A computer-readable storage medium, comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the image rendering method according to any one of claims 1-13.
- A chip system, comprising an interface circuit and a processor; the interface circuit and the processor are interconnected by lines; the interface circuit is configured to receive signals from a memory and send the signals to the processor, the signals including computer instructions stored in the memory; when the processor executes the computer instructions, the chip system performs the image rendering method according to any one of claims 1-13.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210929017.X | 2022-08-03 | ||
CN202210929017.XA CN117557701B (zh) | 2022-08-03 | 2022-08-03 | Image rendering method and electronic device
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024027231A1 true WO2024027231A1 (zh) | 2024-02-08 |
Family
ID=89821043
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/091006 WO2024027231A1 (zh) | 2022-08-03 | 2023-04-26 | 一种图像渲染方法和电子设备 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN117557701B (zh) |
WO (1) | WO2024027231A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117745518B (zh) * | 2024-02-21 | 2024-06-11 | Innosilicon Microelectronics Technology (Wuhan) Co., Ltd. | Graphics processing method and system for optimizing memory allocation
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107209923A (zh) * | 2015-02-10 | 2017-09-26 | Qualcomm Incorporated | Hybrid rendering in graphics processing
US20180182153A1 (en) * | 2016-12-22 | 2018-06-28 | Apple Inc. | Mid-Render Compute for Graphics Processing
CN111033570A (zh) * | 2017-08-22 | 2020-04-17 | Qualcomm Incorporated | Rendering an image from computer graphics using two rendering computing devices
CN111383163A (zh) * | 2018-12-28 | 2020-07-07 | Intel Corporation | Adaptive multi-frequency shading (AMFS) based on real-time ray tracing (RTRT)
US20220101479A1 (en) * | 2020-09-30 | 2022-03-31 | Qualcomm Incorporated | Apparatus and method for graphics processing unit hybrid rendering
CN114419234A (zh) * | 2021-12-30 | 2022-04-29 | Beijing Sankuai Online Technology Co., Ltd. | Three-dimensional scene rendering method and apparatus, electronic device, and storage medium
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010238110A (ja) * | 2009-03-31 | 2010-10-21 | Sega Corp | Image processing device, image processing method, and image processing program
US10628910B2 (en) * | 2018-09-24 | 2020-04-21 | Intel Corporation | Vertex shader with primitive replication
CN111508055B (zh) * | 2019-01-30 | 2023-04-11 | Huawei Technologies Co., Ltd. | Rendering method and apparatus
CN114581589A (zh) * | 2020-11-30 | 2022-06-03 | Huawei Technologies Co., Ltd. | Image processing method and related apparatus
CN112381918A (zh) * | 2020-12-03 | 2021-02-19 | Tencent Technology (Shenzhen) Co., Ltd. | Image rendering method and apparatus, computer device, and storage medium
CN114756359A (zh) * | 2020-12-29 | 2022-07-15 | Huawei Technologies Co., Ltd. | Image processing method and electronic device
2022
- 2022-08-03: CN application CN202210929017.XA filed, granted as patent CN117557701B (zh), status: active
2023
- 2023-04-26: WO application PCT/CN2023/091006 filed, published as WO2024027231A1 (zh), status: unknown
Also Published As
Publication number | Publication date |
---|---|
CN117557701B (zh) | 2024-09-20 |
CN117557701A (zh) | 2024-02-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11347370B2 (en) | Method and system for video recording | |
US20200396496A1 (en) | Method, apparatus for processing video, electronic device and computer-readable storage medium | |
US11082397B2 (en) | Management system and method for remote controller of electronic device | |
JP6700254B2 (ja) | 通話中のリアルタイム共有 | |
WO2022052772A1 (zh) | 多窗口投屏场景下的应用界面显示方法及电子设备 | |
US9881353B2 (en) | Buffers for display acceleration | |
JP7157177B2 (ja) | ビデオ取得方法、装置、端末および媒体 | |
JP2013546043A (ja) | 即時リモートレンダリング | |
CN116672702A | Image rendering method and electronic device | |
WO2024027231A1 (zh) | Image rendering method and electronic device | |
CN114708369B (zh) | Image rendering method and electronic device | |
WO2023160167A1 (zh) | Image processing method, electronic device, and storage medium | |
CN111258519B (zh) | Screen split-screen implementation method and apparatus, terminal, and medium | |
CN116821040B (zh) | Display acceleration method, apparatus, and medium based on GPU direct memory access | |
WO2024037555A1 (zh) | Page display method and apparatus, device, and storage medium | |
CN114302208A (zh) | Video publishing method and apparatus, electronic device, storage medium, and program product | |
CN109302636B (zh) | Method and apparatus for providing panoramic image information of a data object | |
CN115269886A (zh) | Media content processing method, apparatus, device, and storage medium | |
CN114443189B (zh) | Image processing method and electronic device | |
WO2021052488A1 (zh) | Information processing method and electronic device | |
CN116740254B (zh) | Image processing method and terminal | |
CN116112617A (zh) | Method and apparatus for processing a broadcast picture, electronic device, and storage medium | |
CN114020375A (zh) | Display method and apparatus | |
WO2024051471A1 (zh) | Image processing method and electronic device | |
WO2024022257A1 (zh) | Content display method, device, and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23848942 Country of ref document: EP Kind code of ref document: A1 |