CN116740254A - Image processing method and terminal - Google Patents

Image processing method and terminal

Info

Publication number
CN116740254A
Authority
CN
China
Prior art keywords
shadow
rendering
terminal
image
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211185727.2A
Other languages
Chinese (zh)
Inventor
王伟亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202211185727.2A
Publication of CN116740254A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/06Ray-tracing

Abstract

An image processing method and a terminal relate to the field of image processing technology. Noise reduction is performed using the depth rendering result and the shadow rendering result obtained while rendering the current frame image, so a large amount of additional information does not need to be acquired for the computation. This reduces the bandwidth requirement and makes the method suitable for mobile terminals such as mobile phones and tablets. The method comprises the following steps: the terminal completes depth rendering and obtains a depth rendering result of a first image; the terminal completes shadow rendering and obtains a shadow rendering result of the first image; and the terminal completes noise reduction processing based on the depth rendering result and the shadow rendering result, obtaining a smoothly varying shadow effect map.

Description

Image processing method and terminal
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and a terminal.
Background
In mobile terminals such as mobile phones and tablets, a ray tracing algorithm can be used in each rendering stage of the graphics rendering pipeline, which greatly improves the rendering effect and accuracy. A ray tracing algorithm computes shading effects such as illumination, material, and shadow by emitting rays from a camera and performing intersection (collision) detection against the geometry in the scene. At the same time, because the number of rays emitted by a ray tracing algorithm is limited, the sampling points on the picture are not continuous, and noise is produced. Therefore, the quality of an image rendered with a ray tracing algorithm depends heavily on the noise reduction processing.
However, existing noise reduction schemes typically require a large amount of information to participate in the computation. For example, when noise reduction is implemented with the Spatiotemporal Variance-Guided Filtering (SVGF) algorithm, not only the information of the current frame but also the depth, color, and other information of the previous frame image must be acquired for the noise reduction computation. Accordingly, the graphics processing unit (Graphics Processing Unit, GPU) needs to read a large amount of information from the memory. This places a high demand on the read-write bandwidth between the GPU and the memory, a demand that mobile terminals such as mobile phones and tablets are very likely unable to bear.
Disclosure of Invention
In view of this, the present application provides an image processing method and a terminal that realize noise reduction processing using the depth rendering result and the shadow rendering result obtained in the course of rendering the current frame image. A large amount of information does not need to be acquired to participate in the computation, which reduces the requirement on bandwidth and makes the method suitable for mobile terminals such as mobile phones and tablets.
In order to achieve the above purpose, the embodiment of the application adopts the following technical scheme:
In a first aspect, an image processing method is provided, which can be used in terminals such as mobile phones and tablets. When the terminal needs to display a first image, the terminal completes depth rendering to obtain a depth rendering result of the first image, and completes shadow rendering to obtain a shadow rendering result of the first image. Then, the terminal completes filtering of the shadow edges in the first image based on the depth rendering result and the shadow rendering result, thereby realizing noise reduction, and a smoothly varying shadow effect map can be obtained.
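For illustration only (the patent text itself contains no code), the three-step flow described in this first aspect might be organized as in the following C++ sketch. Every type and function name here is a hypothetical placeholder, and the stubs stand in for real GPU work:

    #include <vector>

    // Placeholder types; a real renderer would use GPU resources instead.
    struct Image { int w = 0, h = 0; std::vector<float> texels; };
    struct Scene { /* geometry, lights, camera ... */ };

    // Step 1: depth rendering -> depth rendering result of the first image.
    static Image renderDepth(const Scene&) { return {}; /* stub */ }
    // Step 2: ray-traced shadow rendering -> noisy shadow rendering result.
    static Image renderShadow(const Scene&, const Image& /*depth*/) { return {}; /* stub */ }
    // Step 3: filtering of the shadow edges, using only current-frame
    // results, so no previous-frame data has to be read from memory.
    static Image denoiseShadow(const Image& /*depth*/, const Image& /*shadow*/) { return {}; /* stub */ }

    Image processFirstImage(const Scene& scene) {
        Image depth  = renderDepth(scene);
        Image shadow = renderShadow(scene, depth);
        return denoiseShadow(depth, shadow);  // smoothly varying shadow effect map
    }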
In summary, with the embodiments of the present application, a smooth shadow effect map can be obtained from the depth rendering result and the shadow rendering result, so the noise at shadow edges can be eliminated. Noise reduction does not require acquiring a large number of rendering results, so the overhead on data read-write bandwidth can be reduced. Moreover, because the noise at the shadow edges is eliminated by filtering only the shadow edges, no large amount of complex computation is needed: noise can be reduced in a targeted manner, and the computation is simplified. Therefore, the image processing method provided by the embodiments of the present application can also be applied to mobile terminals with limited data read-write bandwidth and computing resources, such as mobile phones and tablets, which improves the universality of the noise reduction processing.
Common rendering pipelines include the deferred rendering pipeline and the forward rendering pipeline, and the image processing method described above can be applied to both. Specific implementations of the image processing method provided in the embodiments of the present application in the deferred rendering pipeline and in the forward rendering pipeline are described below.
First, application to the deferred rendering pipeline.
In one possible design, before the terminal completes shadow rendering and obtains the shadow rendering result of the first image, the method further includes: the terminal completes geometric drawing (i.e., G-Buffer Pass) to obtain a geometric drawing result of the first image, where the geometric drawing result includes normal information. That the terminal completes shadow rendering and obtains the shadow rendering result of the first image includes: the terminal completes shadow rendering based on the depth rendering result and the normal information to obtain the shadow rendering result of the first image, where the shadow rendering result includes the normal information, shadow information, and distance information.
With a deferred rendering pipeline, all geometry within the scene may be drawn to a frame buffer before the shading calculations (the geometry drawing process shown in fig. 6). The geometry drawing process may calculate information such as the position, color, and normal of the mapping of the geometry onto pixels (primitives) in the scene. That is, with a deferred rendering pipeline, normal information may be calculated prior to shading. Accordingly, the normal information can be used for shadow rendering, yielding a shadow rendering result that includes the normal information, so that richer geometric information is available for the subsequent noise reduction processing.
In one possible design, that the terminal completes the noise reduction processing based on the depth rendering result and the shadow rendering result includes: the terminal completes the noise reduction processing based on the depth rendering result, the normal information, the shadow information, and the distance information. The depth rendering result is used to determine discontinuous surfaces in the first image, the normal information is used to determine the orientation of each pixel in the first image, the shadow information is used to determine the shadow regions in the first image, and the distance information is used to determine the penumbra regions in the first image.
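A minimal single-pixel sketch of one way such an edge-aware filter could be realized is given below (a cross-bilateral weighting; the concrete weight formulas and falloff constants are assumptions for illustration, not taken from the patent):

    #include <algorithm>
    #include <cmath>

    struct Pixel {
        float depth;     // from the depth rendering result
        float nx, ny;    // stored normal information (x and y components)
        float shadow;    // shadow information: 1 = fully lit, 0 = shadowed
        float distance;  // distance information, used to estimate the penumbra
    };

    // Filter one pixel against its neighborhood. Weights fall off across
    // depth discontinuities and normal changes, so filtering stays on one
    // surface; the distance term allows blur only inside the penumbra.
    float filterShadow(const Pixel& c, const Pixel* nbr, int n) {
        float sum = c.shadow, wsum = 1.0f;
        for (int i = 0; i < n; ++i) {
            const Pixel& p = nbr[i];
            // depth weight: suppress neighbors across discontinuous surfaces
            float wd = std::exp(-std::fabs(p.depth - c.depth) * 50.0f);
            // normal weight: suppress neighbors facing a different direction
            // (only the stored x/y components are compared here)
            float ndot = std::max(0.0f, p.nx * c.nx + p.ny * c.ny);
            float wn = ndot * ndot;
            // penumbra weight: allow more blur where the occluder is distant
            float wp = std::min(1.0f, c.distance);
            float w = wd * wn * wp;
            sum  += w * p.shadow;
            wsum += w;
        }
        return sum / wsum;  // smoothed shadow value for this pixel
    }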
In one possible design, after the shadow rendering result of the first image is obtained, the method further includes: the terminal encodes the shadow rendering result into a first map and stores the first map in a memory, where the first map includes at least four channels: two channels are used to store the normal information, one of the remaining two channels is used to store the shadow information, and the other is used to store the distance information. Before the noise reduction processing, the terminal acquires the first map from the memory to obtain the shadow rendering result.
With this embodiment, the shadow rendering result is encoded onto one map for storage, which reduces the bandwidth required to write the shadow rendering result into the memory and to read the shadow rendering result from the memory during the noise reduction processing. For example, during the noise reduction processing, most of the information used for noise reduction, namely the shadow rendering result, can be obtained simply by acquiring the first map.
In one possible design, the first map is in RGBA16F format, and the first map includes four channels: the R channel, G channel, B channel, and A channel. Illustratively, the R channel and G channel may be used to store the normal information, the B channel may be used to store the shadow information, and the A channel may be used to store the distance information. In this way, the shadow rendering result can be stored in the four channels of one map.
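As a sketch of the channel layout just described (a hypothetical helper for illustration; the patent does not prescribe any particular API, and plain floats stand in for the 16-bit half floats of the real texture format):

    // Pack one shadow-rendering texel into the four channels of an
    // RGBA16F map, following the layout described above.
    struct Rgba16f { float r, g, b, a; };

    Rgba16f packShadowResult(float normalX, float normalY,
                             float shadowMask, float distance) {
        Rgba16f t;
        t.r = normalX;     // R channel: Normal(x)
        t.g = normalY;     // G channel: Normal(y)
        t.b = shadowMask;  // B channel: shadow information (shadow mask)
        t.a = distance;    // A channel: distance information
        return t;
    }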
In one possible design, that the terminal completes geometric drawing to obtain the geometric drawing result of the first image includes: the terminal completes the geometric drawing in a first Tile Buffer of the GPU of the terminal, and stores the geometric drawing result in the first Tile Buffer.
In the geometric drawing process, the GPU needs to draw a large amount of geometry into the frame buffer, so the data read-write bandwidth overhead between the GPU and the frame buffer is large. Based on this, the frame buffer for geometric drawing is set to a frame buffer in the Tile Buffer of the GPU (i.e., the first Tile Buffer). In this way, the low-bandwidth-overhead characteristic of the Tile Buffer reduces the bandwidth overhead of the GPU when drawing geometric information.
In one possible design, before the terminal completes shadow rendering based on the depth rendering result and the normal information, the method further includes: the GPU acquires the normal information from the first Tile Buffer. That the terminal completes shadow rendering based on the depth rendering result and the normal information includes: the GPU completes shadow rendering in a second Tile Buffer of the GPU based on the depth rendering result and the normal information.
After the geometric drawing result is stored in the first Tile Buffer, the GPU may acquire the normal information from the first Tile Buffer during shadow rendering. Compared with acquiring the normal information from the memory, obtaining it from the first Tile Buffer reduces bandwidth consumption.
Second, application to the forward rendering pipeline.
In one possible design manner, the terminal completes shadow rendering to obtain a shadow rendering result of the first image, including: and the terminal finishes shadow rendering based on the depth rendering result to obtain a shadow rendering result of the first image, wherein the shadow rendering result comprises shadow information and distance information.
With the forward rendering pipeline, there is no geometric rendering process, and a geometric rendering result including normal information is not obtained. However, the terminal may still complete shadow rendering based on the depth rendering results.
In one possible design, the terminal completes the noise reduction process based on the depth rendering result and the shadow rendering result, including: the terminal completes the noise reduction processing based on the depth rendering result, the shadow information and the distance information. Wherein the depth rendering result is used to determine a discontinuous surface in the first image, the shadow information is used to determine a shadow region in the first image, and the distance information is used to determine a penumbra region in the first image.
In one possible design, after the shadow rendering result of the first image is obtained, the method further includes: the terminal encodes the shadow rendering result into a second map and stores the second map in a memory, where the second map includes at least two channels: one channel is used to store the shadow information, and the other channel is used to store the distance information. Before the noise reduction processing, the terminal acquires the second map from the memory to obtain the shadow rendering result.
With this embodiment, the shadow rendering result is encoded onto one map for storage, which reduces the bandwidth required to write the shadow rendering result into the memory and to read the shadow rendering result from the memory during the noise reduction processing. For example, during the noise reduction processing, most of the information used for noise reduction, namely the shadow rendering result, can be obtained simply by acquiring the second map.
In one possible design, the second map is in RG16F format, where the map in RG16F format includes two channels, an R channel and a G channel. Illustratively, the R channel is used to store shadow information and the G channel is used to store distance information.
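A companion sketch to the RGBA16F example above, for the two-channel forward-pipeline layout (again a hypothetical helper, not an API from the patent):

    // Forward-pipeline variant: only two channels are needed because no
    // normal information is produced.
    struct Rg16f { float r, g; };

    Rg16f packShadowResultForward(float shadowMask, float distance) {
        return { shadowMask,   // R channel: shadow information
                 distance };   // G channel: distance information
    }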
In one possible design, after the smoothly varying shadow effect map is obtained, the method further includes: the terminal pastes the smoothly varying shadow effect map onto the rendered first image to obtain the noise-reduced first image.
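One plausible way to realize this pasting step is to modulate each rendered pixel by the smoothed shadow factor (an assumption for illustration; the patent does not specify the blend operation):

    // Modulate an already rendered RGB pixel of the first image by the
    // smoothed shadow factor (1 = fully lit, 0 = fully shadowed).
    struct Rgb { float r, g, b; };

    Rgb applyShadow(Rgb lit, float shadowFactor) {
        return { lit.r * shadowFactor,
                 lit.g * shadowFactor,
                 lit.b * shadowFactor };
    }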
In one possible design, the terminal performs shadow rendering, including: the terminal completes shadow rendering based on a ray tracing algorithm. The shadow rendering is completed by adopting the ray tracing algorithm, so that the rendering effect and the accuracy are greatly improved.
In a second aspect, a terminal is provided, the terminal comprising one or more processors and one or more memories; one or more memories are coupled to the one or more processors, the one or more memories storing computer instructions. The computer instructions, when executed by the one or more processors, cause the terminal to perform the steps of: and the terminal finishes the depth rendering and obtains a depth rendering result of the first image. And the terminal finishes shadow rendering and obtains a shadow rendering result of the first image. And the terminal completes the noise reduction processing based on the depth rendering result and the shadow rendering result, and a smoothly-changed shadow effect diagram is obtained.
In one possible design, the terminal renders the first image using a deferred rendering pipeline, and the computer instructions, when executed by the one or more processors, cause the terminal to perform the steps of: the terminal completes geometric drawing to obtain a geometric drawing result of the first image, where the geometric drawing result includes normal information; and the terminal completes shadow rendering based on the depth rendering result and the normal information to obtain the shadow rendering result of the first image, where the shadow rendering result includes the normal information, shadow information, and distance information.
In one possible design, the one or more processors, when executing the computer instructions, cause the terminal to perform the steps of: the terminal completes the noise reduction processing based on the depth rendering result, the normal information, the shadow information, and the distance information. The depth rendering result is used to determine discontinuous surfaces in the first image, the normal information is used to determine the orientation of each pixel in the first image, the shadow information is used to determine the shadow regions in the first image, and the distance information is used to determine the penumbra regions in the first image.
In one possible design, the one or more processors, when executing the computer instructions, cause the terminal to perform the steps of: the terminal encodes the shadow rendering result into a first map and stores the first map in a memory, where the first map includes at least four channels: two channels are used to store the normal information, one of the remaining two channels is used to store the shadow information, and the other is used to store the distance information. Before the noise reduction processing, the terminal acquires the first map from the memory to obtain the shadow rendering result.
In one possible design, the first map is in RGBA16F format, and the first map includes four channels: the R channel, G channel, B channel, and A channel.
In one possible design, the one or more processors, when executing the computer instructions, cause the terminal to perform the steps of: the terminal completes the geometric drawing in a first Tile Buffer of the GPU of the terminal, and stores the geometric drawing result in the first Tile Buffer.
In one possible design, the one or more processors, when executing the computer instructions, cause the terminal to perform the steps of: the GPU acquires the normal information from the first Tile Buffer; and the GPU completes shadow rendering in a second Tile Buffer of the GPU based on the depth rendering result and the normal information.
In one possible design, the terminal renders the first image using a forward rendering pipeline, and the computer instructions, when executed by the one or more processors, cause the terminal to perform the steps of: the terminal completes shadow rendering based on the depth rendering result to obtain the shadow rendering result of the first image, where the shadow rendering result includes shadow information and distance information.
In one possible design, the one or more processors, when executing the computer instructions, cause the terminal to perform the steps of: the terminal completes the noise reduction processing based on the depth rendering result, the shadow information and the distance information. Wherein the depth rendering result is used to determine a discontinuous surface in the first image, the shadow information is used to determine a shadow region in the first image, and the distance information is used to determine a penumbra region in the first image.
In one possible design, the one or more processors, when executing the computer instructions, cause the terminal to perform the steps of: the terminal encodes the shadow rendering result into a second map and stores the second map in a memory, where the second map includes at least two channels: one channel is used to store the shadow information and the other channel is used to store the distance information. Before the noise reduction processing, the terminal acquires the second map from the memory to obtain the shadow rendering result.
In one possible design, the second map is in RG16F format, and the second map includes two channels, an R channel and a G channel.
In one possible design, the one or more processors, when executing the computer instructions, cause the terminal to perform the steps of: the terminal pastes the smoothly varying shadow effect map onto the rendered first image to obtain the noise-reduced first image.
In one possible design, the one or more processors, when executing the computer instructions, cause the terminal to perform the steps of: the terminal completes shadow rendering based on a ray tracing algorithm.
In a third aspect, a chip system is provided. The chip system includes an interface circuit and a processor, which are interconnected through circuitry. The interface circuit is used to receive signals from a memory and send the signals to the processor, the signals including computer instructions stored in the memory. When the processor executes the computer instructions, the chip system performs the image processing method of the first aspect and any of its possible designs.
In a fourth aspect, a computer-readable storage medium is provided, including computer instructions which, when executed, perform the image processing method of the first aspect and any of its possible designs.
In a fifth aspect, a computer program product is provided, including instructions which, when run on a computer, enable the computer to perform the image processing method of the first aspect and any of its possible designs.
It should be appreciated that the technical features of the solutions provided in the second, third, fourth, and fifth aspects all correspond to the image processing method provided in the first aspect and its possible designs, so the advantages that can be achieved are similar and are not repeated herein.
Drawings
FIG. 1 is a schematic logic diagram of image rendering;
FIG. 2 is a schematic diagram of a ray tracing algorithm;
FIG. 3 is an input-output schematic diagram of a noise reduction process according to an embodiment of the present application;
FIG. 4 is a hardware structure diagram of a terminal according to an embodiment of the present application;
FIG. 5 is a software and hardware architecture diagram of a terminal according to an embodiment of the present application;
FIG. 6 is a schematic diagram of input and output for the noise reduction process of a deferred rendering pipeline according to an embodiment of the present application;
FIG. 7 is a block diagram of shadow rendering according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a storage form of a shadow rendering result according to an embodiment of the present application;
FIG. 9A is a block diagram illustrating a noise reduction process according to an embodiment of the present application;
FIG. 9B is a schematic diagram of a portion of a noise reduction process according to an embodiment of the present application;
FIG. 10 is a schematic diagram of input and output for the noise reduction process of a forward rendering pipeline according to an embodiment of the present application;
FIG. 11 is a block diagram illustrating another shadow rendering according to an embodiment of the present application;
FIG. 12 is a schematic diagram of another storage form of a shadow rendering result according to an embodiment of the present application;
FIG. 13 is a schematic flowchart of image processing according to an embodiment of the present application;
FIG. 14 is a schematic diagram of the composition of a terminal according to an embodiment of the present application;
FIG. 15 is a schematic diagram of a chip system according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings. In the description of the embodiments, the terminology used is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms such as "one or more," unless the context clearly indicates otherwise. It should also be understood that in the following embodiments of the present application, "at least one" and "one or more" mean one, two, or more than two. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, and B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise. The term "coupled" includes both direct and indirect connections, unless stated otherwise. The terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
Currently, most mobile terminals can provide an image display function to the user. For example, mobile phones and tablets may provide image display functionality.
By way of example, an Application (APP) may be installed in a mobile terminal. When the application program needs to display the image through the mobile terminal, an instruction can be sent to the mobile terminal, so that the mobile terminal can complete rendering of the image according to the instruction, and finally the image obtained through rendering is displayed through a display screen of the mobile terminal.
With reference to fig. 1, a flow chart of image rendering is shown. The mobile terminal may be provided with a central processing unit (Central Processing Unit, CPU), a GPU, a memory, and the like. Wherein the CPU may be used for instruction processing as well as control. The GPU may complete rendering of the image under control of the CPU. The memory may then be used to provide storage functions, such as storing rendering results obtained by GPU rendering.
As shown in fig. 1, the application may issue a rendering instruction for instructing the mobile terminal to render an image. After receiving the rendering instruction, the CPU may call a corresponding graphics rendering application programming interface (Application Programming Interface, API) to instruct the GPU to perform a rendering operation corresponding to the rendering instruction. The GPU executes the rendering instruction and stores the rendering result in the memory. Finally, the rendering result in the memory can be sent to a display screen for display.
In the process of rendering a frame image, the application can control the mobile terminal through rendering instructions to perform rendering operations such as depth rendering, geometric drawing, and shadow rendering, so as to obtain complete frame image information. In the following examples, a game application is taken as an example of the application. It will be appreciated that the game application may present a game picture to the user through the mobile terminal during execution. The game picture may be a video picture, which may be composed of multiple frame images played in succession.
The image rendering process may be implemented based on a ray tracing algorithm. As shown in fig. 2, based on a ray tracing algorithm, the GPU may split the rendering task of a scene into several rays emitted from a camera (camera), such as the view rays (view ray) shown in fig. 2. The view rays are intersected with the scene in parallel; according to the intersection position, information such as the material and texture of the object to be displayed (scene object) is acquired, and illumination is calculated in combination with the light source information. In this way, by calculating the information of each pixel that a view ray maps to on the image, the projection of the object onto the image can be determined. Further, in this scene, the light source may illuminate the object and form a shadow (e.g., via the shadow ray shown in fig. 2). Then, by the ray tracing algorithm described above, the position and related information of the object's shadow at the corresponding pixels on the image can also be determined, so that the display information of both the object and its shadow can be obtained on the image.
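To make the shadow-ray test concrete, here is a toy CPU-side sketch that checks whether a single sphere occludes the segment from a surface point to the light (purely illustrative; the patent does not describe any particular intersection routine):

    #include <cmath>

    struct Vec3 { float x, y, z; };
    static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Returns true if the sphere occludes the shadow ray from a surface
    // point toward the light, i.e. the point lies in shadow.
    bool inShadow(Vec3 point, Vec3 lightPos, Vec3 center, float radius) {
        Vec3 d = sub(lightPos, point);              // shadow-ray direction
        float tMax = std::sqrt(dot(d, d));          // distance to the light
        Vec3 dir = {d.x / tMax, d.y / tMax, d.z / tMax};
        Vec3 oc = sub(point, center);
        float b = dot(oc, dir);
        float c = dot(oc, oc) - radius * radius;
        float disc = b * b - c;                     // quadratic discriminant
        if (disc < 0.0f) return false;              // ray misses the sphere
        float t = -b - std::sqrt(disc);             // nearest intersection
        return t > 1e-4f && t < tMax;               // hit between point and light
    }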
That is, by adopting the ray tracing algorithm, information such as illumination, texture, material, shadow and the like can be calculated, so that the rendering effect and the accuracy are greatly improved. Illustratively, with continued reference to fig. 2, the object in the image is shadow-rendered based on a ray tracing algorithm, such that the shadow of the object may be included in the image, which is more realistic.
However, with a ray tracing algorithm, the number of rays emitted is typically limited, and the sampling points on the image are therefore not continuous. Thus, noise is likely to appear in the rendered image, and the quality of an image rendered based on a ray tracing algorithm depends greatly on the noise reduction processing: if the noise reduction works well, the image quality is high; if it works poorly, the image quality is low.
Conventionally, when performing noise reduction on the rendering result of a ray tracing algorithm, the GPU needs to obtain a large number of rendering results to participate in the noise reduction calculation. Taking noise reduction with the SVGF algorithm as an example, the GPU needs to obtain not only the result of rendering the current frame but also the result of rendering the previous frame image, where the rendering results include information such as depth and color. It should be appreciated that rendering results are typically stored in the memory, and reading them from the memory costs bandwidth. When the GPU reads a large number of rendering results from the memory, a high demand is placed on the data read-write bandwidth between the GPU and the memory. Moreover, with a conventional noise reduction algorithm, after acquiring a large number of rendering results the GPU must iterate continuously in the time domain; the algorithm is complex and the amount of computation is large.
However, in mobile terminals such as mobile phones and tablet computers, the data read-write bandwidth between the GPU and the memory is limited, and the computing resources are limited. Therefore, mobile terminals such as mobile phones and tablets are very likely unable to bear conventional noise reduction algorithms, so those algorithms cannot be applied to mobile terminals. Ultimately, the quality of the images displayed on mobile terminals such as mobile phones and tablets is poor.
In view of the above problems, an embodiment of the present application provides an image processing method, which is applicable to host-side devices with strong computing capability, such as a personal computer (Personal Computer, PC) or an Xbox platform, as well as to mobile terminals such as mobile phones and tablets. For convenience of explanation, the host side and the mobile terminal are hereinafter collectively referred to as the terminal. Referring to fig. 3, after completing depth rendering and shadow rendering for the current frame image, the terminal may complete filtering of the shadow edges in the frame image according to the depth rendering result and the shadow rendering result, so as to obtain a smooth shadow effect map. The smooth shadow effect map is then attached to the rendered frame image, and the noise-reduced frame image is obtained.
In summary, with the embodiments of the present application, a smooth shadow effect map can be obtained from the depth rendering result and the shadow rendering result, so the noise at shadow edges can be eliminated. A large number of rendering results do not need to be acquired to realize noise reduction, so the overhead on data read-write bandwidth can be reduced. Moreover, filtering the shadow edges eliminates their noise without a large amount of complex computation, which simplifies the computation. Therefore, the image processing method provided by the embodiments of the present application can also be applied to mobile terminals with limited data read-write bandwidth and computing resources, such as mobile phones and tablets, which improves the universality of the noise reduction processing.
By way of example, the terminals may be cell phones, tablet computers, desktop computers, laptop computers, handheld computers, notebook computers, PCs, ultra-mobile personal computers (ultra-mobile personal computer, UMPC), netbooks, as well as cellular phones, personal digital assistants (personal digital assistant, PDA), artificial intelligence (artificial intelligence, AI) terminals, wearable terminals, vehicle terminals, smart home terminals, and/or smart city terminals, among others, that may provide image display functionality. The embodiment of the application does not limit the specific form of the terminal.
Referring to fig. 4, a hardware structure diagram of a terminal according to an embodiment of the present application is provided. As shown in fig. 4, taking a mobile phone as the terminal, the terminal may include a processor 410, an external memory interface 420, an internal memory 421 (referred to as a memory for short), a universal serial bus (universal serial bus, USB) interface 430, a charge management module 440, a power management module 441, a battery 442, an antenna 1, an antenna 2, a mobile communication module 450, a wireless communication module 460, an audio module 470, a speaker 470A, a receiver 470B, a microphone 470C, an earphone interface 470D, a sensor module 480, keys 490, a motor 491, an indicator 492, a camera 493, a display screen 494, a subscriber identification module (subscriber identification module, SIM) card interface 495, and the like.
Processor 410 may include a plurality of processors, such as a CPU and a GPU. The GPU is a microprocessor for image processing, used to perform mathematical and geometric calculations, complete image rendering and noise reduction processing, and the like. Processor 410 may include one or more GPUs that execute program instructions to generate or change display information. In one particular implementation, the GPU may be provided with an on-chip memory space, and data in this on-chip memory space can be called quickly while the GPU operates. A frame buffer provided in the on-chip memory space of the GPU may also be referred to as a Tile Buffer. Compared with reading a rendering result from the memory, the GPU reading a rendering result from the Tile Buffer has lower bandwidth cost, which can reduce the bandwidth overhead.
It should be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the mobile phone. In other embodiments, the mobile phone may include more or fewer components than shown, or combine certain components, or split certain components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The software system of the terminal can adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiment of the present application takes an Android™ system with a layered architecture as an example to illustrate the software architecture of the terminal. The layered architecture divides the software system of the terminal into several layers, each layer has a clear role and division of labor, and the layers communicate through software interfaces.
Referring to fig. 5, taking a mobile phone as an example, a software and hardware architecture of a terminal may include an Application (APP) layer, a framework (framework) layer, a system library, a hardware layer, and the like.
The application layer may also be referred to as an application program layer. In some implementations, the application layer may include a series of application packages. The application packages may include applications such as camera, gallery, calendar, phone, map, navigation, WLAN, Bluetooth, music, video, and short message. In embodiments of the present application, the application packages may also include applications that need to present images or video to a user by rendering the images. By way of example, an application requiring image rendering may be a game application. Video can be understood as the continuous playback of multiple frames of images. Shadow effects may be included in the image to be rendered.
The framework layer may also be referred to as an application framework layer. The framework layer may provide an application programming interface (application programming interface, API) and a programming framework for the applications of the application layer, and includes some predefined functions. By way of example, the framework layer may include a window manager, a content provider, a view system, a resource manager, a notification manager, an activity manager, an input manager, and the like. The window manager provides the window management service (Window Manager Service, WMS), which may be used for window management, window animation management, and surface management, and serves as a transfer station to the input system. The content provider is used to store and retrieve data and make the data accessible to applications; the data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, and the like. The view system includes visual controls, such as controls for displaying text and controls for displaying pictures, and may be used to build applications. A display interface may be composed of one or more views; for example, a display interface including a short-message notification icon may include a view displaying text and a view displaying a picture. The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files. The notification manager allows an application to display notification information in the status bar; it can be used to convey notification-type messages that disappear automatically after a short stay without user interaction, for example to notify that a download is complete or to give a message alert. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications that appear on the screen as a dialog window. For example, text information is prompted in the status bar, a prompt tone is emitted, the terminal vibrates, or an indicator light blinks. The activity manager may provide the activity management service (Activity Manager Service, AMS), which may be used for the start-up, switching, and scheduling of system components (e.g., activities, services, content providers, and broadcast receivers) as well as the management and scheduling of application processes. The input manager may provide the input management service (Input Manager Service, IMS), which may be used to manage inputs to the system, such as touch-screen input, key input, and sensor input. The IMS retrieves events from input device nodes and distributes the events to appropriate windows through interaction with the WMS.
In the embodiment of the present application, one or more functional modules may be disposed in the frame layer, so as to implement the solution provided in the embodiment of the present application. By way of example, a creation module, a processing module, and the like may be provided in the framework layer.
The creation module may be configured to create frame buffers (Frame Buffer, FB) in the memory and in the GPU on-chip storage. For example, the creation module creates, in the memory, a frame buffer for storing the depth rendering result, a frame buffer for storing the shadow rendering result, and a frame buffer for storing the noise reduction processing result. For another example, the creation module creates a Tile Buffer on the GPU for shadow rendering. When rendering is implemented with a deferred rendering pipeline, the creation module may also create a Tile Buffer for geometric drawing on the GPU.
It should be noted that, hereinafter, the use of frame buffering is mainly described, but the creation process of frame buffering is not excessively described, and the description in the related art may be referred to specifically, and is not repeated herein.
The processing module may be configured to process the rendering instruction issued by the application program, and call a corresponding API to instruct the GPU to perform the rendering operation. For example, when the application issues a rendering instruction (hereinafter referred to as a depth rendering instruction) indicating depth rendering, the processing module may control the GPU to complete a depth rendering operation for a current frame image and store the depth rendering result in the frame buffer. The processing module may control the GPU to obtain a depth rendering result when the application issues a rendering instruction (hereinafter referred to as a shadow rendering instruction) indicating shadow rendering, and complete a shadow rendering operation according to a ray tracing algorithm to obtain a shadow rendering result, and then store the shadow rendering result in a frame buffer.
Also, for example, when rendering is implemented with the deferred rendering pipeline and the application issues a rendering instruction indicating geometric drawing (hereinafter referred to as a geometric drawing instruction), the processing module may control the GPU to complete the geometric drawing operation of the current frame image and store the geometric drawing result, which includes normal information, in a frame buffer (such as a Tile Buffer of the GPU). When the application issues a shadow rendering instruction, the processing module can control the GPU to acquire the depth rendering result and the normal information in the geometric drawing result; for example, the GPU acquires the depth rendering result from a frame buffer in the memory and the normal information from the Tile Buffer, completes the rendering operation according to a ray tracing algorithm to obtain the shadow rendering result, and then stores the shadow rendering result in a frame buffer.
It can be seen that the creation module and the processing module can complete a corresponding response to the rendering instruction issued by the application. In the embodiment of the present application, in order to enable the creation module and the processing module to successfully obtain the rendering command issued by the application program, as shown in fig. 5, an interception module may be further disposed in the framework layer. The interception module can be used for receiving a rendering instruction issued by an application program and sending the corresponding rendering instruction to the corresponding module for processing according to information indicated by the rendering instruction. For example, an instruction to create a frame buffer is sent to the creation module for processing. For another example, a rendering instruction indicating depth rendering is sent to a processing module for processing.
In the above description about the processing module, the processing module is described as controlling the GPU to execute the shadow rendering operation based on the shadow rendering instruction issued by the application program. In other embodiments, the terminal may also autonomously perform shadow rendering operations. In this embodiment, as shown in fig. 5, a shadow rendering module may also be disposed in the frame layer. The processing module may notify the shadow rendering module to control the GPU to complete shadow rendering after the GPU completes a rendering, such as completing depth rendering or completing geometry rendering. The processing module may determine that the GPU has completed a rendering according to the message that the GPU callback has completed a rendering. For example, after performing the geometric drawing operation to obtain a geometric drawing result, the GPU may send a message to the processing module that the geometric drawing is completed. The processing module may determine that the geometric rendering has been completed after receiving the message that the geometric rendering has been completed.
In the embodiment of the application, the GPU also needs to complete noise reduction processing. Similar to shadow rendering: in some embodiments, the processing module may control the GPU to complete the noise reduction process for the current frame image when the application issues the noise reduction process instruction, and store the noise reduction process result (i.e., the smoothed shadow effect map) in the frame buffer. In other embodiments, the terminal may also autonomously perform the noise reduction process. In this embodiment, as shown in fig. 5, a noise reduction module may also be disposed in the frame layer. The processing module or the shadow rendering module may notify the noise reduction module to control the GPU to complete the noise reduction process after the GPU completes shadow rendering. The processing module or the shadow rendering module may determine that the GPU has completed shadow rendering according to the message that the GPU callback has completed shadow rendering.
Hereinafter, an embodiment in which a terminal autonomously performs a shadow rendering process and a noise reduction process will be mainly described.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Framework (Media Framework), standard C library (Standard C library, libc), SQLite, graphics library, etc.
The surface manager is used to manage the display subsystem and provides fusion of 2D and 3D layers for multiple applications. The media framework supports playback and recording of many commonly used audio and video formats, as well as still image files. The media library may support a variety of audio and video encoding formats, such as: Moving Pictures Experts Group 4 (MPEG-4), H.264, Moving Picture Experts Group Audio Layer III (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR), Joint Photographic Experts Group (JPEG, or JPG), Portable Network Graphics (PNG), and the like. SQLite provides a lightweight relational database for the applications of the terminal.
Wherein the graphic library may include at least one of: open graphics library (Open Graphics Library, openGL), open graphics library of embedded system (OpenGL for Embedded Systems, openGL ES), vulkan, etc. The graphics library may provide drawing and manipulation of 2D graphics and 3D graphics in an application.
The hardware layer may include processors such as a CPU, a GPU, and an NPU. The CPU can control each module in the framework layer to perform its corresponding function. The hardware layer may further include a memory. In the embodiment of the present application, frame buffers can be created in the memory for storing rendering results and noise reduction processing results.
The image processing method provided by the embodiment of the application can be applied to the terminal with the software and hardware structures shown in the figures 4 and 5 to realize the noise reduction processing of the image. In the application, the noise points are mainly found in the edge area of the shadow, so that the filtering processing is mainly carried out on the edge of the shadow to form a smooth shadow effect graph, thereby realizing noise reduction. Thus, the noise reduction can be realized in a targeted manner.
The image processing method provided by the embodiment of the present application can be applied to both common rendering pipelines: the forward rendering (Forward Rendering) pipeline and the deferred rendering (Deferred Rendering) pipeline. The two pipelines have different rendering flows and produce different information during rendering. For example, with the forward rendering pipeline, a depth rendering result may be obtained; with the deferred rendering pipeline, not only a depth rendering result but also a geometry drawing result including normal information may be obtained. Specific implementations of the image processing method provided by the embodiment of the present application in the forward rendering pipeline and in the deferred rendering pipeline are described below.
Referring to fig. 6, with a deferred rendering pipeline, all geometry within the scene may be drawn to a frame buffer before the shading calculations (the geometry drawing process shown in fig. 6). The geometry drawing process may calculate information such as the position, color, and normal of the mapping of the geometry onto pixels (primitives) in the scene. With the deferred rendering pipeline, the depth rendering result obtained by depth rendering (i.e., the depth information shown in fig. 6) and the geometry drawing result obtained by geometry drawing (including the normal information shown in fig. 6) can both be used for shadow rendering, yielding a shadow rendering result that includes the normal information, shadow information, and distance information shown in fig. 6.
Then, as shown in fig. 6, when applied to the deferred rendering pipeline, the terminal implements the noise reduction processing according to the depth rendering result and the shadow rendering result. Specifically, the depth information obtained by depth rendering and the normal information, shadow information, and distance information obtained by shadow rendering are taken as the input of the noise reduction processing, and a smooth shadow effect map is output after the noise reduction processing is completed.
The Depth rendering process may be referred to as Depth Pass, and after the Depth rendering is completed, the Depth information may be stored in a frame buffer of a memory, such as frame buffer 71 shown in fig. 7. The geometric rendering process may also be referred to as G-buffer Pass, and after the geometric rendering is completed, the geometric rendering result including normal information may be stored in a frame buffer, and the frame buffer storing the geometric rendering result may be referred to as G-buffer. In the geometric drawing process, the GPU needs to finish a large amount of drawing to the G-buffer, so that the cost of the bandwidth for reading and writing data between the GPU and the G-buffer is large. Based on this, in some embodiments, the G-Buffer may be set as a frame Buffer in the Tile Buffer of the GPU (may also be referred to as a first Tile Buffer), such as frame Buffer 72 in the GPU shown in fig. 7. Thus, the bandwidth overhead of the GPU for drawing the geometric information to the G-buffer can be reduced. It should be noted that, in the embodiments of the present application, detailed descriptions of specific implementations of depth rendering and geometric rendering are not provided, and in the specific implementation, reference may be made to the descriptions of related prior art, which are not repeated herein.
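To make the pass structure above concrete, the following sketch creates a depth attachment (for the Depth Pass output) and an RGBA16F color attachment (e.g., for normal information) using standard OpenGL ES 3.0 calls. This is illustrative only: the patent does not mandate a particular graphics API, tile-memory placement is driver-dependent, and rendering to a half-float color attachment additionally requires a color-buffer-float extension on many GLES 3.0 devices.

    #include <GLES3/gl3.h>

    // Create a framebuffer with a depth texture (Depth Pass output)
    // and an RGBA16F color texture (e.g., for normal information).
    GLuint createGBuffer(GLsizei w, GLsizei h,
                         GLuint* depthTex, GLuint* normalTex) {
        glGenTextures(1, depthTex);
        glBindTexture(GL_TEXTURE_2D, *depthTex);
        glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH_COMPONENT24, w, h);

        glGenTextures(1, normalTex);
        glBindTexture(GL_TEXTURE_2D, *normalTex);
        glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA16F, w, h);

        GLuint fbo;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_2D, *depthTex, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, *normalTex, 0);
        return fbo;  // caller should check glCheckFramebufferStatus
    }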
And, the Shadow rendering process may also be referred to as Shadow Pass. In the deferred rendering pipeline, shadow rendering may be accomplished based on depth information resulting from the depth rendering and normal information resulting from the geometry rendering. Shadow rendering, therefore, needs to be performed after depth rendering and geometry rendering. However, in the embodiment of the present application, the order of precedence between the depth rendering and the geometric rendering is not limited. Hereinafter, shadow rendering is mainly described as an example immediately after the completion of geometric rendering.
By way of example, referring to fig. 7, a module interaction diagram for implementing shadow rendering is provided in an embodiment of the present application. In this example, the terminal autonomously performing shadow rendering after the geometric drawing is completed is described as an example.
As shown in fig. 7, after the GPU completes the geometric drawing, the processing module may indicate the current rendering progress to the shadow rendering module as: geometric drawing has been completed. The processing module can determine that the GPU has completed the geometric drawing according to a message, called back by the GPU, that the geometric drawing is completed. After receiving the rendering progress indicating that the geometric drawing has been completed, the shadow rendering module sends a shadow rendering instruction to the GPU.
The shadow rendering instruction may bind (Bind) a Tile Buffer of the GPU (which may also be referred to as a second Tile Buffer), so that the low-bandwidth-overhead characteristic of the Tile Buffer can be exploited for high-performance access to the information in the G-buffer when the shadow rendering operation is performed. For example, the shadow rendering instruction may be bound to frame buffer 73, so that the GPU can access the normal information in frame buffer 72 with high performance when performing the shadow rendering operation in frame buffer 73.
The shadow rendering instruction may further carry the ID of the frame buffer storing the depth information, the ID of the frame buffer storing the geometric drawing result including the normal information, and the ID of the frame buffer for storing the shadow rendering result. For example, the shadow rendering instruction may include the frame buffer IDs of frame buffer 71, frame buffer 72, and frame buffer 74. This makes it convenient for the GPU to obtain the input data required for shadow rendering from frame buffer 71 and frame buffer 72 and to store the shadow rendering result in frame buffer 74.
In response to the shadow rendering instruction, the GPU may obtain depth information and normal information, completing the shadow rendering operation. For example, the GPU may obtain depth information from frame buffer 71 and normal information from frame buffer 72.
In this application, after completing the shadow rendering operation, the GPU may store the shadow rendering result in the memory, so that the shadow rendering result can be retrieved during the subsequent noise reduction processing. For example, the GPU may store the shadow rendering result in frame buffer 74 of the memory.
Wherein the shadow rendering result includes normal information, shadow information, and distance information. The normal information may include the normal components in the x and y directions. That is, the normal information may include two parts: normal information (x) and normal information (y). The normal information (x) may also be referred to as Normal(x), and the normal information (y) may also be referred to as Normal(y).
In this application, when the GPU stores the shadow rendering result in the frame buffer of the memory, the entire shadow rendering result may be encoded into a map in a preset format (which may also be referred to as a first map) and then stored in the frame buffer. Encoding the shadow rendering result onto one map reduces the bandwidth required to write the shadow rendering result into the frame buffer when leaving the Tile Buffer, and to read the shadow rendering result from the frame buffer during the subsequent noise reduction processing.
Wherein, the map in the preset format may include at least four channels. Two channels are used to store normal information, one channel is used to store shadow information and the other channel is used to store distance information.
As one possible implementation, the preset format may be the RGBA16F format. With reference to fig. 8, after the GPU completes the shadow rendering operation on frame buffer 73, the shadow rendering result may be encoded onto a map in the RGBA16F format and then output to frame buffer 74. Then, in frame buffer 74, the shadow rendering result exists as a map in the RGBA16F format. For example, the normal information (x) (i.e., Normal(x)) may be stored in the R channel of the map; the normal information (y) (i.e., Normal(y)) may be stored in the G channel; the shadow information (shadow mask) may be stored in the B channel; and the distance information (Distance) may be stored in the A channel.
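For illustration only, the following is a minimal CPU-side sketch of this packing, using numpy arrays in place of GPU textures; the function name, argument layout, and array conventions are assumptions of the sketch rather than part of the embodiment:

```python
import numpy as np

def encode_shadow_result_rgba16f(normal_xy, shadow_mask, distance):
    """Pack the shadow rendering result into one RGBA16F-style map.

    normal_xy:   (H, W, 2) float array, normal x/y components
    shadow_mask: (H, W) float array, 1.0 = lit, 0.0 = in shadow
    distance:    (H, W) float array, shading-point-to-occluder distance
    """
    h, w, _ = normal_xy.shape
    rgba = np.empty((h, w, 4), dtype=np.float16)  # 4 x 16-bit float channels
    rgba[..., 0] = normal_xy[..., 0]  # R channel: Normal(x)
    rgba[..., 1] = normal_xy[..., 1]  # G channel: Normal(y)
    rgba[..., 2] = shadow_mask        # B channel: shadow mask
    rgba[..., 3] = distance           # A channel: distance
    return rgba
```

Four 16-bit float channels give the 64 bits per pixel mentioned below for the RGBA16F map.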
After the depth rendering is completed through the process, depth information can be obtained; and after the geometric drawing and the shadow rendering are completed through the processes, a shadow rendering result can be obtained, wherein the shadow rendering result comprises normal line information, shadow information and distance information. After that, noise reduction processing may be performed.
For example, referring to fig. 9A, an embodiment of the present application provides a module interaction diagram for implementing the noise reduction processing. In this example, the case where the terminal autonomously performs the noise reduction processing after completing the shadow rendering and obtaining the shadow rendering result is taken as an example.
As shown in fig. 9A, after the GPU finishes shadow rendering, the shadow rendering module may indicate to the noise reduction module that the current rendering progress is: shadow rendering has been completed. The shadow rendering module may determine that the GPU has finished shadow rendering according to a message called back by the GPU after the shadow rendering is completed. After receiving the message that the shadow rendering has been completed, the noise reduction module may issue a noise reduction processing instruction to the GPU.
The noise reduction processing instruction may carry the ID of the frame buffer storing the depth information, the ID of the frame buffer storing the shadow rendering result, and the ID of the frame buffer for storing the noise reduction result. For example, the noise reduction processing instruction may include the frame buffer ID of frame buffer 71, the frame buffer ID of frame buffer 74, and the frame buffer ID of frame buffer 91, so that the GPU can obtain the input data required for the noise reduction processing from frame buffer 71 and frame buffer 74, and store the noise reduction processing result in frame buffer 91.
In response to the noise reduction instruction, the GPU may obtain depth information, normal information, shadow information, and distance information, completing the noise reduction operation. It should be understood that if the normal information, the shadow information, and the distance information are stored after being encoded into a map in a preset format, the GPU may obtain the normal information, the shadow information, and the distance information only by acquiring one map from the frame buffer, thereby reducing the bandwidth required for data reading. For example, the GPU may obtain depth information from frame buffer 71 and a map storing normal information, shading information, and distance information from frame buffer 74.
The shadow information may be used by the GPU to determine the shadow region in the frame image. The shadow region includes an umbra region and a penumbra region: the umbra region refers to the fully dark region, and the penumbra region refers to the partially shadowed region between full darkness and full light. Penumbra regions are typically located at the edges of shadow regions, and noise typically occurs within the penumbra regions. The distance information may be used to calculate the pixel range (kernel) of the penumbra region in the shadow region.
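As one possible way to derive such a kernel, the similar-triangles estimate familiar from percentage-closer soft shadows could be adapted; the sketch below is an assumption for illustration (the embodiment does not specify the formula), and all names and the light-size parameter are hypothetical:

```python
def penumbra_kernel_radius(receiver_depth, occluder_distance,
                           light_size, pixels_per_unit):
    """Estimate a filter kernel radius (in pixels) for a penumbra pixel.

    Similar-triangles estimate: the penumbra widens with the gap between
    the shading point and its occluder, and with the (area) light size.
    """
    occluder_depth = receiver_depth - occluder_distance
    if occluder_depth <= 0.0:
        return 0.0  # degenerate geometry: no usable occluder, no filtering
    penumbra_width = occluder_distance * light_size / occluder_depth
    return penumbra_width * pixels_per_unit  # world-space width -> pixels
```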
In some embodiments, after calculating the pixel range of the penumbra region, the GPU may perform filtering processing (also referred to as softening processing, blurring processing, etc.) on the pixel range, so that the color change of the pixel points in the penumbra region is smoother, and a smooth shadow effect map is obtained. Thus, the noise reduction processing of the frame image can be completed.
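A per-pixel filtering step over the penumbra pixels could then look like the following sketch, where a simple box filter stands in for whatever softening filter an implementation actually uses:

```python
import numpy as np

def soften_penumbra(shadow_mask, kernel_radius):
    """Box-filter the shadow mask with a per-pixel kernel radius.

    shadow_mask:   (H, W) float array from the shadow rendering result
    kernel_radius: (H, W) int array; 0 means the pixel is not filtered
    Returns the smooth shadow effect map.  A straightforward O(H*W*k^2)
    sketch; a real renderer would use a separable or tile-local filter.
    """
    h, w = shadow_mask.shape
    out = shadow_mask.copy()
    for y in range(h):
        for x in range(w):
            r = int(kernel_radius[y, x])
            if r <= 0:
                continue  # umbra or fully lit: keep the original value
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = shadow_mask[y0:y1, x0:x1].mean()
    return out
```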
In an actual frame image, adjacent pixel points may not be continuous in space, typically because the scene has depth. As shown in fig. 9B, assume that pixel A and pixel B are two adjacent pixels in the image, but in the actual space pixel A is a point on the table surface while pixel B is a point on the ground or a wall. Pixel A and pixel B are obviously not continuous in space, which may also be referred to as a surface discontinuity. At such surface discontinuities, a shadow region generally does not continue. For example, in fig. 9B the shadow of the cup extends in the direction of pixel A; because the boundary between pixel A and pixel B is the table edge, beyond which lies the floor or wall, the shadow of the cup does not extend to pixel B.
Based on this, in some embodiments, the GPU may also determine discontinuous surfaces in the frame image based on the depth information, and apply the filtering only to spatially continuous pixels. The GPU may compute the partial derivatives of the depth information, which represent the rate of change of the view-space z. A large rate of change indicates that the surface is discontinuous; a small rate of change indicates that the surface is continuous. The GPU may determine whether the computed partial derivative is greater than a preset threshold, such as 0.1 or 0.12. If the partial derivative is greater than the preset threshold, the surface at the corresponding position is discontinuous; then, even if the pixel at that position falls within the pixel range of the penumbra region, no filtering is performed on it. Conversely, if the partial derivative is smaller than the preset threshold, the surface at the corresponding position is continuous; then, if the pixel at that position falls within the pixel range of the penumbra region, the filtering is performed. In this way, discontinuous surfaces, i.e., positions where no shadow falls, are excluded from the filtering, which improves the accuracy of the filtering.
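A minimal sketch of this test, approximating the partial derivatives with one-pixel forward differences (a CPU analogue of the shader ddx/ddy derivatives) and using the 0.1 threshold from the example above:

```python
import numpy as np

def depth_discontinuity_mask(depth, threshold=0.1):
    """Flag pixels where view-space depth changes too fast to filter across.

    depth: (H, W) float array of view-space z values.
    Returns a boolean mask; True marks a discontinuous surface, i.e. a
    pixel that must be excluded from the penumbra filtering.
    """
    dz_dx = np.abs(np.diff(depth, axis=1, append=depth[:, -1:]))
    dz_dy = np.abs(np.diff(depth, axis=0, append=depth[-1:, :]))
    return (dz_dx > threshold) | (dz_dy > threshold)
```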
In a frame image, where the orientation of adjacent pixels changes abruptly, the direction of the shadow also changes. For example, an object placed on the seat of a chair may cast shadows on both the seat and the back. Since the back and the seat are almost perpendicular, the direction of the shadow changes greatly where the back meets the seat. Based on this, in some embodiments, the GPU may also determine the orientation of each pixel based on the normal information, and determine the directional change of the shadow from the orientations of the pixels. This helps locate the penumbra region more accurately during the filtering.
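For illustration, a neighbor-rejection test along these lines might compare reconstructed normals; note that recovering z from the stored x/y components assumes front-facing normals, which is an assumption of this sketch, and the 0.9 cutoff is likewise illustrative:

```python
import numpy as np

def normals_compatible(n_center, n_neighbor, cos_threshold=0.9):
    """Decide whether a neighbor sample may contribute to the filter.

    n_center, n_neighbor: (x, y) normal components from the first map.
    Rejects neighbors whose orientation differs too much, e.g. across
    the edge between a chair seat and its back.
    """
    def to_unit(nxy):
        # Assumes a front-facing normal, so z = sqrt(1 - x^2 - y^2).
        z = np.sqrt(max(0.0, 1.0 - nxy[0] ** 2 - nxy[1] ** 2))
        return np.array([nxy[0], nxy[1], z])
    return float(np.dot(to_unit(n_center), to_unit(n_neighbor))) >= cos_threshold
```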
Through the above noise reduction processing, the edge of the shadow (i.e., the penumbra region) becomes smoother, and a smooth shadow effect map is obtained. After the noise reduction operation is finished, the smooth shadow effect map obtained by the noise reduction processing is stored in the memory, so that the noise-reduced frame image can be generated later. For example, the GPU may store the noise reduction processing result in frame buffer 91 of the memory.
In this way, in the deferred rendering pipeline, the noise reduction processing can be realized without acquiring information such as the depth and color of the previous frame image, and without a large amount of G-buffer information of the current frame image. Therefore, the bandwidth overhead caused by acquiring a large amount of information during the noise reduction processing can be reduced.
After all rendering operations and the noise reduction processing are completed, the smooth shadow effect map can be attached to the rendered frame image, so that the noise-reduced frame image is obtained.
The forward rendering pipeline differs from the deferred rendering pipeline as follows: with the forward rendering pipeline, a series of computations from the vertex shader to the pixel shader is performed for one geometry in the scene, and the result is then output to the frame buffer. A scene typically includes multiple geometries, so the series of computations from the vertex shader to the pixel shader is performed for each geometry in turn, each result being output to the frame buffer. That is, with the forward rendering pipeline, rendering is completed geometry by geometry; for each geometry, nothing is output until the pixel shader has finished, and no intermediate geometric drawing result including normal information is produced. In other words, the forward rendering pipeline has no G-buffer Pass.
For example, where a scene includes 2 chairs and 1 table, i.e., 3 geometries in total, the forward rendering pipeline may first perform the series of computations from the vertex shader to the pixel shader on one chair and output the result to the frame buffer; then perform the same computations on the other chair and output the result to the frame buffer; and finally perform them on the table and output the result to the frame buffer.
Consequently, with the forward rendering pipeline, no normal information is available for shadow rendering, and the resulting shadow rendering result accordingly does not include normal information. Referring to fig. 10, when applied to the forward rendering pipeline, the terminal implements the noise reduction processing according to the depth rendering result and the shadow rendering result as follows: the depth information obtained by the depth rendering, together with the shadow information and the distance information obtained by the shadow rendering, is taken as the input of the noise reduction processing, and the smooth shadow effect map is output after the noise reduction processing is completed.
In the forward rendering pipeline, the Depth rendering process may also be referred to as Depth Pass. After the depth rendering is completed, the depth information may be stored in a frame buffer of the memory, such as frame buffer 1101 shown in fig. 11. The Shadow rendering process may also be referred to as Shadow Pass. In the forward rendering pipeline, shadow rendering cannot obtain normal information and can only be completed based on the depth information. Therefore, shadow rendering needs to be performed after the depth rendering. In the following, the case where shadow rendering is performed immediately after the depth rendering is completed is mainly taken as an example.
For example, referring to fig. 11, an embodiment of the present application provides a module interaction diagram for implementing shadow rendering. In this example, the case where the terminal autonomously performs shadow rendering after the depth rendering is completed is taken as an example.
As shown in fig. 11, after the GPU finishes the depth rendering, the processing module may indicate to the shadow rendering module that the current rendering progress is: depth rendering has been completed. The processing module may determine that the GPU has completed the depth rendering according to a message called back by the GPU after the depth rendering is completed. After receiving the rendering progress indicating that the depth rendering has been completed, the shadow rendering module sends a shadow rendering instruction to the GPU.
The shadow rendering instruction may carry the ID of the frame buffer storing the depth information and the ID of the frame buffer for storing the shadow rendering result. For example, the shadow rendering instruction may include the frame buffer ID of frame buffer 1101 and the frame buffer ID of frame buffer 1102, so that the GPU can obtain the depth information required for the shadow rendering process from frame buffer 1101 and store the shadow rendering result in frame buffer 1102.
In response to the shadow rendering instruction, the GPU may obtain depth information, completing the shadow rendering operation. For example, the GPU may obtain depth information from the frame buffer 1101.
In this application, after completing the shadow rendering operation, the GPU may store the shadow rendering result in the memory, so that the shadow rendering result can be retrieved during the subsequent noise reduction processing. For example, the GPU may store the shadow rendering result in frame buffer 1102 of the memory. Unlike in the deferred rendering pipeline, in the forward rendering pipeline the shadow rendering result includes shadow information and distance information, but does not include normal information.
In this application, when the GPU stores the shadow rendering result in the frame buffer of the memory, the entire shadow rendering result may be encoded into a map in a preset format (which may also be referred to as a second map) and then stored in the frame buffer. Encoding the shadow rendering result onto one map reduces the bandwidth required to write the shadow rendering result into the frame buffer, and to read the shadow rendering result from the frame buffer during the subsequent noise reduction processing.
Wherein, the map of the preset format may include at least two channels. One channel is used to store shadow information and the other channel is used to store distance information.
As a possible implementation, the preset format may be the RG16F format. With reference to fig. 12, after completing the shadow rendering operation, the GPU may encode the shadow rendering result onto a map in the RG16F format and then output it to frame buffer 1102. Then, in frame buffer 1102, the shadow rendering result exists as a map in the RG16F format. For example, the shadow information (shadow mask) may be stored in the R channel of the RG16F format map on frame buffer 1102, and the distance information (Distance) may be stored in the G channel of the RG16F format map on frame buffer 1102.
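A corresponding minimal sketch of the two-channel packing, again with numpy standing in for the GPU texture and the function name assumed for illustration:

```python
import numpy as np

def encode_shadow_result_rg16f(shadow_mask, distance):
    """Forward-pipeline variant: pack the shadow mask and the distance
    into an RG16F-style two-channel map (no normal information exists).
    """
    rg = np.empty(shadow_mask.shape + (2,), dtype=np.float16)
    rg[..., 0] = shadow_mask  # R channel: shadow mask
    rg[..., 1] = distance     # G channel: distance
    return rg
```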
After the depth rendering is completed through the process, depth information can be obtained; and after the shadow rendering is completed through the process, a shadow rendering result can be obtained, wherein the shadow rendering result comprises shadow information and distance information. After that, noise reduction processing may be performed.
In the forward rendering pipeline, the noise reduction processing proceeds similarly to the deferred rendering pipeline; for details, refer to the related description of fig. 9A, which is not repeated herein. The only difference is that in the forward rendering pipeline, the shadow rendering result obtained by the GPU in response to the noise reduction processing instruction includes shadow information and distance information, but does not include normal information. After obtaining the depth information, the shadow information, and the distance information, the GPU may determine the shadow region in the frame image based on the shadow information, calculate the pixel range (kernel) of the penumbra region in the shadow region based on the distance information, and determine discontinuous surfaces in the frame image based on the depth information. For the specific principles, refer to the description of the deferred rendering pipeline, which is not repeated herein.
Through the noise reduction processing, the edge of the shadow (i.e., the penumbra region) becomes smoother, and a smooth shadow effect map is obtained. After the noise reduction operation is finished, the smooth shadow effect map obtained by the noise reduction processing is stored in the memory, so that the noise-reduced frame image can be generated later.
That is, in the forward rendering pipeline, the noise reduction processing can be realized from the depth information, the shadow information, and the distance information even when geometric information is lacking, for example when no normal information is available. In this way, noise reduction can be achieved with the limited geometric information of the forward rendering pipeline, overcoming the limitation that conventional noise reduction techniques can only be applied to the deferred rendering pipeline.
Finally, after all rendering operations and the noise reduction processing are completed, the smooth shadow effect map can be attached to the rendered frame image, so that the noise-reduced frame image is obtained.
The above embodiments mainly describe the rendering method provided by the embodiments of the present application from the perspective of interaction between modules. The following describes the solution provided by an embodiment of the present application with reference to the module interaction flowchart shown in fig. 13. Since the rendering flows of different terminals, or of different application programs in the same terminal, differ, fig. 13 does not depict the rendering processes such as depth rendering, geometric drawing, and shadow rendering; it only shows that the depth rendering result obtained by the depth rendering and the shadow rendering result obtained by the shadow rendering can be stored in the memory. Fig. 13 takes as an example the case where the terminal automatically performs the noise reduction operation after completing the shadow rendering.
S1301, the GPU sends an indication of shadow rendering completion to the shadow rendering module.
S1302, the shadow rendering module sends an indication of shadow rendering completion to the noise reduction module.
S1303, the noise reduction module sends a noise reduction instruction to the GPU.
In this embodiment, the GPU triggers the noise reduction process after completing the shadow rendering. In practice, the timing of performing the noise reduction process is not limited to this, as long as it is ensured that the noise reduction result is obtained before the rendered frame image is displayed. For example, the noise reduction process may be triggered after all the rendering processes are completed.
S1304, the GPU reads depth information and shadow rendering results from the memory.
It should be understood that when the shadow rendering operation is performed with the deferred rendering pipeline, the obtained shadow rendering result includes normal information, shadow information, and distance information; when the shadow rendering operation is performed with the forward rendering pipeline, the obtained shadow rendering result includes shadow information and distance information, but does not include normal information. That is, different rendering pipelines yield different shadow rendering results, and thus the information subsequently used for the noise reduction processing also differs.
In some embodiments, the shadow rendering result is stored in a map of the preset format; the amount of data is small, so the bandwidth overhead of storing and reading the shadow rendering result is small. For example, in the deferred rendering pipeline, the shadow rendering result is saved in a 64-bit map in the RGBA16F format; in the forward rendering pipeline, the shadow rendering result is saved in a 32-bit map in the RG16F format.
S1305, the GPU executes the noise reduction processing according to the depth information and the shadow rendering result. For details, see the foregoing description of how the smooth shadow effect map is obtained.
S1306, the GPU sends the noise reduction processing result to the memory. The noise reduction processing result is the smooth shadow effect map.
S1307, store the noise reduction processing result.
Before the frame image is later sent for display, the noise reduction processing result can be attached to the rendered frame image, thereby realizing noise reduction of the frame image and eliminating the noise at the shadow edges.
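Putting the pieces together, a hedged end-to-end sketch of steps S1304 to S1306 might read as follows; it reuses the helper sketches above, and the kernel-scaling constants are illustrative placeholders, not values from the embodiment:

```python
import numpy as np

def denoise_pass(depth, shadow_map):
    """End-to-end sketch of steps S1304-S1306 (reusing the helpers above).

    depth:      depth rendering result read from memory (S1304)
    shadow_map: encoded shadow rendering result; an RGBA16F map in the
                deferred pipeline, an RG16F map in the forward pipeline.
                In both layouts sketched above, the shadow mask is the
                second-to-last channel and the distance is the last one.
    Returns the smooth shadow effect map to be written back (S1306).
    """
    shadow_mask = shadow_map[..., -2].astype(np.float32)
    distance = shadow_map[..., -1].astype(np.float32)
    # Distance-driven kernel radius; the scale (8.0) and cap (16) stand in
    # for light-size and resolution dependent factors (S1305).
    radius = np.clip(distance * 8.0, 0, 16).astype(int)
    # Never filter across surface breaks found from the depth information.
    radius[depth_discontinuity_mask(depth)] = 0
    # The deferred pipeline would additionally reject neighbors whose
    # normals disagree (see normals_compatible above).
    return soften_penumbra(shadow_mask, radius)
```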
In this way, noise reduction can be achieved as long as the depth rendering result and the shadow rendering result are acquired as inputs, which makes the method easy to integrate into both the deferred rendering pipeline and the forward rendering pipeline.
The above description mainly describes the solution provided by the embodiments of the present application from the perspective of each service module. To achieve the above functions, the terminal includes corresponding hardware structures and/or software modules that perform the respective functions. Those skilled in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as combinations of hardware and computer software. Whether a function is implemented as hardware or as computer-software-driven hardware depends on the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application. It should be noted that, in the embodiments of the present application, the division into modules is schematic and merely a division by logical function; other division manners may be used in actual implementation.
Fig. 14 shows a schematic diagram of the composition of a terminal 1400. As shown in fig. 14, the terminal 1400 may include: a processor 1401 and a memory 1402. The memory 1402 is used to store computer-executable instructions. For example, in some embodiments, the processor 1401, when executing the instructions stored in the memory 1402, may cause the terminal 1400 to perform the methods shown in any of the embodiments described above.
It should be noted that, all relevant contents of each step related to the above method embodiment may be cited to the functional description of the corresponding functional module, which is not described herein.
Fig. 15 shows a schematic diagram of the composition of a chip system 1500. The chip system 1500 may include: a processor 1501 and a communication interface 1502 for supporting the relevant devices to implement the functions referred to in the above embodiments. In one possible design, the system on a chip also includes memory to hold the necessary program instructions and data for the terminal. The chip system can be composed of chips, and can also comprise chips and other discrete devices. It should be noted that, in some implementations of the present application, the communication interface 1502 may also be referred to as an interface circuit.
It should be noted that, all relevant contents of each step related to the above method embodiment may be cited to the functional description of the corresponding functional module, which is not described herein.
The functions or acts or operations or steps and the like in the embodiments described above may be implemented in whole or in part by software, hardware, firmware or any combination thereof. When implemented using a software program, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line (digital subscriber line, DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device including one or more servers, data centers, etc. that can be integrated with the medium. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
Although the application has been described in connection with specific features and embodiments thereof, it will be apparent that various modifications and combinations can be made without departing from the spirit and scope of the application. Accordingly, the specification and drawings are merely exemplary illustrations of the present application as defined in the appended claims and are considered to cover any and all modifications, variations, combinations, or equivalents that fall within the scope of the application. It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (16)

1. An image processing method, applied to a terminal, the terminal requiring display of a first image, the method comprising:
the terminal finishes the depth rendering and obtains a depth rendering result of the first image;
the terminal finishes shadow rendering and obtains a shadow rendering result of the first image;
and the terminal completes noise reduction processing based on the depth rendering result and the shadow rendering result, and a smoothly-changed shadow effect diagram is obtained.
2. The method of claim 1, wherein the terminal renders the first image using a deferred rendering pipeline, the method further comprising, before the terminal completes shadow rendering to obtain a shadow rendering result for the first image:
the terminal finishes geometric drawing to obtain a geometric drawing result of the first image, wherein the geometric drawing result comprises normal information;
the terminal finishes shadow rendering and obtains a shadow rendering result of the first image, and the shadow rendering result comprises the following steps:
and the terminal finishes shadow rendering based on the depth rendering result and normal information to obtain a shadow rendering result of the first image, wherein the shadow rendering result comprises normal information, shadow information and distance information.
3. The method of claim 2, wherein the terminal completes a noise reduction process based on the depth rendering result and the shadow rendering result, comprising:
the terminal completes noise reduction processing based on the depth rendering result, the normal line information, the shadow information and the distance information, wherein the depth rendering result is used for determining discontinuous surfaces in the first image, the normal line information is used for determining directions of pixels in the first image, the shadow information is used for determining shadow areas in the first image, and the distance information is used for determining penumbra areas in the first image.
4. A method according to claim 2 or 3, wherein after said obtaining a shadow rendering result of said first image, the method further comprises:
the terminal encodes the shadow rendering result into a first mapping and stores the first mapping into a memory, wherein the first mapping comprises at least four channels, two channels are used for storing the normal information, one channel of the remaining two channels is used for storing shadow information, and the other channel is used for storing distance information;
before the noise reduction processing, the terminal acquires the first mapping from the memory to obtain the shadow rendering result.
5. The method of claim 4, wherein the first map is in RGBA16F format, the first map comprising four channels of R channel, G channel, B channel, and a channel.
6. The method according to any one of claims 2-5, wherein the terminal completing the geometric rendering to obtain a geometric rendering result of the first image, includes:
and the terminal finishes geometric drawing in a Buffer on a first piece of the GPU of the terminal, and stores the geometric drawing result in the first Buffer.
7. The method of claim 6, wherein prior to the terminal completing shadow rendering based on the depth rendering result and normal information, the method further comprises:
the GPU acquires normal line information from the first Tile Buffer;
the terminal finishes shadow rendering based on the depth rendering result and normal line information, and comprises the following steps:
and the GPU finishes shadow rendering in a second Tile Buffer of the GPU based on the depth rendering result and the normal information.
8. The method of claim 1, wherein the terminal renders the first image using a forward rendering pipeline, the terminal completing shadow rendering to obtain a shadow rendering result of the first image, comprising:
and the terminal finishes shadow rendering based on the depth rendering result to obtain a shadow rendering result of the first image, wherein the shadow rendering result comprises shadow information and distance information.
9. The method of claim 8, wherein the terminal completes a noise reduction process based on the depth rendering result and the shadow rendering result, comprising:
the terminal completes noise reduction processing based on the depth rendering result, the shadow information and the distance information, wherein the depth rendering result is used for determining a discontinuous surface in the first image, the shadow information is used for determining a shadow area in the first image, and the distance information is used for determining a penumbra area in the first image.
10. The method according to claim 8 or 9, wherein after the obtaining the shadow rendering result of the first image, the method further comprises:
the terminal encodes the shadow rendering result into a second mapping and stores the second mapping into a memory, wherein the second mapping comprises at least two channels, one channel is used for storing shadow information, and the other channel is used for storing distance information;
before the noise reduction processing, the terminal acquires the second mapping from the memory to obtain the shadow rendering result.
11. The method of claim 10, wherein the second map is in RG16F format, and the second map includes two channels, an R channel and a G channel.
12. The method according to any one of claims 1-11, wherein after said obtaining a smoothly varying shadow effect map, the method further comprises:
and the terminal pastes the shadow effect graph with smooth change on the first image obtained by rendering to obtain the first image after noise reduction.
13. The method of any one of claims 1-12, wherein the terminal performs shadow rendering, comprising:
The terminal completes shadow rendering based on a ray tracing algorithm.
14. A terminal comprising one or more processors and one or more memories; the one or more memories coupled to the one or more processors, the one or more memories storing computer instructions;
the computer instructions, when executed by the one or more processors, cause the terminal to perform the image processing method of any of claims 1-13.
15. A computer readable storage medium comprising computer instructions which, when run on a terminal, cause the terminal to perform the image processing method of any of claims 1-13.
16. A chip system, wherein the chip system comprises an interface circuit and a processor; the interface circuit and the processor are interconnected through a circuit; the interface circuit is configured to receive a signal from a memory and to send a signal to the processor, the signal comprising computer instructions stored in the memory; when the processor executes the computer instructions, the chip system performs the image processing method of any of claims 1-13.
CN202211185727.2A 2022-09-27 2022-09-27 Image processing method and terminal Pending CN116740254A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211185727.2A CN116740254A (en) 2022-09-27 2022-09-27 Image processing method and terminal


Publications (1)

Publication Number Publication Date
CN116740254A true CN116740254A (en) 2023-09-12

Family

ID=87906661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211185727.2A Pending CN116740254A (en) 2022-09-27 2022-09-27 Image processing method and terminal

Country Status (1)

Country Link
CN (1) CN116740254A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117745518A (en) * 2024-02-21 2024-03-22 芯动微电子科技(武汉)有限公司 Graphics processing method and system for optimizing memory allocation

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5870097A (en) * 1995-08-04 1999-02-09 Microsoft Corporation Method and system for improving shadowing in a graphics rendering system
CN101840566A (en) * 2010-04-16 2010-09-22 中山大学 Real-time shadow generating method based on GPU parallel calculation and system thereof
CN110152291A (en) * 2018-12-13 2019-08-23 腾讯科技(深圳)有限公司 Rendering method, device, terminal and the storage medium of game picture
CN112700528A (en) * 2020-12-21 2021-04-23 南京理工大学 Virtual object shadow rendering method for head-mounted augmented reality equipment
US11232628B1 (en) * 2020-11-10 2022-01-25 Weta Digital Limited Method for processing image data to provide for soft shadow effects using shadow depth information
CN114757837A (en) * 2021-12-23 2022-07-15 每平每屋(上海)科技有限公司 Target model rendering method, device and storage medium


Similar Documents

Publication Publication Date Title
US10986330B2 (en) Method and system for 360 degree head-mounted display monitoring between software program modules using video or image texture sharing
US9715750B2 (en) System and method for layering using tile-based renderers
KR102655540B1 (en) Efficient parallel optical flow algorithm and gpu implementation
US20190333265A1 (en) Electronic device for generating images having rendering qualities differing by view vector
US20220019640A1 (en) Automatic website data migration
CN114669047B (en) Image processing method, electronic equipment and storage medium
CN115698927A (en) Interface carousel for use with image processing SDK
WO2021008390A1 (en) Image layer processing method and apparatus, electronic device, and computer-readable medium
CN113094123A (en) Method and device for realizing functions in application program, electronic equipment and storage medium
CN115699097A (en) Software development kit for image processing
CN112004041B (en) Video recording method, device, terminal and storage medium
US8522201B2 (en) Methods and apparatus for sub-asset modification
KR20140113559A (en) Texture address mode discarding filter taps
CN116740254A (en) Image processing method and terminal
US20200364926A1 (en) Methods and apparatus for adaptive object space shading
WO2024060949A1 (en) Method and apparatus for augmented reality, device, and storage medium
CN111031377B (en) Mobile terminal and video production method
WO2024027231A1 (en) Image rendering method and electronic device
CN110659024B (en) Graphics resource conversion method and device, electronic equipment and storage medium
CN116546228A (en) Plug flow method, device, equipment and storage medium for virtual scene
CN115018692A (en) Image rendering method and electronic equipment
CN115424118B (en) Neural network training method, image processing method and device
CN116688494B (en) Method and electronic device for generating game prediction frame
US20230419559A1 (en) Double camera streams
CN112988364B (en) Dynamic task scheduling method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination