CN114972605A - Rendering method, rendering apparatus, computer-readable storage medium, and electronic device - Google Patents

Rendering method, rendering apparatus, computer-readable storage medium, and electronic device

Info

Publication number
CN114972605A
CN114972605A (application number CN202210809442.5A)
Authority
CN
China
Prior art keywords
scene image
rendering
rendered
scene
interpolation
Prior art date
Legal status
Pending
Application number
CN202210809442.5A
Other languages
Chinese (zh)
Inventor
姚士峰
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202210809442.5A
Publication of CN114972605A
Legal status: Pending (Current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007 Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Image Generation (AREA)

Abstract

The present disclosure provides a rendering method, a rendering apparatus, a computer-readable storage medium, and an electronic device, and relates to the field of computer technology. The rendering method includes the following steps: acquiring a scene image to be rendered, and reducing the resolution of the scene image to be rendered to obtain a first scene image; determining boundary pixel points in the first scene image, and performing interpolation processing on the first scene image using the determination result of the boundary pixel points to obtain a second scene image; and performing rendering based on the second scene image. The present disclosure can reduce GPU load.

Description

Rendering method, rendering apparatus, computer-readable storage medium, and electronic device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a rendering method, a rendering apparatus, a computer-readable storage medium, and an electronic device.
Background
Before displaying the interface of an application program, a terminal device needs to render that interface through a GPU (Graphics Processing Unit). This rendering process, however, typically imposes a high GPU load.
Disclosure of Invention
The present disclosure provides a rendering method, a rendering apparatus, a computer-readable storage medium, and an electronic device, thereby overcoming, at least to some extent, the problem of high GPU load in the rendering process.
According to a first aspect of the present disclosure, there is provided a rendering method including: reducing the resolution of a scene image to be rendered to obtain a first scene image; determining boundary pixel points in the first scene image, and performing interpolation processing on the first scene image by using the determination result of the boundary pixel points to obtain a second scene image; rendering is performed based on the second scene image.
According to a second aspect of the present disclosure, there is provided a rendering apparatus including: the resolution reducing module is used for reducing the resolution of the scene image to be rendered so as to obtain a first scene image; the interpolation module is used for determining boundary pixel points in the first scene image and performing interpolation processing on the first scene image by using the determination result of the boundary pixel points to obtain a second scene image; and the rendering module is used for rendering based on the second scene image.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the rendering method described above.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising a processor; a memory for storing one or more programs which, when executed by the processor, cause the processor to implement the rendering method described above.
In the technical solutions provided in some embodiments of the present disclosure, the resolution of a scene image to be rendered is reduced to obtain a first scene image, interpolation processing is performed on the first scene image using the determination result of boundary pixel points in the first scene image to obtain a second scene image, and rendering is performed based on the second scene image. On the one hand, executing the rendering process at the reduced resolution of the scene image to be rendered lowers the GPU load, while the interpolation step preserves image boundary information, so the GPU load is further reduced with as little impact on user perception as possible. On the other hand, the scheme of the present disclosure is highly general: it can render the interfaces of various types of application programs and requires no involvement from application developers.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
FIG. 1 shows a schematic diagram of a process of a rendering scheme of an embodiment of the present disclosure;
FIG. 2 shows a detailed schematic diagram of the rendering process of the present disclosure, exemplified by 3D scene rendering;
FIG. 3 schematically shows a flow chart of a rendering method according to an exemplary embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of the deployment position of a hook layer (Hook Layer) according to an embodiment of the present disclosure;
FIG. 5 shows a schematic diagram of a filter employed in computing a pixel gradient according to an embodiment of the present disclosure;
FIG. 6 is a diagram illustrating the weights used in a Lanczos interpolation process according to an embodiment of the disclosure;
FIG. 7 shows a flowchart of deriving a second scene image using interpolation means according to an embodiment of the disclosure;
FIG. 8 shows a flow diagram of the overall rendering process of the present disclosure, exemplified by rendering a gaming application;
fig. 9 schematically shows a block diagram of a rendering apparatus according to an exemplary embodiment of the present disclosure;
fig. 10 schematically shows a block diagram of a rendering apparatus according to another exemplary embodiment of the present disclosure;
FIG. 11 schematically illustrates a block diagram of a rendering apparatus according to still another exemplary embodiment of the present disclosure;
fig. 12 schematically shows a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the embodiments of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the steps. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation. In addition, all of the following terms "first", "second", "third", "fourth", etc. are for distinguishing purposes only and should not be construed as limiting the present disclosure.
The rendering scheme of the embodiments of the present disclosure may be implemented by an electronic device. That is, the electronic device in which the rendering apparatus of the present disclosure may be configured may perform the respective steps of the rendering method described below. The electronic device may be, for example, a mobile terminal such as a smart phone, a tablet computer, a smart wearable device, or a server.
Fig. 1 shows a schematic diagram of a process of a rendering scheme according to an embodiment of the present disclosure.
Referring to fig. 1, in a case where a scene image to be rendered is obtained, the scene image to be rendered may be rendered by using a scene rendering process according to the embodiments of the present disclosure to obtain a scene rendering result, and the scene rendering result may be cached, which may be referred to as a scene rendering process. After the scene rendering process, a User Interface (UI) rendering process may be performed on a UI image to be rendered corresponding to the scene image to be rendered, to obtain a UI rendering result, which may be referred to as a UI rendering process. The electronic device may then display the scene rendering results and the UI rendering results on a display screen.
It should be understood that the above-described rendering process is only a process of processing one frame of image of a scene to be rendered by an application program, and is performed for each frame in a continuously rendered scene.
In addition, for the UI rendering process, if the UI interface is not changed, after obtaining a UI rendering result, only the scene rendering process may be executed, and the scene is continuously updated at the display side while the UI is kept unchanged.
The application of the embodiments of the present disclosure may be any application installed in the electronic device, such as a game application, a multimedia application, a browser application, and the like.
Taking a game application as an example, the scene image to be rendered may be an interface for game execution, including a 2D or 3D game scene image. The corresponding UI image to be rendered may be an image corresponding to a game UI interface, and is used for realizing human-computer interaction.
Fig. 2 shows a detailed schematic diagram of a rendering process of an embodiment of the present disclosure.
Referring to fig. 2, first, the electronic device may perform 3D rendering on 3D scene data and store the rendering result in off-screen frame buffer 1.
Then, the resolution of the 3D rendering result may be reduced to obtain a first scene image, and the first scene image may be interpolated based on the boundary information to obtain a second scene image, which is stored in the off-screen frame buffer 2.
Next, the electronic device may perform a post-processing rendering process on the second scene image to obtain a third scene image, and may store the third scene image in a post-processing frame buffer. Post-processing may include, but is not limited to, Anti-Aliasing (AA) processing, High Dynamic Range Imaging (HDR) processing, and the like.
Subsequently, the electronic device may perform fusion rendering on the second scene image and the third scene image, and record the fused image as a fourth scene image, where the fourth scene image is a 3D scene rendering result corresponding to the 3D scene data.
After the 3D scene rendering result is determined, rendering may be performed on UI data corresponding to the 3D scene data to obtain a UI rendering result.
In this case, the 3D scene rendering result and the UI rendering result may be displayed on a display screen of the electronic device, thereby enabling presentation of one frame of screen.
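Purely as an illustrative sketch (the helper names below are hypothetical and not part of the disclosure; the real pipeline operates on graphics-API frame buffer objects on the GPU), the per-frame flow of fig. 2 can be summarized as follows:

```python
def render_frame(scene_3d_data, ui_data, pipeline):
    # 1. 3D rendering of the scene data; the result is stored in off-screen frame buffer 1.
    rendered = pipeline.render_3d(scene_3d_data)
    # 2. Reduce the resolution (first scene image), then interpolate based on boundary
    #    information (second scene image); the result is stored in off-screen frame buffer 2.
    first_image = pipeline.reduce_resolution(rendered)
    second_image = pipeline.boundary_aware_interpolate(first_image)
    # 3. Post-processing (e.g. anti-aliasing, HDR) yields the third scene image.
    third_image = pipeline.post_process(second_image)
    # 4. Fusing the second and third scene images yields the fourth scene image,
    #    i.e. the 3D scene rendering result.
    scene_result = pipeline.fuse(second_image, third_image)
    # 5. Render the UI data at its own resolution and present both results.
    ui_result = pipeline.render_ui(ui_data)
    pipeline.present(scene_result, ui_result)
```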
Fig. 3 schematically shows a flowchart of a rendering method of an exemplary embodiment of the present disclosure.
Referring to fig. 3, the rendering method may include the steps of:
and S32, acquiring a scene image to be rendered, and reducing the resolution of the scene image to be rendered to obtain a first scene image.
In an exemplary embodiment of the present disclosure, the scene image to be rendered may be one scene of the application program to be rendered. The application may be any application installed on the electronic device that requires scene rendering, including but not limited to a gaming application, a multimedia application, a browser application, a social application, and the like. The scene image to be rendered may include a 2D scene image and a 3D scene image, and the application program and the scene to be rendered are not limited in the present disclosure.
The scene image to be rendered is usually rendered in a specific off-screen window (Off-screen Window). According to some embodiments of the present disclosure, the off-screen window of the scene image to be rendered may be determined first.
Specifically, when the application program is started, the electronic device may start a Hook Layer. Referring to FIG. 4, the hook layer sits between the application and the graphics APIs (Application Programming Interfaces); every graphics API call passes through the hook layer, which prepares for the subsequent operations. The graphics APIs include, but are not limited to, OpenGL ES, Vulkan, DirectX, Metal, and the like.
At the hook layer, the electronic device may find the off-screen window of the scene image to be rendered according to a label (label) of the off-screen window or a label of an attachment of the off-screen window.
The off-screen window has multiple attachments, including but not limited to a color buffer, a depth buffer, a stencil buffer, and the like. Each attachment of the off-screen window corresponds to a region of memory or video memory. The color buffer stores colors. The depth buffer stores depth values and helps the GPU cull pixels of occluded objects. The stencil buffer helps the GPU draw specially shaped objects or implement certain features.
After the off-screen window is determined, the resolution of the scene image to be rendered may be reduced in the off-screen window to obtain the first scene image. The reduction factor is not limited by the present disclosure; for example, the resolution of the first scene image may be 0.7 or 0.8 times the resolution of the scene image to be rendered.
The electronic device may determine an original resolution of the scene image to be rendered at the hook layer, and perform reduction processing on the original resolution at the hook layer to determine the first scene image.
It should be appreciated that when the application sets the viewport while the off-screen window is currently bound, the viewport is reset to the reduced-resolution viewport.
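As a minimal sketch of this interception (the class, the label convention, and the 0.8 factor below are assumptions for illustration; a real hook layer wraps the graphics-API entry points themselves):

```python
class ViewportHook:
    """Illustrative hook-layer wrapper that rescales the viewport of the off-screen scene window."""

    def __init__(self, set_viewport, scale=0.8, offscreen_label="scene_offscreen"):
        self._set_viewport = set_viewport      # the original graphics-API viewport call being wrapped
        self._scale = scale                    # resolution reduction factor (e.g. 0.7 or 0.8)
        self._offscreen_label = offscreen_label
        self._current_label = None

    def on_bind_window(self, window_label):
        # Called when the application switches render targets; the off-screen window of the
        # scene image is recognized by its label (or by the label of one of its attachments).
        self._current_label = window_label

    def on_set_viewport(self, x, y, width, height):
        # If the off-screen window is currently bound, reset the viewport to the
        # reduced-resolution viewport; otherwise pass the call through unchanged.
        if self._current_label == self._offscreen_label:
            width, height = int(width * self._scale), int(height * self._scale)
        self._set_viewport(x, y, width, height)
```

Here `set_viewport` stands for whatever viewport entry point the graphics API exposes (for example glViewport under OpenGL ES); only the call issued for the off-screen scene window is rescaled.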
Step S34: determine boundary pixel points in the first scene image, and perform interpolation processing on the first scene image using the determination result of the boundary pixel points to obtain a second scene image.
After obtaining the first scene image, the electronic device may determine boundary pixel points in the first scene image. It is to be understood that the boundary pixel points may be pixel points where pixel information in the first scene image changes (especially, changes suddenly), and generally include contour points of an object in the first scene image.
It should be understood that the gray value of a pixel may be used to determine whether the pixel is a boundary pixel.
First, the electronic device may obtain gradient values of the pixel points in the first scene image. The gradient at a point indicates the direction along which the directional derivative of a function attains its maximum value, that is, the direction in which the function changes fastest at that point, and the gradient magnitude equals that maximum rate of change. The gradient value of a pixel point may be calculated using the central difference method.
Specifically, the gray values of the pixel points may be processed with filters in the x direction and the y direction to obtain f(x) and f(y), respectively. FIG. 5 shows an example of such a group of filters; the present disclosure does not limit the parameter values or forms of the filters. The gradient value of a pixel is then determined, for example, by computing the square root of the sum of the squares of f(x) and f(y).
Next, the gradient value of each pixel point may be compared with a gradient threshold. If the gradient value of the pixel point is greater than the gradient threshold, the pixel point is determined to be a boundary pixel point in the first scene image; if the gradient value of the pixel point is less than or equal to the gradient threshold, the pixel point is determined to be a non-boundary pixel point in the first scene image. The present disclosure does not limit the specific value of the gradient threshold; for example, the gradient threshold may be set to 0.1.
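As a non-normative sketch of this boundary determination (it assumes simple central-difference filters and a single-channel gray image normalized to [0, 1]; the actual filters of fig. 5 and the threshold may differ), the computation can be written as:

```python
import numpy as np

def boundary_mask(gray, threshold=0.1):
    """Classify pixels of the first scene image as boundary / non-boundary.

    gray: 2-D float array in [0, 1] holding the gray values of the first scene image.
    Returns a boolean array that is True where the gradient magnitude exceeds the threshold.
    """
    # Central differences in the x and y directions (edge-replicated borders), i.e. f(x) and f(y).
    padded = np.pad(gray, 1, mode="edge")
    fx = (padded[1:-1, 2:] - padded[1:-1, :-2]) / 2.0
    fy = (padded[2:, 1:-1] - padded[:-2, 1:-1]) / 2.0

    # Gradient magnitude: square root of the sum of the squares of f(x) and f(y).
    grad = np.sqrt(fx * fx + fy * fy)
    return grad > threshold
```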
After the boundary pixel points in the first scene image are determined, interpolation processing can be performed on the first scene image by using the determination result of the boundary pixel points to obtain a second scene image. The determination result of the boundary pixel points refers to a result of determining which pixel points in the first scene image are boundary pixel points, that is, a result of dividing the pixel points in the first scene image into boundary pixel points and non-boundary pixel points.
In an exemplary embodiment of the present disclosure, different interpolation methods are used for boundary pixels and non-boundary pixels.
Specifically, a first interpolation mode may be adopted to perform interpolation processing on the boundary pixel points, and a second interpolation mode may be adopted to perform interpolation processing on the non-boundary pixel points.
For a non-boundary pixel point, the difference between the pixel and its surrounding pixels is small, so fewer surrounding pixel points can be used for interpolation. For a boundary pixel point, the difference from the surrounding pixels is usually larger, and if only a small number of surrounding pixel points were used for interpolation, the boundary could not be reproduced well; therefore, a larger number of surrounding pixel points are used. That is, the number of surrounding pixels selected by the first interpolation mode for boundary pixels is greater than the number of surrounding pixels selected by the second interpolation mode for non-boundary pixels. For example, the first interpolation mode may take 16 or 32 surrounding pixels for interpolation, while the second interpolation mode may take 4 or 8 surrounding pixels.
In some embodiments of the present disclosure, for the first interpolation mode, the nearest m surrounding pixel points may be selected for interpolation, centered on the boundary pixel point. For the second interpolation mode, the nearest n surrounding pixel points may be selected for interpolation, centered on the non-boundary pixel point, where m is greater than n. For example, m may be 16, 20, 24, 32, etc., and n may be 4, 8, etc. The number of surrounding pixel points used for interpolation is not limited in this disclosure and may be determined jointly by the specific content of the image and the processing capability of the electronic device.
It can be understood that, when performing interpolation by using surrounding pixels, weights are usually configured for the surrounding pixels. In order to further restore the boundary information, the weights adopted by the surrounding pixel points in the first interpolation mode contain negative values, and the weights adopted by the surrounding pixel points in the second interpolation mode are all positive values.
That is to say, in the process of performing interpolation processing on the boundary pixel point by using the first interpolation mode, the pixel value after interpolation of the boundary pixel point is determined by using the pixel values and weights of the surrounding pixel points of the boundary pixel point. And in the process of carrying out interpolation processing on the non-boundary pixel points by using the second interpolation mode, determining the pixel values of the non-boundary pixel points after interpolation by using the pixel values and the weights of the surrounding pixel points of the non-boundary pixel points. The weights of the surrounding pixels of the boundary pixels contain negative values, and the weights of the surrounding pixels of the non-boundary pixels are all positive values. For example, the weights of the surrounding pixels of the non-boundary pixels are all positive values and the weight of each surrounding pixel may be the same.
In some embodiments of the present disclosure, the first interpolation mode may be Lanczos interpolation, and the second interpolation mode may be bilinear interpolation.
Fig. 6 shows a schematic diagram of the weights used in the Lanczos interpolation process according to an embodiment of the disclosure. Referring to fig. 6, negative values exist among the weights configured for the surrounding pixel points in the Lanczos interpolation mode. The bilinear interpolation mode, by contrast, typically takes a weighted average of the surrounding pixel points with weights that are all positive. It should be understood that both the first interpolation mode and the second interpolation mode may use weighted averaging, but they differ at least in the weight configuration and in the number of surrounding pixels.
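To make the weight difference concrete, the following sketch compares the one-dimensional weights of the two interpolation modes for a sample sub-pixel offset; it assumes a Lanczos window of a = 2 (4 taps per axis), which is one common choice and not necessarily the one used in fig. 6. The Lanczos weights contain negative values, whereas the bilinear weights are all positive:

```python
import numpy as np

def lanczos_weights(frac, a=2):
    # Taps at integer offsets -a+1 .. a around the sample position; kernel L(x) = sinc(x) * sinc(x / a).
    offsets = np.arange(-a + 1, a + 1)            # e.g. [-1, 0, 1, 2] for a = 2 (4 taps)
    x = frac - offsets
    w = np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)
    return w / w.sum()                            # normalize so the weights sum to 1

def bilinear_weights(frac):
    # Two taps at offsets 0 and 1; both weights are non-negative.
    return np.array([1.0 - frac, frac])

if __name__ == "__main__":
    print(lanczos_weights(0.4))   # the two outer taps come out negative
    print(bilinear_weights(0.4))  # [0.6 0.4] -> all positive
```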
Fig. 7 shows a flowchart of obtaining a second scene image by interpolation according to an embodiment of the disclosure.
In step S702, gradient values of pixel points in the first scene image may be calculated.
In step S704, it is determined, based on the calculated gradient value, whether the pixel is a boundary pixel point. If not, step S706 is executed, which applies bilinear interpolation; if the pixel point is a boundary pixel point, step S708 is executed, which applies Lanczos interpolation.
Combining the output results of step S706 and step S708, a second scene image may be obtained.
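A compact end-to-end sketch of steps S702 to S708 is given below. It is illustrative only: it assumes a single-channel image, a 4x4 Lanczos window (16 surrounding pixels) for boundary pixel points and a 2x2 bilinear window (4 surrounding pixels) for non-boundary pixel points, and it takes the boundary mask of the first scene image as an input.

```python
import numpy as np

def lanczos_kernel(x, a=2):
    x = np.asarray(x, dtype=np.float64)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def boundary_aware_upscale(low, mask, scale):
    """Interpolate the first scene image `low` up to the second scene image.

    low:   2-D float array (first scene image, single channel, values in [0, 1]).
    mask:  boolean array of the same shape, True for boundary pixel points.
    scale: ratio of the target resolution to the resolution of `low` (e.g. 1 / 0.8).
    """
    h, w = low.shape
    H, W = int(round(h * scale)), int(round(w * scale))
    out = np.empty((H, W), dtype=np.float64)
    padded = np.pad(low, 2, mode="edge")          # margin for the 4x4 Lanczos window

    for Y in range(H):
        for X in range(W):
            # Map the centre of the target pixel back into source coordinates.
            sy = (Y + 0.5) / scale - 0.5
            sx = (X + 0.5) / scale - 0.5
            iy = min(max(int(np.floor(sy)), 0), h - 1)
            ix = min(max(int(np.floor(sx)), 0), w - 1)
            fy = min(max(sy - iy, 0.0), 1.0)
            fx = min(max(sx - ix, 0.0), 1.0)

            if mask[iy, ix]:
                # Boundary pixel point: 16 surrounding pixels, separable Lanczos weights
                # (which include negative values), better preserving the boundary.
                wy = lanczos_kernel(fy - np.arange(-1, 3))
                wx = lanczos_kernel(fx - np.arange(-1, 3))
                wy, wx = wy / wy.sum(), wx / wx.sum()
                patch = padded[iy + 1:iy + 5, ix + 1:ix + 5]
                out[Y, X] = wy @ patch @ wx
            else:
                # Non-boundary pixel point: 4 surrounding pixels, bilinear weights, all positive.
                p = padded[iy + 2:iy + 4, ix + 2:ix + 4]
                out[Y, X] = ((1 - fy) * ((1 - fx) * p[0, 0] + fx * p[0, 1])
                             + fy * ((1 - fx) * p[1, 0] + fx * p[1, 1]))
    return np.clip(out, 0.0, 1.0)
```

In the actual scheme this per-pixel decision would run as a GPU shader pass over the off-screen frame buffers; the Python loop only makes the branching of fig. 7 explicit.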
It should be noted that the resolution of the second scene image is typically the same as the resolution of the original scene image to be rendered to ensure that the perception of the user is not affected. However, in other embodiments of the present disclosure, in the case that the resolution of the scene image to be rendered is not consistent with the display resolution, the resolution of the second scene image may also be not consistent with the resolution of the scene image to be rendered, but consistent with the display resolution.
Step S36: perform rendering based on the second scene image.
According to some embodiments of the present disclosure, the electronic device may render the second scene image to obtain a scene rendering result corresponding to the scene image to be rendered.
According to other embodiments of the present disclosure, to further improve the quality of the output image, the electronic device may first perform at least one post-processing procedure on the second scene image to obtain a third scene image. Post-processing may include, but is not limited to, anti-aliasing processing, high dynamic range imaging processing, and the like.
Next, the electronic device may fuse the second scene image with the third scene image to obtain a fourth scene image. The process of fusion may also be referred to as synthesis (composite), and the manner of fusion may be controlled by the application side. In addition, the fusion weight may be configured for the second scene image and the third scene image, for example, in order to improve the accuracy of the fused image, the weight of the second scene image may be configured to be greater than the weight of the third scene image.
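As a minimal sketch of this fusion step (the 0.6 / 0.4 split is only an illustrative choice consistent with giving the second scene image the larger weight; the actual weights are controlled by the application side):

```python
import numpy as np

def fuse_scene_images(second_image, third_image, weight_second=0.6):
    # Weighted fusion (composite) of the second and third scene images into the fourth
    # scene image; weight_second > 0.5 favours the interpolated second scene image.
    fourth_image = weight_second * second_image + (1.0 - weight_second) * third_image
    return np.clip(fourth_image, 0.0, 1.0)
```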
Then, the electronic device may render the fourth scene image to obtain a scene rendering result corresponding to the scene image to be rendered.
If the frame to which the scene image to be rendered belongs also contains a user interface, then after the scene rendering result is obtained, the electronic device may determine the user interface to be rendered corresponding to the scene image to be rendered and render it according to the resolution of the user interface to be rendered, thereby obtaining a user interface rendering result.
Because the user interface to be rendered is rendered directly at its own resolution, the image quality of the user interface is not affected, and a good interactive experience for the user is preserved.
After determining the scene rendering result and the user interface rendering result, the electronic device may display the scene rendering result and the user interface rendering result on a display screen, thereby implementing presentation of one frame of picture.
The entire rendering process of the embodiment of the present disclosure will be explained with reference to fig. 8.
In step S802, the electronic device acquires a game scene image to be rendered.
In step S804, the electronic device reduces the resolution of the game scene image to be rendered, so as to obtain a first scene image.
In step S806, the electronic device determines boundary pixel points and non-boundary pixel points in the first scene image.
In step S808, the electronic device interpolates the boundary pixel points in a first interpolation manner.
In step S810, the electronic device performs interpolation on the non-boundary pixel points by using a second interpolation method.
In step S812, a second scene image may be generated using the interpolation results of step S808 and step S810.
In step S814, the electronic device may perform post-processing on the second scene image to obtain a third scene image.
In step S816, the electronic device may fuse the second scene image with the third scene image to obtain a fourth scene image.
In step S818, the electronic device may render the fourth scene image to obtain a game scene rendering result.
In step S820, the electronic device may perform rendering of a game UI image corresponding to a game scene image to be rendered to obtain a game UI rendering result.
In step S822, the electronic device may display the game scene rendering result obtained in step S818 and the game UI rendering result obtained in step S820 on the display screen to complete display of the current frame.
It should be noted that after step S822 is executed, the process may return to step S802 to render the next frame.
It should be noted that although the various steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Further, a rendering apparatus is also provided in the present exemplary embodiment.
Fig. 9 schematically illustrates a block diagram of a rendering apparatus according to an exemplary embodiment of the present disclosure. Referring to fig. 9, the rendering apparatus 9 according to an exemplary embodiment of the present disclosure may include a down-resolution module 91, an interpolation module 93, and a rendering module 95.
Specifically, the resolution reduction module 91 may be configured to obtain a scene image to be rendered, and reduce the resolution of the scene image to be rendered to obtain a first scene image; the interpolation module 93 may be configured to determine boundary pixel points in the first scene image, and perform interpolation processing on the first scene image by using the determination result of the boundary pixel points to obtain a second scene image; the rendering module 95 may be configured to render based on the second scene image.
According to an exemplary embodiment of the present disclosure, the interpolation module 93 may be configured to perform: performing interpolation processing on the boundary pixel points using a first interpolation mode; performing interpolation processing on non-boundary pixel points in the first scene image using a second interpolation mode; wherein the number of surrounding pixels selected by the first interpolation mode for the boundary pixels is greater than the number of surrounding pixels selected by the second interpolation mode for the non-boundary pixels.
According to an exemplary embodiment of the present disclosure, the interpolation module 93 may be configured to perform: determining pixel values after interpolation of the boundary pixel points by using pixel values and weights of surrounding pixel points of the boundary pixel points; determining the pixel value of the non-boundary pixel point after interpolation by using the pixel values and the weights of the surrounding pixel points of the non-boundary pixel point; the weights of the surrounding pixels of the boundary pixels contain negative values, and the weights of the surrounding pixels of the non-boundary pixels are all positive values.
According to an exemplary embodiment of the present disclosure, the process of the interpolation module 93 determining the boundary pixel point may be configured to perform: acquiring gradient values of pixel points in a first scene image; comparing the gradient value of the pixel point with a gradient threshold value; and if the gradient value of the pixel point is larger than the gradient threshold value, determining the pixel point as a boundary pixel point in the first scene image.
According to an exemplary embodiment of the present disclosure, the resolution reduction module 91 may be configured to perform: determining an off-screen window of a scene image to be rendered; and reducing the resolution of the scene image to be rendered in the off-screen window to obtain a first scene image.
According to an example embodiment of the present disclosure, the rendering module 95 may be configured to perform: and rendering the second scene image to obtain a scene rendering result corresponding to the scene image to be rendered.
According to an example embodiment of the present disclosure, the rendering module 95 may be further configured to perform: performing at least one post-processing procedure on the second scene image to obtain a third scene image; fusing the second scene image and the third scene image to obtain a fourth scene image; rendering the fourth scene image to obtain a scene rendering result corresponding to the scene image to be rendered.
According to an exemplary embodiment of the present disclosure, referring to fig. 10, the rendering apparatus 10 may further include a user interface processing module 101 compared to the rendering apparatus 9.
In particular, the user interface processing module 101 may be configured to perform: after a scene rendering result is obtained, determining a user interface to be rendered corresponding to a scene image to be rendered; and rendering the user interface to be rendered according to the resolution of the user interface to be rendered so as to obtain a user interface rendering result.
According to an exemplary embodiment of the present disclosure, referring to fig. 11, the rendering apparatus 11 may further include a result display module 111, compared to the rendering apparatus 10.
In particular, the result display module 111 may be configured to perform: and displaying the scene rendering result and the user interface rendering result on a display screen.
According to an exemplary embodiment of the present disclosure, a resolution of a scene image to be rendered is the same as a resolution of a second scene image.
Since each functional module of the rendering apparatus in the embodiment of the present disclosure is the same as that in the embodiment of the method described above, it is not described herein again.
FIG. 12 shows a schematic diagram of an electronic device suitable for use in implementing exemplary embodiments of the present disclosure. It should be noted that the electronic device shown in fig. 12 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
The electronic device of the present disclosure includes at least a processor and a memory for storing one or more programs, which when executed by the processor, cause the processor to implement the rendering method of the exemplary embodiments of the present disclosure.
Specifically, as shown in fig. 12, the electronic device 120 may include: processor 1210, internal memory 1221, external memory interface 1222, Universal Serial Bus (USB) interface 1230, charge management Module 1240, power management Module 1241, battery 1242, antenna 1, antenna 2, mobile communication Module 1250, wireless communication Module 1260, audio Module 1270, sensor Module 1280, display screen 1290, camera 1291, indicator 1292, motor 1293, button 1294, and Subscriber Identity Module (SIM) card interface 1295, among others. Wherein the sensor module 1280 may include a depth sensor, a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
It is to be understood that the illustrated structure of the embodiments of the present disclosure does not constitute a specific limitation to the electronic device 120. In other embodiments of the present disclosure, the electronic device 120 may include more or fewer components than illustrated, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 1210 may include one or more processing units, such as: processor 1210 may include an Application Processor (AP), a modem Processor, a GPU, an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband Processor, and/or a Neural-Network Processing Unit (NPU), among others. The different processing units may be separate devices or may be integrated into one or more processors. Additionally, a memory may be provided in processor 1210 for storing instructions and data. Specifically, the rendering scheme of the embodiment of the present disclosure may be implemented by a GPU.
The electronic device 120 may implement a shooting function through the ISP, the camera module 1291, the video codec, the GPU, the display screen 1290, the application processor, and the like. In some embodiments, the electronic device 120 may include 1 or N camera modules 1291, where N is a positive integer greater than 1, and if the electronic device 120 includes N cameras, one of the N cameras is the main camera.
Internal memory 1221 may be used to store computer-executable program code, which includes instructions. The internal memory 1221 may include a program storage area and a data storage area. The external memory interface 1222 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 120.
The present disclosure also provides a computer-readable storage medium, which may be contained in the electronic device described in the above embodiment; or may exist separately without being assembled into the electronic device.
A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable storage medium may transmit, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The computer readable storage medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method as described in embodiments of the disclosure.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, and may also be implemented by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (13)

1. A rendering method, comprising:
reducing the resolution of a scene image to be rendered to obtain a first scene image;
determining boundary pixel points in the first scene image, and performing interpolation processing on the first scene image by using the determination result of the boundary pixel points to obtain a second scene image;
rendering based on the second scene image.
2. The rendering method according to claim 1, wherein interpolating the first scene image using the determination result of the boundary pixel point includes:
performing interpolation processing on the boundary pixel points by adopting a first interpolation mode;
performing interpolation processing on non-boundary pixel points in the first scene image by adopting a second interpolation mode;
the number of the peripheral pixels selected by the first interpolation mode aiming at the boundary pixels is larger than that of the peripheral pixels selected by the second interpolation mode aiming at the non-boundary pixels.
3. The rendering method according to claim 2, wherein the interpolating the boundary pixel point in the first interpolation mode includes: determining the pixel value of the boundary pixel point after interpolation by using the pixel values and the weights of the surrounding pixels of the boundary pixel point;
the interpolation processing of the non-boundary pixel points by adopting a second interpolation mode comprises the following steps: determining the pixel value of the non-boundary pixel point after interpolation by using the pixel values and the weights of the surrounding pixels of the non-boundary pixel point;
the weights of the surrounding pixels of the boundary pixels comprise negative values, and the weights of the surrounding pixels of the non-boundary pixels are all positive values.
4. The rendering method of claim 1, wherein determining boundary pixel points in the first scene image comprises:
acquiring gradient values of pixel points in the first scene image;
comparing the gradient value of the pixel point with a gradient threshold value;
and if the gradient value of the pixel point is larger than the gradient threshold value, determining the pixel point as a boundary pixel point in the first scene image.
5. The rendering method according to claim 1, wherein reducing the resolution of the scene image to be rendered to obtain the first scene image comprises:
determining an off-screen window of a scene image to be rendered;
and reducing the resolution of the scene image to be rendered in the off-screen window to obtain the first scene image.
6. The rendering method according to claim 1, wherein rendering based on the second scene image comprises:
rendering the second scene image to obtain a scene rendering result corresponding to the scene image to be rendered.
7. The rendering method according to claim 1, wherein rendering based on the second scene image comprises:
performing at least one post-processing procedure on the second scene image to obtain a third scene image;
fusing the second scene image with the third scene image to obtain a fourth scene image;
rendering the fourth scene image to obtain a scene rendering result corresponding to the scene image to be rendered.
8. The rendering method according to claim 6 or 7, wherein after obtaining the scene rendering result, the rendering method further comprises:
determining a user interface to be rendered corresponding to the scene image to be rendered;
and rendering the user interface to be rendered according to the resolution of the user interface to be rendered so as to obtain a user interface rendering result.
9. The rendering method according to claim 8, wherein the rendering method further comprises:
and displaying the scene rendering result and the user interface rendering result on a display screen.
10. The rendering method according to claim 1, wherein the resolution of the scene image to be rendered is the same as the resolution of the second scene image.
11. A rendering apparatus, characterized by comprising:
the resolution reducing module is used for reducing the resolution of the scene image to be rendered so as to obtain a first scene image;
the interpolation module is used for determining boundary pixel points in the first scene image and carrying out interpolation processing on the first scene image by utilizing the determination result of the boundary pixel points to obtain a second scene image;
and the rendering module is used for rendering based on the second scene image.
12. A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the rendering method of any one of claims 1 to 10.
13. An electronic device, comprising:
a processor;
a memory for storing one or more programs which, when executed by the processor, cause the processor to implement the rendering method of any one of claims 1 to 10.
CN202210809442.5A (filed 2022-07-11, priority date 2022-07-11): Rendering method, rendering apparatus, computer-readable storage medium, and electronic device. Legal status: Pending. Publication: CN114972605A (en)

Priority Applications (1)

Application Number: CN202210809442.5A; Priority Date: 2022-07-11; Filing Date: 2022-07-11; Title: Rendering method, rendering apparatus, computer-readable storage medium, and electronic device

Publications (1)

Publication Number: CN114972605A (en); Publication Date: 2022-08-30

Family

ID=82968759

Family Applications (1)

Application Number: CN202210809442.5A; Title: Rendering method, rendering apparatus, computer-readable storage medium, and electronic device; Status: Pending; Publication: CN114972605A (en)

Country Status (1)

Country: CN; Link: CN114972605A (en)

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination