CN115965737A - Image rendering method and device, terminal equipment and storage medium - Google Patents


Info

Publication number: CN115965737A
Application number: CN202211600159.8A
Authority: CN (China)
Prior art keywords: image, target, depth, depth information, pixel
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: 温俊城
Current Assignee: Netease Hangzhou Network Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention provides an image rendering method, an image rendering device, terminal equipment and a storage medium, and relates to the technical field of computers. The image rendering method comprises the following steps: acquiring a depth image corresponding to an original image; calculating a normal image according to the depth information of each pixel in the depth image; processing the depth image and the normal image to obtain a target image; and performing forward rendering with the target image and displaying the final image. Because the target image is obtained based on both the depth image and the normal image, the information it contains is richer; forward rendering with the target image therefore improves the forward rendering effect, gives the final image a better display effect, and improves the user experience.

Description

Image rendering method and device, terminal equipment and storage medium
Technical Field
The invention relates to the technical field of computers, in particular to an image rendering method, an image rendering device, terminal equipment and a storage medium.
Background
With the rapid development of science and technology, electronic products are becoming increasingly varied. In these products, rendering refers to the process by which software generates an image from a model. Forward rendering and deferred rendering techniques are accordingly hot research topics.
In the related art, forward rendering is a rendering pipeline that calculates illumination object by object, but existing forward rendering suffers from a poor rendering effect.
Disclosure of Invention
The present invention is directed to providing an image rendering method, an image rendering apparatus, a terminal device and a storage medium, so as to solve the above technical problems in the related art.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
in a first aspect, an embodiment of the present invention provides an image rendering method, including:
acquiring a depth image corresponding to an original image;
calculating according to the depth information of each pixel in the depth image to obtain a normal image;
processing according to the depth image and the normal image to obtain a target image;
and performing forward rendering by adopting the target image, and displaying a final image.
In a second aspect, an embodiment of the present invention further provides an image rendering apparatus, including:
the acquisition module is used for acquiring a depth image corresponding to the original image;
the computing module is used for computing according to the depth information of each pixel in the depth image to obtain a normal image;
the processing module is used for processing according to the depth image and the normal image to obtain a target image;
and the display module is used for adopting the target image to perform forward rendering and displaying the final image.
In a third aspect, an embodiment of the present invention further provides a terminal device, including: a memory and a processor, the memory storing a computer program executable by the processor; when the processor executes the computer program, the image rendering method according to any one of the first aspect is implemented.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the storage medium, and when the computer program is read and executed, the image rendering method according to any one of the first aspect is implemented.
The beneficial effects of the invention are as follows. The embodiment of the invention provides an image rendering method, comprising: acquiring a depth image corresponding to an original image; calculating a normal image according to the depth information of each pixel in the depth image; processing the depth image and the normal image to obtain a target image; and performing forward rendering with the target image and displaying the final image. Because the target image is obtained based on both the depth image and the normal image, the information it contains is richer; forward rendering with the target image therefore improves the forward rendering effect, gives the final image a better display effect, and improves the user experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
Fig. 1 is a schematic flowchart of an image rendering method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of an image rendering method according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of an image rendering method according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of an image rendering method according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating neighboring pixels and an extended dot according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating neighboring pixels and an extended dot according to an embodiment of the present invention;
fig. 7 is a schematic flowchart of an image rendering method according to an embodiment of the present invention;
fig. 8 is a flowchart illustrating an image rendering method according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating image partitioning according to an embodiment of the present invention;
fig. 10 is a schematic flowchart of an image rendering method according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an image rendering apparatus according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
In the description of the present application, it should be noted that if the terms "upper", "lower", etc. indicate an orientation or positional relationship, it is based on the orientation or positional relationship shown in the drawings, or the orientation or positional relationship in which the product of the application is usually placed when used. Such description is merely for convenience and simplification of the description; it does not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and thus cannot be understood as a limitation of the application.
Furthermore, the terms first, second and the like in the description and in the claims, as well as in the drawings described above, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in other sequences than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that the features of the embodiments of the present application may be combined with each other without conflict.
Aiming at the problem of poor rendering effect in forward rendering in the related art, the embodiment of the application provides an image rendering method: the depth information of each pixel in a depth image is used to calculate a normal image, and the depth image and the normal image are then processed to obtain a target image, so that the target image contains richer information; forward rendering is then performed with the target image, which improves the rendering effect of forward rendering, gives the final image a better display effect, and improves the user experience.
The words used in the examples of this application are explained below.
Forward rendering: a rendering pipeline that computes illumination on an object-by-object basis.
SSAO: screen-space ambient occlusion, a technique for improving the realism of rendered images.
Depth Buffer: the depth buffer in a rendering pipeline.
Normal Buffer: the normal buffer in a rendering pipeline.
GTAO: ground-truth ambient occlusion, an accurate screen-space occlusion technique; an upgraded version of ordinary SSAO.
Pre_z: a pass that generates the Depth Buffer in advance.
Deinterleave: separating.
Reinterleave: recombining.
MRT: multiple render targets, a technique for rendering to multiple buffers in a single pass.
Texture Array: a multi-texture format in GPU (graphics processing unit) programming.
The image rendering method provided by the embodiment of the application is applied to a terminal device, and the terminal device may be any one of the following: a desktop computer, a notebook computer, a tablet computer, a smart phone, a smart screen, or the like.
The following explains an image rendering method provided in an embodiment of the present application.
Fig. 1 is a schematic flowchart of an image rendering method according to an embodiment of the present invention, and as shown in fig. 1, the method may include:
and S101, acquiring a depth image corresponding to the original image.
In the embodiment of the application, a preset algorithm or a preset rule can be adopted to preprocess the original image to obtain the depth information of each pixel; and storing the depth information of each pixel in a depth buffer area to obtain a depth image corresponding to the original image.
Note that any one of the following may be included in the original image: static objects, environments, people, animals, and the like, and the object included in the original image is not particularly limited in the embodiment of the present application.
And S102, calculating according to the depth information of each pixel in the depth image to obtain a normal image.
In some embodiments, the normal image may be obtained by performing a calculation based on depth information of each pixel in the depth image to obtain normal information of each pixel, and storing the normal information of each pixel in the normal buffer.
In the embodiment of the application, the normal information of each pixel can be sequentially calculated according to the depth information of each pixel; normal information of each pixel can be calculated simultaneously according to the depth information of each pixel; the normal information of each pixel may also be calculated in other manners, which is not limited in this embodiment of the present application.
Alternatively, the depth image may be a depth buffer, and the normal image may be a normal buffer.
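As an illustrative sketch only (not the patent's plane-selection method, which is detailed in S301 to S303 below), a normal can be reconstructed from a depth buffer with simple central differences; all names here are hypothetical, and the sign convention of the resulting normal is an assumption:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normal_from_depth(depth, x, y):
    """depth: 2D list of view-space depths; returns a unit normal at (x, y).

    Central differences approximate the surface tangents along x and y;
    their cross product gives the surface normal.
    """
    dzdx = (depth[y][x + 1] - depth[y][x - 1]) * 0.5
    dzdy = (depth[y + 1][x] - depth[y - 1][x]) * 0.5
    tx = (1.0, 0.0, dzdx)   # tangent along screen x
    ty = (0.0, 1.0, dzdy)   # tangent along screen y
    n = cross(tx, ty)
    length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return tuple(c / length for c in n)
```

A flat depth region yields a normal pointing straight along the view axis; the patent's approach below refines this by first deciding which neighboring plane each pixel belongs to, which avoids smearing normals across depth discontinuities.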
And S103, processing according to the depth image and the normal image to obtain a target image.
The target image may be an image corresponding to the original image, and compared with the original image, the target image contains richer information, including both depth information and normal information.
In some embodiments, the terminal device may transmit the depth image and the normal image to its GPU, and the GPU may process the depth image and the normal image to obtain the target image.
And S104, performing forward rendering by adopting the target image, and displaying the final image.
It should be noted that the target image includes both depth information and normal information; it can be used as an intermediate variable in forward rendering, so that the final image has a better presentation effect.
In the embodiment of the present application, the target image is used for forward rendering, and the final image may be displayed on a display screen of the terminal device itself, or of course, the final image may also be displayed on a display screen externally connected to the terminal device.
In summary, an embodiment of the present invention provides an image rendering method, including: acquiring a depth image corresponding to an original image; calculating a normal image according to the depth information of each pixel in the depth image; processing the depth image and the normal image to obtain a target image; and performing forward rendering with the target image and displaying the final image. Because the target image is obtained based on both the depth image and the normal image, it contains richer information; forward rendering with the target image therefore improves the forward rendering effect, gives the final image a better display effect, and improves the user experience.
Fig. 2 is a schematic flowchart of an image rendering method according to an embodiment of the present invention, and as shown in fig. 2, the process of acquiring the depth image corresponding to the original image in S101 may include:
s201, packing the same virtual static objects in the original image into the same group to obtain a plurality of groups.
The identical virtual static objects in the original image, made of the same material, are packed into one group, yielding a plurality of groups, each containing the packed contents. This reduces state switching during the subsequent preset operation and thus improves its efficiency.
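The packing in S201 can be sketched as follows. This is an assumption about the grouping key (mesh plus material); the patent only states that identical static objects of the same material are grouped, and the dictionary keys used here are illustrative:

```python
from collections import defaultdict

def group_static_objects(objects):
    """Batch identical static objects so a later pre_z pass can draw each
    group with a single material/state setup (fewer state switches)."""
    groups = defaultdict(list)
    for obj in objects:
        # Objects sharing mesh and material land in the same group.
        groups[(obj["mesh"], obj["material"])].append(obj)
    return dict(groups)
```

Each group can then be submitted to the depth pre-pass as one batch, which is what makes the preset operation cheaper.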
S202, processing the contents in the plurality of groups by adopting a preset operation to obtain a depth image.
The preset operation may be a pre_z operation.
In some embodiments, at the beginning of the Forward rendering pipeline, a pre_z pass is performed to process the contents of the multiple groups, obtaining a depth image at the current screen resolution.
In the embodiment of the application, a pre_z operation can be performed once in the Forward rendering pipeline to obtain a pre_z Depth Buffer, and then an accurate Normal Buffer is generated from the Depth Buffer through a specific sampling rule.
In summary, the same virtual static objects in the original image are packed into the same group to obtain a plurality of groups, and the content in the plurality of groups is processed by the preset operation to obtain the depth image, so that the efficiency of the preset operation can be improved, and the efficiency of obtaining the depth image can be improved.
Optionally, fig. 3 is a schematic flowchart of an image rendering method according to an embodiment of the present invention, and as shown in fig. 3, the process of calculating according to the depth information of each pixel in the depth image in S102 to obtain the normal image may include:
s301, determining adjacent pixels of the target pixel in the first direction and adjacent pixels of the target pixel in the second direction.
The target pixel is any one of the pixels in the depth image.
In addition, the first direction and the second direction may be opposite directions.
In some embodiments, a preset number of pixels adjacent to the target pixel are determined in a first direction of the target pixel, and an adjacent pixel in the first direction is obtained; and determining a preset number of pixels adjacent to the target pixel in the second direction of the target pixel to obtain adjacent pixels in the second direction.
It should be noted that, before the above S301, the depth image may be converted from the projection space to the camera space, and the depth buffer of the camera space may be referred to as a view depth buffer.
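A hedged sketch of that projection-space-to-camera-space conversion, assuming a standard perspective projection with a [0, 1] depth range (the patent does not state the projection convention, so the formula below is the common one rather than the patent's own):

```python
def linearize_depth(d, near, far):
    """Convert a non-linear projection-space depth sample d in [0, 1] to a
    linear camera-space (view) depth between the near and far planes.

    Assumes a standard perspective projection; d = 0 maps to the near plane
    and d = 1 to the far plane.
    """
    return (near * far) / (far - d * (far - near))
```

Applying this per pixel to the Depth Buffer yields the "view depth buffer" the text refers to, on which the neighbor comparisons of S301 onward are meaningful.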
S302, determining a target plane corresponding to the target pixel according to the depth information of the adjacent pixel in the first direction and the depth information of the adjacent pixel in the second direction.
The target plane is a first plane where adjacent pixels in the first direction are located, or a second plane where adjacent pixels in the second direction are located.
In this embodiment of the application, it may be determined, according to the depth information of the neighboring pixels in the first direction and the depth information of the neighboring pixels in the second direction, whether the target pixel is on a first plane where the neighboring pixels in the first direction are located or on a second plane where the neighboring pixels in the second direction are located, so as to determine a home plane of the target pixel.
And S303, calculating a normal image according to the target plane corresponding to the target pixel.
The target plane corresponding to the target pixel may also be referred to as a home plane of the target pixel.
In this embodiment, the above-mentioned manners of S301 to S302 may be adopted to calculate the home plane corresponding to each pixel in the depth image, and then calculate the normal image according to the home plane corresponding to each pixel.
It should be noted that, the home plane corresponding to each pixel in the depth image may be calculated simultaneously, the home plane corresponding to each pixel in the depth image may be calculated sequentially, or the home plane corresponding to each pixel in the depth image may be calculated in other manners, which is not limited in this embodiment of the application.
In summary, the normal image is calculated by adopting the processes from S301 to S303, so that the accuracy of the acquired normal image can be improved, and the acquired normal image is more accurate.
Optionally, fig. 4 is a flowchart of an image rendering method according to an embodiment of the present invention, and as shown in fig. 4, the process of determining the target plane corresponding to the target pixel according to the depth information of the adjacent pixel in the first direction and the depth information of the adjacent pixel in the second direction in S302 may include:
s401, according to the depth information of the adjacent pixels in the first direction, the depth information of the first extension point corresponding to the first plane is calculated.
S402, according to the depth information of the adjacent pixels in the second direction, the depth information of the second extending point corresponding to the second plane is calculated.
The first extension point may be a point on an extension line of the first plane where the adjacent pixels in the first direction are located; the second extension point may be a point on an extension line of the second plane where the adjacent pixels in the second direction are located.
In some embodiments, the preset perspective correction interpolation formula may be adopted to calculate the depth information of the first extension point corresponding to the first plane according to the depth information of the adjacent pixels in the first direction. Similarly, a preset perspective correction interpolation formula may be adopted to calculate the depth information of the second extension point corresponding to the second plane according to the depth information of the adjacent pixel in the second direction.
In the embodiment of the present application, the number of adjacent pixels in the first direction may be two, and the number of adjacent pixels in the second direction may be two. The first direction may be a left direction and the second direction may be a right direction.
The above perspective correction interpolation formula can be expressed as:

1/Z = (1 - q)/Z_0 + q/Z_1

where Z represents the depth information of the adjacent pixel located between the extension point and the other adjacent pixel, Z_0 is the depth information of the point to the left of that pixel (an extension point or an adjacent pixel), and Z_1 is the depth information of the point to the right of that pixel (an extension point or an adjacent pixel). q may be a preset parameter, and may for example have a value of 0.5.
Fig. 5 is a schematic diagram of a neighboring pixel and an extension point according to an embodiment of the present invention, and fig. 6 is a schematic diagram of a neighboring pixel and an extension point according to an embodiment of the present invention, where as shown in (a) in fig. 5 and 6, a and b are neighboring pixels in a first direction, d and e are neighboring pixels in a second direction, and c is a target pixel, and as shown in (b) in fig. 5 and 6, the first extension point may be c1, and the second extension point may be c2.
In the above-mentioned perspective correction interpolation formula, Z may be the depth information of b, Z_0 may be the depth information of a, and Z_1 may be the depth information of c1 (the information to be found). Alternatively, Z may be the depth information of d, Z_0 may be the depth information of c2 (the information to be found), and Z_1 may be the depth information of e.
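A sketch of the extension-point computation, under the assumption that the formula in question is the standard perspective-correct depth interpolation 1/Z = (1 - q)/Z_0 + q/Z_1 with q = 0.5; the function names are illustrative, not from the patent:

```python
def perspective_lerp(z0, z1, q=0.5):
    """Depth of the point a fraction q of the way from z0 to z1 in screen
    space, interpolated perspective-correctly (i.e., linearly in 1/Z)."""
    return 1.0 / ((1.0 - q) / z0 + q / z1)

def extrapolate_depth(z_a, z_b, q=0.5):
    """Given pixels a and b on one plane, predict the depth the plane would
    have at the extension point c1 beyond b (b sits at fraction q between
    a and c1). Solves 1/z_b = (1 - q)/z_a + q/z_c1 for z_c1."""
    return q / (1.0 / z_b - (1.0 - q) / z_a)
```

With q = 0.5, `extrapolate_depth` is the exact inverse of `perspective_lerp`: feeding it a midpoint depth produced by the interpolation recovers the far endpoint, which is the consistency the plane test below relies on.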
It should be noted that the process of S401 may be executed first and then the process of S402 may be executed, the process of S402 may be executed first and then the process of S401 is executed, and the processes of S401 and S402 may also be executed at the same time, which is not limited in this embodiment of the application.
And S403, determining a target plane according to the depth information of the first extension point, the depth information of the second extension point and the depth information of the target pixel.
In some embodiments, the depth information of the target pixel may be compared with the depth information of the first extension point and the depth information of the second extension point, respectively, to determine a target plane corresponding to the target pixel.
Optionally, fig. 7 is a schematic flowchart of an image rendering method according to an embodiment of the present invention, and as shown in fig. 7, the process of determining the target plane according to the depth information of the first extension point, the depth information of the second extension point, and the depth information of the target pixel in S403 may include:
S701, from the depth information of the first extension point and the depth information of the second extension point, determining as the target extension point the one whose depth information is closer to the depth information of the target pixel.
Wherein, the target extension point can be the first extension point or the second extension point.
It should be noted that a first distance between the depth information of the first extension point and the depth information of the target pixel may be calculated, and a second distance between the depth information of the second extension point and the depth information of the target pixel may be calculated; if the first distance is smaller than the second distance, the first extension point is a target extension point; and if the second distance is smaller than the first distance, the second extension point is the target extension point.
And S702, taking the plane corresponding to the target extension point as a target plane.
In the embodiment of the present application, if the target extension point is the first extension point, the target plane is the first plane; and if the target extension point is the second extension point, the target plane is a second plane.
In practical applications, for the examples in fig. 5 and 6, the target plane corresponding to the target pixel c may be the plane where c and d are located.
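The comparison in S701 and S702 can be sketched as follows; the helper name and the returned labels are hypothetical:

```python
def pick_home_plane(z_target, z_ext_first, z_ext_second):
    """Assign the target pixel to the plane whose extension point predicts
    a depth closer to the pixel's actual depth (S701-S702).

    z_ext_first / z_ext_second are the depths the first and second planes
    predict at the target pixel (the extension points c1 and c2).
    """
    if abs(z_target - z_ext_first) < abs(z_target - z_ext_second):
        return "first"
    return "second"
```

The plane that wins this comparison is the "home plane" from which the pixel's normal is then computed.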
Optionally, fig. 8 is a flowchart illustrating an image rendering method according to an embodiment of the present invention, and as shown in fig. 8, the process of obtaining the target image by performing processing according to the depth image and the normal image in S103 may include:
s801, dividing the depth image into a plurality of sub-depth images.
Wherein the resolution of the depth image may be a current screen resolution.
In some embodiments, the length and width of the current screen resolution may be divided by a preset value to obtain the length and width of a reduced small screen, and the depth image is divided into a plurality of sub-depth images, where each sub-depth image contains the depth information of a subset of the pixels. The preset value may be set according to actual requirements; for example, the preset value may be 4, and the number of sub-depth images may be 16.
A plurality of buffers may be created, and the depth information of the pixels is separated (deinterleaved) into the plurality of buffers to obtain the plurality of sub-depth images. The created buffers may be MRT buffers.
Optionally, the length and width of the current screen resolution may be expressed as src_screen_w and src_screen_h, and the length and width of the reduced small screen as dst_screen_w and dst_screen_h; i.e., dst_screen_w = src_screen_w/4 and dst_screen_h = src_screen_h/4.
Fig. 9 is a schematic diagram of image division according to an embodiment of the present invention. As shown in fig. 9, the full-screen image a includes a plurality of pixels, and the full-screen image (depth image) a may be sampled to obtain four images (a1, a2, a3, and a4).
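The division shown in fig. 9, and the later recombination, can be sketched in pure Python; a 2×2 interleave is used for brevity (the embodiment uses 4×4 = 16 sub-buffers), and plain lists stand in for the GPU-side MRT buffers:

```python
def deinterleave(img, n=2):
    """Split a full-screen image into n*n interleaved sub-images: source
    pixel (x, y) lands in sub-image index (y % n) * n + (x % n)."""
    h, w = len(img), len(img[0])
    return [[[img[y * n + sy][x * n + sx] for x in range(w // n)]
             for y in range(h // n)]
            for sy in range(n) for sx in range(n)]

def reinterleave(subs, n=2):
    """Recombine the n*n sub-images back into one full-screen image."""
    sh, sw = len(subs[0]), len(subs[0][0])
    out = [[None] * (sw * n) for _ in range(sh * n)]
    for i, sub in enumerate(subs):
        sy, sx = divmod(i, n)
        for y in range(sh):
            for x in range(sw):
                out[y * n + sy][x * n + sx] = sub[y][x]
    return out
```

Round-tripping through `deinterleave` and `reinterleave` reproduces the original image exactly, which is why the AO results computed per sub-buffer can be merged back into a full-resolution target image.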
And S802, obtaining a target image according to the normal image and the sub-depth images in a texture array mode.
The texture array may also be referred to as a TextureArray, among others.
In some embodiments, the terminal device may adopt an SSAO algorithm to transmit the normal image and the sub-depth images to a GPU of the terminal device in a texture array manner, and obtain the target image by using the GPU.
Wherein, the SSAO algorithm can adopt a GTAO calculation mode.
Optionally, fig. 10 is a flowchart illustrating an image rendering method according to an embodiment of the present invention, and as shown in fig. 10, the obtaining a target image according to a normal image and a plurality of sub-depth images by using a texture array in S802 may include:
s1001, obtaining a plurality of sub-target images according to the normal image and the sub-depth images in a texture array mode.
In some embodiments, the terminal device may transmit the normal image and the sub-depth images to a GPU of the terminal device by using an SSAO algorithm in a texture array manner, obtain a plurality of sub-target images by using the GPU, and then output the plurality of sub-target images by using an MRT.
In practical applications, the multiple sub-target images may be referred to as AO buffers or MRT AO buffers. Alternatively, the number of sub-target images may be 16, that is, the number of AO buffers may be 16.
S1002, combining the plurality of sub-target images in a texture array mode to obtain a target image.
In this embodiment of the present application, the multiple sub-target images (multiple MRT AO buffers) may be combined in a texture array manner and input into the GPU, and the GPU recombines (reinterleaves) them to obtain the target image.
In summary, dividing the depth image into a plurality of sub-depth images and obtaining the target image from the normal image and the plurality of sub-depth images in a texture array manner improves processing efficiency. A deinterleave-and-reinterleave approach separates the screen-sized depth image into a plurality of small sub-depth images and then recombines them into a screen-sized target image. The small sub-depth images are cache-friendly, which speeds up the SSAO calculation and thus improves processing efficiency.
Optionally, the process of performing forward rendering by using the target image and displaying the final image in S104 may include:
and performing illumination calculation according to the target image to perform forward rendering by adopting the target image and display the final image.
In some embodiments, in the Forward rendering pipeline, when the object performs illumination calculation, the illumination calculation may be performed according to the target image, so as to perform Forward rendering with the target image, and display the final image.
In the embodiment of the application, a deinterleave operation may be performed on the current depth buffer, separating the current full-screen depth buffer into 4×4 = 16 small depth buffers (the sub-depth images) in an interleaved manner. Then the SSAO calculation is started: AO is computed for each of the 16 small depth buffers, with GTAO adopted as the AO algorithm. After the 16 small AO results (the sub-target images) are obtained, they are reinterleaved and combined into one large final AO buffer (the target image). Finally, when an object is rendered in the Forward rendering pipeline, the AO buffer is passed in so that the object's final AO contribution is applied in the illumination stage.
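The 4×4 deinterleave and reinterleave described above can be sketched on the CPU with NumPy (an illustration of the data movement only; in the actual pipeline this runs per-pixel on the GPU, and the function names here are hypothetical):

```python
import numpy as np

def deinterleave(buf, n=4):
    """Split a full-screen buffer into n*n small buffers; the (i, j)-th
    small buffer holds every n-th pixel starting at row i, column j,
    so each small buffer sparsely covers the whole screen."""
    return [buf[i::n, j::n] for i in range(n) for j in range(n)]

def reinterleave(tiles, n=4):
    """Recombine the n*n small buffers back into one full-size buffer."""
    h, w = tiles[0].shape
    out = np.empty((h * n, w * n), dtype=tiles[0].dtype)
    for k, tile in enumerate(tiles):
        i, j = divmod(k, n)
        out[i::n, j::n] = tile
    return out

depth = np.random.rand(1080, 1920).astype(np.float32)
small = deinterleave(depth)          # 16 small depth buffers, 270 x 480 each
restored = reinterleave(small)       # screen-sized buffer again
```

The round trip is lossless, which is what allows the 16 small AO results to be merged back into one screen-sized AO buffer.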
The embodiment of the invention provides an image rendering method, which comprises the following steps: acquiring a depth image corresponding to an original image; calculating according to the depth information of each pixel in the depth image to obtain a normal image; processing according to the depth image and the normal image to obtain a target image; and performing forward rendering by adopting the target image, and displaying the final image. The target image is obtained based on the depth image and the normal image, so that information contained in the target image is richer, then the target image is adopted for forward rendering, the forward rendering effect can be improved, the display effect of the final image is better, and the user experience is improved. And the depth image is divided into a plurality of sub-depth images, and a texture array mode is adopted to obtain a target image according to the normal image and the plurality of sub-depth images, so that the processing efficiency can be improved.
Specific implementation processes and technical effects of the image rendering apparatus, the terminal device, the storage medium, and the like for executing the image rendering method provided by the present application may be understood with reference to the relevant description of the image rendering method above, and are not repeated below.
Optionally, fig. 11 is a schematic structural diagram of an image rendering apparatus according to an embodiment of the present invention, and as shown in fig. 11, the image rendering apparatus may include:
an obtaining module 1101, configured to obtain a depth image corresponding to an original image;
a calculating module 1102, configured to calculate according to depth information of each pixel in the depth image, to obtain a normal image;
a processing module 1103, configured to perform processing according to the depth image and the normal image to obtain a target image;
and a display module 1104, configured to perform forward rendering with the target image, and display a final image.
Optionally, the obtaining module 1101 is specifically configured to pack the same virtual static objects in the original image into the same group, so as to obtain a plurality of groups; and processing the contents in the groups by adopting a preset operation to obtain the depth image.
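One way to read this grouping step is as batching identical static objects so that the depth pass can process each group together. The sketch below is a minimal illustration with hypothetical names (the patent does not specify the "preset operation" or the object representation), grouping objects by mesh identity:

```python
from collections import defaultdict

def group_static_objects(objects):
    """Pack identical virtual static objects (same mesh id) into the
    same group; each group could then be depth-rendered as one batch."""
    groups = defaultdict(list)
    for obj in objects:
        groups[obj["mesh"]].append(obj)
    return dict(groups)

# Hypothetical scene: two instances of one mesh, one of another.
scene = [{"mesh": "rock", "pos": (0, 0)},
         {"mesh": "tree", "pos": (3, 1)},
         {"mesh": "rock", "pos": (5, 2)}]
groups = group_static_objects(scene)
```

Grouping identical objects reduces per-object state changes when producing the depth image, which is consistent with the efficiency aim stated in the embodiments.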
Optionally, the calculating module 1102 is specifically configured to determine a first-direction adjacent pixel and a second-direction adjacent pixel of a target pixel, where the target pixel is any one of the pixels in the depth image; determining a target plane corresponding to a target pixel according to the depth information of the adjacent pixels in the first direction and the depth information of the adjacent pixels in the second direction, wherein the target plane is a first plane where the adjacent pixels in the first direction are located, or a second plane where the adjacent pixels in the second direction are located; and calculating the normal image according to the target plane corresponding to the target pixel.
Optionally, the calculating module 1102 is specifically configured to calculate, according to the depth information of the adjacent pixels in the first direction, the depth information of the first extension point corresponding to the first plane; according to the depth information of the adjacent pixels in the second direction, the depth information of a second extension point corresponding to the second plane is calculated; and determining the target plane according to the depth information of the first extension point, the depth information of the second extension point and the depth information of the target pixel.
Optionally, the calculating module 1102 is specifically configured to determine, from the depth information of the first extension point and the depth information of the second extension point, depth information of a target extension point that is closer to the depth information of the target pixel; and taking the plane corresponding to the target extension point as the target plane.
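The plane-selection logic performed by the calculating module can be sketched as follows. This is a minimal illustration: the linear extrapolation `2*d1 - d2` used to estimate each plane's "extension point" depth is an assumption for this sketch, not a formula stated by the patent.

```python
import numpy as np

def choose_target_plane(depth, x, y):
    """Pick the neighbour plane whose extrapolated ("extension point")
    depth is closer to the target pixel's own depth. Each extension
    point is estimated by linear extrapolation from two neighbours
    along that direction (an assumed formula)."""
    d = depth[y, x]
    ext_first = 2.0 * depth[y, x - 1] - depth[y, x - 2]   # horizontal plane
    ext_second = 2.0 * depth[y - 1, x] - depth[y - 2, x]  # vertical plane
    if abs(ext_first - d) <= abs(ext_second - d):
        return "first"    # plane of the first-direction adjacent pixel
    return "second"       # plane of the second-direction adjacent pixel

# A depth ramp along x with a step edge along y: the horizontal plane
# extrapolates the centre depth almost exactly, so it should be chosen.
depth = np.tile(np.arange(8, dtype=np.float32) * 0.1, (8, 1))
depth[4:, :] += 1.0
plane = choose_target_plane(depth, x=4, y=4)
```

Selecting the plane whose extension point best matches the actual depth avoids computing normals across depth discontinuities, which is the benefit the embodiment describes.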
Optionally, the processing module 1103 is specifically configured to divide the depth image into a plurality of sub-depth images; and obtaining the target image according to the normal image and the plurality of sub-depth images in a texture array mode.
Optionally, the processing module 1103 is specifically configured to obtain, in a manner of the texture array, multiple sub-target images according to the normal image and the multiple sub-depth images; and combining the plurality of sub-target images in the texture array mode to obtain the target image.
Optionally, the display module 1104 is specifically configured to perform illumination calculation according to the target image, so as to perform forward rendering by using the target image, and display the final image.
The above-mentioned apparatus is used for executing the method provided by the foregoing embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
The above modules may be one or more integrated circuits configured to implement the above methods, such as: one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs), among others. For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of calling program code. As another example, these modules may be integrated together and implemented in the form of a system-on-chip (SoC).
Fig. 12 is a schematic structural diagram of a terminal device according to an embodiment of the present invention, where the terminal device includes: a processor 1201, and a memory 1202.
The memory 1202 is used for storing programs, and the processor 1201 calls the programs stored in the memory 1202 to execute the above-mentioned method embodiments. The specific implementation and technical effects are similar, and are not described herein again.
By way of example, the method may comprise:
acquiring a depth image corresponding to an original image;
calculating according to the depth information of each pixel in the depth image to obtain a normal image;
processing according to the depth image and the normal image to obtain a target image;
and performing forward rendering by adopting the target image, and displaying a final image.
Optionally, the obtaining of the depth image corresponding to the original image includes:
packing the same virtual static objects in the original image into the same group to obtain a plurality of groups;
and processing the contents in the groups by adopting a preset operation to obtain the depth image.
Optionally, the calculating according to the depth information of each pixel in the depth image to obtain the normal image includes:
determining a first-direction adjacent pixel and a second-direction adjacent pixel of a target pixel, wherein the target pixel is any one of the pixels in the depth image;
determining a target plane corresponding to a target pixel according to the depth information of the adjacent pixels in the first direction and the depth information of the adjacent pixels in the second direction, wherein the target plane is a first plane where the adjacent pixels in the first direction are located, or a second plane where the adjacent pixels in the second direction are located;
and calculating the normal image according to the target plane corresponding to the target pixel.
Optionally, the determining, according to the depth information of the adjacent pixel in the first direction and the depth information of the adjacent pixel in the second direction, a target plane corresponding to the target pixel includes:
according to the depth information of the adjacent pixels in the first direction, calculating the depth information of a first extension point corresponding to the first plane;
calculating the depth information of a second extending point corresponding to the second plane according to the depth information of the adjacent pixels in the second direction;
and determining the target plane according to the depth information of the first extension point, the depth information of the second extension point and the depth information of the target pixel.
Optionally, the determining the target plane according to the depth information of the first extension point, the depth information of the second extension point, and the depth information of the target pixel includes:
determining depth information of a target extension point closer to the depth information of the target pixel from the depth information of the first extension point and the depth information of the second extension point;
and taking the plane corresponding to the target extension point as the target plane.
Optionally, the processing according to the depth image and the normal image to obtain a target image includes:
dividing the depth image into a plurality of sub-depth images;
and obtaining the target image according to the normal image and the plurality of sub-depth images in a texture array mode.
Optionally, obtaining the target image according to the normal image and the multiple sub-depth images in a texture array manner includes:
obtaining a plurality of sub-target images according to the normal image and the plurality of sub-depth images in a texture array mode;
and combining the plurality of sub-target images by adopting the texture array mode to obtain the target image.
Optionally, the performing forward rendering by using the target image and displaying a final image includes:
and performing illumination calculation according to the target image so as to perform forward rendering by adopting the target image and display the final image.
In summary, an embodiment of the present invention provides an image rendering method, including: acquiring a depth image corresponding to an original image; calculating according to the depth information of each pixel in the depth image to obtain a normal image; processing according to the depth image and the normal image to obtain a target image; and performing forward rendering by adopting the target image, and displaying the final image. The target image is obtained based on the depth image and the normal image, so that the information contained in the target image is richer, the target image is adopted for forward rendering, the forward rendering effect can be improved, the display effect of the final image is better, and the user experience is improved. And the depth image is divided into a plurality of sub-depth images, and a texture array mode is adopted to obtain a target image according to the normal image and the plurality of sub-depth images, so that the processing efficiency can be improved.
Optionally, the invention also provides a program product, for example a computer-readable storage medium, comprising a program which, when executed by a processor, is adapted to carry out the above-mentioned method embodiments.
By way of example, the method may comprise:
acquiring a depth image corresponding to an original image;
calculating according to the depth information of each pixel in the depth image to obtain a normal image;
processing according to the depth image and the normal image to obtain a target image;
and performing forward rendering by adopting the target image, and displaying a final image.
Optionally, the obtaining of the depth image corresponding to the original image includes:
packing the same virtual static objects in the original image into the same group to obtain a plurality of groups;
and processing the contents in the groups by adopting a preset operation to obtain the depth image.
Optionally, the calculating according to the depth information of each pixel in the depth image to obtain the normal image includes:
determining a first-direction adjacent pixel and a second-direction adjacent pixel of a target pixel, wherein the target pixel is any one of the pixels in the depth image;
determining a target plane corresponding to a target pixel according to the depth information of the adjacent pixels in the first direction and the depth information of the adjacent pixels in the second direction, wherein the target plane is a first plane where the adjacent pixels in the first direction are located, or a second plane where the adjacent pixels in the second direction are located;
and calculating the normal image according to the target plane corresponding to the target pixel.
Optionally, the determining a target plane corresponding to the target pixel according to the depth information of the adjacent pixel in the first direction and the depth information of the adjacent pixel in the second direction includes:
according to the depth information of the adjacent pixels in the first direction, calculating the depth information of a first extension point corresponding to the first plane;
calculating the depth information of a second extending point corresponding to the second plane according to the depth information of the adjacent pixels in the second direction;
and determining the target plane according to the depth information of the first extension point, the depth information of the second extension point and the depth information of the target pixel.
Optionally, the determining the target plane according to the depth information of the first extension point, the depth information of the second extension point, and the depth information of the target pixel includes:
determining depth information of a target extension point closer to the depth information of the target pixel from the depth information of the first extension point and the depth information of the second extension point;
and taking the plane corresponding to the target extension point as the target plane.
Optionally, the processing according to the depth image and the normal image to obtain a target image includes:
dividing the depth image into a plurality of sub-depth images;
and obtaining the target image according to the normal image and the plurality of sub-depth images in a texture array mode.
Optionally, obtaining the target image according to the normal image and the plurality of sub-depth images in a texture array manner includes:
obtaining a plurality of sub-target images according to the normal image and the plurality of sub-depth images in a texture array mode;
and combining the plurality of sub-target images by adopting the texture array mode to obtain the target image.
Optionally, the performing forward rendering by adopting the target image and displaying the final image includes:
and performing illumination calculation according to the target image, so as to perform forward rendering by adopting the target image and display the final image.
In summary, an embodiment of the present invention provides an image rendering method, including: acquiring a depth image corresponding to an original image; calculating according to the depth information of each pixel in the depth image to obtain a normal image; processing according to the depth image and the normal image to obtain a target image; and performing forward rendering by adopting the target image to obtain a final image. The target image is obtained based on the depth image and the normal image, so that the information contained in the target image is richer, the target image is adopted for forward rendering, the forward rendering effect can be improved, the display effect of the final image is better, and the user experience is improved. And the depth image is divided into a plurality of sub-depth images, and a texture array mode is adopted to obtain a target image according to the normal image and the plurality of sub-depth images, so that the processing efficiency can be improved.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes will occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (11)

1. An image rendering method, comprising:
acquiring a depth image corresponding to an original image;
calculating according to the depth information of each pixel in the depth image to obtain a normal image;
processing according to the depth image and the normal image to obtain a target image;
and performing forward rendering by adopting the target image, and displaying a final image.
2. The method of claim 1, wherein the obtaining of the depth image corresponding to the original image comprises:
packing the same virtual static objects in the original image into the same group to obtain a plurality of groups;
and processing the contents in the groups by adopting a preset operation to obtain the depth image.
3. The method according to claim 1, wherein the calculating according to the depth information of each pixel in the depth image to obtain the normal image comprises:
determining a first-direction adjacent pixel and a second-direction adjacent pixel of a target pixel, wherein the target pixel is any one of the pixels in the depth image;
determining a target plane corresponding to a target pixel according to the depth information of the adjacent pixels in the first direction and the depth information of the adjacent pixels in the second direction, wherein the target plane is a first plane where the adjacent pixels in the first direction are located, or a second plane where the adjacent pixels in the second direction are located;
and calculating the normal image according to the target plane corresponding to the target pixel.
4. The method according to claim 3, wherein determining the target plane corresponding to the target pixel according to the depth information of the neighboring pixels in the first direction and the depth information of the neighboring pixels in the second direction comprises:
according to the depth information of the adjacent pixels in the first direction, the depth information of a first extension point corresponding to the first plane is calculated;
calculating the depth information of a second extending point corresponding to the second plane according to the depth information of the adjacent pixels in the second direction;
and determining the target plane according to the depth information of the first extension point, the depth information of the second extension point and the depth information of the target pixel.
5. The method of claim 4, wherein determining the target plane according to the depth information of the first extension point, the depth information of the second extension point, and the depth information of the target pixel comprises:
determining depth information of a target extension point closer to the depth information of the target pixel from the depth information of the first extension point and the depth information of the second extension point;
and taking the plane corresponding to the target extension point as the target plane.
6. The method of claim 1, wherein the processing from the depth image and the normal image to obtain a target image comprises:
dividing the depth image into a plurality of sub-depth images;
and obtaining the target image according to the normal image and the plurality of sub-depth images in a texture array mode.
7. The method according to claim 6, wherein obtaining the target image according to the normal image and the plurality of sub-depth images in a texture array manner comprises:
obtaining a plurality of sub-target images according to the normal image and the plurality of sub-depth images in a texture array mode;
and combining the plurality of sub-target images by adopting the texture array mode to obtain the target image.
8. The method of claim 1, wherein the forward rendering with the target image, displaying a final image, comprises:
and performing illumination calculation according to the target image, so as to perform forward rendering by adopting the target image and display the final image.
9. An image rendering apparatus, comprising:
the acquisition module is used for acquiring a depth image corresponding to the original image;
the computing module is used for computing according to the depth information of each pixel in the depth image to obtain a normal image;
the processing module is used for processing according to the depth image and the normal image to obtain a target image;
and the display module is used for adopting the target image to perform forward rendering and displaying a final image.
10. A terminal device, comprising: a processor and a memory, the memory storing a computer program executable by the processor, wherein the processor implements the image rendering method of any one of claims 1 to 8 when executing the computer program.
11. A computer-readable storage medium, wherein a computer program is stored on the storage medium, and when the computer program is read and executed, the computer program implements the image rendering method according to any one of claims 1 to 8.
CN202211600159.8A 2022-12-13 2022-12-13 Image rendering method and device, terminal equipment and storage medium Pending CN115965737A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211600159.8A CN115965737A (en) 2022-12-13 2022-12-13 Image rendering method and device, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211600159.8A CN115965737A (en) 2022-12-13 2022-12-13 Image rendering method and device, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115965737A true CN115965737A (en) 2023-04-14

Family

ID=87352102

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211600159.8A Pending CN115965737A (en) 2022-12-13 2022-12-13 Image rendering method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115965737A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117745915A (en) * 2024-02-07 2024-03-22 西交利物浦大学 Model rendering method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN107958480B (en) Image rendering method and device and storage medium
KR101639852B1 (en) Pixel value compaction for graphics processing
US11328395B2 (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN110032314B (en) Long screen capture method and device, storage medium and terminal equipment
CN1329870C (en) Block-based rotation of arbitrary-shaped images
JP2006501522A5 (en)
EP2133843A1 (en) Image generating apparatus and image generating method
CN105930464B (en) Web rich media cross-screen adaptation method and device
CN113126937B (en) Display terminal adjusting method and display terminal
US8913080B2 (en) Partitioning high resolution images into sub-images for display
KR20120099075A (en) Methods and apparatus for image processing at pixel rate
CN110599564A (en) Image display method and device, computer equipment and storage medium
CN113015007B (en) Video frame inserting method and device and electronic equipment
CN108509241B (en) Full-screen display method and device for image and mobile terminal
CN115965737A (en) Image rendering method and device, terminal equipment and storage medium
CN113849254A (en) Self-adaptive adjusting method of page layout and computing equipment
CN112184538B (en) Image acceleration method, related device, equipment and storage medium
CN111931794B (en) Sketch-based image matching method
CN111626938B (en) Image interpolation method, image interpolation device, terminal device, and storage medium
EP3008577A1 (en) Virtualizing applications for multi-monitor environments
US10565674B2 (en) Graphics processing device and graphics processing method
CN113703653A (en) Image processing method, device, equipment and computer readable storage medium
CN108376161B (en) Method, device, terminal and storage medium for displaying webpage
CN115456892B (en) 2.5-dimensional visual image automatic geometric correction method, device, equipment and medium
CN112915534B (en) Game image calculation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination