CN114581589A - Image processing method and related device
- Publication number: CN114581589A
- Application number: CN202011379098.8A
- Authority: CN (China)
- Prior art keywords: image, rendered, ray tracing, target object, color
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T15/205—Image-based rendering (under G06T15/00 3D [Three Dimensional] image rendering, G06T15/10 Geometric effects, G06T15/20 Perspective computation)
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
- G06T1/60—Memory management
- G06T15/06—Ray-tracing
- G06T15/50—Lighting effects
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T7/70—Determining position or orientation of objects or cameras
- G06T2210/21—Collision detection, intersection (indexing scheme for image generation or computer graphics)

All classifications fall under G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL.
Abstract
The application discloses an image processing method applied to electronic devices with limited computing power. The method comprises the following steps: acquiring data to be rendered; rasterizing the data to be rendered to obtain a first image; and performing ray tracing on a target object in the first image that is marked with an identifier, to obtain a second image. Because ray tracing is performed only on local objects in the image, the computing power required for image rendering is reduced, devices with limited computing power can also use ray tracing to render images, and the image rendering effect is improved.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and a related apparatus.
Background
With the rapid development of the computer industry, people's requirements for image quality keep increasing. Images obtained by rendering a three-dimensional scene with a conventional rendering method are of mediocre quality and often fail to present a vivid picture. Against this background, ray tracing techniques have been developed.
Ray tracing (RT) is a technology that achieves effects such as reflection, refraction, shadows, or caustics by tracing each ray emitted from a camera, thereby simulating a realistic virtual scene and rendering a lifelike image. However, since every ray in the scene needs to be traced, ray tracing is computationally expensive.
In the related art, ray tracing is usually applied on devices with high computing power (for example, a personal computer with a discrete graphics card) and cannot be applied on devices with limited computing power (for example, mobile devices), so it is difficult to obtain a good rendering effect on computationally limited devices.
Disclosure of Invention
The application provides an image processing method, which performs a first rendering pass on data to be rendered through rasterization to obtain a first image, and then performs a second rendering pass, through ray tracing, on the objects in the first image that carry an identifier, so as to improve the rendering effect. Because ray tracing is performed only on local objects in the image, the computing power required for image rendering is reduced, devices with limited computing power can also use ray tracing to render images, and the image rendering effect is improved.
A first aspect of the present application provides an image processing method that may be applied in a computationally limited electronic device capable of performing image rendering. The method comprises the following steps: the electronic device obtains data to be rendered, which may include models in the 3D scene and attribute information of the models, such as models of the sky, a house, etc., and attribute information such as the colors and materials of the models. The electronic device performs rasterization on the data to be rendered through a forward rendering method or a deferred rendering method to obtain a first image. The electronic device performs ray tracing on a target object that has an identifier in the first image to obtain a second image, wherein the identifier is used for marking an object to be subjected to ray tracing, that is, the target object with the identifier is the object on which ray tracing is to be performed. For example, the target object may be an object capable of displaying a pronounced light-and-shadow effect, such as a floor, a mirror, or a window.
That is, the electronic device only performs ray tracing on the target object with the identifier in the first image, and does not perform ray tracing on objects without the identifier. The identifier of the target object can be realized in various ways: in one way, if the target object has a corresponding specific field, the target object can be considered to have the identifier; in another way, if the target object has a corresponding specific field and the value of the specific field is a preset value, the target object can be considered to have the identifier.
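For illustration only, the following C++ sketch shows the two ways of checking whether an object carries the identifier; the `RenderObject` structure, the field name `rtFlag`, and the preset value are assumptions and are not taken from the patent.

```cpp
#include <cstdint>
#include <optional>

// Hypothetical per-object record used only to illustrate the two checks above;
// the field name is an assumption, not from the patent.
struct RenderObject {
    std::optional<uint32_t> rtFlag;  // the "specific field" (identifier), absent if not set
};

constexpr uint32_t kRayTracingPreset = 0;  // assumed preset value

// Way 1: the object is a target object if the specific field exists at all.
bool hasIdentifierByPresence(const RenderObject& obj) {
    return obj.rtFlag.has_value();
}

// Way 2: the object is a target object only if the field equals a preset value.
bool hasIdentifierByValue(const RenderObject& obj) {
    return obj.rtFlag.has_value() && obj.rtFlag.value() == kRayTracingPreset;
}
```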
In this scheme, the data to be rendered is first rendered by rasterization to obtain a first image, and the objects in the first image that carry an identifier are then rendered a second time by ray tracing to improve the rendering effect. Because ray tracing is performed only on local objects in the image, the computing power required for image rendering is reduced, devices with limited computing power can also use ray tracing to render images, and the image rendering effect is improved.
In one possible implementation, the identifier is also used to mark the ray tracing manner, which may include, for example, reflection, refraction, shadow, or caustics. In this way, in the process of performing ray tracing on the target object in the first image, the electronic device may determine, according to the identifier of the target object, the ray tracing manner to be applied to the target object, and perform ray tracing based on that manner. For example, when the target object in the first image is a floor and the value of the identifier of the floor is 0, the electronic device may perform ray tracing on the floor, and the ray tracing manner is reflection.
By using the identifier to mark the ray tracing manner, the electronic device does not need to first analyze the material of the target object and then select a ray tracing method based on that material, which improves the efficiency with which the electronic device performs ray tracing.
In a possible implementation manner, the data to be rendered includes the target object and a material parameter of the target object; the electronic device may determine the identifier of the target object according to the material parameter of the target object. For example, when the roughness in the material parameter of the floor is 0, the electronic device may determine and generate an identifier for the floor with the value 0, that is, the ray tracing manner corresponding to the floor is reflection. Because the identifier of the target object is determined by the electronic device based on the material parameters of the target object, the process of manually adding identifiers to target objects can be omitted, saving manpower and material resources.
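A minimal sketch of how an identifier could be derived from material parameters, assuming a hypothetical `Material` structure; only the floor example (roughness 0 means reflection) comes from the text, the other fields, thresholds, and the encoding are assumptions.

```cpp
#include <cstdint>
#include <optional>

// Hypothetical material record; field names are assumptions for illustration.
struct Material {
    float roughness = 1.0f;    // 0 = perfectly smooth
    float metalness = 0.0f;    // 1 = fully metallic
    float transmission = 0.0f; // > 0 = light passes through (glass-like)
};

// Assumed encoding consistent with the examples in the text:
// 0 = reflection, 1 = refraction (other manners omitted here).
std::optional<uint32_t> deriveIdentifier(const Material& m) {
    if (m.roughness == 0.0f || m.metalness == 1.0f) {
        return 0;              // mirror-like or metallic surface: mark for reflection
    }
    if (m.transmission > 0.0f) {
        return 1;              // transparent material: mark for refraction
    }
    return std::nullopt;       // no identifier: the object is not ray traced
}
```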
In one possible implementation, the electronic device performs ray tracing on the target object in the first image as follows: the electronic device obtains the position, in the three-dimensional scene, of the target object shown in the first image, that is, the electronic device transforms the coordinates of the target object from the image coordinate system into the coordinate system of the three-dimensional scene by means of a coordinate transformation, so as to obtain the position of the target object in the three-dimensional scene. According to the position of the target object in the three-dimensional scene, the electronic device performs ray tracing to obtain a ray tracing result. Finally, the electronic device updates the color of the target object in the first image according to the ray tracing result to obtain the second image.
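One common way to realize the coordinate transformation described above is to unproject a pixel's screen coordinates and depth with the inverse view-projection matrix. The sketch below assumes a column-major 4x4 matrix type and an OpenGL-style depth range; it is a generic unprojection, not the patent's specific transformation.

```cpp
#include <array>

// Minimal vector/matrix types for illustration only.
struct Vec4 { float x, y, z, w; };
using Mat4 = std::array<std::array<float, 4>, 4>;  // column-major: m[col][row]

Vec4 mul(const Mat4& m, const Vec4& v) {
    return {
        m[0][0]*v.x + m[1][0]*v.y + m[2][0]*v.z + m[3][0]*v.w,
        m[0][1]*v.x + m[1][1]*v.y + m[2][1]*v.z + m[3][1]*v.w,
        m[0][2]*v.x + m[1][2]*v.y + m[2][2]*v.z + m[3][2]*v.w,
        m[0][3]*v.x + m[1][3]*v.y + m[2][3]*v.z + m[3][3]*v.w,
    };
}

// Reconstruct the world-space position of a pixel from its normalized screen
// coordinates (u, v in [0,1]) and its depth value, given the inverse of the
// view-projection matrix.
Vec4 unprojectPixel(float u, float v, float depth, const Mat4& invViewProj) {
    Vec4 ndc{2.0f * u - 1.0f, 2.0f * v - 1.0f, 2.0f * depth - 1.0f, 1.0f};
    Vec4 world = mul(invViewProj, ndc);
    return {world.x / world.w, world.y / world.w, world.z / world.w, 1.0f};
}
```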
According to the scheme, the electronic equipment executes ray tracing processing by acquiring the position of the target object in the image in the three-dimensional scene, so that the ray tracing processing is performed on the target object in the image on the basis of the image obtained after rasterization processing, the overall rendering effect of the image is effectively improved, and the calculation requirement is low.
In a possible implementation manner, the electronic device performs ray tracing processing on the target object in the first image to obtain the second image, including: the electronic equipment performs ray tracing on the target object in the first image according to the identification of the target object to obtain a ray tracing result; and updating the color of the target object in the first image according to the ray tracing result by the electronic equipment to obtain a second image.
In this scheme, the electronic device updates the color of the target object in the first image based on the ray tracing result to realize the ray tracing processing, so that changes to the existing rendering pipeline are kept as small as possible and the feasibility of the scheme is improved.
In a possible implementation manner, the electronic device performs ray tracing on the target object in the first image according to the identifier of the target object and obtains a ray tracing result, which may include: the electronic device determines a target pixel in the first image, the target pixel having the identifier, the target object comprising one or more such target pixels; the electronic device obtains the target position of the target pixel in the three-dimensional scene by means of a coordinate transformation, and performs ray tracing according to the target position and the identifier to obtain the intersection point of the ray and the three-dimensional scene. After determining the intersection point, the electronic device may calculate the color of the intersection point, and then fuse the color of the intersection point with the original color of the target pixel based on the ray tracing manner, so as to update the target pixel with a new color. That is, in the actual ray tracing process, the electronic device may perform ray tracing pixel by pixel in the first image, thereby realizing ray tracing of the target object.
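The following sketch illustrates only the final fusion step for one pixel: the intersection color is blended with the pixel's original rasterized color according to the ray tracing manner. The blend weights and the `Mode` values are illustrative assumptions; a real renderer would typically use material-dependent factors.

```cpp
// Minimal RGB color type for illustration.
struct Color { float r, g, b; };

Color mix(const Color& a, const Color& b, float t) {
    return {a.r + (b.r - a.r) * t,
            a.g + (b.g - a.g) * t,
            a.b + (b.b - a.b) * t};
}

// Assumed encoding of the ray tracing manner carried by the identifier.
enum class Mode { Reflection, Refraction, Shadow, Caustic };

// Fuse the shaded intersection color with the pixel's original rasterized color.
Color fusePixelColor(const Color& original, const Color& hit, Mode mode) {
    switch (mode) {
        case Mode::Reflection: return mix(original, hit, 0.5f);   // blend in the mirrored color
        case Mode::Refraction: return mix(original, hit, 0.8f);   // mostly show what lies behind
        case Mode::Shadow:     return {original.r * 0.4f,          // darken the occluded pixel
                                       original.g * 0.4f,
                                       original.b * 0.4f};
        case Mode::Caustic:    return {original.r + hit.r,         // add the focused light
                                       original.g + hit.g,
                                       original.b + hit.b};
    }
    return original;
}
```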
In this embodiment, the electronic device performs ray tracing processing on each pixel having the identifier, and updates the color of the pixel based on the intersection point obtained by ray tracing, thereby implementing ray tracing processing and effectively improving the overall rendering effect of the image.
In one possible implementation manner, the updating, by the electronic device, the color of the target pixel according to the color of the intersection includes: the electronic equipment calculates the projection of the intersection point on the image according to the position of the intersection point in the three-dimensional scene; if the intersection point has a corresponding projection pixel on the first image or the third image, updating the color of the target pixel according to the color of the projection pixel; if the intersection point does not have a corresponding projection pixel on the first image or the third image, calculating the color of the intersection point, and updating the color of the target pixel according to the color of the intersection point; wherein the third image is a previous frame image of the second image.
In short, in the process of rendering an image by the electronic device, the electronic device does not render all objects in the 3D scene in real time. The electronic device generally renders an object to be displayed on a screen to obtain a rendered image and displays the rendered image on the screen. If the intersection has been rendered and displayed on the image during rendering of the previous frame image (i.e., the third image), or the intersection has been rendered and displayed on the image during rendering of the current frame image (i.e., the first image), the color of the intersection may be determined based on the color of the corresponding pixel point of the intersection on the previous frame image or the current frame image. That is, the color of the intersection is obtained by multiplexing the colors of the pixels on the previous frame image or the current frame image, thereby avoiding recalculation of the color of the intersection and reducing the amount of calculation.
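A sketch of the reuse strategy just described: project the intersection point into screen space and, if it maps to a pixel that was already rendered in the current frame (first image) or the previous frame (third image), reuse that pixel's color; otherwise fall back to shading the intersection point. The `FrameView` interface and function names are hypothetical stand-ins.

```cpp
#include <functional>
#include <optional>

struct Color { float r, g, b; };
struct Vec3  { float x, y, z; };
struct Pixel { int x, y; };

// Hypothetical read-only view of an already-rendered frame (current or previous).
struct FrameView {
    // Project a world-space point onto this frame; empty if it falls outside
    // the frame or was not actually rendered there.
    std::function<std::optional<Pixel>(const Vec3&)> project;
    // Color stored at a pixel of this frame.
    std::function<Color(const Pixel&)> colorAt;
};

// Reuse the color of the projected pixel when possible, otherwise shade the
// intersection from scratch (the expensive path).
Color intersectionColor(const Vec3& hitPos,
                        const FrameView& currentFrame,    // the first image
                        const FrameView& previousFrame,   // the third image
                        const std::function<Color(const Vec3&)>& shadeIntersection) {
    for (const FrameView* fb : {&currentFrame, &previousFrame}) {
        if (auto p = fb->project(hitPos)) {
            return fb->colorAt(*p);     // multiplex an already-rendered color
        }
    }
    return shadeIntersection(hitPos);   // no reusable projection: compute the color
}
```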
In a possible implementation manner, the electronic device performing ray tracing according to the target position and the identifier to obtain an intersection point of a ray and the three-dimensional scene includes: the electronic device acquires an acceleration structure, where the acceleration structure is built from the three-dimensional scene and may include, but is not limited to, a bounding volume hierarchy (BVH), a uniform grid, or a k-dimensional (k-d) tree; and the electronic device traces rays through the acceleration structure according to the target position and the identifier to obtain the intersection point of the ray and the three-dimensional scene. Performing ray tracing through the acceleration structure speeds up the search for the intersection point and improves the efficiency with which the electronic device performs ray tracing.
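A simplified, array-based BVH traversal sketch showing how an acceleration structure narrows the intersection search; the node layout, the slab test, and the callback interface are generic choices, not specific to the patent.

```cpp
#include <cstdint>
#include <functional>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

// Axis-aligned bounding box.
struct AABB { Vec3 min, max; };

// One BVH node in a flat array: internal nodes reference two children,
// leaf nodes reference a range of primitive indices.
struct BvhNode {
    AABB bounds;
    int32_t left = -1, right = -1;      // child node indices (-1 for leaves)
    int32_t firstPrim = 0, primCount = 0;
};

// Standard slab test: does the ray hit the box within [0, tMax]?
bool hitAABB(const Vec3& o, const Vec3& invDir, const AABB& b, float tMax) {
    float t0 = 0.0f, t1 = tMax;
    const float ro[3] = {o.x, o.y, o.z}, id[3] = {invDir.x, invDir.y, invDir.z};
    const float mn[3] = {b.min.x, b.min.y, b.min.z}, mx[3] = {b.max.x, b.max.y, b.max.z};
    for (int a = 0; a < 3; ++a) {
        float tNear = (mn[a] - ro[a]) * id[a];
        float tFar  = (mx[a] - ro[a]) * id[a];
        if (tNear > tFar) std::swap(tNear, tFar);
        t0 = std::max(t0, tNear);
        t1 = std::min(t1, tFar);
        if (t0 > t1) return false;
    }
    return true;
}

// Iterative BVH traversal; the actual ray/primitive test is supplied by the
// caller so this sketch stays independent of any particular geometry format.
void traverseBvh(const std::vector<BvhNode>& nodes, const Vec3& origin,
                 const Vec3& invDir, float tMax,
                 const std::function<void(int primIndex)>& intersectPrim) {
    std::vector<int> stack = {0};                   // start at the root node
    while (!stack.empty()) {
        const BvhNode& n = nodes[stack.back()];
        stack.pop_back();
        if (!hitAABB(origin, invDir, n.bounds, tMax)) continue;  // prune subtree
        if (n.left < 0) {                           // leaf: test its primitives
            for (int i = 0; i < n.primCount; ++i) intersectPrim(n.firstPrim + i);
        } else {                                    // internal node: visit children
            stack.push_back(n.left);
            stack.push_back(n.right);
        }
    }
}
```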
In a possible implementation manner, the rasterizing, by the electronic device, the data to be rendered to obtain a first image includes: the electronic equipment renders the data to be rendered without illumination to obtain a fourth image; the electronic equipment obtains a geometric buffer area corresponding to the pixel in the fourth image according to the attribute information of the data to be rendered, wherein the geometric buffer area is used for storing attribute parameters corresponding to the pixel; and the electronic equipment performs illumination calculation on the pixels in the fourth image according to the geometric buffer area to obtain the first image.
In a possible implementation manner, the obtaining, by the electronic device, a geometric buffer corresponding to a pixel in the fourth image according to the attribute information of the data to be rendered includes: if the object to be rendered in the fourth image is the target object, generating a first geometric buffer corresponding to the object to be rendered according to the attribute information of the object to be rendered and a first resolution; if the object to be rendered in the fourth image is located in the peripheral area of the target object, generating a second geometric buffer corresponding to the object to be rendered according to the attribute information of the object to be rendered and a second resolution; and if the object to be rendered in the fourth image is located in the background area, generating a third geometric buffer corresponding to the object to be rendered according to the attribute information of the object to be rendered and a third resolution; wherein the data to be rendered comprises the object to be rendered, the first resolution is greater than the second resolution, the second resolution is greater than the third resolution, and the first geometric buffer, the second geometric buffer, and the third geometric buffer are used to store color attribute parameters.
In a possible implementation manner, the obtaining, by the electronic device, a geometric buffer corresponding to a pixel in the fourth image according to the attribute information of the data to be rendered further includes: the electronic device generates a fourth geometric buffer corresponding to the object to be rendered according to the attribute information of the object to be rendered and a fourth resolution, wherein the attribute parameters stored in the fourth geometric buffer are not color attribute parameters; the fourth resolution is less than the first resolution.
In short, before the electronic device generates the G-buffer corresponding to a pixel in the fourth image, the electronic device may determine the corresponding object to be rendered, that is, the object to be displayed in the fourth image, then determine the resolution for generating the G-buffer according to specific information of the object to be rendered, and finally generate the G-buffer corresponding to the object to be rendered based on that resolution, thereby obtaining the G-buffer corresponding to the pixel in the fourth image. By generating the G-buffers of different objects to be rendered at different resolutions, the G-buffers corresponding to non-target objects can be reduced, which effectively reduces the amount of computation on the electronic device, saves storage space, and lowers the demand on the input/output (I/O) bandwidth of the electronic device.
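The sketch below illustrates the resolution selection described above: full resolution for the marked target object, reduced resolution for its peripheral area, and the lowest resolution for the background, plus an even lower resolution for non-color attributes. The concrete scale factors and the region classification are assumptions; the text only requires first resolution > second resolution > third resolution, and fourth resolution < first resolution.

```cpp
#include <cstdint>

// Region of the image that an object to be rendered falls into (as classified above).
enum class Region { TargetObject, Peripheral, Background };

struct Resolution { uint32_t width, height; };

// Pick the resolution at which an object's color G-buffer is generated.
// The divisors (1x, 2x, 4x) are illustrative only.
Resolution gbufferResolution(Region region, Resolution full) {
    switch (region) {
        case Region::TargetObject: return full;                               // first resolution
        case Region::Peripheral:   return {full.width / 2, full.height / 2};  // second resolution
        case Region::Background:   return {full.width / 4, full.height / 4};  // third resolution
    }
    return full;
}

// Non-color attributes can use an even lower "fourth resolution",
// which only needs to be smaller than the first one.
Resolution nonColorGbufferResolution(Resolution full) {
    return {full.width / 4, full.height / 4};
}
```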
In one possible implementation manner, the electronic device acquiring data to be rendered includes: the electronic device acquires the three-dimensional scene data and a fifth image sent by the server, where the fifth image is a background image rendered by the server. In this way, the electronic device only needs to render the non-background part of the 3D scene, and fuses the rendered result with the background image delivered by the server to obtain a complete rendered image. The background image sent by the server is an image that includes only a background area, that is, only the distant background. For example, the server may render the background of the sky, a mountain, the sea, or a distant high-rise building, and obtain the corresponding background image.
In a possible scenario, a game application may be run in the electronic device, and the server renders a background area in the 3D scene in real time, obtains a background image, and sends the background image to the electronic device. In the process of running the game application, the electronic equipment renders a non-background area in the 3D scene, and obtains a rendered image by combining a background image issued by the server so as to display the rendered image on a screen.
Optionally, when the game application run by the electronic device is a multi-user online network game, the background images rendered by the server may further be sent to a plurality of different electronic devices. Each electronic device then performs personalized rendering according to the content it actually needs to display, so that different images are shown on different screens.
By having the server render the background area, the rendering workload of the electronic device can be reduced, lowering the computing power required of the electronic device.
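A sketch of the device-side composition: the locally rendered non-background content is composited over the background image delivered by the server, pixel by pixel, using the foreground coverage (alpha). The buffer layout and field names are assumptions for illustration.

```cpp
#include <cstddef>
#include <vector>

struct RGBA { float r, g, b, a; };  // a = coverage of the locally rendered foreground

// Composite the locally rendered, non-background part of the scene over the
// background image received from the server ("over" compositing per pixel).
std::vector<RGBA> composeFrame(const std::vector<RGBA>& localForeground,
                               const std::vector<RGBA>& serverBackground) {
    std::vector<RGBA> out(localForeground.size());
    for (std::size_t i = 0; i < out.size(); ++i) {
        const RGBA& f = localForeground[i];
        const RGBA& b = serverBackground[i];
        const float ia = 1.0f - f.a;
        out[i] = {f.r * f.a + b.r * ia,
                  f.g * f.a + b.g * ia,
                  f.b * f.a + b.b * ia,
                  1.0f};
    }
    return out;
}
```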
A second aspect of the present application provides an electronic device, including an acquisition unit and a processing unit; the acquisition unit is used for acquiring data to be rendered; the processing unit is used for rasterizing the data to be rendered to obtain a first image; the processing unit is further used for performing ray tracing on the target object in the first image to obtain a second image; wherein the target object has an identifier for marking an object to be subjected to ray tracing.
In one possible implementation, the identification is also used to mark ray tracing processing.
In one possible implementation, the ray tracing manner includes reflection, refraction, shadow, or caustics.
In a possible implementation manner, the obtaining unit is further configured to obtain a position of a target object in the first image in a three-dimensional scene; the processing unit is further used for executing ray tracing processing according to the position of the target object in the three-dimensional scene to obtain a ray tracing result; the processing unit is further configured to update the color of the target object in the first image according to the ray tracing result to obtain the second image.
In a possible implementation manner, the processing unit is further configured to: performing ray tracing on the target object in the first image according to the identifier of the target object to obtain a ray tracing result; and updating the color of the target object in the first image according to the ray tracing result to obtain a second image.
In a possible implementation manner, the processing unit is further configured to determine a target pixel in the first image, where the target pixel has the identifier, and the target object includes one or more target pixels; the acquisition unit is further used for acquiring a target position of the target pixel in a three-dimensional scene; the processing unit is further used for performing ray tracing according to the target position and the identifier to obtain an intersection point of the ray and the three-dimensional scene; and the processing unit is also used for updating the color of the target pixel according to the color of the intersection point.
In one possible implementation, the processing unit is further configured to: calculating the projection of the intersection point on the image according to the position of the intersection point in the three-dimensional scene; if the intersection point has a corresponding projection pixel on the first image or the third image, updating the color of the target pixel according to the color of the projection pixel; if the intersection point does not have a corresponding projection pixel on the first image or the third image, calculating the color of the intersection point, and updating the color of the target pixel according to the color of the intersection point; wherein the third image is a previous frame image of the second image.
In a possible implementation manner, the obtaining unit is further configured to obtain an acceleration structure, where the acceleration structure is obtained based on the three-dimensional scene; and the processing unit is further used for performing ray tracing through the acceleration structure according to the target position and the identification to obtain an intersection point of the ray and the three-dimensional scene.
In a possible implementation manner, the processing unit is further configured to: rendering the data to be rendered without illumination to obtain a fourth image; obtaining a geometric buffer area corresponding to a pixel in the fourth image according to the attribute information of the data to be rendered, wherein the geometric buffer area is used for storing attribute parameters corresponding to the pixel; and performing illumination calculation on the pixels in the fourth image according to the geometric buffer area to obtain the first image.
In a possible implementation manner, the processing unit is further configured to: if the object to be rendered in the fourth image is the target object, generate a first geometric buffer corresponding to the object to be rendered according to the attribute information of the object to be rendered and a first resolution; if the object to be rendered in the fourth image is located in the peripheral area of the target object, generate a second geometric buffer corresponding to the object to be rendered according to the attribute information of the object to be rendered and a second resolution; and if the object to be rendered in the fourth image is located in the background area, generate a third geometric buffer corresponding to the object to be rendered according to the attribute information of the object to be rendered and a third resolution; wherein the data to be rendered comprises the object to be rendered, the first resolution is greater than the second resolution, the second resolution is greater than the third resolution, and the first geometric buffer, the second geometric buffer, and the third geometric buffer are used to store color attribute parameters.
In a possible implementation manner, the processing unit is further configured to: generate a fourth geometric buffer corresponding to the object to be rendered according to the attribute information of the object to be rendered and a fourth resolution, wherein the attribute parameters stored in the fourth geometric buffer are not color attribute parameters; the fourth resolution is less than the first resolution.
In a possible implementation manner, the obtaining unit is further configured to obtain three-dimensional scene data and a fifth image sent by the server, where the fifth image is a rendered background image.
In a possible implementation manner, the data to be rendered includes the target object and a material parameter of the target object; the processing unit is further configured to determine an identifier of the target object according to the material parameter of the target object.
A third aspect of the present application provides an electronic device, comprising: a processor, a non-volatile memory, and a volatile memory; wherein the non-volatile memory or the volatile memory has stored therein computer readable instructions; the processor reads the computer readable instructions to cause the electronic device to implement the method as implemented in any one of the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon a computer program which, when run on a computer, causes the computer to perform a method according to any one of the implementations of the first aspect.
A fifth aspect of the present application provides a computer program product which, when run on a computer, causes the computer to perform the method according to any one of the implementations of the first aspect.
A sixth aspect of the present application provides a chip comprising one or more processors. Some or all of the processors are configured to read and execute a computer program stored in a memory so as to perform the method in any possible implementation of any of the above aspects. Optionally, the chip may include the memory, and the processor may be connected to the memory through a circuit or a wire. Optionally, the chip further comprises a communication interface, and the processor is connected to the communication interface. The communication interface is used for receiving data and/or information to be processed; the processor acquires the data and/or information from the communication interface, processes it, and outputs the processing result through the communication interface. The communication interface may be an input/output interface. The method provided by the application can be implemented by one chip or by several chips working together.
Drawings
FIG. 1a is a schematic diagram of ray tracing;
FIG. 1b is a schematic diagram of a rasterization process;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 3 is a flowchart illustrating an image processing method 300 according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart illustrating a ray tracing process performed on an image according to an embodiment of the present disclosure;
FIG. 5 is a schematic view of a BVH according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a reflection scene according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of determining a color of an intersection according to an embodiment of the present disclosure;
FIG. 8 is a flowchart illustrating a ray tracing process according to an embodiment of the present disclosure;
fig. 9 is a schematic flowchart of rasterizing data to be rendered according to an embodiment of the present application;
fig. 10 is a schematic flowchart of generating a G-buffer based on adaptive resolution according to an embodiment of the present disclosure;
fig. 11 is a schematic flowchart of a process of rendering and issuing a background image on a server according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a pre-filtered ambient light map provided in accordance with an embodiment of the present application;
fig. 13 is a schematic flowchart of an end-cloud combined image rendering method according to an embodiment of the present disclosure;
FIG. 14 is a schematic diagram of a hybrid rendering pipeline provided by an embodiment of the present application;
fig. 15(a) is a first image after rasterization processing provided in the embodiment of the present application;
FIG. 15(b) is a second image after ray tracing according to an embodiment of the present disclosure;
fig. 16 is a schematic structural diagram of an electronic device 1600 provided in an embodiment of the present application.
Detailed Description
Embodiments of the present application will now be described with reference to the accompanying drawings, and it is to be understood that the described embodiments are merely illustrative of some, but not all, embodiments of the present application. As can be known to those skilled in the art, with the development of technology and the emergence of new scenarios, the technical solution provided in the embodiments of the present application is also applicable to similar technical problems.
The terms "first," "second," and the like in the description and in the claims of the present application and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Moreover, the terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those steps or modules explicitly listed, but may include other steps or modules not expressly listed or inherent to such process, method, article, or apparatus. The naming or numbering of the steps appearing in the present application does not mean that the steps in the method flow have to be executed in the chronological/logical order indicated by the naming or numbering, and the named or numbered process steps may be executed in a modified order depending on the technical purpose to be achieved, as long as the same or similar technical effects are achieved.
With the development of computer technology, more and more applications, such as game applications or video applications, are required to display images with high quality on electronic devices. These images are typically rendered by an electronic device based on a model in a three-dimensional (3D) scene.
In a conventional image processing method, a 3D scene is rendered by rasterization to obtain an image capable of displaying the 3D scene. However, the quality of an image obtained by rendering by using the rasterization processing method is general, and a vivid picture is often difficult to present. For example, it is often difficult to truly restore the effects of light reflection, refraction, and shading in a scene in a rendered image. Accordingly, a new rendering technique, ray tracing, has been developed. Ray tracing and rasterization processing are methods for realizing image rendering, and the main purpose of the method is to project an object in a 3D space to a two-dimensional screen space for display through calculation and coloring.
Referring to fig. 1a, fig. 1a is a schematic diagram illustrating the principle of ray tracing. As shown in fig. 1a, the principle of ray tracing is: a ray is emitted from the position of the camera into the 3D scene through a pixel position on the image plane, the nearest intersection point between the ray and the geometry is found, and the shading of that intersection point is then computed. If the material at the intersection point is reflective, tracing can continue in the reflection direction from the intersection point, and the shading of the reflected intersection point is obtained in turn. That is, the ray tracing method computes projection and global illumination by tracing the propagation of rays through the three-dimensional scene, so as to render a two-dimensional image.
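A compact, Whitted-style recursion matching the principle described above: shade the nearest hit and, if the material is reflective, continue tracing along the reflection direction up to a fixed depth. The scene and material interfaces are hypothetical stand-ins, and the reflection weight is illustrative.

```cpp
#include <functional>
#include <optional>

struct Vec3  { float x, y, z; };
struct Color { float r, g, b; };

struct Hit {
    Vec3 position, normal;
    bool reflective = false;   // does the material continue the ray?
    Color localShade;          // direct shading at the hit point
};

struct Ray { Vec3 origin, direction; };

Vec3 reflect(const Vec3& d, const Vec3& n) {
    float k = 2.0f * (d.x * n.x + d.y * n.y + d.z * n.z);
    return {d.x - k * n.x, d.y - k * n.y, d.z - k * n.z};
}

// Trace a ray through the scene: find the nearest intersection, take its
// shading, and if the surface is reflective keep tracing along the
// reflection direction, up to a fixed recursion depth.
Color trace(const Ray& ray,
            const std::function<std::optional<Hit>(const Ray&)>& nearestHit,
            int depth) {
    auto hit = nearestHit(ray);
    if (!hit) return {0.0f, 0.0f, 0.0f};             // ray leaves the scene
    Color c = hit->localShade;
    if (hit->reflective && depth > 0) {
        Ray reflected{hit->position, reflect(ray.direction, hit->normal)};
        Color r = trace(reflected, nearestHit, depth - 1);
        c = {c.r + 0.5f * r.r, c.g + 0.5f * r.g, c.b + 0.5f * r.b};  // illustrative weight
    }
    return c;
}
```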
Referring to fig. 1b, fig. 1b is a schematic diagram of the rasterization process. As shown in fig. 1b, the principle of rasterization is: the model in the 3D scene is divided into triangles, the three-dimensional coordinates of the triangle vertices are transformed into two-dimensional coordinates on the image through coordinate transformation, and finally the triangles are filled with textures on the image to render the image.
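A minimal sketch of the vertex side of rasterization: each triangle vertex is transformed from 3D scene coordinates to 2D image coordinates by a model-view-projection matrix, followed by the perspective divide and viewport mapping. The matrix layout mirrors the earlier unprojection sketch and is an assumption, not taken from the patent.

```cpp
#include <array>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };
using Mat4 = std::array<std::array<float, 4>, 4>;  // column-major: m[col][row]

// Transform one triangle vertex from 3D scene coordinates to 2D pixel
// coordinates: apply the combined model-view-projection matrix, perform the
// perspective divide, then map normalized device coordinates to the viewport.
Vec2 projectVertex(const Vec3& v, const Mat4& mvp, int imageW, int imageH) {
    float x = mvp[0][0]*v.x + mvp[1][0]*v.y + mvp[2][0]*v.z + mvp[3][0];
    float y = mvp[0][1]*v.x + mvp[1][1]*v.y + mvp[2][1]*v.z + mvp[3][1];
    float w = mvp[0][3]*v.x + mvp[1][3]*v.y + mvp[2][3]*v.z + mvp[3][3];
    float ndcX = x / w, ndcY = y / w;                     // perspective divide
    return {(ndcX * 0.5f + 0.5f) * imageW,                // viewport mapping
            (1.0f - (ndcY * 0.5f + 0.5f)) * imageH};
}
```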
Because rasterization directly projects the content visible in screen space to obtain the corresponding image, it is easy to perform but provides a relatively poor light-and-shadow effect. The ray tracing method traces each ray emitted from the camera to achieve realistic effects such as reflection, refraction, shadows, and ambient occlusion, so it can provide a realistic, lifelike light-and-shadow effect. At the same time, since the ray tracing method needs to trace the path of every ray, the amount of computation is very large, and the device performing ray tracing must have high computing capability.
In the related art, ray tracing is mainly applied on devices with high computing power, such as a personal computer (PC) with a discrete graphics card, while devices with limited computing power, such as mobile devices like mobile phones or tablet computers, cannot apply ray tracing, so it is difficult to obtain a good rendering effect on devices with limited computing power.
In view of this, an embodiment of the present application provides an image processing method in which the data to be rendered is first rendered by rasterization to obtain a first image, and the objects in the first image that carry an identifier are then rendered a second time by ray tracing to improve the rendering effect. Because ray tracing is performed only on local objects in the image, the computing power required for image rendering is reduced, devices with limited computing power can also use ray tracing to render images, and the image rendering effect is improved.
The image processing method in the embodiments of the present application may be performed by an electronic device. The electronic device comprises a CPU and a GPU and can perform rendering processing on images. Illustratively, the electronic device may be, for example, a mobile phone, a tablet computer, a notebook computer, a PC, a mobile internet device (MID), a wearable device, a virtual reality (VR) device, an augmented reality (AR) device, a wireless electronic device in industrial control, a wireless electronic device in self driving, a wireless electronic device in remote medical surgery, a wireless electronic device in a smart grid, a wireless electronic device in transportation safety, a wireless electronic device in a smart city, a wireless electronic device in a smart home, and the like. The electronic device may be a device running the Android system, the iOS system, the Windows system, or other systems. An application program that needs to render a 3D scene into a two-dimensional image, such as a game application, a lock screen application, a map application, or a monitoring application, may run on the electronic device.
For ease of understanding, the specific structure of the electronic device will be described in detail below with reference to fig. 2. Referring to fig. 2, fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
In one possible embodiment, as shown in fig. 2, the electronic device 2000 may include: a central processing unit 2001, a graphics processor 2002, a display device 2003, and a memory 2004. Optionally, the electronic device 2000 may further include at least one communication bus (not shown in fig. 2) for implementing connection communication between the components.
It should be understood that the various components of the electronic device 2000 may also be coupled by other connectors, which may include various types of interfaces, transmission lines, or buses. The various components of the electronic device 2000 may also be connected in a radial fashion, centered on the central processor 2001. In the various embodiments of the present application, coupled means electrically connected or in communication with each other, either directly or indirectly through other devices.
There are various connection methods between the cpu 2001 and the graphic processor 2002, and the connection method is not limited to the connection method shown in fig. 2. The cpu 2001 and the gpu 2002 in the electronic device 2000 may be located on the same chip or may be separate chips.
The roles of the central processor 2001, graphics processor 2002, display device 2003 and memory 2004 are briefly described below.
The central processing unit 2001: for running an operating system 2005 and application programs 2006. The application 2006 may be a graphics-class application such as a game or a video player. Through the system graphics library interface provided by the operating system 2005, and the drivers provided by the operating system 2005, such as the graphics library user-mode driver and/or the graphics library kernel-mode driver, the application 2006 generates an instruction stream for rendering graphics or image frames, together with the related rendering data needed. The system graphics library includes but is not limited to: the embedded-system graphics library OpenGL ES (open graphics library for embedded systems), the Khronos platform graphics interface, or Vulkan (a cross-platform drawing application program interface). The instruction stream contains a series of instructions, typically call instructions to the system graphics library interface.
Alternatively, the central processor 2001 may include at least one of the following types of processors: an application processor, one or more microprocessors, a Digital Signal Processor (DSP), a microcontroller unit (MCU), or an artificial intelligence processor, among others.
The cpu 2001 may further include necessary hardware accelerators, such as an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), or an integrated circuit for implementing logic operations. The processor 2001 may be coupled to one or more data buses for transmitting data and instructions between the various components of the electronic device 2000.
The graphics processor 2002: configured to receive the graphics instruction stream sent by the processor 2001, generate a rendering target through a rendering pipeline, and display the rendering target on the display device 2003 through the layer composition display module of the operating system. The rendering pipeline, which may also be referred to as a render pipeline or pixel pipeline, is a parallel processing unit within the graphics processor 2002 for processing graphics signals. The graphics processor 2002 may include a plurality of rendering pipelines, and the plurality of rendering pipelines may process graphics signals in parallel independently of each other. For example, a rendering pipeline may perform a number of operations in the process of rendering graphics or image frames; typical operations may include vertex processing, primitive processing, rasterization, and fragment processing.
Alternatively, the graphics processor 2002 may comprise a general-purpose graphics processor executing software, such as a GPU or other type of special-purpose graphics processing unit, or the like.
The display device 2003: for displaying various images generated by the electronic device 2000, which may be a Graphical User Interface (GUI) of an operating system or image data (including still images and video data) processed by the graphic processor 2002.
Alternatively, display device 2003 may include any suitable type of display screen. Such as a Liquid Crystal Display (LCD) or a plasma display, or an organic light-emitting diode (OLED) display.
The memory 2004, which is a transmission channel between the cpu 2001 and the gpu 2002, may be a double data rate synchronous dynamic random access memory (DDR SDRAM) or other types of cache.
The specific structure of the electronic device to which the image processing method provided by the embodiment of the present application is applied is described above, and the flow of the image processing method provided by the embodiment of the present application will be described in detail below.
Referring to fig. 3, fig. 3 is a schematic flowchart of an image processing method 300 according to an embodiment of the present disclosure. As shown in fig. 3, the image processing method 300 includes the following steps.
In this embodiment, the data to be rendered may include a model in the 3D scene and attribute information of the model. Illustratively, the model in the 3D scene may include a model of the sky, house, bridge, character, box, or tree, for example. The attribute information of the model may include attribute information of the color, material, and the like of the model.
It should be understood that, during the process of downloading and installing the application, the electronic device downloads and stores the data to be rendered in the application into the electronic device. In the process of running the application, the electronic device may obtain the data to be rendered by loading the data related to the application. In addition, the electronic device may also acquire the data to be rendered by receiving the data sent by the server in the process of browsing the webpage, so that the electronic device can render the image based on the data to be rendered.
That is to say, the electronic device may obtain the data to be rendered by reading local data, or may obtain the data to be rendered by receiving data sent by other devices in real time.
In this embodiment, there may be a plurality of ways to perform rasterization processing on data to be rendered.
In the first mode, the data to be rendered is rasterized by means of forward rendering.
Forward rendering refers to projecting and breaking the geometry in the data to be rendered down into vertices, then transforming and breaking the vertices down into fragments or pixels, and performing the final rendering processing before these fragments or pixels are passed to the screen. Forward rendering is characterized in that the whole process, from the initial geometry to the final image displayed on the screen, is uninterrupted; that is, forward rendering is a linear process.
Briefly, in a 3D scene, the GPU in the electronic device performs illumination calculation on one object according to all light sources to render that object, then renders the next object, and so on. However, for each object to be rendered, the GPU needs to iterate over every fragment of the object in the fragment shader to obtain a rendering result for each fragment. Since the rendering results of most fragments are overwritten by the rendering results of later fragments, forward rendering tends to waste a significant amount of time on rendering useless fragments.
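A schematic forward-rendering loop corresponding to the description above: each object is lit against every light source, one object after another. The interfaces are illustrative placeholders only, not a concrete rendering API.

```cpp
#include <functional>
#include <vector>

struct Light  { /* position, color, intensity, ... */ };
struct Object { /* geometry and material, ... */ };

// Forward rendering, schematically: for each object, run the full lighting
// computation against all light sources and write the result directly to the
// frame; later objects may overwrite fragments already shaded for earlier
// ones, which is where the wasted work mentioned above comes from.
void forwardRender(const std::vector<Object>& objects,
                   const std::vector<Light>& lights,
                   const std::function<void(const Object&, const Light&)>& shadeWithLight) {
    for (const Object& obj : objects) {
        for (const Light& light : lights) {
            shadeWithLight(obj, light);   // per-fragment lighting happens inside
        }
    }
}
```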
In the second mode, the data to be rendered is rasterized by means of deferred rendering.
Deferred rendering refers to a process in which, after the geometry in the data to be rendered is projected and broken down into vertices and the vertices are transformed and broken down into fragments or pixels, various geometric information of the fragments or pixels is obtained and stored in a geometric buffer (G-buffer). The geometric information may include a position vector, a color vector, and/or a normal vector. Finally, illumination calculation is performed on the fragments or pixels based on the geometric information stored in the G-buffer to obtain the final rendering result. In the illumination calculation stage, according to the fragments or pixels that need to be displayed in screen space, the illumination of the scene is computed for the corresponding fragments or pixels using the geometric information in the G-buffer, so as to output an image for display in screen space. Compared with forward rendering, deferred rendering does not need to repeatedly perform illumination calculation on a large number of fragments or pixels, but only performs illumination calculation on the fragments or pixels that need to be displayed in screen space, saving a large number of useless illumination calculation steps.
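A sketch of the deferred (G-buffer) approach just described: a geometry pass fills per-pixel position, normal, and color information, and a separate lighting pass shades only the pixels that end up on screen. The texel layout and names are assumptions for illustration.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

struct Vec3  { float x, y, z; };
struct Color { float r, g, b; };

// Per-pixel geometric information written during the geometry pass
// (the G-buffer described above).
struct GBufferTexel {
    Vec3  position;         // position vector
    Vec3  normal;           // normal vector
    Color albedo;           // color vector
    bool  covered = false;  // was any geometry rasterized into this pixel?
};

// Lighting pass: illumination is computed once per visible pixel using the
// stored geometric information, instead of once per fragment of every object.
std::vector<Color> deferredLightingPass(
        const std::vector<GBufferTexel>& gbuffer,
        const std::function<Color(const GBufferTexel&)>& shade) {
    std::vector<Color> frame(gbuffer.size(), Color{0.0f, 0.0f, 0.0f});
    for (std::size_t i = 0; i < gbuffer.size(); ++i) {
        if (gbuffer[i].covered) frame[i] = shade(gbuffer[i]);
    }
    return frame;
}
```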
The target object in the first image has an identifier, and the identifier is used for marking an object to be subjected to ray tracing, that is, the target object with the identifier is the object on which ray tracing is to be performed. For example, the identifier may be a field whose value is 0 or 1, indicating that the target object needs to undergo ray tracing. For another example, the identifier may be a specific value in a field: when the value of the field is 0, the identifier indicates that the corresponding target object needs to undergo ray tracing; when the value of the field is 1, the corresponding object does not need ray tracing. In short, the identifier of the target object can be realized in various ways: in one way, if the target object has a corresponding specific field, the target object can be considered to have the identifier; in another way, if the target object has a corresponding specific field and the value of the specific field is a preset value, the target object can be considered to have the identifier. This embodiment does not limit how the target object carries the identifier.
Optionally, the identifier may also be used to mark the ray tracing manner, which may include, for example, reflection, refraction, shadow, or caustics. Reflection is the phenomenon in which light traveling toward a different medium changes its propagation direction at the interface and returns to the original medium. For example, in the case where the identifier of the floor in the first image is used to mark reflection, when a ray leaving the floor intersects the vehicle chassis, i.e., the intersection point of the ray lies on the vehicle chassis, the electronic device may consider that the color of the vehicle chassis is reflected on the floor. Reflection may include diffuse reflection or specular reflection. Diffuse reflection is the reflection of light incident on a rough, diffusely reflecting surface into many different directions. Specular reflection means that a beam of parallel incident rays striking a smooth reflecting surface is reflected in a single direction. Refraction is the phenomenon in which light obliquely entering one transparent medium from another changes its propagation direction. A shadow is a dark area formed where light, traveling in straight lines, is blocked by an opaque object. Caustics refer to the phenomenon in which, when light passes through a transparent object, the unevenness of the object's surface causes the refracted rays to no longer be parallel, so that the light is scattered and the photons are dispersed on the projection surface.
For example, the identifier may be a field, and when the value of the field is 0, it indicates that the ray tracing processing mode is reflection; when the value of the field is 1, the ray tracing processing mode is refraction; when the value of the field is 2, the ray tracing processing mode is shadow; when the value of this field is 3, it indicates that the ray tracing processing method is caustic. In short, the electronic device may determine whether the object in the first image is the target object by determining whether the object has the identifier; moreover, the electronic device can know the ray tracing processing mode which needs to be executed by the target object by determining the value of the identifier of the target object.
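The field-value mapping described above, written out as a small helper; the values 0 to 3 mirror the example in the text, while the enum and function names are illustrative assumptions.

```cpp
#include <cstdint>
#include <optional>

enum class RayTracingMode { Reflection = 0, Refraction = 1, Shadow = 2, Caustic = 3 };

// Interpret the identifier field of a pixel or object: values 0..3 select the
// ray tracing manner; any other value means no ray tracing is performed.
std::optional<RayTracingMode> decodeIdentifier(uint32_t field) {
    switch (field) {
        case 0: return RayTracingMode::Reflection;
        case 1: return RayTracingMode::Refraction;
        case 2: return RayTracingMode::Shadow;
        case 3: return RayTracingMode::Caustic;
        default: return std::nullopt;
    }
}
```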
In this way, in the process that the electronic device performs ray tracing on the target object in the first image, the electronic device may determine, according to the identifier of the target object, a ray tracing manner that needs to be performed by the target object, and implement ray tracing based on the ray tracing manner. For example, when the target object in the first image is a floor and the value of the identifier of the floor is 0, the electronic device may perform ray tracing on the floor, and the ray tracing process is reflection.
Optionally, in a case that the identifier is only used for marking the object to be subjected to the ray tracing processing, and the ray tracing processing mode is not marked, the electronic device may perform the ray tracing processing on the target object based on the material parameter of the target object included in the data to be rendered during the execution of the ray tracing processing. Generally, the data to be rendered includes attribute information of a model, the attribute information of the model includes material parameters of the model, and the model is generally divided by the type of the object, so the material parameters of the target object can be obtained through the attribute information of the model corresponding to the target object.
In one possible example, in the data to be rendered acquired by the electronic device, the attribute information of a model may include an identifier corresponding to the model, the identifier marking that ray tracing is to be performed. When the electronic device rasterizes the data to be rendered, it may generate the identifier of the target object in the first image from the identifier of the corresponding model; that is, when the electronic device projects the model into screen space to form the target object in the first image, it records the identifier of the target object based on the identifier of the model. The identifier of a model may be added while the model is designed: for example, the floor model is given an identifier with value 0, the diamond model is given an identifier with value 1, and the wall model is given no identifier. By adding identifiers to specific models according to the expected lighting effect, for example only to models that noticeably improve the lighting effect, the models that need ray tracing can be selected, ray tracing of all models is avoided, and the computing-power requirement of the rendering device is reduced.
In another possible example, the data to be rendered acquired by the electronic device includes a model of the target object, and the attribute information of the model of the target object includes a material parameter of the target object. The electronic device may determine and generate an identification of the target object based on the material parameters of the target object. In this way, after the electronic device performs rasterization processing and obtains the first image, the electronic device may acquire the identifier of the target object in the first image.
For example, when the roughness in the material parameters of the floor is 0, the electronic device may determine and generate an identifier for the floor with value 0, i.e., the ray tracing manner corresponding to the floor is reflection. For another example, when the metallic value in the material parameters of silver tableware is 1, the electronic device may determine and generate an identifier for the silver tableware with value 0, i.e., the ray tracing manner corresponding to the silver tableware is reflection.
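The following sketch illustrates, under assumed material fields and thresholds, how an identifier could be derived from material parameters in the spirit of the floor and silver-tableware examples above; it reuses the RayTracingMode enum from the earlier sketch.

```cpp
#include <optional>

// Illustrative material parameters; the field names and thresholds are
// assumptions for this sketch, not values mandated by the method.
struct Material {
    float roughness;     // 0 = perfectly smooth surface
    float metallic;      // 1 = fully metallic surface
    float transparency;  // 1 = fully transparent
};

// Returns the identifier value, or no value when no ray tracing is needed.
std::optional<RayTracingMode> deriveIdentifier(const Material& m) {
    if (m.roughness == 0.0f || m.metallic == 1.0f)
        return RayTracingMode::Reflection;    // e.g. floor, silver tableware
    if (m.transparency > 0.9f)
        return RayTracingMode::Refraction;    // e.g. a diamond or glass model
    return std::nullopt;                      // no identifier generated
}
```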
It should be appreciated that when the electronic device implements the rasterization processing in a forward rendering manner, it may allocate a dedicated memory space for storing the identifiers of the target objects in the first image. In general, a target object may consist of one or more pixels, and the corresponding identifier needs to be stored for each pixel of the target object.
When the electronic device implements the rasterization processing in a deferred rendering manner, it can, while generating the G-buffers, generate a G-buffer for storing the identifier of the target object based on the identifier of the target object's model. That is, for each pixel in the first image, in addition to the G-buffers storing the pixel's position vector, color vector, and/or normal vector, there is a G-buffer storing the identifier corresponding to that pixel.
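A possible per-pixel G-buffer layout for this deferred path is sketched below; the exact packing, and the use of a reserved value to mean "no identifier", are assumptions for illustration.

```cpp
#include <cstdint>
#include <vector>

// One texel of the deferred-shading G-buffer; 0xFF marks "no identifier".
struct GBufferTexel {
    float position[3];   // world-space position vector
    float normal[3];     // normal vector
    float color[3];      // color vector (albedo)
    std::uint8_t identifier = 0xFF;
};

struct GBuffer {
    int width = 0, height = 0;
    std::vector<GBufferTexel> texels;   // width * height entries
    const GBufferTexel& at(int x, int y) const { return texels[y * width + x]; }
};
```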
In this embodiment, the data to be rendered is first rendered by rasterization to obtain the first image, and the objects carrying the identifier in the first image are then rendered a second time by ray tracing to improve the rendering effect. Because only some of the objects in the image undergo ray tracing, the computing-power requirement of image rendering is reduced, devices with limited computing power can also use ray tracing for image rendering, and the rendering effect is improved.
The process of the electronic device performing hybrid rendering on the image based on the rasterization process and the ray tracing process is described above, and for ease of understanding, the specific process of the electronic device performing the ray tracing process on the image will be described in detail below.
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating a ray tracing process performed on an image according to an embodiment of the present disclosure. As shown in fig. 4, in one possible example, the step 303 may further include the following steps.
In this embodiment, the target object may include one or more target pixels, and the target pixels have the identifier. For the same target object, one or more target pixels included in the target object have the same identification. In an actual process of the ray tracing process, the ray tracing process may be performed on each pixel in the first image as a unit, so as to perform the ray tracing process on the target object.
It should be understood that a model in the data to be rendered may include multiple parts; different parts may have different identifiers, or some parts may have identifiers while others do not, in which case the target object may be understood as a specific part of the model. In short, the material parameters of any part of the same target object are the same, and the one or more pixels corresponding to the target object have the same identifier. For example, a car model includes windows, a housing, tires, and so on; the windows may have a corresponding identifier with value 0, while the housing and tires have no identifier.
Since the target pixel is a pixel in a two-dimensional image (the first image) obtained by projecting a part of the 3D scene, the coordinates of the target pixel can be transformed to obtain the target position of the target pixel in the 3D scene. For example, the electronic device may transform the two-dimensional coordinates of the target pixel in the first image into world coordinates in the 3D scene, thereby determining the target position of the target pixel in the 3D scene.
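A minimal sketch of this coordinate transformation is given below, assuming the GLM math library and an OpenGL-style normalized device coordinate convention.

```cpp
#include <glm/glm.hpp>

// Transform a pixel's 2D coordinates plus its stored depth back into world
// coordinates using the inverse view-projection matrix.
glm::vec3 unprojectToWorld(glm::vec2 pixel, float depth,          // depth in [0, 1]
                           glm::vec2 screenSize, const glm::mat4& viewProj) {
    glm::vec2 ndcXY = pixel / screenSize * 2.0f - 1.0f;           // screen -> [-1, 1]
    glm::vec4 ndc(ndcXY, depth * 2.0f - 1.0f, 1.0f);
    glm::vec4 world = glm::inverse(viewProj) * ndc;               // inverse view-projection
    return glm::vec3(world) / world.w;                            // perspective divide
}
```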
After the target position of the target pixel in the 3D scene is determined, the corresponding ray may be determined based on the identifier of the target pixel. For example, when the identifier of the target pixel is 0, the ray to be traced is determined to be a reflected ray; when the identifier of the target pixel is 1, the ray to be traced is determined to be a refracted ray. By tracing the path of this ray through the 3D scene, the intersection of the ray, after it leaves the target position, with other objects in the 3D scene can be obtained. For example, by tracing the reflected ray leaving the floor until it intersects the chassis of the car in the 3D scene, the intersection can be determined to be the chassis of the car.
It should be understood that, in the ray tracing process, in order to find the intersection of a ray with the scene, each ray needs to be tested for intersection against the objects in the scene, such as spheres, triangles, and other more complex objects. In the related art, every object in the scene is traversed, and the intersected object closest to the ray origin is taken as the intersection of the ray. When the 3D scene is complex and contains many objects, this ray tracing process takes a lot of time. In fact, most objects are very far from the ray and only a small fraction of the objects can possibly intersect it, so it is not necessary to traverse all objects in the scene.
Based on this, in one possible example, the electronic device may acquire an acceleration structure obtained based on the three-dimensional scene, the acceleration structure being used for rapidly finding the intersection point of the rays; and then, the electronic equipment tracks rays through the acceleration structure according to the target position and the identification to obtain an intersection point of the rays and the three-dimensional scene.
For example, the acceleration structure may include a structure such as a Bounding Volume Hierarchy (BVH), a Uniform grid (Uniform grid), or a k-dimensional tree (kd-tree), and the acceleration structure is not specifically limited in this embodiment. The acceleration structure can quickly remove irrelevant objects by utilizing a space division structure, so that the nearest intersection point can be found only by traversing a small part of subsets.
For example, with the above BVH, each object is enclosed in a simple bounding box, and the ray is tested against the bounding box before it is tested against the objects inside it: if the ray does not hit the bounding box, it certainly does not intersect any object inside the box; if the ray hits the bounding box, whether the ray intersects the objects in the box is then computed.
In general, there are many objects in a 3D scene, and the BVH is essentially a binary tree structure for managing them. Referring to fig. 5, fig. 5 is a schematic view of a BVH provided by an embodiment of the present application. As shown in fig. 5, different objects are enclosed by bounding boxes of different sizes, forming a corresponding binary tree structure. Detecting whether a ray intersects an object in the scene essentially traverses this binary tree in order. For example, when the ray is found not to intersect bounding box B in the binary tree, the ray certainly does not intersect the four objects inside B, so the tests against those four objects can be skipped, and only the two objects in bounding box C need to be tested.
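The following sketch illustrates this skip-whole-subtree behavior of BVH traversal; the node layout is an assumption, and the primitive intersection test is only declared as an assumed helper rather than implemented.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

struct Ray  { float origin[3]; float dir[3]; };
struct AABB { float min[3];    float max[3]; };

struct BVHNode {
    AABB box;
    int left = -1, right = -1;    // child node indices; -1/-1 marks a leaf
    std::vector<int> primitives;  // primitive indices stored in a leaf
};

// Standard slab test: does the ray hit the bounding box at all?
bool hitAABB(const Ray& r, const AABB& b) {
    float tmin = 0.0f, tmax = 1e30f;
    for (int a = 0; a < 3; ++a) {
        float inv = 1.0f / r.dir[a];
        float t0 = (b.min[a] - r.origin[a]) * inv;
        float t1 = (b.max[a] - r.origin[a]) * inv;
        if (inv < 0.0f) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

// Assumed primitive test (e.g. ray-triangle), supplied elsewhere.
bool hitPrimitive(const Ray& r, int primitiveIndex, float& tHit);

// Skip a whole subtree as soon as the ray misses its bounding box; only leaves
// whose boxes are hit have their primitives tested.
void traverse(const std::vector<BVHNode>& nodes, int nodeIndex, const Ray& ray,
              float& closestT, int& closestPrim) {
    const BVHNode& node = nodes[nodeIndex];
    if (!hitAABB(ray, node.box)) return;
    if (node.left < 0 && node.right < 0) {
        for (int prim : node.primitives) {
            float t;
            if (hitPrimitive(ray, prim, t) && t < closestT) { closestT = t; closestPrim = prim; }
        }
        return;
    }
    if (node.left  >= 0) traverse(nodes, node.left,  ray, closestT, closestPrim);
    if (node.right >= 0) traverse(nodes, node.right, ray, closestT, closestPrim);
}
```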
After the intersection point of the light and the three-dimensional scene is obtained through tracking, the electronic device can calculate the color of the intersection point, and then the color of the intersection point is fused with the original color of the target pixel based on a ray tracking mode, so that the new color of the target pixel is obtained through updating.
For example, referring to fig. 6, fig. 6 is a schematic diagram of a reflection scene provided in an embodiment of the present application. Under the condition that the target object is a floor, the ray tracing mode is reflection, and the ray intersection point is the shell of the automobile; then, when the color of the car shell is red and the color of the target pixel is light yellow, the color of the target pixel may be updated based on the red color of the car shell and the original light yellow color of the target pixel.
In real-time ray tracing, the number of rays sampled per pixel is usually limited by the available computing power, which introduces noise that degrades the rendering. A common solution is to cache historical image frames and accumulate them with the current frame by reprojection, thereby increasing the number of samples. In this embodiment of the application, the cached historical frame and the current frame can also be used to reduce the shading computation at the ray intersections.
In a possible embodiment, the electronic device may calculate a projection of the intersection point on the image according to a position of the intersection point in the three-dimensional scene, that is, calculate a corresponding pixel point of the intersection point in the 3D scene on the two-dimensional image through coordinate transformation. If the intersection point has a corresponding projection pixel on the first image or the third image, updating the color of the target pixel according to the color of the projection pixel; if the intersection point does not have a corresponding projection pixel on the first image or the third image, calculating the color of the intersection point, and updating the color of the target pixel according to the color of the intersection point; wherein the third image is a previous frame image of the second image. It should be understood that the electronic device may obtain a continuous video picture by displaying the rendered images on the screen frame by frame, while the second image is the image to be currently displayed on the screen, and the third image is the image of the frame displayed before the second image.
In short, in the process of rendering an image by the electronic device, the electronic device does not render all objects in the 3D scene in real time. The electronic device generally renders an object to be displayed on a screen to obtain a rendered image and displays the rendered image on the screen. If the intersection has been rendered and displayed on the image during rendering of the previous frame image (i.e., the third image), or the intersection has been rendered and displayed on the image during rendering of the current frame image (i.e., the first image), the color of the intersection may be determined based on the color of the corresponding pixel point of the intersection on the previous frame image or the current frame image. That is, the color of the intersection is obtained by multiplexing the colors of the pixels on the previous frame image or the current frame image, thereby avoiding recalculation of the color of the intersection and reducing the amount of calculation.
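A minimal sketch of this reuse is shown below, assuming GLM; the frame color lookups and the fallback shading routine are passed in as assumed callbacks, and occlusion checks are omitted for brevity.

```cpp
#include <functional>
#include <optional>
#include <glm/glm.hpp>

// Project a 3D intersection point back onto the screen; returns no value when
// the point lies behind the camera or outside the image.
std::optional<glm::ivec2> projectToScreen(const glm::vec3& worldPos,
                                          const glm::mat4& viewProj, glm::ivec2 screenSize) {
    glm::vec4 clip = viewProj * glm::vec4(worldPos, 1.0f);
    if (clip.w <= 0.0f) return std::nullopt;                           // behind the camera
    glm::vec3 ndc = glm::vec3(clip) / clip.w;
    if (ndc.x < -1.0f || ndc.x > 1.0f || ndc.y < -1.0f || ndc.y > 1.0f)
        return std::nullopt;                                           // off screen
    glm::vec2 uv = glm::vec2(ndc) * 0.5f + 0.5f;
    return glm::ivec2(uv * glm::vec2(screenSize));
}

// Reuse a color already rendered in the current or previous frame when the
// intersection projects onto it; otherwise fall back to shading the hit point.
glm::vec3 intersectionColor(const glm::vec3& hitPos, glm::ivec2 screenSize,
                            const glm::mat4& curVP, const glm::mat4& prevVP,
                            const std::function<std::optional<glm::vec3>(glm::ivec2)>& curFrame,
                            const std::function<std::optional<glm::vec3>(glm::ivec2)>& prevFrame,
                            const std::function<glm::vec3()>& shadeHit) {
    if (auto p = projectToScreen(hitPos, curVP, screenSize))
        if (auto c = curFrame(*p)) return *c;        // reuse current-frame pixel color
    if (auto p = projectToScreen(hitPos, prevVP, screenSize))
        if (auto c = prevFrame(*p)) return *c;       // reuse previous-frame pixel color
    return shadeHit();                               // recalculate the intersection color
}
```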
For example, referring to fig. 7, fig. 7 is a schematic diagram of determining a color of an intersection according to an embodiment of the present application. As shown in fig. 7, the image on the left side is the previous frame image, and the image on the right side is the current frame image. In the previous frame image and the current frame image, the electronic equipment renders the body of the automobile, and the body on the side of the automobile is displayed in the images.
For the floor area marked by the rectangular frame in the current frame image on the right side, in the process of performing ray tracing processing on the area, the intersection point of the reflected ray going out from the floor and the scene is the side of the automobile body, that is, the electronic device needs to determine the color of the side of the automobile body. At this time, the electronic device may calculate projection pixel points of the vehicle body side of the vehicle on the image. Because the side surface of the automobile body can find the corresponding projection pixel points in the previous frame image and the current frame image, the electronic equipment can determine the color of the intersection point of the reflected light and the scene by multiplexing the colors of the projection pixel points of the side surface of the automobile body in the previous frame image or the current frame image, and finally realize the coloring and rendering of the floor.
For example, in the floor area marked by the blue oval frame in the current frame image on the right side, in the process of performing ray tracing processing on the area, the intersection point of the reflected ray going out from the floor and the scene is the chassis of the automobile, that is, the electronic device needs to determine the chassis color of the automobile. However, since there is no projection pixel corresponding to the chassis of the vehicle in the previous frame image and the current frame image, the electronic device needs to recalculate the color of the intersection point of the ray tracing (i.e., the chassis of the vehicle).
For ease of understanding, the process of performing ray tracing processing by the electronic device will be described in detail below with reference to the flowchart.
Referring to fig. 8, fig. 8 is a schematic flowchart illustrating a ray tracing process according to an embodiment of the present disclosure. As shown in fig. 8, the flow of the ray tracing process includes the following steps.
After the electronic device determines the target pixel, it may read the G-buffer corresponding to the target pixel to obtain the position vector, normal vector, and identifier stored there. The position vector determines the position of the target pixel, the normal vector determines the direction of the reflected/refracted ray, and the identifier determines whether the ray is a reflected ray or a refracted ray. Based on the information stored in the G-buffer, the electronic device can compute the reflected/refracted ray leaving the target position where the target pixel is located.
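A possible implementation of this ray construction is sketched below, assuming GLM; the index of refraction and the small origin offset are illustrative assumptions.

```cpp
#include <glm/glm.hpp>

struct TracedRay { glm::vec3 origin; glm::vec3 dir; };

// Build the ray to be traced from the G-buffer contents of the target pixel:
// identifier 0 -> reflected ray, identifier 1 -> refracted ray.
TracedRay makeTracedRay(const glm::vec3& targetPos, const glm::vec3& normal,
                        const glm::vec3& cameraPos, int identifier) {
    glm::vec3 view = glm::normalize(targetPos - cameraPos);   // camera -> surface direction
    glm::vec3 n    = glm::normalize(normal);
    TracedRay ray{targetPos, glm::vec3(0.0f)};
    if (identifier == 0)
        ray.dir = glm::reflect(view, n);                      // reflection
    else if (identifier == 1)
        ray.dir = glm::refract(view, n, 1.0f / 1.5f);         // refraction, assumed IOR 1.5
    ray.origin += ray.dir * 1e-3f;                            // avoid hitting the surface itself
    return ray;
}
```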
The acceleration structure may include, but is not limited to, the structures described above, such as a BVH, a uniform grid, or a kd-tree.
In step 804, if there is no intersection between the ray and the scene, the color of the current pixel is displayed.
If the computed ray has no intersection with the scene, meaning that no reflection or refraction occurs at the target position of the target pixel, no further rendering is needed for the target pixel, and its color is rendered based on the color vector stored in the G-buffer of the current image frame.
In step 805, if there is an intersection point between the ray and the scene, a projection calculation is performed on the intersection point.
Specifically, the electronic device may calculate the projection of the intersection point on the image according to the position of the intersection point in the three-dimensional scene, that is, calculate the pixel point corresponding to the intersection point in the 3D scene on the two-dimensional image through coordinate transformation.
In step 806, the electronic device determines whether a projection pixel corresponding to the intersection exists in the current image frame.
In step 807, if the projected pixel is located in the current image frame, the color of the projected pixel is used as the reflection/refraction color.
If a projection pixel corresponding to the intersection exists in the current image frame, the color of that projection pixel is used as the reflection/refraction color, i.e., it is fused with the color of the target pixel to update the color of the target pixel. The color of the projection pixel can be obtained from the G-buffer of the current image frame.
In step 808, the electronic device determines whether a projection pixel corresponding to the intersection exists in the previous image frame.
In step 809, if the projection pixel is located in the previous image frame, the color of the projection pixel is used as the reflection/refraction color.
If a projection pixel corresponding to the intersection exists in the previous image frame, the color of that projection pixel is used as the reflection/refraction color, i.e., it is fused with the color of the target pixel to update the color of the target pixel. The color of the projection pixel can be obtained from the G-buffer of the previous image frame.
In step 810, if it is determined that no projection pixel corresponding to the intersection exists in the previous image frame either, the color of the intersection is recalculated and used as the reflection/refraction color. The recalculated color of the intersection is fused with the color of the target pixel to update the color of the target pixel.
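The fusion mentioned in the steps above could, for example, be a simple blend of the two colors, as in the following sketch; the blend weight is an illustrative assumption and could instead be driven by the material parameters.

```cpp
#include <glm/glm.hpp>

// Blend the reflection/refraction color into the original color of the target pixel.
glm::vec3 fuseColors(const glm::vec3& pixelColor, const glm::vec3& hitColor,
                     float weight = 0.35f) {
    return glm::mix(pixelColor, hitColor, weight);
}
```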
In order to further reduce the amount of calculation in the rendering process, the following describes a process of performing rasterization processing on data to be rendered by the electronic device according to this embodiment.
In a possible example, referring to fig. 9, fig. 9 is a schematic flowchart illustrating a process of rasterizing data to be rendered according to an embodiment of the present application. As shown in fig. 9, the step 302 may specifically include:
3021, rendering the data to be rendered without light to obtain a fourth image.
In this embodiment, the electronic device executes the rasterization processing in a deferred rendering manner. After obtaining the data to be rendered, the electronic device may perform a preliminary rendering, i.e., rendering without illumination calculation, on the data to be rendered to obtain a fourth image. For the specific steps, reference may be made to the description of step 302 above, which is not repeated here.
3022, according to the attribute information of the data to be rendered, obtaining a geometric buffer area corresponding to the pixel in the fourth image, where the geometric buffer area is used to store the attribute parameter corresponding to the pixel.
In the non-illumination rendering stage, the electronic device may generate a G-buffer corresponding to each pixel in the fourth image according to the attribute information of the data to be rendered, and the G-buffer may store attribute parameters such as a position vector, a normal vector, and a color vector corresponding to each pixel.
In one possible embodiment, the electronic device may generate the G-buffer in a variety of ways.
Before generating the G-buffers corresponding to the pixels in the fourth image, the electronic device may determine the corresponding object to be rendered, i.e., the object to be displayed in the fourth image, then determine the resolution at which the G-buffer is generated according to information about the object to be rendered, and finally generate the G-buffer corresponding to the object to be rendered at that resolution, thereby obtaining the G-buffers corresponding to the pixels in the fourth image.
In a first case, if the object to be rendered in the fourth image is the target object, a first G-buffer corresponding to the object to be rendered is generated according to the attribute information of the object to be rendered and the first resolution, and the first G-buffer is used for storing the color attribute parameters.
If the object to be rendered is the target object, the pixels rendered from it are located in the region of interest in the fourth image. Therefore, for this object, the electronic device may generate the G-buffer at the first resolution, which may be the same as the resolution of the fourth image itself, i.e., the G-buffer corresponding to the object to be rendered is generated at full resolution. In this way, each pixel of the object to be rendered in the fourth image has a corresponding G-buffer. For example, when the resolution of the fourth image is 1000 × 1000, the electronic device generates the first G-buffer corresponding to the object to be rendered at a resolution of 1000 × 1000.
And in the second situation, if the object to be rendered in the fourth image is located in the peripheral area of the target object, generating a second G-buffer corresponding to the object to be rendered according to the attribute information of the object to be rendered and the second resolution, wherein the second G-buffer is used for storing the color attribute parameters.
If the object to be rendered is located in the peripheral region of the target object, it can be considered that the pixels rendered based on the object to be rendered are located in the peripheral region of the region of interest. Therefore, for the object to be rendered, the electronic device may generate a second G-buffer corresponding to the object to be rendered according to a second resolution, where the second resolution is smaller than the first resolution. For example, the second resolution may be 3/4 of the first resolution, i.e. in case the first resolution is 1000 × 1000 resolution, the second resolution may be 750 × 750 resolution. In addition, the second resolution may also be other specific values smaller than the first resolution, and the second resolution is not particularly limited herein.
Specifically, the manner of determining that the object to be rendered is located in the peripheral area of the target object may be to determine whether the object to be rendered is located in the peripheral area of the target object by determining a distance between a pixel corresponding to the object to be rendered and a pixel corresponding to the target object. For example, if the distance between the pixel corresponding to the object to be rendered and the pixel corresponding to the target object is less than the first preset threshold, it may be determined that the object to be rendered is located in the peripheral region of the target object. The first preset threshold may be, for example, 10 pixels, that is, when the distance between the pixel corresponding to the object to be rendered and the pixel corresponding to the target object is less than 10 pixels, it may be determined that the object to be rendered is located in the peripheral area of the target object.
And in a third situation, if the object to be rendered in the fourth image is located in the background area, generating a third G-buffer corresponding to the object to be rendered according to the attribute information of the object to be rendered and the third resolution, wherein the third G-buffer is used for storing the color attribute parameters.
If the object to be rendered is located in the background area, the pixels rendered from it can be considered to be located in the background area. Therefore, for this object, the electronic device may generate a third G-buffer corresponding to the object to be rendered at a third resolution, where the third resolution is smaller than the second resolution. For example, the third resolution may be 1/2 of the first resolution, i.e., when the first resolution is 1000 × 1000, the third resolution may be 500 × 500. The third resolution may also be another value smaller than the second resolution, and is not specifically limited here.
It should be understood that whether the object to be rendered is located in the background area may likewise be determined from the distance between the pixels corresponding to the object to be rendered and the pixels corresponding to the target object. For example, if this distance is greater than a second preset threshold, it may be determined that the object to be rendered is located in the background area. The second preset threshold may be, for example, 50 pixels, i.e., when the distance between the pixels corresponding to the object to be rendered and the pixels corresponding to the target object is greater than 50 pixels, it may be determined that the object to be rendered is located in the background area.
The data to be rendered includes the object to be rendered, the first resolution is greater than the second resolution, the second resolution is greater than the third resolution, and the first G-buffer, the second G-buffer, and the third G-buffer are all used to store color attribute parameters, i.e., color vectors. In other words, only when generating a G-buffer that stores color attribute parameters does the electronic device select one of the three resolutions, according to information about the object to be rendered, to generate the G-buffer corresponding to that object.
The above three cases describe the way in which the electronic device generates G-buffers for storing color attribute parameters, and the following describes the way in which the electronic device generates G-buffers for storing other attribute parameters.
In a fourth case, a fourth G-buffer corresponding to the object to be rendered is generated according to the attribute information of the object to be rendered and a fourth resolution, where the fourth G-buffer is used for storing attribute parameters other than color attribute parameters, and the fourth resolution is smaller than the first resolution.
In brief, when the electronic device generates a G-buffer that is not used for storing the color attribute parameters, the electronic device generates the G-buffer with the fourth resolution. For example, when the electronic device generates the G-buffer for storing the position attribute parameter or the normal vector attribute parameter, the electronic device generates the G-buffer corresponding to the object to be rendered with the fourth resolution no matter whether the object to be rendered is the target object or is located in a peripheral region of the target object.
It should be understood that the higher the resolution used for generating the G-buffer, the more corresponding G-buffers in the image of the same area, i.e. the higher the rendering accuracy; conversely, the lower the resolution used to generate the G-buffers, the less G-buffers corresponding within an image of the same area, i.e., the lower the rendering accuracy. For example, for an image consisting of 1000 by 1000 pixels, if G-buffers are generated at 1000 by 1000 resolution, 1000 by 1000G-buffers can be obtained; if G-buffers are generated at 500 × 500 resolution, 500 × 500G-buffers can be obtained. That is, for a target object in an image, the G-buffer may be generated at a higher resolution to ensure the rendering accuracy of the target object; for the peripheral area and the background area of the target object in the image, the G-buffer may be generated at a lower resolution, so as to reduce the amount of computation of the electronic device, save memory space, and reduce the requirement for input/output (I/O) bandwidth of the electronic device.
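The resolution selection across the four cases above can be summarized by the following sketch; the 10-pixel and 50-pixel thresholds reuse the earlier examples, and treating distances between the two thresholds as peripheral is an assumption.

```cpp
struct Extent { int w; int h; };

enum class Region { Target, Peripheral, Background };

// Classify the object to be rendered relative to the target object.
Region classifyRegion(bool isTargetObject, float distanceToTargetInPixels) {
    if (isTargetObject) return Region::Target;
    if (distanceToTargetInPixels < 10.0f) return Region::Peripheral;  // first preset threshold
    if (distanceToTargetInPixels > 50.0f) return Region::Background;  // second preset threshold
    return Region::Peripheral;                                        // assumed for the in-between range
}

// Pick the G-buffer resolution from the full image resolution.
Extent gbufferResolution(Extent full, bool storesColor, Region region) {
    if (!storesColor) return {full.w / 2, full.h / 2};                // case four (1/2, as in fig. 10)
    switch (region) {
        case Region::Target:     return full;                              // case one: full resolution
        case Region::Peripheral: return {full.w * 3 / 4, full.h * 3 / 4};  // case two: 3/4 resolution
        case Region::Background: return {full.w / 2, full.h / 2};          // case three: 1/2 resolution
    }
    return full;
}
```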
3023, performing illumination calculation on the pixels in the fourth image according to the G-buffer to obtain the first image.
After the G-buffers corresponding to the pixels in the fourth image are obtained, the electronic device may perform illumination calculation on the fourth image based on attribute parameters such as position vectors, normal vectors, and color vectors stored in the G-buffers to obtain the rendered first image.
It is to be understood that when the electronic device generates the G-buffer at a lower resolution, there may not be a corresponding G-buffer for a part of pixels in the fourth image, i.e. the electronic device does not store the attribute parameters corresponding to the part of pixels. At this time, in the process of performing illumination calculation on the fourth image, the electronic device may obtain the attribute parameter corresponding to the pixel without the G-buffer in an interpolation manner, so as to implement illumination calculation on the pixel.
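A simple bilinear interpolation of an attribute stored in a reduced-resolution buffer is sketched below; in practice this would typically be done by texture sampling hardware, so the sketch only illustrates where the missing per-pixel values come from.

```cpp
#include <algorithm>
#include <vector>

// Bilinearly interpolate a scalar attribute from a low-resolution buffer.
float sampleBilinear(const std::vector<float>& buffer, int bw, int bh,
                     float u, float v) {                       // u, v in [0, 1]
    float x = u * (bw - 1), y = v * (bh - 1);
    int x0 = static_cast<int>(x), y0 = static_cast<int>(y);
    int x1 = std::min(x0 + 1, bw - 1), y1 = std::min(y0 + 1, bh - 1);
    float fx = x - x0, fy = y - y0;
    float top    = buffer[y0 * bw + x0] * (1.0f - fx) + buffer[y0 * bw + x1] * fx;
    float bottom = buffer[y1 * bw + x0] * (1.0f - fx) + buffer[y1 * bw + x1] * fx;
    return top * (1.0f - fy) + bottom * fy;
}
```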
Referring to fig. 10, fig. 10 is a schematic flowchart illustrating a process of generating a G-buffer based on adaptive resolution according to an embodiment of the present disclosure. As shown in fig. 10, the process of generating G-buffer based on adaptive resolution includes the following steps.
Because the electronic device needs to generate the G-buffers respectively used for storing different attribute parameters, in the process of generating the G-buffer corresponding to the object to be rendered, the electronic device may first determine whether the current G-buffer to be generated is used for storing the color attribute parameters.
In step 1002, if the G-buffer to be generated is not used for storing color attribute parameters, the G-buffer is generated at 1/2 resolution.
In case that the G-buffer to be generated is not used for storing the color attribute parameters, for example, the G-buffer to be generated is used for storing the location attribute parameters or the normal vector attribute parameters, the electronic device may generate the G-buffer at a lower resolution. For example, the electronic device may generate the G-buffer at 1/2 (i.e., the 1/2 resolution described above) of the original resolution of the image to be generated. Illustratively, in the case where the original resolution of the image to be generated is 1000 × 1000, the 1/2 resolution is 500 × 500 resolution.
In step 1003, if the G-buffer to be generated is used for storing color attribute parameters, it is determined whether the object to be rendered is the target object.
The manner of determining, by the electronic device, whether the object to be rendered is the target object may be determining whether the object to be rendered has a corresponding identifier. If the object to be rendered has the corresponding identifier, determining that the object to be rendered is a target object; if the object to be rendered does not have a corresponding identification, it may be determined that the object to be rendered is not a target object.
In step 1004, if the object to be rendered is the target object, the G-buffer is generated at full resolution.
And if the object to be rendered is the target object, rendering pixels based on the target object to be located in the region of interest in the image. Therefore, for the object to be rendered, the rendering accuracy does not need to be reduced. The full resolution refers to the original resolution of the image to be generated, that is, when the electronic device generates the G-buffer of the object to be rendered, the G-buffer is generated with the normal resolution, so that the rendering accuracy of the target object is ensured.
In step 1006, if the object to be rendered is located in the peripheral area of the target object, the G-buffer is generated at 3/4 resolution.
If the object to be rendered is located in the peripheral region of the target object, it can be considered that the pixels rendered from it are located in the peripheral region of the region of interest. Therefore, for this object, the electronic device may slightly reduce the rendering accuracy to reduce its computational load. For example, the electronic device may generate the G-buffer at 3/4 of the original resolution of the image to be generated (i.e., the 3/4 resolution described above). Illustratively, when the original resolution of the image to be generated is 1000 × 1000, the 3/4 resolution is 750 × 750.
In step 1008, if the object to be rendered is located in the background area, the G-buffer is generated at 1/2 resolution.
If the object to be rendered is located in the background area, it can be considered that the pixel rendered based on the object to be rendered is located in the background area with lower attention. Therefore, for the object to be rendered, the electronic device may further reduce the rendering accuracy to reduce the computational load of the electronic device. For example, the electronic device may generate the G-buffer at 1/2 of the original resolution of the image to be generated to further reduce the computational load of the electronic device.
In order to further reduce the amount of calculation in the rendering process, the following describes a process of acquiring data to be rendered by the electronic device according to the embodiment.
In one possible embodiment, the electronic device acquiring the data to be rendered may include: the electronic device acquires the 3D scene data and a fifth image sent by the server, where the fifth image is a rendered background image. That is, the server may render the background area in the 3D scene and deliver the rendered background image to the electronic device over the network. In this way, the electronic device only needs to render the non-background part of the 3D scene and fuse the rendered result with the background image delivered by the server to obtain a complete rendered image.
The background image sent by the server is an image including only a background area, that is, the background image includes only a distant background. For example, the server may render the background of the sky, a mountain, the sea, or a distant high-rise building, and obtain a corresponding background image.
For example, in one possible scenario, a game application may be run in the electronic device, and the server may render a background area in the 3D scene in real time, obtain a background image, and send the background image to the electronic device. The electronic equipment renders a non-background area in the 3D scene in the process of running the game application, and obtains a rendered image by combining a background image issued by the server so as to display the rendered image on a screen.
Optionally, when the game application run by the electronic device is a multi-player online network game, the background image rendered by the server may also be delivered to a plurality of different electronic devices. Each electronic device then performs its own personalized rendering according to the content it actually needs to display, so that different images are shown on different screens.
Specifically referring to fig. 11, fig. 11 is a schematic flowchart illustrating a process of rendering and issuing a background image on a server according to an embodiment of the present application. As shown in fig. 11, the process of rendering and issuing a background image on a server includes the following steps.
In one possible example, when the server renders the background area in the 3D scene, it may generate six maps, which can be used to compose a cube map as the background image. In brief, a cube map is a texture containing six 2D textures, each of which forms one face of a cube, so that a textured cube is obtained. The cube map may contain all background areas in the 3D scene, i.e., the objects of the 3D scene can be considered to be wrapped inside the cube map. Illustratively, referring to fig. 11, the background image generated by the server is composed of six maps, and each map forms one face of the cube, forming a cube map. The cube map includes background areas such as distant high-rise buildings, lights, and the night sky, and the objects in the non-background areas of the 3D scene are all wrapped inside the cube map.
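For illustration, the following sketch shows how a lookup direction selects one of the six faces of such a cube map; it is a standard face-selection rule rather than anything specific to this embodiment.

```cpp
#include <cmath>
#include <glm/glm.hpp>

enum class CubeFace { PosX, NegX, PosY, NegY, PosZ, NegZ };

// Pick the cube-map face hit by a lookup direction (largest-magnitude axis wins).
CubeFace selectFace(const glm::vec3& d) {
    float ax = std::fabs(d.x), ay = std::fabs(d.y), az = std::fabs(d.z);
    if (ax >= ay && ax >= az) return d.x > 0.0f ? CubeFace::PosX : CubeFace::NegX;
    if (ay >= az)             return d.y > 0.0f ? CubeFace::PosY : CubeFace::NegY;
    return                           d.z > 0.0f ? CubeFace::PosZ : CubeFace::NegZ;
}
```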
Optionally, when the light source of the 3D scene changes or the background area of the 3D scene is updated, the server may re-render the changed 3D scene to obtain a new background image, so as to update the background image in real time.
The electronic device may also need to perform ray tracing during rendering to determine whether a ray intersects an object in the background area. Therefore, in addition to the background image corresponding to the background area, the electronic device may also need to obtain the diffuse irradiance corresponding to the background objects in the background image, so that the electronic device can carry out the ray tracing processing.
Specifically, the irradiance corresponding to the background image may be calculated based on the reflection equation. Since the diffuse term k_d and the specular term k_s are independent of each other, the reflection equation can be split into two integrals, as shown in equation 1:

L_o(p, ω_o) = k_d ∫_Ω L_i(p, ω_i)(ω_i · n) dω_i + k_s ∫_Ω f_r(p, ω_i, ω_o) L_i(p, ω_i)(ω_i · n) dω_i    (1)

where L_o(p, ω_o) denotes the irradiance reflected from the point p on which the light is projected, viewed in the direction ω_o; L_i(p, ω_i) denotes the radiance arriving at point p through an infinitesimal solid angle ω_i, i.e., the incident light intensity at point p, and the solid angle can be regarded as the incident direction vector ω_i; n denotes the normal; (ω_i · n) denotes the attenuation of the incident light with the angle of incidence, the dot denoting a dot product; ∫_Ω denotes the integral of the incident ray vector over the hemisphere of incident directions; f_r(p, ω_i, ω_o) denotes the specular reflectance term described below; k_s denotes the specular scaling factor, k_d denotes the diffuse scaling factor, and k_s + k_d ≤ 1.
At step 1103, the server computes a pre-filtered ambient light map.
In this embodiment, because the electronic device performs ray tracing on the target object in a specific manner, it mainly focuses on the specular reflection part of the reflection equation, i.e., the second half of the right-hand side of equation 1. Rewriting that second half gives equation 2:

L_o(p, ω_o) = ∫_Ω f_r(p, ω_i, ω_o) L_i(p, ω_i)(ω_i · n) dω_i    (2)

where f_r is the reflectance term, for which a bidirectional reflectance distribution function (BRDF) is generally used. Since the integrand depends not only on the incident light ω_i but also on the outgoing light ω_o, the cube map cannot be sampled with two direction vectors at once. In this implementation, a split-sum approximation is adopted: the pre-computation is divided into two independent parts that are solved separately and then combined to obtain the pre-computation result. Specifically, the way the split-sum approximation divides the pre-computation into two separate parts can be expressed as equation 3:

L_o(p, ω_o) ≈ ∫_Ω L_i(p, ω_i) dω_i · ∫_Ω f_r(p, ω_i, ω_o)(ω_i · n) dω_i    (3)
the pre-calculation result may be as shown in fig. 12, and fig. 12 is a schematic diagram of a pre-filtered ambient light map according to an embodiment of the present disclosure. The first part of the convolution calculation is called a pre-filtered environment map, which can be pre-calculated in advance, and in this embodiment, the electronic device can obtain the filter maps of different levels by the roughness value.
In step 1104, the server calculates a lookup table (LUT) corresponding to the BRDF integration map.
The BRDF part in the second half of equation 3, i.e., the specular reflection integral, can also be pre-computed to obtain a pre-computation result, namely the LUT corresponding to the BRDF integration map. At run time, the electronic device can then use the roughness of a given surface and the angle n·ω_i between the incident ray and the normal to look up the corresponding BRDF integration value in the LUT.
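At run time, the two pre-computed parts are typically combined as in the following sketch; the sampler callbacks, the F0 term, and the roughness-to-level mapping are assumptions for illustration.

```cpp
#include <functional>
#include <glm/glm.hpp>

// Combine the pre-filtered environment map (sampled at a roughness-dependent level)
// with the BRDF LUT (sampled with n.w_i and roughness) as in the split-sum approximation.
glm::vec3 specularIBL(const glm::vec3& reflectDir, float roughness, float nDotWi,
                      const glm::vec3& F0, float maxLevel,
                      const std::function<glm::vec3(const glm::vec3&, float)>& prefilteredEnv,
                      const std::function<glm::vec2(float, float)>& brdfLUT) {
    glm::vec3 prefiltered = prefilteredEnv(reflectDir, roughness * maxLevel); // first factor
    glm::vec2 ab = brdfLUT(nDotWi, roughness);                                // second factor
    return prefiltered * (F0 * ab.x + ab.y);                                  // split-sum combine
}
```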
In step 1105, the server sends the background image and related data to the electronic device.
After the server obtains the background image through rendering and calculates the corresponding pre-calculation result, the server may send the background image and the corresponding pre-calculation result to the electronic device.
In step 1106, the electronic device performs image rendering based on the background image and the related data to obtain an image to be displayed.
Finally, the electronic device may perform preliminary rendering based on the background image to obtain the first image, and perform ray tracing processing based on the related data and the first image to obtain a second image for display.
For convenience of understanding, the method for rendering an image by end cloud combination provided by the embodiments of the present application will be described below with reference to the accompanying drawings. Referring to fig. 13, fig. 13 is a schematic flowchart of an image rendering method by end-cloud combination according to an embodiment of the present disclosure. As shown in fig. 13, the end-cloud-combined image rendering method includes the following steps.
In step 1301, the cloud server may determine a background area in the 3D scene and then render the background area based on light sources such as static and dynamic light sources to obtain an ambient light map, which may be the cube map described above. Optionally, when the light source of the 3D scene changes or the background area of the 3D scene is updated, the server may re-render the changed 3D scene to obtain a new ambient light map, so as to update the ambient light map in real time.
In step 1302, the electronic device performs rasterization processing based on the data to be rendered of the 3D scene and the ambient light map issued by the server to obtain a first image and a G-buffer for storing attribute parameters of pixels in the first image.
After obtaining the ambient light map issued by the server, the electronic device may perform rasterization processing based on the data to be rendered of the local 3D scene and the received ambient light map to obtain a first image and a G-buffer, where attribute parameters of pixels in the first image, such as location vector attribute parameters, color vector attribute parameters, normal vector attribute parameters, and the like, are stored in the G-buffer.
In step 1303, the electronic device constructs an acceleration structure.
In order to speed up the ray tracing processing performed by the electronic device, the electronic device may construct the acceleration structure based on the 3D scene, where the acceleration structure may include structures such as a BVH, a uniform grid, or a kd-tree.
In step 1304, the electronic device performs ray tracing processing based on the first image and the ambient light map to obtain a second image.
After the acceleration structure is constructed, the electronic device may perform ray tracing processing through the acceleration structure based on the first image and the ambient light map, and find an intersection point of the ray and the 3D scene. Then, the color of the corresponding pixel in the first image is updated based on the color of the intersection, thereby obtaining a second image.
In step 1305, the electronic device denoises the second image to obtain the image to be displayed.
The second image may contain noise because the electronic device typically limits the number of rays sampled per pixel during ray tracing. Therefore, the electronic device can denoise the second image with a denoising algorithm to obtain the denoised image (i.e., the image to be displayed) and display it on the screen. The denoising algorithm may include, for example, a temporal (time-domain) denoising algorithm.
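One step of such a temporal denoiser could look like the following sketch; the blend factor and the history-validity handling are illustrative assumptions.

```cpp
#include <glm/glm.hpp>

// One step of temporal accumulation: blend the current noisy pixel with its
// reprojected history; on disocclusion the history is discarded.
glm::vec3 temporalAccumulate(const glm::vec3& current, const glm::vec3& history,
                             bool historyValid, float alpha = 0.1f) {
    return historyValid ? glm::mix(history, current, alpha) : current;
}
```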
Referring to fig. 14, fig. 14 is a schematic diagram of a hybrid rendering pipeline according to an embodiment of the present disclosure.
In block 1401, the first rendering pass, i.e., the rasterization process, is performed by the vertex shader and the fragment shader. And the vertex shader and the fragment shader perform initial rendering on data to be rendered to obtain a first image. That is, the vertex shader and the fragment shader in the module 1401 execute the above step 1302 to obtain the first image and the G-buffer for storing the attribute parameters of the pixels in the first image. Referring to fig. 15(a), fig. 15(a) is a first image after rasterization processing according to an embodiment of the present application. As shown in fig. 15(a), in the first image obtained after the rasterization processing is performed, there is no reflection of the vehicle on the floor, and the reflection effect of the floor is not rendered on the first image.
In block 1402, the electronic device generates and stores the G-buffers corresponding to the first image. The G-buffers may store information such as the identifier, world coordinates, normal vector, and color corresponding to each pixel in the first image. The world coordinates corresponding to the pixels in the first image may be stored as a screen-space map, or may be derived from the depth map of the rendering pipeline combined with the inverse of the view-projection matrix.
In block 1403, the CPU or GPU of the electronic device builds the acceleration structure and obtains global vertex information for shading for ray tracing processing.
In block 1404, a second rendering pass, ray tracing, is performed by the compute shader or the fragment shader. The ray tracing effects of reflection, refraction, or shadow for local objects are realized through a compute shader or a fragment shader. Referring to fig. 15(b), fig. 15(b) is the second image after the ray tracing process according to the embodiment of the present disclosure. As shown in fig. 15(b), in the second image obtained after the ray tracing process, the reflection of the vehicle is present on the floor, i.e., the reflection effect of the floor is rendered in the second image.
Optionally, in the case where the electronic device implements ray tracing processing based on a compute shader, in block 1405, the rendered image is further processed by a full-screen vertex shader and a full-screen fragment shader to obtain an image for display on the screen.
On the basis of the embodiments corresponding to fig. 1 to fig. 15(b), in order to better implement the above-mentioned scheme of the embodiments of the present application, the following also provides related equipment for implementing the above-mentioned scheme. Specifically, referring to fig. 16, fig. 16 is a schematic structural diagram of an electronic device 1600 provided in an embodiment of the present application, where the electronic device 1600 includes: an acquisition unit 1601 and a processing unit 1602. The obtaining unit 1601 is configured to obtain data to be rendered; the processing unit 1602 is configured to perform rasterization processing on the data to be rendered to obtain a first image; the processing unit 1602 is further configured to perform ray tracing processing on the target object in the first image to obtain a second image; wherein the target object has an identifier for marking an object on which ray tracing processing is to be performed.
In one possible implementation, the identifier is further used to mark the ray tracing processing manner.
In one possible implementation, the ray tracing processing manner includes reflection, refraction, shadow, or caustics.
In a possible implementation manner, the processing unit 1602 is further configured to: performing ray tracing on the target object in the first image according to the identifier of the target object to obtain a ray tracing result; and updating the color of the target object in the first image according to the ray tracing result to obtain a second image.
In a possible implementation manner, the processing unit 1602 is further configured to determine a target pixel in the first image, where the target pixel has the identifier, and the target object includes one or more target pixels; the obtaining unit 1601 is further configured to obtain a target position of the target pixel in the three-dimensional scene; the processing unit 1602 is further configured to perform ray tracing according to the target position and the identifier, so as to obtain an intersection point of a ray and the three-dimensional scene; the processing unit 1602 is further configured to update the color of the target pixel according to the color of the intersection.
In a possible implementation manner, the processing unit 1602 is further configured to: calculating the projection of the intersection point on the image according to the position of the intersection point in the three-dimensional scene; if the intersection point has a corresponding projection pixel on the first image or the third image, updating the color of the target pixel according to the color of the projection pixel; if the intersection point does not have a corresponding projection pixel on the first image or the third image, calculating the color of the intersection point, and updating the color of the target pixel according to the color of the intersection point; wherein the third image is a previous frame image of the second image.
In a possible implementation manner, the obtaining unit 1601 is further configured to obtain an acceleration structure, where the acceleration structure is obtained based on the three-dimensional scene; the processing unit 1602 is further configured to perform ray tracing through the acceleration structure according to the target position and the identifier, so as to obtain an intersection point of the ray and the three-dimensional scene.
In a possible implementation manner, the processing unit 1602 is further configured to: rendering the data to be rendered without illumination to obtain a fourth image; obtaining a geometric buffer area corresponding to a pixel in the fourth image according to the attribute information of the data to be rendered, wherein the geometric buffer area is used for storing attribute parameters corresponding to the pixel; and performing illumination calculation on the pixels in the fourth image according to the geometric buffer area to obtain the first image.
In a possible implementation manner, the processing unit 1602 is further configured to: if the object to be rendered in the fourth image is the target object, generating a first geometric buffer area corresponding to the object to be rendered according to the attribute information of the object to be rendered and the first resolution; if the object to be rendered in the fourth image is located in the peripheral area of the target object, generating a second geometric buffer area corresponding to the object to be rendered according to the attribute information and the second resolution of the object to be rendered, and if the object to be rendered in the fourth image is located in the background area, generating a third geometric buffer area corresponding to the object to be rendered according to the attribute information and the third resolution of the object to be rendered; wherein the data to be rendered comprises the objects to be rendered, the first resolution is greater than the second resolution, the second resolution is greater than the third resolution, and the first geometric buffer, the second geometric buffer, and the third geometric buffer are used to store color attribute parameters.
In a possible implementation manner, the processing unit 1602 is further configured to: generating a fourth geometric buffer area corresponding to the object to be rendered according to the attribute information of the object to be rendered and a fourth resolution, wherein the attribute parameters used for storing the fourth geometric buffer area are not color attribute parameters; the fourth resolution is less than the first resolution.
In a possible implementation manner, the obtaining unit 1601 is further configured to obtain three-dimensional scene data and a fifth image sent by the server, where the fifth image is a rendered background image.
In a possible implementation manner, the data to be rendered includes the target object and a material parameter of the target object; the processing unit 1602 is further configured to determine the identifier of the target object according to the material parameter of the target object.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.
Claims (16)
1. An image processing method, comprising:
acquiring data to be rendered;
rasterizing the data to be rendered to obtain a first image;
performing ray tracing processing on a target object in the first image to obtain a second image;
wherein the target object has an identifier for marking an object to be subjected to ray tracing processing.
2. The image processing method of claim 1, wherein the identifier is further used to mark a ray tracing processing manner.
3. The image processing method of claim 2, wherein the ray tracing processing manner comprises reflection, refraction, shadow, or caustics.
4. The image processing method according to any one of claims 1 to 3, wherein performing ray tracing on the target object in the first image includes:
acquiring the position of a target object in the first image in a three-dimensional scene;
executing ray tracing processing according to the position of the target object in the three-dimensional scene to obtain a ray tracing result;
and updating the color of the target object in the first image according to the ray tracing result to obtain the second image.
5. The image processing method according to claim 2 or 3, wherein performing ray tracing on the target object in the first image to obtain the second image comprises:
performing ray tracing on the target object in the first image according to the identifier of the target object to obtain a ray tracing result;
and updating the color of the target object in the first image according to the ray tracing result to obtain the second image.
6. The image processing method of claim 5, wherein performing ray tracing on the target object in the first image according to the identifier of the target object to obtain a ray tracing result comprises:
determining a target pixel in the first image, the target pixel having the identifier, the target object including one or more of the target pixels;
acquiring a target position of the target pixel in a three-dimensional scene;
performing ray tracing according to the target position and the identifier to obtain an intersection point of the ray and the three-dimensional scene;
the updating the color of the target object in the first image according to the ray tracing result includes:
and updating the color of the target pixel according to the color of the intersection point.
7. The image processing method according to claim 6, wherein the updating the color of the target pixel according to the color of the intersection includes:
calculating the projection of the intersection point on the image according to the position of the intersection point in the three-dimensional scene;
if the intersection point has a corresponding projection pixel on the first image or the third image, updating the color of the target pixel according to the color of the projection pixel;
if the intersection point does not have a corresponding projection pixel on the first image or the third image, calculating the color of the intersection point, and updating the color of the target pixel according to the color of the intersection point;
wherein the third image is a previous frame image of the second image.
8. The image processing method according to claim 6 or 7, wherein the ray tracing according to the target position and the identifier to obtain an intersection point of a ray and the three-dimensional scene comprises:
obtaining an acceleration structure, wherein the acceleration structure is obtained based on the three-dimensional scene;
and tracking rays through the acceleration structure according to the target position and the identification to obtain an intersection point of the rays and the three-dimensional scene.
9. The image processing method according to any one of claims 1 to 8, wherein the rasterizing the data to be rendered to obtain the first image includes:
rendering the data to be rendered without illumination to obtain a fourth image;
obtaining a geometric buffer area corresponding to a pixel in the fourth image according to the attribute information of the data to be rendered, wherein the geometric buffer area is used for storing attribute parameters corresponding to the pixel;
and performing illumination calculation on the pixels in the fourth image according to the geometric buffer area to obtain the first image.
10. The image processing method according to claim 9, wherein obtaining a geometric buffer corresponding to a pixel in the fourth image according to the attribute information of the data to be rendered comprises:
if the object to be rendered in the fourth image is the target object, generating a first geometric buffer area corresponding to the object to be rendered according to the attribute information of the object to be rendered and the first resolution;
if the object to be rendered in the fourth image is located in the peripheral area of the target object, generating a second geometric buffer area corresponding to the object to be rendered according to the attribute information of the object to be rendered and a second resolution; and
if the object to be rendered in the fourth image is located in the background area, generating a third geometric buffer area corresponding to the object to be rendered according to the attribute information of the object to be rendered and the third resolution;
wherein the data to be rendered comprises the object to be rendered, the first resolution is greater than the second resolution, the second resolution is greater than the third resolution, and the first geometric buffer, the second geometric buffer, and the third geometric buffer are used to store color attribute parameters.
11. The image processing method according to claim 10, wherein obtaining a geometric buffer corresponding to a pixel in the fourth image according to the attribute information of the data to be rendered further comprises:
generating a fourth geometric buffer area corresponding to the object to be rendered according to the attribute information of the object to be rendered and a fourth resolution, wherein the fourth geometric buffer area is used to store attribute parameters other than color attribute parameters;
and the fourth resolution is less than the first resolution.
12. The image processing method according to any one of claims 1 to 11, wherein the acquiring data to be rendered includes:
and acquiring three-dimensional scene data and a fifth image sent by the server, wherein the fifth image is a rendered background image.
13. The image processing method according to any one of claims 1 to 12, wherein the data to be rendered includes the target object and material parameters of the target object;
the method further comprises the following steps:
and determining the identification of the target object according to the material parameters of the target object.
14. An electronic device comprising a memory and a processor, wherein the memory stores code and the processor is configured to execute the code, and when the code is executed, the electronic device performs the method of any one of claims 1 to 13.
15. A computer readable storage medium comprising computer readable instructions which, when run on a computer, cause the computer to perform the method of any of claims 1 to 13.
16. A computer program product comprising computer readable instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 13.
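Read together, the claims describe a hybrid rendering pipeline: rasterize everything once, then apply ray tracing only to objects that carry the identifier. The sketches below are illustrative only and are not part of the claim language; all class, function, and parameter names are assumed placeholders rather than an actual engine API. First, an outline of claim 1, with the raster pass and the per-object ray tracing pass left as injected callables:

```python
from dataclasses import dataclass
from typing import Callable, List

import numpy as np


@dataclass
class SceneObject:
    name: str
    identifier: int  # 0: rasterize only; non-zero: perform ray tracing (claims 1 and 2)


def render_frame(objects: List[SceneObject],
                 rasterize: Callable[[List[SceneObject]], np.ndarray],
                 ray_trace_object: Callable[[np.ndarray, SceneObject], np.ndarray]) -> np.ndarray:
    """Hybrid pipeline: one raster pass for everything, then ray tracing for marked objects."""
    first_image = rasterize(objects)       # rasterize the data to be rendered
    second_image = first_image.copy()
    for obj in objects:
        if obj.identifier:                 # only objects carrying the identifier
            second_image = ray_trace_object(second_image, obj)
    return second_image
```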
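Claim 6 narrows the ray tracing work to target pixels, i.e. pixels of the first image that carry the identifier, using their positions in the three-dimensional scene as ray origins. A schematic per-pixel loop, with the buffer layouts assumed for illustration:

```python
import numpy as np


def trace_marked_pixels(first_image: np.ndarray,      # H x W x 3 rasterized colors
                        id_buffer: np.ndarray,        # H x W identifiers, 0 = not marked
                        position_buffer: np.ndarray,  # H x W x 3 world-space positions
                        trace):                       # callable(origin, flags) -> color or None
    """Ray trace only the target pixels, i.e. the pixels that carry the identifier."""
    second_image = first_image.copy()
    for y, x in np.argwhere(id_buffer != 0):          # target pixels of the target object
        origin = position_buffer[y, x]                # target position in the 3D scene
        hit_color = trace(origin, int(id_buffer[y, x]))
        if hit_color is not None:                     # intersection with the scene found
            second_image[y, x] = hit_color            # update the color of the target pixel
    return second_image
```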
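Claim 7 avoids shading every intersection point from scratch: the intersection is projected back onto the first image (current frame) or the third image (previous frame), and the already-rendered color of the projection pixel is reused when one exists. A sketch of that decision, assuming a conventional 4x4 view-projection matrix and a fallback shading callable:

```python
import numpy as np


def color_from_intersection(hit_pos: np.ndarray,     # intersection point in the 3D scene
                            view_proj: np.ndarray,   # 4x4 matrix of the frame being reused
                            image: np.ndarray,       # first image (current) or third image (previous)
                            shade_hit):              # fallback: compute the color directly
    """Reuse an already rendered color when the intersection projects into the image."""
    clip = view_proj @ np.append(hit_pos, 1.0)
    if clip[3] <= 0.0:
        return shade_hit(hit_pos)                    # behind the camera: no projection pixel
    ndc = clip[:2] / clip[3]                         # normalized device coordinates in [-1, 1]
    h, w = image.shape[:2]
    x = int((ndc[0] * 0.5 + 0.5) * (w - 1))
    y = int((1.0 - (ndc[1] * 0.5 + 0.5)) * (h - 1))
    if 0 <= x < w and 0 <= y < h:
        return image[y, x]                           # projection pixel exists: reuse its color
    return shade_hit(hit_pos)                        # no projection pixel: shade from scratch
```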
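Claim 8 traces rays through an acceleration structure derived from the three-dimensional scene so that most geometry is never tested exactly. The sketch below substitutes a flat list of axis-aligned bounding boxes for a full hierarchy such as a BVH, which is a deliberate simplification for illustration:

```python
import numpy as np


def ray_aabb_hit(origin, direction, box_min, box_max) -> bool:
    """Slab test: does the ray enter the axis-aligned bounding box?"""
    inv = 1.0 / np.where(direction == 0.0, 1e-12, direction)
    t0, t1 = (box_min - origin) * inv, (box_max - origin) * inv
    t_near = np.max(np.minimum(t0, t1))
    t_far = np.min(np.maximum(t0, t1))
    return t_far >= max(t_near, 0.0)


def trace_through_acceleration(origin, direction, nodes, intersect_object):
    """Skip any object whose bounding box the ray never enters; return the closest hit."""
    closest = None
    for box_min, box_max, obj in nodes:              # nodes: list of (aabb_min, aabb_max, object)
        if ray_aabb_hit(origin, direction, box_min, box_max):
            hit = intersect_object(origin, direction, obj)   # exact ray/object intersection
            if hit is not None and (closest is None or hit[0] < closest[0]):
                closest = hit                        # hit: (distance t, intersection data)
    return closest
```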
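Claim 9 describes a deferred approach: the data to be rendered is first drawn without illumination, per-pixel attributes are written into geometric buffers, and illumination is then computed per pixel from those buffers. A minimal lighting pass, under the assumption of a single directional light and a Lambertian term only:

```python
import numpy as np


def lighting_pass(albedo: np.ndarray,      # H x W x 3, read from the color geometric buffer
                  normals: np.ndarray,     # H x W x 3, read from the non-color geometric buffer
                  light_dir: np.ndarray,   # direction towards a single directional light
                  light_color: np.ndarray  # RGB intensity of that light
                  ) -> np.ndarray:
    """Deferred illumination: compute lighting per pixel from geometric-buffer contents."""
    l = light_dir / np.linalg.norm(light_dir)
    n_dot_l = np.clip(np.einsum("hwc,c->hw", normals, l), 0.0, 1.0)
    return albedo * light_color * n_dot_l[..., None]   # Lambertian diffuse term only
```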
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011379098.8A CN114581589A (en) | 2020-11-30 | 2020-11-30 | Image processing method and related device |
PCT/CN2021/133414 WO2022111619A1 (en) | 2020-11-30 | 2021-11-26 | Image processing method and related apparatus |
EP21897119.0A EP4242973A4 (en) | 2020-11-30 | 2021-11-26 | Image processing method and related apparatus |
US18/323,977 US20230316633A1 (en) | 2020-11-30 | 2023-05-25 | Image processing method and related apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011379098.8A CN114581589A (en) | 2020-11-30 | 2020-11-30 | Image processing method and related device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114581589A true CN114581589A (en) | 2022-06-03 |
Family
ID=81753721
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011379098.8A Pending CN114581589A (en) | 2020-11-30 | 2020-11-30 | Image processing method and related device |
Country Status (4)
Country | Link |
---|---|
US (1) | US20230316633A1 (en) |
EP (1) | EP4242973A4 (en) |
CN (1) | CN114581589A (en) |
WO (1) | WO2022111619A1 (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116051713A (en) * | 2022-08-04 | 2023-05-02 | 荣耀终端有限公司 | Rendering method, electronic device, and computer-readable storage medium |
CN116051704A (en) * | 2022-08-29 | 2023-05-02 | 荣耀终端有限公司 | Rendering method and device |
CN116402935A (en) * | 2023-03-28 | 2023-07-07 | 北京拙河科技有限公司 | Image synthesis method and device based on ray tracing algorithm |
CN116681814A (en) * | 2022-09-19 | 2023-09-01 | 荣耀终端有限公司 | Image rendering method and electronic equipment |
CN116681811A (en) * | 2022-09-19 | 2023-09-01 | 荣耀终端有限公司 | Image rendering method, electronic device and readable medium |
CN116704101A (en) * | 2022-09-09 | 2023-09-05 | 荣耀终端有限公司 | Pixel filling method and terminal based on ray tracing rendering |
WO2024027286A1 (en) * | 2022-08-04 | 2024-02-08 | 荣耀终端有限公司 | Rendering method and apparatus, and device and storage medium |
CN117557701A (en) * | 2022-08-03 | 2024-02-13 | 荣耀终端有限公司 | Image rendering method and electronic equipment |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7387867B2 (en) * | 2019-07-19 | 2023-11-28 | ビーエーエスエフ コーティングス ゲゼルシャフト ミット ベシュレンクテル ハフツング | Method and system for simulating texture characteristics of coatings |
US20230154101A1 (en) * | 2021-11-16 | 2023-05-18 | Disney Enterprises, Inc. | Techniques for multi-view neural object modeling |
CN115952139B (en) * | 2023-03-14 | 2023-06-30 | 武汉芯云道数据科技有限公司 | Multi-frame three-dimensional image processing method and system for mobile equipment |
CN116091684B (en) * | 2023-04-06 | 2023-07-07 | 杭州片段网络科技有限公司 | WebGL-based image rendering method, device, equipment and storage medium |
CN116563445B (en) * | 2023-04-14 | 2024-03-19 | 深圳崇德动漫股份有限公司 | Cartoon scene rendering method and device based on virtual reality |
CN117876564B (en) * | 2024-01-15 | 2024-09-17 | 联想新视界(北京)科技有限公司 | Image processing method and related equipment |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8570322B2 (en) * | 2009-05-12 | 2013-10-29 | Nvidia Corporation | Method, system, and computer program product for efficient ray tracing of micropolygon geometry |
KR20100132605A (en) * | 2009-06-10 | 2010-12-20 | 삼성전자주식회사 | Apparatus and method for hybrid rendering |
CN102855655A (en) * | 2012-08-03 | 2013-01-02 | 吉林禹硕动漫游戏科技股份有限公司 | Parallel ray tracing rendering method based on GPU (Graphic Processing Unit) |
JP2016071733A (en) * | 2014-09-30 | 2016-05-09 | キヤノン株式会社 | Image processor and image processing method |
US10262456B2 (en) * | 2015-12-19 | 2019-04-16 | Intel Corporation | Method and apparatus for extracting and using path shading coherence in a ray tracing architecture |
CN107256574A (en) * | 2017-05-31 | 2017-10-17 | 宝珑珠宝设计(北京)有限公司 | A kind of real-time hybrid rending methods of true 3D |
- 2020-11-30 CN CN202011379098.8A patent/CN114581589A/en active Pending
- 2021-11-26 EP EP21897119.0A patent/EP4242973A4/en active Pending
- 2021-11-26 WO PCT/CN2021/133414 patent/WO2022111619A1/en unknown
- 2023-05-25 US US18/323,977 patent/US20230316633A1/en active Pending
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117557701A (en) * | 2022-08-03 | 2024-02-13 | 荣耀终端有限公司 | Image rendering method and electronic equipment |
CN116051713B (en) * | 2022-08-04 | 2023-10-31 | 荣耀终端有限公司 | Rendering method, electronic device, and computer-readable storage medium |
CN116051713A (en) * | 2022-08-04 | 2023-05-02 | 荣耀终端有限公司 | Rendering method, electronic device, and computer-readable storage medium |
WO2024027286A1 (en) * | 2022-08-04 | 2024-02-08 | 荣耀终端有限公司 | Rendering method and apparatus, and device and storage medium |
CN116051704A (en) * | 2022-08-29 | 2023-05-02 | 荣耀终端有限公司 | Rendering method and device |
CN116704101A (en) * | 2022-09-09 | 2023-09-05 | 荣耀终端有限公司 | Pixel filling method and terminal based on ray tracing rendering |
CN116704101B (en) * | 2022-09-09 | 2024-04-09 | 荣耀终端有限公司 | Pixel filling method and terminal based on ray tracing rendering |
CN116681814A (en) * | 2022-09-19 | 2023-09-01 | 荣耀终端有限公司 | Image rendering method and electronic equipment |
CN116681811A (en) * | 2022-09-19 | 2023-09-01 | 荣耀终端有限公司 | Image rendering method, electronic device and readable medium |
CN116681811B (en) * | 2022-09-19 | 2024-04-19 | 荣耀终端有限公司 | Image rendering method, electronic device and readable medium |
CN116681814B (en) * | 2022-09-19 | 2024-05-24 | 荣耀终端有限公司 | Image rendering method and electronic equipment |
CN116402935A (en) * | 2023-03-28 | 2023-07-07 | 北京拙河科技有限公司 | Image synthesis method and device based on ray tracing algorithm |
CN116402935B (en) * | 2023-03-28 | 2024-01-19 | 北京拙河科技有限公司 | Image synthesis method and device based on ray tracing algorithm |
Also Published As
Publication number | Publication date |
---|---|
US20230316633A1 (en) | 2023-10-05 |
EP4242973A1 (en) | 2023-09-13 |
EP4242973A4 (en) | 2024-08-07 |
WO2022111619A1 (en) | 2022-06-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114581589A (en) | Image processing method and related device | |
US11024077B2 (en) | Global illumination calculation method and apparatus | |
CN111508052B (en) | Rendering method and device of three-dimensional grid body | |
AU2014363213B2 (en) | Image rendering of laser scan data | |
US6567083B1 (en) | Method, system, and computer program product for providing illumination in computer graphics shading and animation | |
US10776997B2 (en) | Rendering an image from computer graphics using two rendering computing devices | |
US10614619B2 (en) | Graphics processing systems | |
US10049486B2 (en) | Sparse rasterization | |
US11954169B2 (en) | Interactive path tracing on the web | |
KR102701851B1 (en) | Apparatus and method for determining LOD(level Of detail) for texturing cube map | |
CN111311723A (en) | Pixel point identification and illumination rendering method and device, electronic equipment and storage medium | |
US9153065B2 (en) | System and method for adjusting image pixel color to create a parallax depth effect | |
US20110043523A1 (en) | Graphics processing apparatus for supporting global illumination | |
WO2022143367A1 (en) | Image rendering method and related device therefor | |
US7158133B2 (en) | System and method for shadow rendering | |
US10497150B2 (en) | Graphics processing fragment shading by plural processing passes | |
US8248405B1 (en) | Image compositing with ray tracing | |
KR101118597B1 (en) | Method and System for Rendering Mobile Computer Graphic | |
CN111739074B (en) | Scene multi-point light source rendering method and device | |
US6690369B1 (en) | Hardware-accelerated photoreal rendering | |
US20180005432A1 (en) | Shading Using Multiple Texture Maps | |
US20230274493A1 (en) | Direct volume rendering apparatus | |
CN115359172A (en) | Rendering method and related device | |
CN115298699A (en) | Rendering using shadow information | |
KR101208826B1 (en) | Real time polygonal ambient occlusion method using contours of depth texture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||