CN117351134A - Image rendering method and related equipment thereof

Image rendering method and related equipment thereof

Info

Publication number
CN117351134A
Authority
CN
China
Prior art keywords
target object
light
image
illumination image
light source
Legal status
Pending
Application number
CN202210752953.8A
Other languages
Chinese (zh)
Inventor
周鹏
龚哲
徐维超
林澈
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Priority to CN202210752953.8A
Priority to PCT/CN2023/103064 (published as WO2024002130A1)
Publication of CN117351134A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/06 Ray-tracing
    • G06T 15/50 Lighting effects
    • G06T 15/506 Illumination models

Abstract

The application discloses an image rendering method and related equipment, which can achieve a high-quality rendering effect on a target object regardless of whether the light striking the target object produces diffuse reflection, specular reflection, or glossy reflection. The method comprises the following steps: resampling the light path between a target object and a light source based on an original light path between the two to obtain a new light path, wherein the original light path is obtained through ray tracing and takes the target object as its starting point and the light source as its ending point, while the new light path takes the light source as its starting point and the target object as its ending point; acquiring a sample pool based on the new light path, the sample pool being used to indicate the light rays directed to the target object, these rays forming the new light path; and rendering the target object based on the sample pool to obtain an image of the target object.

Description

Image rendering method and related equipment thereof
Technical Field
The embodiment of the application relates to the technical field of computer graphics, in particular to an image rendering method and related equipment thereof.
Background
With the rapid development of the computer industry, users place increasingly high demands on image quality. Currently, an electronic device generally adopts ray tracing technology to render a three-dimensional scene, so as to obtain a more realistic image of the scene and provide it for the user to view and use, thereby improving the user experience.
Currently, the related art proposes a ray tracing algorithm based on path resampling. In this algorithm, for an object to be rendered in a three-dimensional scene, after an original ray path formed by a ray that first intersects the object is determined, the ray path can be resampled based on the original ray path, so as to acquire a new ray path formed by a new ray that first intersects the object. A corresponding sample pool may then be obtained based on the new ray path, the sample pool being used to indicate the new rays forming the new ray path. An image of the object may then be rendered based on the sample pool.
This algorithm is limited by the material of the object. If the effect of light on an object appears as diffuse reflection, the light re-emitted by the object after being hit covers a wide reflection range; even after the reflected light goes on to hit other objects, it still has a fairly high probability of reaching the light source, so an effective new light path can easily be resampled and the rendering effect on the object is good. If the effect of light on the object appears as specular reflection or glossy reflection, the reflection range of the re-emitted light is often small; after the reflected light goes on to hit other objects, it reaches the light source with only a small probability, so it is difficult to resample an effective new light path, a high-quality sample pool cannot be obtained, and the rendering effect on the object is often poor.
Disclosure of Invention
The embodiment of the application provides an image rendering method and related equipment, which can achieve a high-quality rendering effect on a target object regardless of whether the light striking the target object produces diffuse reflection, specular reflection, or glossy reflection.
A first aspect of an embodiment of the present application provides an image rendering method, including:
When a target object in a three-dimensional scene needs to be rendered, an original light path between the target object and a light source can be acquired first. Since the original light path takes the target object as its starting point and the light source as its ending point, the two end points can be determined and their roles exchanged to reconstruct the light path between the target object and the light source, that is, to resample it, so as to obtain a new light path between the target object and the light source that takes the light source as its starting point and the target object as its ending point.
Obtaining a new light path between the target object and the light source is equivalent to obtaining the light rays that form the new light path. Information relating to these rays may then be acquired to construct a sample pool. For example, the intensity values of the light rays just before they hit the target object may be obtained and stored in the sample pool. In this way, the sample pool can be used to indicate the light rays directed to the target object.
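As a concrete illustration, the following is a minimal C++ sketch of what such a sample pool might look like; the struct layout and field names are assumptions for illustration, since the embodiment does not prescribe a data structure.

```cpp
#include <cstddef>
#include <vector>

// Illustrative sketch only: one stored light sample per resampled path,
// holding the intensity of the ray just before it strikes the target object.
struct LightSample {
    float origin[3];     // surface point on the light source (path start)
    float hitPoint[3];   // surface point on the target object (path end)
    float intensity[3];  // RGB intensity of the ray before striking the target
};

struct SamplePool {
    std::vector<LightSample> samples;

    void add(const LightSample& s) { samples.push_back(s); }
    std::size_t size() const { return samples.size(); }
};
```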
After the sample pool is obtained, it can be used to render the target object, thereby obtaining an image of the target object. This image is the one to be sent for display, and it can be shown on a screen for the user to view and use.
From the above method, it can be seen that after the original light path between the target object and the light source is obtained, the light path between the two can be resampled based on the original light path to obtain a new light path; the original light path is obtained through ray tracing and takes the target object as its starting point and the light source as its ending point, while the new light path takes the light source as its starting point and the target object as its ending point. A sample pool may then be acquired based on the new light path, the sample pool being used to indicate the light rays directed to the target object, these rays forming the new light path. Finally, the target object can be rendered based on the sample pool to obtain an image of the target object. In this way, the embodiment of the present application provides a new light-path resampling method that resamples the light path in the direction opposite to that of an effective light path in ray tracing (i.e., with the light source as the starting point and the target object as the end point). This improves the success rate of sampling an effective new light path, that is, an effective new light path between the target object and the light source can easily be resampled, so that regardless of whether the light forming the new light path produces direct illumination, diffuse reflection, specular reflection, or glossy reflection on the target object, a sample pool of sufficient quality and size can be obtained, and the rendering effect on the target object is of high quality.
In one possible implementation, the new light path includes a first light path that starts from the light source, ends at the target object, and passes through the remaining objects, and a second light path that starts from the light source, ends at the target object, and does not pass through the remaining objects; the sample pool includes a first sample pool for indicating first light rays directed to the target object, the first light path being formed by the first light rays, and a second sample pool for indicating second light rays directed to the target object, the second light path being formed by the second light rays. In this implementation, after the original light path between the target object and the light source is obtained, the light path between the two can be reconstructed in two ways based on the original light path. A first resampling yields a first light path (which may also be called an indirect light path) between the target object and the light source: it starts from the light source, ends at the target object, and passes through the remaining objects in between (that is, with the light source and the target object as its two end points, it passes through at least one intermediate node, namely an object other than the target object in the space where the target object is located). A second resampling yields a second light path (which may also be called a direct light path): it starts from the light source, ends at the target object, and passes through no intermediate node (that is, apart from its two end points, it passes through no object other than the target object). Obtaining the first and second light paths is equivalent to obtaining the first light rays (which may be called indirect light: emitted from the light source and, after hitting the remaining objects, indirectly directed to the target object) and the second light rays (which may be called direct light: emitted from the light source and directly directed to the target object). Information relating to the first rays may then be acquired to construct the first sample pool; for example, the intensity values of the first rays after hitting the remaining objects and before hitting the target object may be obtained and stored in the first sample pool, so that the first sample pool can be used to indicate the first light rays directed to the target object. Likewise, information relating to the second rays may be acquired to construct the second sample pool; for example, the intensity values of the second rays before hitting the target object may be obtained and stored in the second sample pool, so that the second sample pool can be used to indicate the second light rays directed to the target object.
Based on this, the target object may be rendered based on the first sample pool and the second sample pool together, thereby obtaining an image of the target object. The foregoing implementation thus provides a ray tracing technique based on multiple sample pools, where the multiple sample pools comprise at least one first sample pool and at least one second sample pool. In a common configuration, two first sample pools and one second sample pool are constructed; the rendering effects corresponding to the two first sample pools are specular reflection and glossy reflection respectively, and the rendering effect corresponding to the second sample pool is direct illumination. Thus, even if the target object is made of a special material, the image of the target object rendered based on these sample pools can present not only a direct illumination effect but also indirect illumination effects such as specular and glossy reflection, making the obtained image more realistic and refined and improving the user experience.
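Continuing the SamplePool sketch above, the common configuration just described could be grouped as follows; the lobe names and grouping are illustrative assumptions, not the embodiment's data structures.

```cpp
// Continues the SamplePool sketch above. Two first (indirect) pools, one per
// reflection lobe, plus one second (direct) pool, as in the configuration
// described in the text.
struct RenderPools {
    SamplePool specularPool;  // first sample pool: specular-reflection effect
    SamplePool glossyPool;    // first sample pool: glossy-reflection effect
    SamplePool directPool;    // second sample pool: direct illumination
};
```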
In one possible implementation, the phenomenon produced by the first light rays striking the target object includes at least one of the following: diffuse reflection, specular reflection, or glossy reflection. In this implementation, there may be two cases. (1) If only a single round of the first resampling is performed, one set of first light paths between the target object and the light source is obtained, along with the set of first light rays forming them. The phenomenon produced by this set of first rays after hitting the target object can be controlled to be one of diffuse reflection, specular reflection, or glossy reflection, and accordingly the first sample pool indicating this set of first rays corresponds to a rendering effect that is one of diffuse reflection, specular reflection, or glossy reflection. For example, if the first rays produce diffuse reflection after hitting the target object, the rendering effect corresponding to the first sample pool is diffuse reflection, and so on. (2) If multiple rounds of the first resampling are performed, multiple sets of first light paths between the target object and the light source are obtained, along with the multiple sets of first rays forming them. For any one set, the phenomenon produced by its first rays after hitting the target object can be controlled to be one of diffuse reflection, specular reflection, or glossy reflection; the phenomena produced by different sets may be the same or different, which is not limited here. Accordingly, each of the first sample pools indicating these sets corresponds to a rendering effect that is one of diffuse reflection, specular reflection, or glossy reflection, and the rendering effects corresponding to the first sample pools may be the same or different. For example, one common sample-pool configuration is to construct two first sample pools, where the rendering effect corresponding to the first of them is specular reflection and that corresponding to the second is glossy reflection. The multi-sample-pool technique provided by this implementation therefore allows different first sample pools to correspond to different rendering effects or to the same rendering effect, selectable according to the material of the target object, which improves the flexibility and comprehensiveness of the scheme.
In one possible implementation, the number of first light rays indicated by the first sample pool is less than the number of pixels displayed by the screen, and the number of second light rays indicated by the second sample pool is equal to the number of pixels displayed by the screen, the screen being used to display the image of the target object. In this implementation, any first sample pool may contain the intensity values of a set of first rays after hitting the remaining objects and before hitting the target object; the size of the first sample pool is then the number of these intensity values, that is, the number of first rays in the set (in other words, the size of a first sample pool is the number of first rays it indicates). Notably, the size of the first sample pool is smaller than the resolution of the screen (i.e., the number of pixels the screen displays) used to display the final image of the target object. Any second sample pool may contain the intensity values of a set of second rays before hitting the target object; its size is likewise the number of these intensity values, that is, the number of second rays in the set. Notably, the size of the second sample pool is equal to the resolution of the screen. Making the size of the first sample pool smaller than the screen resolution improves the efficiency of acquiring the first sample pool.
In one possible implementation, rendering the target object based on the first sample pool and the second sample pool to obtain an image of the target object includes: rendering the target object based on the first sample pool to obtain a first illumination image of the target object; rendering the target object based on the second sample pool to obtain a second illumination image of the target object; and fusing the first illumination image and the second illumination image to obtain the image of the target object. In this implementation, since at least one first sample pool is available, shading may be performed based on each first sample pool to obtain at least one indirect illumination image (i.e., the aforementioned first illumination image) of the target object. Since at least one second sample pool is available, shading may likewise be performed based on each second sample pool to obtain at least one direct illumination image (i.e., the aforementioned second illumination image) of the target object. These images can then be treated as image layers, so that the at least one indirect illumination image and the at least one direct illumination image of the target object can be superimposed to obtain the final image of the target object to be sent for display.
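A minimal sketch of the layer-superposition step, assuming additive per-pixel blending of equally sized layers (the embodiment only says the layers are superimposed):

```cpp
#include <cstddef>
#include <vector>

// Illustrative image layer: per-pixel RGB at screen resolution.
struct Image {
    int width = 0, height = 0;
    std::vector<float> rgb;  // size = width * height * 3
};

// Superimposes the illumination layers by per-pixel addition. Additive
// blending is an assumption; the text does not prescribe a blend mode.
Image fuseLayers(const std::vector<Image>& layers) {
    Image out = layers.front();  // assumes all layers share one resolution
    for (std::size_t i = 1; i < layers.size(); ++i)
        for (std::size_t p = 0; p < out.rgb.size(); ++p)
            out.rgb[p] += layers[i].rgb[p];
    return out;
}
```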
In one possible implementation, before fusing the first illumination image and the second illumination image to obtain the image of the target object, the method further includes: acquiring a noise-free component of the material information of the target object; performing first processing on the first illumination image based on the noise-free component to obtain a processed first illumination image, the first processing including demodulation, noise reduction, and modulation; and performing second processing on the second illumination image based on the noise-free component to obtain a processed second illumination image, the second processing including demodulation, noise reduction, and modulation. Fusing the first illumination image and the second illumination image to obtain the image of the target object then includes fusing the processed first illumination image and the processed second illumination image. In this implementation, after the noise-free component of the material information of the target object is obtained, the at least one indirect illumination image of the target object may be divided by the noise-free component to obtain at least one demodulated indirect illumination image. Noise reduction may then be applied to the demodulated images to obtain at least one noise-reduced indirect illumination image. The noise-reduced images may then be multiplied by the noise-free component to obtain at least one modulated indirect illumination image, which serves as the at least one processed indirect illumination image of the target object (a processed indirect illumination image is a processed first illumination image). Likewise, the at least one direct illumination image of the target object may be divided by the noise-free component, the demodulated images may be noise-reduced, and the noise-reduced images may be multiplied by the noise-free component to obtain at least one processed direct illumination image of the target object (a processed direct illumination image is a processed second illumination image).
After the processed indirect and direct illumination images of the target object are obtained, they can be superimposed to obtain the final image of the target object to be sent for display. This implementation thus provides a new image noise-reduction approach: demodulation, noise reduction, and modulation of each image layer are realized based on the noise-free component of the material information of the target object, so that each layer preserves as much detail of the target object as possible. The image of the target object obtained by superimposing the layers is therefore more realistic and clear, which facilitates viewing and further improves the user experience.
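A sketch of the demodulation, noise-reduction, and modulation pipeline for one illumination layer, assuming the noise-free component behaves like an albedo buffer and leaving the denoiser itself as a placeholder; all names are illustrative:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

using Buffer = std::vector<float>;  // per-pixel RGB values of one image layer

// One pass of the pipeline for a single illumination layer. `albedo` stands
// in for the noise-free component of the material information; `denoise` is
// a placeholder for any spatial denoiser.
Buffer processLayer(const Buffer& lighting, const Buffer& albedo,
                    Buffer (*denoise)(const Buffer&)) {
    const float eps = 1e-4f;  // avoids division by zero in dark texels
    Buffer demodulated(lighting.size());
    for (std::size_t i = 0; i < lighting.size(); ++i)   // demodulation
        demodulated[i] = lighting[i] / std::max(albedo[i], eps);

    Buffer filtered = denoise(demodulated);             // noise reduction

    Buffer modulated(lighting.size());
    for (std::size_t i = 0; i < lighting.size(); ++i)   // modulation
        modulated[i] = filtered[i] * albedo[i];
    return modulated;
}
```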
In one possible implementation, the first processing further includes super-resolution processing for making the resolution of the processed first illumination image equal to the resolution of the screen. In this implementation, since the size of the at least one first sample pool is smaller than the screen resolution, the resolution of the modulated indirect illumination images of the target object is also smaller than the screen resolution. To fuse the image layers properly, super-resolution processing can be further performed on the modulated indirect illumination images, and the resulting images serve as the processed indirect illumination images of the target object, whose resolution then equals the screen resolution. In the process of rendering the target object based on the multi-sample-pool technique, this implementation thus cooperates with super-resolution processing to reduce the amount of computation required to a certain extent, which facilitates the implementation of the multi-sample-pool-based ray tracing technique and further improves the user experience.
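Since the embodiment does not specify the super-resolution method, the sketch below uses plain bilinear upscaling as a stand-in to show how a lower-resolution indirect layer could be brought to screen resolution before fusion:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Upscales a single-channel buffer from (sw, sh) to (dw, dh). Bilinear
// filtering is a placeholder only; a real renderer might use a learned or
// temporally-aware super-resolution method instead.
std::vector<float> upscale(const std::vector<float>& src, int sw, int sh,
                           int dw, int dh) {
    std::vector<float> dst(static_cast<std::size_t>(dw) * dh);
    for (int y = 0; y < dh; ++y) {
        for (int x = 0; x < dw; ++x) {
            // Map the destination pixel center back into source coordinates.
            float sx = std::clamp((x + 0.5f) * sw / dw - 0.5f, 0.0f, sw - 1.0f);
            float sy = std::clamp((y + 0.5f) * sh / dh - 0.5f, 0.0f, sh - 1.0f);
            int x0 = static_cast<int>(sx), y0 = static_cast<int>(sy);
            int x1 = std::min(x0 + 1, sw - 1), y1 = std::min(y0 + 1, sh - 1);
            float fx = sx - x0, fy = sy - y0;
            float top = src[y0 * sw + x0] * (1 - fx) + src[y0 * sw + x1] * fx;
            float bot = src[y1 * sw + x0] * (1 - fx) + src[y1 * sw + x1] * fx;
            dst[static_cast<std::size_t>(y) * dw + x] = top * (1 - fy) + bot * fy;
        }
    }
    return dst;
}
```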
A second aspect of embodiments of the present application provides an image rendering apparatus, including: a resampling module configured to resample the light path between a target object and a light source based on an original light path between the two to obtain a new light path, wherein the original light path is obtained through ray tracing and takes the target object as its starting point and the light source as its ending point, while the new light path takes the light source as its starting point and the target object as its ending point; a first acquisition module configured to acquire a sample pool based on the new light path, the sample pool being used to indicate the light rays directed to the target object, these rays forming the new light path; and a rendering module configured to render the target object based on the sample pool to obtain an image of the target object.
From the above apparatus, it can be seen that after the original light path between the target object and the light source is obtained, the light path between the two can be resampled based on the original light path to obtain a new light path; the original light path is obtained through ray tracing and takes the target object as its starting point and the light source as its ending point, while the new light path takes the light source as its starting point and the target object as its ending point. A sample pool may then be acquired based on the new light path, the sample pool being used to indicate the light rays directed to the target object, these rays forming the new light path. Finally, the target object can be rendered based on the sample pool to obtain an image of the target object. In this way, the embodiment of the present application provides a new light-path resampling method that resamples the light path in the direction opposite to that of an effective light path in ray tracing (i.e., with the light source as the starting point and the target object as the end point). This improves the success rate of sampling an effective new light path, so that regardless of whether the light forming the new light path produces direct illumination, diffuse reflection, specular reflection, or glossy reflection on the target object, a sample pool of sufficient quality and size can be obtained, and the rendering effect on the target object is of high quality.
In one possible implementation, the new light path includes a first light path that starts from the light source, ends at the target object, and passes through the remaining objects, and a second light path that starts from the light source, ends at the target object, and does not pass through the remaining objects; the sample pool includes a first sample pool for indicating first light rays directed to the target object, the first light path being formed by the first light rays, and a second sample pool for indicating second light rays directed to the target object, the second light path being formed by the second light rays.
In one possible implementation, the phenomenon produced by the first light rays striking the target object is specular reflection or glossy reflection.
In one possible implementation, the number of first light rays indicated by the first sample pool is less than the number of pixels displayed by the screen, and the number of second light rays indicated by the second sample pool is equal to the number of pixels displayed by the screen, the screen being used to display the image of the target object.
In one possible implementation, the rendering module is configured to: rendering the target object based on the first sample pool to obtain a first illumination image of the target object; rendering the target object based on the second sample pool to obtain a second illumination image of the target object; and fusing the first illumination image and the second illumination image to obtain an image of the target object.
In one possible implementation, the rendering module is further configured to: acquire a noise-free component of the material information of the target object; perform first processing on the first illumination image based on the noise-free component to obtain a processed first illumination image, the first processing including demodulation, noise reduction, and modulation; and perform second processing on the second illumination image based on the noise-free component to obtain a processed second illumination image, the second processing including demodulation, noise reduction, and modulation. The rendering module is then configured to fuse the processed first illumination image and the processed second illumination image to obtain the image of the target object.
In one possible implementation, the first processing further includes super-resolution processing for making the resolution of the processed first illumination image equal to the resolution of the screen.
A third aspect of embodiments of the present application provides an electronic device comprising a memory and a processor; the memory stores code, the processor being configured to execute the code, which when executed, performs the method according to the first aspect or any one of the possible implementations of the first aspect.
A fourth aspect of the embodiments of the present application provides circuitry comprising processing circuitry configured to perform the method according to the first aspect or any one of the possible implementations of the first aspect.
A fifth aspect of the embodiments of the present application provides a chip system comprising a processor for invoking a computer program or computer instructions stored in a memory, to cause the processor to perform the method according to the first aspect or any one of the possible implementations of the first aspect.
In one possible implementation, the processor is coupled to the memory through an interface.
In one possible implementation, the chip system further includes the memory, in which the computer program or computer instructions are stored.
A sixth aspect of embodiments of the present application provides a computer storage medium storing one or more instructions that when executed by one or more computers cause the one or more computers to implement the method of the first aspect or any one of the possible implementations of the first aspect.
A seventh aspect of embodiments of the present application provides a computer program product storing instructions that, when executed by a computer, cause the computer to carry out the method according to the first aspect or any one of the possible implementations of the first aspect.
In this embodiment of the present application, after an original light path between a target object and a light source is obtained, the light path between the two may be resampled based on the original light path to obtain a new light path; the original light path is obtained through ray tracing and takes the target object as its starting point and the light source as its ending point, while the new light path takes the light source as its starting point and the target object as its ending point. A sample pool may then be acquired based on the new light path, the sample pool being used to indicate the light rays directed to the target object, these rays forming the new light path. Finally, the target object can be rendered based on the sample pool to obtain an image of the target object. In this way, the embodiment of the present application provides a new light-path resampling method that resamples the light path in the direction opposite to that of an effective light path in ray tracing (i.e., with the light source as the starting point and the target object as the end point). This improves the success rate of sampling an effective new light path, so that regardless of whether the light forming the new light path produces direct illumination, diffuse reflection, specular reflection, or glossy reflection on the target object, a sample pool of sufficient quality and size can be obtained, and the rendering effect on the target object is of high quality.
Drawings
FIG. 1 is a schematic diagram of the ray tracing technique;
FIG. 2 is a schematic diagram of the rasterization technique;
FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of an image rendering method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a BVH tree according to an embodiment of the present application;
FIG. 6 is a schematic diagram of light path resampling according to an embodiment of the present application;
FIG. 7 is an application illustration of the image rendering method provided in an embodiment of the present application;
FIG. 8 is a schematic diagram of a noise reduction process according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a comparison result provided in an embodiment of the present application;
FIG. 10 is another schematic diagram of a comparison result provided in an embodiment of the present application;
FIG. 11 is another schematic diagram of a comparison result provided in an embodiment of the present application;
FIG. 12 is another schematic diagram of a comparison result provided in an embodiment of the present application;
FIG. 13 is another schematic diagram of a comparison result provided in an embodiment of the present application;
FIG. 14 is another schematic diagram of a comparison result provided in an embodiment of the present application;
FIG. 15 is a schematic structural diagram of an image rendering apparatus according to an embodiment of the present application.
Detailed Description
The embodiment of the application provides an image rendering method and related equipment, which can achieve a high-quality rendering effect on a target object regardless of whether the light striking the target object produces diffuse reflection, specular reflection, or glossy reflection.
The terms "first", "second", and the like in the description, the claims, and the drawings of the present application are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It should be understood that terms so used are interchangeable under appropriate circumstances and merely distinguish objects of the same nature when the embodiments of the application are described. Furthermore, the terms "comprises", "comprising", and "having", and any variations thereof, are intended to cover a non-exclusive inclusion, so that a process, method, system, article, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
With the development of computer technology, more and more applications, such as game applications or video applications, require an image with an exquisite image quality to be displayed on an electronic device. These images are typically rendered by the electronic device based on models in a three-dimensional (three dimensional, 3D) scene.
In conventional image processing methods, a 3D scene is typically rendered using rasterization to obtain an image capable of displaying the scene. However, the quality of the image rendered by the rasterization technique is limited, and it is often difficult to present a realistic picture. For example, the effects of ray reflection, refraction, and shadows in a scene are often difficult to restore truly in the rendered image. In view of this, a new rendering technique, ray tracing, has been developed. Both ray tracing and rasterization are methods for implementing image rendering, and their main purpose is to project an object in 3D space into a two-dimensional screen space for display by computing shading.
Fig. 1 is a schematic diagram of the ray tracing technique. As shown in fig. 1, the principle of ray tracing is as follows: from the camera position, a ray is emitted into the three-dimensional scene through a pixel location on the image plane, the closest intersection between the ray and the geometry is found, and the shading at that intersection is computed. If the material at the intersection is reflective, tracing can continue along the reflection direction from the intersection, and the shading of the subsequently hit point can be computed in turn. That is, the ray tracing method computes projection and global illumination by tracing the propagation of rays in the three-dimensional scene, so as to render a two-dimensional image.
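A minimal sketch of this recursive tracing loop, with the scene query and shading left as trivial stubs (all names are illustrative, not the embodiment's API):

```cpp
#include <optional>

struct Vec3 { float x = 0, y = 0, z = 0; };
struct Ray  { Vec3 origin, dir; };
struct Hit  { Vec3 point, normal; bool reflective = false; };

Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

Vec3 reflectDir(Vec3 d, Vec3 n) {            // r = d - 2(d.n)n
    float dn = d.x * n.x + d.y * n.y + d.z * n.z;
    return {d.x - 2 * dn * n.x, d.y - 2 * dn * n.y, d.z - 2 * dn * n.z};
}

// Stubs: a real implementation would search the scene and evaluate materials.
std::optional<Hit> intersectScene(const Ray&) { return std::nullopt; }
Vec3 shade(const Hit&) { return {1, 1, 1}; }

Vec3 trace(const Ray& ray, int depth) {
    if (depth <= 0) return {};
    auto hit = intersectScene(ray);          // closest intersection, if any
    if (!hit) return {};
    Vec3 color = shade(*hit);                // shading at the hit point
    if (hit->reflective)                     // keep tracing along reflection
        color = add(color, trace({hit->point, reflectDir(ray.dir, hit->normal)},
                                 depth - 1));
    return color;
}
```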
Fig. 2 is a schematic diagram of the rasterization technique. As shown in fig. 2, the principle of rasterization is as follows: objects in the three-dimensional scene are divided into triangles, the three-dimensional coordinates of the triangle vertices are transformed into two-dimensional coordinates on the image through coordinate transformation, and finally the triangles on the image are filled with textures to render the image.
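A toy sketch of the vertex-projection step just described, using a standard pinhole projection; the focal length and image size are illustrative parameters, and matrix-based transforms are omitted:

```cpp
struct Pixel { float x, y; };

// Maps a triangle vertex from 3D camera space to 2D pixel coordinates.
Pixel projectVertex(float vx, float vy, float vz,
                    float focal, int imageW, int imageH) {
    // Perspective divide, then shift the origin to the image center.
    float px = focal * vx / vz + imageW * 0.5f;
    float py = focal * vy / vz + imageH * 0.5f;
    return {px, py};
}
```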
Because the rasterization technique directly projects visible content onto the screen space to obtain the corresponding image, its processing difficulty is low, but the light and shadow effects it provides are poor. Ray tracing traces each ray emitted from the camera to achieve realistic effects such as reflection, refraction, shadows, and ambient occlusion, so it can provide realistic shadow effects. Accordingly, to render more realistic images, current electronic devices often prioritize ray tracing to render three-dimensional scenes and improve the viewing experience of the user.
Currently, the related art proposes a ray tracing algorithm based on path resampling. In this algorithm, for an object to be rendered in a three-dimensional scene, after an original ray path formed by a ray that first intersects the object is determined, the ray path can be resampled based on the original ray path, so as to acquire a new ray path formed by a new ray that first intersects the object. A corresponding sample pool may then be obtained based on the new ray path, the sample pool being used to indicate the new rays forming the new ray path (i.e., the sample pool may store ray samples). An image of the object may then be rendered based on the sample pool.
This algorithm is limited by the material of the object. If the effect of light on an object appears as diffuse reflection, the light re-emitted by the object after being hit covers a wide reflection range; even after the reflected light goes on to hit other objects, it still has a fairly high probability of reaching the light source, so an effective new light path can easily be resampled and the rendering effect on the object is good (i.e., the quality of the rendered image of the object is good). If the effect of light on the object appears as specular reflection or glossy reflection, the reflection range of the re-emitted light is often small; after the reflected light goes on to hit other objects, it reaches the light source with only a small probability, so it is difficult to resample an effective new light path, a sample pool of sufficient quality and size cannot be obtained, and the rendering effect on the object is often poor (i.e., the quality of the rendered image of the object is often poor).
Furthermore, the efficiency of obtaining the sample pool is not high, and the data volume of the sample pool is often large, so rendering an object based on the sample pool requires a large amount of calculation, and the whole rendering process takes considerable time, resulting in a poor user experience.
Furthermore, in the process of rendering an object, image noise reduction is often used to filter noise in the image. However, the noise reduction in the related art often filters the entire image, blurring the finally obtained image of the object, which degrades the user's viewing experience.
To solve the above problems, embodiments of the present application provide an image rendering method that may be performed by an electronic device. The electronic device includes a CPU and a GPU and can perform rendering processing on images. The electronic device may be, for example, a mobile phone, a tablet, a notebook computer, a PC, a mobile internet device (MID), a wearable device, a virtual reality (VR) device, an augmented reality (AR) device, or a wireless electronic device in industrial control, self driving, remote medical surgery, a smart grid, transportation safety, a smart city, a smart home, or the like. The electronic device may run an Android system, an iOS system, a Windows system, or another system, and may run an application program that needs to render a 3D scene to obtain a two-dimensional image, such as a game application, a screen-locking application, a map application, or a monitoring application.
For ease of understanding, the specific structure of the electronic device will be described in detail below. Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 3, the electronic device 3000 may include: a central processor 3001, a graphics processor 3002, a display device 3003, and a memory 3004. Optionally, the electronic device 3000 may also include at least one communication bus (not shown in fig. 3) for enabling communication between the components.
It should be appreciated that the various components in the electronic device 3000 may also be coupled through other connectors, which may include various interfaces, transmission lines, buses, or the like. The components may also be connected in a radial manner centered on the central processor 3001. In the embodiments of the present application, "coupled" means electrically connected or in communication with each other, whether directly or indirectly through other devices.
The connection between the central processor 3001 and the graphics processor 3002 is not limited to the form shown in fig. 3. The central processor 3001 and the graphics processor 3002 in the electronic device 3000 may be on the same chip or may be separate chips.
The functions of the central processor 3001, the graphic processor 3002, the display device 3003, and the memory 3004 are briefly described below.
Central processor 3001: used to run an operating system 3005 and application programs 3006. The application 3006 may be a graphics application such as a game or a video player. The operating system 3005 provides a system graphics library interface, through which, together with drivers provided by the operating system 3005 such as a graphics library user-mode driver and/or a graphics library kernel-mode driver, the application 3006 generates the instruction stream for rendering graphics or image frames as well as the required rendering data. The system graphics library includes, but is not limited to: the embedded open graphics library (open graphics library for embedded systems, OpenGL ES), the Khronos platform graphics interface, or Vulkan (a cross-platform drawing application program interface). The instruction stream contains a series of instructions, which are typically call instructions to the system graphics library interface.
Optionally, the central processor 3001 may include at least one of the following types of processors: an application processor, one or more microprocessors, a digital signal processor (digital signal processor, DSP), a microcontroller (microcontroller unit, MCU), or an artificial intelligence processor, etc.
The central processor 3001 may further include necessary hardware accelerators, such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or an integrated circuit for implementing logic operations. The central processor 3001 may be coupled to one or more data buses for transmitting data and instructions among the various components of the electronic device 3000.
Graphics processor 3002: used to receive the graphics instruction stream sent by the central processor 3001, generate a render target through a rendering pipeline, and display the render target on the display device 3003 through the layer composition and display module of the operating system. A rendering pipeline, which may also be referred to as a pixel pipeline, is a parallel processing unit within the graphics processor 3002 for processing graphics signals. The graphics processor 3002 may include a plurality of rendering pipelines, which can process graphics signals in parallel, independently of each other. For example, a rendering pipeline may perform a series of operations in rendering graphics or image frames, typically including: vertex processing, primitive processing, rasterization, fragment processing, and the like.
Alternatively, the graphics processor 3002 may include a general-purpose graphics processor that executes software, such as a GPU or another type of special-purpose graphics processing unit.
Display device 3003: for displaying various images generated by the electronic device 3000, which may be a graphical user interface (graphical user interface, GUI) of an operating system or image data (including still images and video data) processed by the graphics processor 3002.
Alternatively, the display device 3003 may include any suitable type of display screen, such as a liquid crystal display (LCD), a plasma display, or an organic light-emitting diode (OLED) display.
Memory 3004: a transmission channel between the central processor 3001 and the graphics processor 3002, which may be double data rate synchronous dynamic random access memory (DDR SDRAM) or another type of cache.
The specific structure of the electronic device to which the image rendering method provided by the embodiment of the present application is applied is described above, and the flow of the image rendering method provided by the embodiment of the present application will be described in detail below. Fig. 4 is a schematic flow chart of an image rendering method according to an embodiment of the present application, as shown in fig. 4, where the method includes:
401. Obtain an original light path between a target object and a light source, where the original light path is obtained through ray tracing and takes the target object as its starting point and the light source as its ending point.
In this embodiment, after a model file of a three-dimensional scene is obtained, the rendering information of each object in the scene, the spatial information of the camera, the spatial information of the light source, and the like can be parsed from it. These pieces of information are briefly described below:
(1) The rendering information of each object includes the spatial information of the object and the material information of the object. The spatial information includes the vertex coordinates, vertex normals, triangle indices, and similar data of the object, and the material information includes the color, metallicity, roughness, and similar properties of the object.
It should be noted that a bounding volume hierarchy (BVH) tree may be constructed based on the spatial information of each object and used in subsequent ray tracing operations. Specifically, a BVH tree may be constructed from the vertex coordinates, vertex normals, and triangle indices of the objects. The BVH tree contains the spatial information of a plurality of bounding boxes, each described by the coordinates of the 8 vertices of the box (a cuboid), and each bounding box encloses at least one object. To further explain the BVH tree, fig. 5 provides a schematic diagram. As shown in fig. 5, assume that 6 objects exist in a three-dimensional scene. The spatial information of bounding box A (enclosing all 6 objects) may be determined from the spatial information of the 6 objects, the spatial information of bounding box B (enclosing 4 objects) from the spatial information of those 4 objects, and the spatial information of bounding box C (enclosing the remaining 2 objects) from the spatial information of those 2 objects. The spatial information of bounding boxes A, B, and C is then managed in a binary tree structure to obtain the BVH tree. The BVH tree thus centrally manages the spatial information of bounding boxes A, B, and C, which is equivalent to centrally managing the boxes themselves; bounding box A is the largest of all the bounding boxes.
When ray tracing is implemented based on the BVH tree, it is necessary to calculate whether a ray intersects an object in the three-dimensional scene (i.e., intersection calculation). With the BVH tree, the bounding box enclosing an object can be determined first, and then whether the ray intersects the bounding box is judged. If the ray does not hit the bounding box, the ray does not intersect any object in the bounding box; if the ray hits the bounding box, whether the ray intersects the objects inside is then calculated. For example, when a ray is detected not to intersect bounding box B in the binary tree, the ray certainly does not intersect the four objects inside bounding box B, so the step of detecting whether the ray intersects those four objects can be skipped, and only whether the ray intersects the two objects in bounding box C needs to be detected.
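A sketch of this traversal logic, assuming a binary node layout and a standard slab test for the ray-box intersection (neither is prescribed by the embodiment):

```cpp
#include <algorithm>
#include <memory>
#include <utility>
#include <vector>

struct AABB { float lo[3], hi[3]; };

struct BVHNode {
    AABB box;
    std::unique_ptr<BVHNode> left, right;
    std::vector<int> objectIds;  // non-empty only at leaf nodes
};

bool rayHitsBox(const float origin[3], const float invDir[3], const AABB& b) {
    float tmin = 0.0f, tmax = 1e30f;
    for (int i = 0; i < 3; ++i) {            // slab test per axis
        float t0 = (b.lo[i] - origin[i]) * invDir[i];
        float t1 = (b.hi[i] - origin[i]) * invDir[i];
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

// Collects the ids of objects whose bounding boxes the ray may enter; a
// subtree whose box the ray misses is skipped entirely.
void collectCandidates(const BVHNode* n, const float origin[3],
                       const float invDir[3], std::vector<int>& out) {
    if (n == nullptr || !rayHitsBox(origin, invDir, n->box)) return;
    out.insert(out.end(), n->objectIds.begin(), n->objectIds.end());
    collectCandidates(n->left.get(), origin, invDir, out);
    collectCandidates(n->right.get(), origin, invDir, out);
}
```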
In addition, operations can be performed on the material information of each object to obtain the noise-free component of the material information, which is stored in a table for lookup. This enables the subsequent demodulation, noise reduction, and modulation of the images of the objects, which will be described in detail later.
(2) The spatial information of the camera may include the vertical height of the camera, the coordinates of the camera, and so on; the camera is used to capture a simulated image of the three-dimensional scene. When ray tracing is implemented, the spatial position of the camera in the three-dimensional scene may be determined based on its spatial information. The emission points of rays can then be determined from that position; that is, rays are emitted from the camera toward each object in the three-dimensional scene, enabling route calculation and intersection calculation and determining the light paths formed by the rays in the scene, thereby supporting the subsequent rendering operation for each object.
(3) The spatial information of the light source is generally used to determine the direct light sources and indirect light sources in the three-dimensional scene. A direct light source is one that emits light itself in the scene; an indirect light source is one that does not emit light itself but reflects light emitted by other sources, so each object can serve as an indirect light source. When ray tracing is implemented, if a ray emitted by the camera intersects at least one object in the scene, reflects on the surface of the object, and is finally received by a direct light source, the light path formed by such a ray is an effective light path. Then, for a ray emitted by the camera that first intersects an object and is eventually received by the direct light source (whether or not it intersects the remaining objects in between), the effective light path it forms between the object and the direct light source can be used as the original light path between them (that is, the light path between the object and the direct light source has been sampled for the first time). Similar operations may be performed for the remaining objects in the scene, so the original light paths between each object in the scene and the direct light source can be obtained. These original light paths may be used to resample the light paths between the objects and the direct light source, and to render the images of the objects.
It can be seen that after the rendering information of each object, the spatial information of the camera, and the spatial information of the light source in the three-dimensional scene are obtained, the original ray path between each object and the direct light source (hereinafter simply referred to as the light source) can be obtained based on this information.
Since the operations performed in this embodiment are similar for each object, for convenience of explanation, one of the objects, referred to below as the target object, is schematically described. According to the foregoing description, based on the rendering information of each object in the three-dimensional scene, the spatial information of the camera, and the spatial information of the light source, the first sampling of the light path between the target object and the direct light source may be completed, so as to obtain the original light path between the target object and the light source. The original light path takes the target object as a starting point, takes the light source as an end point, and may or may not pass through other objects in between.
It should be noted that the original light path between the target object and the light source in this embodiment can be understood as a set of original light paths between the target object and the light source: there are usually a plurality of rays that are emitted by the camera and first intersect the target object, and these rays can be referred to as a plurality of original rays, that is, a set of original rays. The set of light paths formed by these rays between the target object and the light source is then the set of original light paths between the target object and the light source. A sketch of how such a set of original paths might be collected follows.
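The following is a minimal sketch, under assumed interfaces, of collecting original ray paths. `scene.trace(origin, direction)`, `hit.point`, `hit.obj`, and `hit.reflect(direction)` are hypothetical helpers standing in for the nearest-hit query and BRDF-based bounce of an actual ray tracer; none of them are defined by this application.

```python
from dataclasses import dataclass, field

@dataclass
class OriginalPath:
    # Surface points along the path: target-object point first, light point last.
    vertices: list = field(default_factory=list)

def collect_original_paths(camera_rays, scene, light, max_bounces=8):
    paths = []
    for origin, direction in camera_rays:
        hit = scene.trace(origin, direction)      # first intersection
        if hit is None:
            continue
        path = OriginalPath(vertices=[hit.point])  # starting point: target object
        for _ in range(max_bounces):
            direction = hit.reflect(direction)     # assumed bounce helper
            hit = scene.trace(hit.point, direction)
            if hit is None:
                break
            path.vertices.append(hit.point)
            if hit.obj is light:                   # end point: direct light source
                paths.append(path)                 # an effective ray path
                break
    return paths
```

In this sketch, a path is kept only if it terminates at the direct light source, matching the definition of an effective ray path given above.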
402. Resampling the light path between the target object and the light source based on the original light path between the target object and the light source to obtain a new light path, where the new light path takes the light source as a starting point and takes the target object as an end point.
After the original light path between the target object and the light source is obtained, since this path takes the target object as a starting point and the light source as an end point, the two end points can be determined and their roles exchanged, so that the light path between the target object and the light source is reconstructed in a certain manner; that is, the light path between the two end points is subjected to first resampling, so as to obtain a first light path (also called an indirect light path) between the target object and the light source. The first light path takes the light source as a starting point, takes the target object as an end point, and passes through other objects in between.
Specifically, since the first resampling may be performed for only one round or may be repeated for a plurality of rounds, the first light path between the target object and the light source may be acquired in a plurality of ways (a sketch follows this list):
(1) First resampling is performed based on a group of original light paths between the target object and the light source, and a group of first light paths between the target object and the light source is correspondingly obtained, where the plurality of original light paths contained in the group of original light paths are in one-to-one correspondence with the plurality of first light paths contained in the group of first light paths. For any one of the original light paths, suppose that it starts from a certain surface point of the target object, passes through a certain surface point of one other object in between, and ends at a certain surface point of the light source. For convenience of explanation, the surface point of the target object through which the original light path passes is hereinafter referred to as the first surface point, the surface point of the remaining object is referred to as the second surface point, and the surface point of the light source is referred to as the third surface point. The original light path may then be divided into two segments: the first segment is the light path between the first surface point and the second surface point, and the second segment is the light path between the second surface point and the third surface point. When the first resampling is performed on the first segment, a surface area may be marked out centered on the second surface point of the remaining object, and one surface point may be selected in that area (for example, randomly or according to some preset rule, which is not limited here); this point is called the fourth surface point (the fourth surface point may be the same surface point as the second surface point or a surface point around it, which is not limited here), and the fourth surface point is connected with the first surface point, so as to form a first new path from the fourth surface point to the first surface point. Similarly, when the first resampling is performed on the second segment, a surface area may be marked out centered on the third surface point of the light source, and one surface point, called the fifth surface point, may be selected in that area (the fifth surface point may be the same surface point as the third surface point or a surface point around it, which is not limited here); the fifth surface point is connected with the fourth surface point, so as to form a second new path from the fifth surface point to the fourth surface point. In this way, the first new path and the second new path form a first light path corresponding to the original light path, and this first light path takes the fifth surface point as a starting point, passes through the fourth surface point in between, and takes the first surface point as an end point. For the remaining original light paths, the same operation can be performed, so that a plurality of first light paths corresponding one-to-one to the original light paths, that is, a group of first light paths between the target object and the light source, can be obtained.
It should be understood that the foregoing example only schematically takes an original light path that passes through one other object in between, and does not limit the number of other objects through which the original light path passes; if the original light path passes through a plurality of other objects in between, the first resampling operation is similar and is not repeated here. Accordingly, a first light path obtained from an original light path that passes through a plurality of remaining objects also passes through a plurality of remaining objects in between. To further understand the foregoing first resampling process, a description is provided below in conjunction with the example shown in fig. 6. Fig. 6 is a schematic diagram of resampling a light path according to an embodiment of the present application. As shown in fig. 6, when ray tracing is performed on a certain room, the camera 100 is set to emit a set of original rays toward the table 200, and one of the original rays is taken for schematic description. Assuming that the original ray from the camera 100 first hits surface point A of the table 200, hits surface point B of the wall 300 after reflection, hits surface point C of the lamp housing 400 after another reflection, and finally hits surface point D of the light source 500, the original ray path between the table 200 and the light source 500 comprises three paths: the first path 601 is directed from surface point A to surface point B, the second path 602 is directed from surface point B to surface point C, and the third path 603 is directed from surface point C to surface point D. Then, a surface point E around surface point B may be taken on the wall 300 and a first new path 604 directed from surface point E to surface point A may be constructed; next, surface point C on the lamp housing 400 may be left unchanged and a second new path 605 directed from surface point C to surface point E may be constructed; then, a surface point F around surface point D may be taken on the light source 500 and a third new path 606 directed from surface point F to surface point C may be constructed. These three new paths form one first light path. It will be appreciated that similar operations may be performed on the original ray paths formed by the remaining original rays directed from the camera 100 toward the table 200, so that a plurality of first ray paths between the table 200 and the light source 500, i.e., a set of first ray paths between the table 200 and the light source 500, can be obtained.
(2) Step (1) is repeatedly executed for a plurality of rounds, thereby obtaining a plurality of groups of first light paths between the target object and the light source. That is, after the first resampling is performed based on a group of original light paths between the target object and the light source and a group of first light paths is correspondingly obtained, the first resampling can be performed again based on the same group of original light paths to correspondingly obtain another group of first light paths, and so on; after a plurality of rounds of first resampling, a plurality of groups of first light paths between the target object and the light source are obtained.
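The following is a minimal sketch of this first resampling, assuming each original path is stored as its list of surface points (as in the earlier sketch). `sample_nearby` is an illustrative stand-in for selecting a surface point within a marked-out area around a given point; the sampled point may coincide with the original one, and a vertex may equally be kept unchanged (as point C is in fig. 6).

```python
import random

def sample_nearby(point, radius=0.05):
    # Illustrative: jitter the point within a small cube around it.
    return tuple(c + random.uniform(-radius, radius) for c in point)

def resample_first_path(original_vertices):
    """original_vertices: [target, mid_1, ..., mid_k, light].
    Returns a first (indirect) path [light', mid_k', ..., mid_1', target]."""
    target = original_vertices[0]
    jittered = [sample_nearby(v) for v in original_vertices[1:]]
    # Reverse the direction: start from the resampled light point,
    # pass through the resampled intermediate points, end at the target.
    return list(reversed(jittered)) + [target]

def first_resampling(original_paths, rounds=1):
    # Per (2) above: each round over the same group of original vertex lists
    # yields one group of first ray paths.
    return [[resample_first_path(p) for p in original_paths]
            for _ in range(rounds)]
```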
In addition, after the original light paths between the target object and the light source are obtained, the light path between the target object and the light source can also be reconstructed in another manner based on the original light paths; that is, the light path between the target object and the light source is subjected to second resampling, so as to obtain a second light path (also called a direct light path) between the target object and the light source. The second light path takes the light source as a starting point, takes the target object as an end point, and does not pass through other objects in between.
Specifically, since the second resampling may likewise be performed for one round or repeated for multiple rounds, the second light path between the target object and the light source may be acquired in a variety of ways (a sketch follows this list):
(1) Second resampling is performed based on a group of original light paths between the target object and the light source, and a group of second light paths between the target object and the light source is correspondingly obtained, where the plurality of original light paths contained in the group of original light paths are in one-to-one correspondence with the plurality of second light paths contained in the group of second light paths. For any one of the original light paths, suppose that it starts at a certain surface point of the target object and ends at a certain surface point of the light source. For convenience of explanation, the surface point of the target object through which the original light path passes is hereinafter referred to as the first surface point, and the surface point of the light source is referred to as the second surface point. When the second resampling is performed on the original light path, a surface area may be marked out centered on the second surface point of the light source, and one surface point may be selected in that area (for example, randomly or according to some preset rule, which is not limited here); this point is called the third surface point (the third surface point may be the same surface point as the second surface point or a surface point around it, which is not limited here), and the third surface point is connected with the first surface point, so as to form a second light path from the third surface point to the first surface point. The second light path starts from the third surface point, does not pass through the surface points of the remaining objects in between, and ends at the first surface point. For the remaining original light paths, the same operation can be performed, so that a plurality of second light paths corresponding one-to-one to the original light paths, that is, a group of second light paths between the target object and the light source, can be obtained.
(2) Step (1) is repeatedly executed for multiple rounds, so as to obtain multiple groups of second light paths between the target object and the light source. That is, after second resampling is performed based on a group of original light paths between the target object and the light source and a group of second light paths is correspondingly obtained, second resampling can be performed again based on the same group of original light paths to correspondingly obtain another group of second light paths, and so on; after multiple rounds of second resampling, multiple groups of second light paths between the target object and the light source are obtained.
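A minimal sketch of the second resampling follows; it operates on a path stored as a list of surface points, and the illustrative `sample_nearby` helper is repeated here so the sketch stays self-contained.

```python
import random

def sample_nearby(point, radius=0.05):
    # Illustrative: jitter the point within a small cube around it.
    return tuple(c + random.uniform(-radius, radius) for c in point)

def resample_second_path(original_vertices):
    """original_vertices: [target, ..., light].
    Returns a second (direct) path [light', target] with no intermediate vertex."""
    target = original_vertices[0]                        # first surface point
    light_point = sample_nearby(original_vertices[-1])   # third surface point
    return [light_point, target]   # light' -> target, nothing in between

def second_resampling(original_paths, rounds=1):
    # One group of second ray paths per round, per (2) above.
    return [[resample_second_path(p) for p in original_paths]
            for _ in range(rounds)]
```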
403. Acquiring a sample pool based on the new ray path between the target object and the light source, where the sample pool is used to indicate the rays directed toward the target object, and the new ray path is formed by those rays.
Obtaining the first light path between the target object and the light source is equivalent to obtaining the first light (also called indirect light) between the target object and the light source; the first light is the light forming the first light path, and can be regarded as light that is emitted from the light source, hits other objects in between, hits the target object after being reflected, and is received by the camera after being reflected again.
Then, information related to the first light may be obtained, and this information is used to construct a first sample pool. For example, the light intensity value of the first light after hitting the remaining objects and before hitting the target object (e.g., the brightness and color of the first light) may be obtained and stored in the first sample pool. In this way, the first sample pool can be used to indicate the first light directed to the target object.
Specifically, the first sample pool may be obtained in a number of ways:
(1) If only one group of first light paths between the target object and the light source is obtained, the group of first light paths is formed by one group of first light rays, since each first light path is formed by one first light ray. Because the group of first light rays includes a plurality of first light rays, information related to these first light rays can be used to construct one first sample pool (for example, the light intensity values of the first light rays after hitting the remaining objects and before hitting the target object are stored in the first sample pool), and the first sample pool can be used to indicate the group of first light rays directed to the target object. Continuing the earlier example, after a group of first light paths between the table and the light source is obtained, for the first light path formed by the first new path 604, the second new path 605, and the third new path 606, the light intensity value of the first light forming this path after hitting the lamp housing and the wall and before hitting the table can be obtained and stored in the first sample pool. Likewise, for the remaining first light paths in the group, the light intensity values of the remaining first light rays after hitting the remaining objects and before hitting the table can be acquired and stored in the first sample pool. In this way, a complete first sample pool is obtained, which is used to indicate the group of first light rays directed to the table.
Further, the phenomenon generated by the first light after hitting the target object can be controlled to be one of the following: diffuse reflection, specular reflection, or specular highlight reflection; accordingly, the rendering effect corresponding to the first sample pool can be presented as one of: diffuse reflection, specular reflection, or specular highlight reflection. For example, if the phenomenon generated by the first light after hitting the target object is diffuse reflection, the rendering effect corresponding to the first sample pool is diffuse reflection, and so on.
(2) If multiple groups of first light paths between the target object and the light source are obtained, the multiple groups of first light paths are formed by multiple groups of first light rays (in one-to-one correspondence with the groups of first light paths). For any one group of the multiple groups of first light rays, since the group includes a plurality of first light rays, information related to these first light rays can be used to construct one first sample pool (for example, the light intensity values of the first light rays after hitting the remaining objects and before hitting the target object are stored in the first sample pool). The same operation can be performed for each remaining group of first light rays, so that a plurality of first sample pools are eventually obtained, where each first sample pool is used to indicate one group of first light rays directed to the target object.
Further, for any one group of first light rays, the phenomenon generated by that group after hitting the target object can be controlled to be one of the following: diffuse reflection, specular reflection, or specular highlight reflection. It should be noted that the phenomena generated by the multiple groups of first light rays after hitting the target object may be the same or different, which is not limited here. Accordingly, for any one of the plurality of first sample pools, the rendering effect corresponding to that first sample pool can be presented as one of: diffuse reflection, specular reflection, or specular highlight reflection; the rendering effects corresponding to the plurality of first sample pools may likewise be the same or different, which is not limited here. For example, one typical sample pool configuration may be: two first sample pools are constructed, where the rendering effect corresponding to the first of them is specular reflection and the rendering effect corresponding to the second is specular highlight reflection.
Still further, in either case (1) or case (2), any one first sample pool (which may also be understood as a first sample image) contains the light intensity values of one group of first light rays after hitting the remaining objects and before hitting the target object. The size of the first sample pool (i.e., the resolution of the first sample image) is the number of these light intensity values, that is, the number of first light rays indicated by the first sample pool. It is noted that the size of the first sample pool is smaller than the resolution of the screen (i.e., the number of pixels displayed by the screen) used to display the final image of the target object. A sketch of this construction follows.
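The following is a minimal sketch of building such a first sample pool as a small image of radiance values. `radiance_before_target(path)` is a hypothetical shading helper returning the light intensity value (an RGB triple) of a first light ray just before it hits the target object; it is not defined by this application.

```python
import numpy as np

def build_first_sample_pool(first_paths, pool_width, pool_height,
                            radiance_before_target):
    """Store one light intensity value per first ray; the pool is a small
    image whose resolution is below the screen resolution."""
    assert len(first_paths) == pool_width * pool_height
    pool = np.zeros((pool_height, pool_width, 3), dtype=np.float32)  # RGB
    for i, path in enumerate(first_paths):
        pool[i // pool_width, i % pool_width] = radiance_before_target(path)
    return pool
```

A second sample pool can be built in the same way from the second rays' intensity values before they hit the target object, except that its size equals the screen resolution, as described below.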
In addition, obtaining the second light path between the target object and the light source is equivalent to obtaining the second light (also referred to as direct light) between the target object and the light source, where the second light is the light forming the second light path, and may be regarded as the light emitted from the light source, not hitting other objects in the middle, directly hitting the target object, and being received by the camera after being reflected.
Then, information related to the second light may be acquired, and this information is used to construct the second sample pool. For example, the light intensity value of the second light before hitting the target object (e.g., the brightness and color of the second light) may be obtained and stored in the second sample pool. In this way, the second sample pool can be used to indicate the second light directed to the target object.
Specifically, the second sample pool may be obtained in a variety of ways:
(1) If only one group of second light paths between the target object and the light source is obtained, the group of second light paths is formed by one group of second light rays, since each second light path is formed by one second light ray. Because the group of second light rays includes a plurality of second light rays, information related to these second light rays can be used to construct one second sample pool (for example, the light intensity values of the second light rays before hitting the target object are stored in the second sample pool), and the second sample pool can be used to indicate the group of second light rays directed to the target object.
(2) If multiple groups of second light paths between the target object and the light source are obtained, the multiple groups of second light paths are formed by multiple groups of second light rays (in one-to-one correspondence with the groups of second light paths). For any one group of the multiple groups of second light rays, since the group includes a plurality of second light rays, information related to these second light rays can be used to construct one second sample pool (for example, the light intensity values of the second light rays before hitting the target object are stored in the second sample pool). The same operation can be performed for each remaining group of second light rays, so that a plurality of second sample pools are eventually obtained, where each second sample pool is used to indicate one group of second light rays directed to the target object.
Further, in either case (1) or case (2), any one second sample pool (which may also be understood as a second sample image) contains the light intensity values of one group of second light rays before hitting the target object. The size of the second sample pool (i.e., the resolution of the second sample image) is the number of these light intensity values, that is, the number of second light rays indicated by the second sample pool. It is noted that the size of the second sample pool is equal to the resolution of the screen (i.e., the number of pixels displayed by the screen) used to display the final image of the target object.
404. Rendering the target object based on the sample pool to obtain an image of the target object.
After the sample pools are obtained, the target object can be rendered using the sample pools, so as to obtain an image of the target object. This image of the target object is the image to be sent for display, and can be displayed on the screen for the user to view and use.
Specifically, the image of the target object may be acquired through the following steps (a sketch follows this list):
(1) The target object is rendered based on the first sample pool to obtain an indirect illumination image of the target object. It should be noted that, since at least one first sample pool may be obtained (only one in case (1) above, or a plurality in case (2) above), at least one indirect illumination image of the target object can be obtained by shading each first sample pool separately (for example, by performing an illumination operation): one indirect illumination image corresponding to case (1), or a plurality of indirect illumination images corresponding to case (2).
(2) The target object is rendered based on the second sample pool to obtain a direct illumination image of the target object. Likewise, since at least one second sample pool may be obtained (only one in case (1) above, or a plurality in case (2) above), at least one direct illumination image of the target object can be obtained by shading each second sample pool separately (for example, by performing an illumination operation): one direct illumination image corresponding to case (1), or a plurality of direct illumination images corresponding to case (2).
(3) The indirect illumination image and the direct illumination image are fused to obtain the image of the target object. After the at least one indirect illumination image and the at least one direct illumination image of the target object are obtained, these images can be regarded as a plurality of layers, so the at least one indirect illumination image and the at least one direct illumination image of the target object can be superimposed to obtain the final image of the target object to be sent for display.
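A minimal sketch of this shading-and-fusion step follows. `shade(pool)` is a hypothetical stand-in for the illumination operation applied to a sample pool, and the layers are assumed here to already share one resolution (the resolution mismatch between the pools is handled by the super-resolution processing described below).

```python
import numpy as np

def compose_target_image(first_pools, second_pools, shade):
    """Shade every sample pool into an illumination image (layer),
    then superimpose all layers into the image to be sent for display."""
    indirect_layers = [shade(p) for p in first_pools]   # >= 1 indirect images
    direct_layers = [shade(p) for p in second_pools]    # >= 1 direct images
    image = np.zeros_like(direct_layers[0], dtype=np.float32)
    for layer in direct_layers + indirect_layers:
        image += layer                                  # layer superposition
    return image
```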
Further, in order to acquire a higher-quality image of the target object, each layer used to synthesize the image of the target object may be processed in advance as follows (a sketch of this processing follows this list):
(1) The noise-free component of the material information of the target object is acquired. Before each layer is processed, the noise-free component of the material information of the target object may be obtained from the aforementioned table.
(2) First processing is performed on the indirect illumination image of the target object based on the noise-free component of the material information of the target object to obtain a processed indirect illumination image, where the first processing includes demodulation processing, noise reduction processing, and modulation processing. After the noise-free component of the material information of the target object is obtained, the at least one indirect illumination image of the target object may be divided by the noise-free component to obtain at least one demodulated indirect illumination image of the target object. The at least one demodulated indirect illumination image may then be subjected to noise reduction to obtain at least one noise-reduced indirect illumination image of the target object. Finally, the at least one noise-reduced indirect illumination image may be multiplied by the noise-free component to obtain at least one modulated indirect illumination image of the target object.
Since the size of the at least one first sample pool is smaller than the resolution of the screen, the resolution of the at least one modulated indirect illumination image of the target object is also smaller than the resolution of the screen. In order to fuse the layers better, after the at least one modulated indirect illumination image of the target object is obtained, super-resolution processing may further be performed on it to obtain the at least one processed indirect illumination image of the target object; at this point, the resolution of the at least one processed indirect illumination image of the target object is equal to the resolution of the screen.
(3) Second processing is performed on the direct illumination image of the target object based on the noise-free component of the material information of the target object to obtain a processed direct illumination image, where the second processing includes demodulation processing, noise reduction processing, and modulation processing. After the noise-free component of the material information of the target object is obtained, the at least one direct illumination image of the target object may be divided by the noise-free component to obtain at least one demodulated direct illumination image of the target object. The at least one demodulated direct illumination image may then be subjected to noise reduction to obtain at least one noise-reduced direct illumination image. Finally, the at least one noise-reduced direct illumination image may be multiplied by the noise-free component to obtain at least one modulated direct illumination image of the target object, i.e., the at least one processed direct illumination image of the target object.
(4) The processed indirect illumination image and the processed direct illumination image are fused to obtain the image of the target object. After the at least one processed indirect illumination image and the at least one processed direct illumination image of the target object are obtained, they can be superimposed, so that the final image of the target object to be sent for display is obtained.
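The demodulate-denoise-modulate sequence in (2) and (3) can be sketched as follows. `denoise` is a placeholder for any image denoiser; the 3×3 box filter shown is only an illustrative stand-in, not the denoiser used by this application.

```python
import numpy as np

def demodulate_denoise_modulate(illum_image, noise_free_component, denoise,
                                eps=1e-6):
    """First/second processing: divide out the noise-free material component,
    denoise the remaining pure-lighting signal, then multiply it back."""
    albedo = np.maximum(noise_free_component, eps)  # guard against division by zero
    demodulated = illum_image / albedo              # demodulation processing
    denoised = denoise(demodulated)                 # noise reduction processing
    return denoised * albedo                        # modulation processing

def box_denoise(img):
    """Illustrative placeholder denoiser: a 3x3 box filter."""
    pad = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    h, w = img.shape[:2]
    return sum(pad[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0
```

Dividing by the noise-free component first means the denoiser operates on lighting alone, so material detail is not blurred away and is restored exactly by the final multiplication.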
It should be noted that, in this embodiment, the indirect illumination image is the aforementioned first illumination image, the direct illumination image is the second illumination image, the processed indirect illumination image is the processed first illumination image, and the processed direct illumination image is the processed second illumination image.
To further aid understanding of the image rendering method provided in the embodiments of the present application, a specific application example is provided below. Fig. 7 is an application schematic diagram of an image rendering method according to an embodiment of the present application. As shown in fig. 7, this application example is mainly used to display a virtual vehicle exhibition to a user, and includes the following steps:
(1) The ray paths between the objects within the virtual vehicle exhibition (e.g., vehicles, the floor, etc.) and the light source are resampled, resulting in indirect ray paths and direct ray paths.
(2) A first type of sample pool is constructed based on the direct ray paths; it comprises 1 sample pool whose corresponding rendering effect is direct illumination and whose size equals the size (resolution) of the screen. For example, if the resolution of the screen is 1920 × 1080, the size of this sample pool is 1920 × 1080; the screen is used to display the image of the virtual vehicle exhibition. A second type of sample pool is constructed based on the indirect ray paths; it comprises n sample pools, where the rendering effect corresponding to the 1st sample pool is specular reflection, the rendering effect corresponding to the 2nd sample pool is specular highlight reflection, the rendering effect corresponding to the 3rd sample pool is diffuse reflection, and the rendering effect corresponding to the nth sample pool is, for example, specular reflection. The size of each of the n sample pools is 1/4 of the screen size in each dimension, i.e., 480 × 270.
(3) Rendering is performed based on the first type of sample pool to obtain a direct illumination image, which is used to present the objects directly visible to the camera.
(4) Rendering is performed based on the second type of sample pool to obtain an indirect illumination image, which is used to further present effects such as specular reflection and specular highlight reflection on the basis of the visible objects.
(5) The image obtained in step (3) is noisy and needs noise reduction processing. The noise reduction processing mainly includes three steps, as shown in fig. 8 (fig. 8 is a schematic diagram of the noise reduction processing provided in an embodiment of the present application): a. calculating the noise-free component of the material information of the objects within the virtual vehicle exhibition; b. dividing the image obtained in step (3) by the noise-free component; c. denoising the result of the division, and multiplying the denoised result by the noise-free component to obtain the denoised direct illumination image.
(6) The image obtained in step (4) is also noisy and can be processed according to the noise reduction flow in step (5) to obtain the denoised indirect illumination image.
(7) The resolution of the denoised direct illumination image obtained in step (5) is identical to that of the screen, while the resolution of the denoised indirect illumination image obtained in step (6) is 1/4 of the screen resolution in each dimension. Therefore, super-resolution processing can be performed on the denoised indirect illumination image, and finally the super-resolved indirect illumination image and the denoised direct illumination image are superimposed to obtain the image of the virtual vehicle exhibition, which is displayed on the screen (a sketch of this final step follows).
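A minimal sketch of step (7) follows, using the example resolutions above. Nearest-neighbour upsampling stands in for the (unspecified) super-resolution method of this application.

```python
import numpy as np

def upscale_nearest(img, factor=4):
    """Illustrative 4x-per-dimension upsampling (480 x 270 -> 1920 x 1080)."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def final_frame(direct_1080p, indirect_270p):
    indirect_1080p = upscale_nearest(indirect_270p, factor=4)
    assert indirect_1080p.shape == direct_1080p.shape
    return direct_1080p + indirect_1080p   # superimpose the two layers
```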
In addition, the image rendering method provided in the embodiments of the present application may be compared with image rendering methods provided in the related art. The comparison results are shown in fig. 9 to fig. 14, each of which is a schematic diagram of a comparison result provided in an embodiment of the present application.
As can be seen from fig. 9, in the image rendered by the related art, the vehicle body shows no ground reflection; as can be seen from fig. 10, in the image rendered by the multi-sample-pool technique of the embodiment of the present application, the vehicle body presents a clear ground reflection.
As can be seen from fig. 11, after the image rendered by the related art is noise-reduced, the details of the saddle are lost; as can be seen from fig. 12, the details of the saddle are better preserved after noise reduction according to the embodiment of the present application.
Based on fig. 13, it can be seen that after the related art performs super-resolution processing on the image, the obtained image has obvious aliasing; based on fig. 14, in the embodiment of the present application, by performing super-resolution processing separately on the different illumination effects (direct illumination and indirect illumination), aliasing can be significantly reduced, and the obtained image is smooth and clear.
In this embodiment of the present application, after the original light path between the target object and the light source is obtained, the light path between the target object and the light source can be resampled based on the original light path to obtain a new light path, where the original light path is obtained by ray tracing and takes the target object as a starting point and the light source as an end point, while the new light path takes the light source as a starting point and the target object as an end point. A sample pool can then be acquired based on the new light path; the sample pool is used to indicate the light directed to the target object, and the new light path is formed by that light. Finally, the target object can be rendered based on the sample pool to obtain an image of the target object. Based on the foregoing process, the embodiment of the present application provides a new light path resampling manner that is opposite in direction to the effective light path in ray tracing (taking the light source as a starting point and the target object as an end point), so that the light path is resampled purposefully and the success rate of sampling effective new light paths is improved; that is, effective new light paths between the target object and the light source can be easily resampled. Therefore, no matter whether the phenomenon generated by the light forming the new light path when it hits the target object is direct illumination, diffuse reflection, specular reflection, or specular highlight reflection, a sufficient number of high-quality samples can be successfully obtained, so that the rendering effect on the target object is good enough (i.e., the rendered image of the target object has sufficiently high quality).
Further, embodiments of the present application provide a multi-sample-pool-based ray tracing technique, where the multiple sample pools refer to at least one first sample pool and at least one second sample pool. In a common configuration, two first sample pools and one second sample pool are constructed: the rendering effects corresponding to the two first sample pools are specular reflection and specular highlight reflection respectively, and the rendering effect corresponding to the second sample pool is direct illumination. In this way, even if the target object is made of a special material, the image of the target object rendered based on the plurality of sample pools can present not only the direct illumination effect but also indirect illumination effects such as the specular reflection effect and the specular highlight reflection effect, so the obtained image is more realistic and refined, which improves user experience. A sketch of such a configuration follows.
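The common configuration just described can be written down as follows; the field names and example sizes are illustrative only (the sizes echo the application example above).

```python
SCREEN = (1920, 1080)

POOL_CONFIG = [
    # One second sample pool: direct illumination, sized to the screen.
    {"kind": "second", "effect": "direct_illumination", "size": (1920, 1080)},
    # Two first sample pools: indirect effects, smaller than the screen.
    {"kind": "first", "effect": "specular_reflection", "size": (480, 270)},
    {"kind": "first", "effect": "specular_highlight_reflection", "size": (480, 270)},
]
```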
Furthermore, in the embodiment of the present application, the size of the first sample pool is smaller than the resolution of the screen, so the efficiency of obtaining the first sample pool can be improved; and in the process of rendering the target object based on the multi-sample-pool technique, super-resolution processing can be applied in coordination, which reduces the required computation to a certain extent, helps accelerate the multi-sample-pool-based ray tracing, and further improves user experience.
Furthermore, the embodiment of the present application provides a new image noise reduction manner, which realizes demodulation, noise reduction, and modulation of each layer based on the noise-free component of the material information of the target object, so that each layer preserves the details of the target object as much as possible. The image of the target object obtained by superimposing the layers is therefore more realistic and clear, which facilitates viewing by the user and further improves user experience.
The foregoing is a detailed description of the image rendering method provided in the embodiments of the present application; the image rendering apparatus provided in the embodiments of the present application is described below. Fig. 15 is a schematic structural diagram of an image rendering apparatus according to an embodiment of the present application. As shown in fig. 15, the apparatus includes:
the resampling module 1501 is configured to resample a light path between a target object and a light source based on an original light path between the target object and the light source to obtain a new light path, where the original light path is obtained by light tracing, the original light path takes the target object as a starting point, takes the light source as an ending point, and the new light path takes the light source as a starting point and takes the target object as an ending point;
a first acquisition module 1502, configured to acquire, based on the new light path, a sample pool used to indicate the light directed to the target object, the new light path being formed by that light;
and a rendering module 1503, configured to render the target object based on the sample pool, so as to obtain an image of the target object.
In this embodiment of the present application, after the original light path between the target object and the light source is obtained, the light path between the target object and the light source can be resampled based on the original light path to obtain a new light path, where the original light path is obtained by ray tracing and takes the target object as a starting point and the light source as an end point, while the new light path takes the light source as a starting point and the target object as an end point. A sample pool can then be acquired based on the new light path; the sample pool is used to indicate the light directed to the target object, and the new light path is formed by that light. Finally, the target object can be rendered based on the sample pool to obtain an image of the target object. Based on the foregoing, the embodiment of the present application provides a new light path resampling manner that is opposite in direction to the effective light path in ray tracing (taking the light source as a starting point and the target object as an end point), and resamples the light path purposefully, so that the success rate of sampling effective new light paths can be improved; that is, effective new light paths between the target object and the light source can be easily resampled. Therefore, no matter whether the phenomenon generated by the light forming the new light path when it hits the target object is direct illumination, diffuse reflection, specular reflection, or specular highlight reflection, a sufficient number of high-quality samples can be successfully obtained, so that the rendering effect on the target object is good enough.
In one possible implementation, the new light path includes a first light path, which starts from the light source, ends at the target object, and passes through the remaining objects, and a second light path, which starts from the light source, ends at the target object, and does not pass through the remaining objects; the sample pool includes a first sample pool and a second sample pool, where the first sample pool is used to indicate the first light directed to the target object, the first light path is formed by the first light, the second sample pool is used to indicate the second light directed to the target object, and the second light path is formed by the second light.
In one possible implementation, the phenomenon generated by the first light impinging on the target object is specular reflection or specular highlight reflection.
In one possible implementation, the number of first light rays indicated by the first sample pool is less than the number of pixels displayed by the screen, and the number of second light rays indicated by the second sample pool is equal to the number of pixels displayed by the screen, where the screen is used to display the image of the target object.
In one possible implementation, the rendering module 1503 is configured to: render the target object based on the first sample pool to obtain a first illumination image of the target object; render the target object based on the second sample pool to obtain a second illumination image of the target object; and fuse the first illumination image and the second illumination image to obtain the image of the target object.
In one possible implementation, the rendering module 1503 is further configured to: acquire the noise-free component of the material information of the target object; perform first processing on the first illumination image based on the noise-free component to obtain a processed first illumination image, where the first processing includes demodulation processing, noise reduction processing, and modulation processing; and perform second processing on the second illumination image based on the noise-free component to obtain a processed second illumination image, where the second processing includes demodulation processing, noise reduction processing, and modulation processing. The rendering module 1503 is configured to fuse the processed first illumination image and the processed second illumination image to obtain the image of the target object.
In one possible implementation, the first processing further includes super-resolution processing for making the resolution of the processed first illumination image equal to the resolution of the screen.
It should be noted that, because the content of information interaction and execution process between the modules/units of the above-mentioned apparatus is based on the same concept as the method embodiment of the present application, the technical effects brought by the content are the same as the method embodiment of the present application, and specific content may refer to the description in the foregoing illustrated method embodiment of the present application, which is not repeated herein.
Embodiments of the present application also relate to circuitry, which includes processing circuitry configured to perform the steps of the embodiment shown in fig. 4 above.
Embodiments of the present application also relate to a chip system including a processor for invoking a computer program or computer instructions stored in a memory to cause the processor to perform the steps of the embodiments described above in connection with fig. 4.
In one possible implementation, the processor is coupled to the memory through an interface.
In one possible implementation, the system on a chip further includes a memory having a computer program or computer instructions stored therein.
The present embodiment also relates to a computer storage medium in which a program for performing signal processing is stored, which when run on a computer, causes the computer to perform the steps as in the embodiment shown in fig. 4 described above.
Embodiments of the present application also relate to a computer program product having stored thereon instructions that, when executed by a computer, cause the computer to perform the steps of the embodiment as described in the previous fig. 4.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.

Claims (19)

1. An image rendering method, the method comprising:
resampling a light path between a target object and a light source based on an original light path between the target object and the light source to obtain a new light path, wherein the original light path is obtained through light ray tracing, the original light path takes the target object as a starting point, takes the light source as an ending point, and the new light path takes the light source as a starting point and takes the target object as an ending point;
acquiring a sample cell based on the new light path, wherein the sample cell is used for indicating light rays emitted to the target object, and the new light path is formed by the light rays;
and rendering the target object based on the sample pool to obtain an image of the target object.
2. The method of claim 1, wherein the new light path comprises a first light path starting from the light source and ending with the target object and passing through the remaining objects, and a second light path starting from the light source and ending with the target object and not passing through the remaining objects;
The sample cell includes a first sample cell for indicating a first light ray directed toward the target object, the first light ray path being formed by the first light ray, and a second sample cell for indicating a second light ray directed toward the target object, the second light ray path being formed by the second light ray.
3. The method of claim 2, wherein the phenomenon generated by the first light striking the target object comprises at least one of: specular reflection or specular highlight reflection.
4. A method according to claim 2 or 3, wherein the first sample cell indicates a first number of light rays smaller than a number of pixels displayed by a screen, and the second sample cell indicates a second number of light rays equal to the number of pixels displayed by the screen, the screen being used to display an image of the target object.
5. The method according to any one of claims 2 to 4, wherein rendering the target object based on the sample pool to obtain an image of the target object comprises:
rendering the target object based on the first sample pool to obtain a first illumination image of the target object;
Rendering the target object based on the second sample pool to obtain a second illumination image of the target object;
and fusing the first illumination image and the second illumination image to obtain an image of the target object.
6. The method of claim 5, wherein prior to fusing the first illumination image and the second illumination image to obtain the image of the target object, the method further comprises:
acquiring a noise-free component of material information of a target object;
performing first processing on the first illumination image based on the noise-free component to obtain a processed first illumination image, wherein the first processing comprises demodulation processing, noise reduction processing and modulation processing;
performing second processing on the second illumination image based on the noise-free component to obtain a processed second illumination image, wherein the second processing comprises demodulation processing, noise reduction processing and modulation processing;
the fusing the first illumination image and the second illumination image to obtain an image of the target object includes:
and fusing the processed first illumination image and the processed second illumination image to obtain an image of the target object.
7. The method of claim 6, wherein the first processing further comprises super-resolution processing for making the resolution of the processed first illumination image equal to the resolution of the screen.
8. An image rendering apparatus, the apparatus comprising:
the resampling module is used for resampling the light path between the target object and the light source based on the original light path between the target object and the light source to obtain a new light path, wherein the original light path is obtained through light ray tracing, the original light path takes the target object as a starting point, takes the light source as an end point, and the new light path takes the light source as a starting point and takes the target object as an end point;
a first acquisition module configured to acquire a sample cell for indicating light directed to the target object based on the new light path, the new light path being formed by the light;
and the rendering module is used for rendering the target object based on the sample pool so as to obtain an image of the target object.
9. The apparatus of claim 8, wherein the new light path comprises a first light path starting from the light source and ending with the target object and passing through the remaining objects, and a second light path starting from the light source and ending with the target object and not passing through the remaining objects;
The sample cell includes a first sample cell for indicating a first light ray directed toward the target object, the first light ray path being formed by the first light ray, and a second sample cell for indicating a second light ray directed toward the target object, the second light ray path being formed by the second light ray.
10. The apparatus of claim 9, wherein the phenomenon generated by the first light impinging on the target object is specular reflection or specular highlight reflection.
11. The apparatus of claim 9 or 10, wherein the first sample cell indicates a first number of light rays that is less than a number of pixels displayed by a screen, and the second sample cell indicates a second number of light rays that is equal to the number of pixels displayed by the screen, the screen being configured to display an image of the target object.
12. The apparatus according to any one of claims 9 to 11, wherein the rendering module is configured to:
render the target object based on the first sample pool to obtain a first illumination image of the target object;
render the target object based on the second sample pool to obtain a second illumination image of the target object;
and fuse the first illumination image and the second illumination image to obtain the image of the target object.
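If the two illumination images are treated as additive lighting components of equal resolution (an assumption for illustration; the claims do not fix the fusion operator), the fusion of claim 12 reduces to a per-pixel combination:

    import numpy as np

    def fuse(first_illum, second_illum):
        # Per-pixel sum of the two lighting components, clamped at zero.
        assert first_illum.shape == second_illum.shape
        return np.clip(first_illum + second_illum, 0.0, None)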
13. The apparatus of claim 12, wherein the rendering module is further configured to:
acquire a noise-free component of material information of the target object;
perform first processing on the first illumination image based on the noise-free component to obtain a processed first illumination image, wherein the first processing comprises demodulation processing, noise reduction processing and modulation processing;
and perform second processing on the second illumination image based on the noise-free component to obtain a processed second illumination image, wherein the second processing comprises demodulation processing, noise reduction processing and modulation processing;
the rendering module is configured to fuse the processed first illumination image and the processed second illumination image to obtain the image of the target object.
14. The apparatus of claim 13, wherein the first processing further comprises super-resolution processing for making the resolution of the processed first illumination image equal to the resolution of the screen.
15. An electronic device, comprising a memory and a processor, wherein the memory stores code and the processor is configured to execute the code, and when the code is executed, the electronic device performs the method of any one of claims 1 to 7.
16. Circuitry, comprising processing circuitry configured to perform the method of any one of claims 1 to 7.
17. A chip system, comprising a processor, wherein the processor is configured to invoke a computer program or computer instructions stored in a memory, to cause the processor to perform the method of any one of claims 1 to 7.
18. A computer storage medium storing one or more instructions which, when executed by one or more computers, cause the one or more computers to implement the method of any one of claims 1 to 7.
19. A computer program product storing instructions that, when executed by a computer, cause the computer to implement the method of any one of claims 1 to 7.
CN202210752953.8A 2022-06-29 2022-06-29 Image rendering method and related equipment thereof Pending CN117351134A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210752953.8A CN117351134A (en) 2022-06-29 2022-06-29 Image rendering method and related equipment thereof
PCT/CN2023/103064 WO2024002130A1 (en) 2022-06-29 2023-06-28 Image rendering method and related device thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210752953.8A CN117351134A (en) 2022-06-29 2022-06-29 Image rendering method and related equipment thereof

Publications (1)

Publication Number Publication Date
CN117351134A true CN117351134A (en) 2024-01-05

Family

ID=89367832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210752953.8A Pending CN117351134A (en) 2022-06-29 2022-06-29 Image rendering method and related equipment thereof

Country Status (2)

Country Link
CN (1) CN117351134A (en)
WO (1) WO2024002130A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8243070B1 (en) * 2008-04-18 2012-08-14 Adobe Systems Incorporated Triangulation for accelerated rendering of polygons
CN107067455B (en) * 2017-04-18 2019-11-19 腾讯科技(深圳)有限公司 A kind of method and apparatus of real-time rendering
CN110599579B (en) * 2019-09-20 2023-02-24 山东师范大学 Photon resampling-based random asymptotic photon mapping image rendering method and system
CN112396684A (en) * 2020-11-13 2021-02-23 贝壳技术有限公司 Ray tracing method, ray tracing device and machine-readable storage medium
CN114549730A (en) * 2020-11-27 2022-05-27 华为技术有限公司 Light source sampling weight determination method for multi-light source scene rendering and related equipment
CN113298925B (en) * 2021-04-14 2023-07-11 江苏理工学院 Dynamic scene rendering acceleration method based on ray path multiplexing

Also Published As

Publication number Publication date
WO2024002130A1 (en) 2024-01-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination