CN114820910A - Rendering method and device - Google Patents

Rendering method and device

Info

Publication number
CN114820910A
CN114820910A
Authority
CN
China
Prior art keywords
rendered
rendering
patches
content
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110080552.8A
Other languages
Chinese (zh)
Inventor
余洲 (Yu Zhou)
孙涛 (Sun Tao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Cloud Computing Technologies Co Ltd
Original Assignee
Huawei Cloud Computing Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Cloud Computing Technologies Co Ltd filed Critical Huawei Cloud Computing Technologies Co Ltd
Priority to CN202110080552.8A priority Critical patent/CN114820910A/en
Priority to PCT/CN2021/139426 priority patent/WO2022156451A1/en
Publication of CN114820910A publication Critical patent/CN114820910A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/06 - Ray-tracing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/451 - Execution arrangements for user interfaces
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 - General purpose image data processing
    • G06T 1/20 - Processor architectures; Processor configuration, e.g. pipelining
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality

Abstract

The present application provides a rendering method and apparatus. After receiving an application's content to be rendered, the method obtains a first set of patches to be rendered and a second set of patches to be rendered from the content to be rendered, renders the first set of patches to be rendered based on a first number of tracing rays, and renders the second set of patches to be rendered based on a second number of tracing rays, where the first number of tracing rays is higher than the second number of tracing rays, and obtains a rendering result for each of the two sets. By dividing the content to be rendered into the first and second sets of patches to be rendered and performing ray tracing on them with different numbers of tracing rays, the rendering method effectively improves the efficiency of ray tracing rendering.

Description

Rendering method and device
Technical Field
The present application relates to the field of graphics rendering, and in particular, to a rendering method and apparatus.
Background
Ray tracing has long been a fundamental technology in the field of computer graphics, and to date it remains the most important technique for producing high-quality, photorealistic images. However, it requires long computation times to complete the large number of Monte Carlo integration calculations needed to generate a final result, so it has traditionally been applied to offline rendering scenarios such as film and animation. With the upgrading of computer hardware capabilities in recent years, demand for ray tracing rendering has grown strongly as rendering fields with hard real-time requirements, such as games and virtual reality, have emerged.
The greater the number of rays emitted from a virtual viewpoint, the higher the quality of the rendered image. Completing a high-quality rendering requires emitting millions of rays from the virtual viewpoint, which is very costly in computing resources.
Therefore, how to improve the efficiency of ray tracing rendering without reducing image quality has become a key concern in the industry.
Disclosure of Invention
The application provides a rendering method which can improve the efficiency of ray tracing rendering.
A first aspect of the present application provides a rendering method, the method including: receiving content to be rendered for an application, the content to be rendered including at least one model, each model including at least one patch; obtaining a first set of patches to be rendered and a second set of patches to be rendered from the content to be rendered; rendering the first set of patches to be rendered based on a first number of tracing rays, and rendering the second set of patches to be rendered based on a second number of tracing rays, where the first number of tracing rays is higher than the second number of tracing rays; and obtaining a rendering result of the first set of patches to be rendered and a rendering result of the second set of patches to be rendered.
In some possible designs, the method further includes: obtaining rendering results corresponding to multiple pieces of historical rendering content of the application, each piece of historical rendering content including at least one model; and determining the high-attention models included in the multiple pieces of historical rendering content according to the number of occurrences of each model in them. Obtaining the first set of patches to be rendered and the second set of patches to be rendered from the content to be rendered then includes: determining the high-attention models in the content to be rendered according to the high-attention models included in the multiple pieces of historical rendering content, and determining the first set of patches to be rendered from the high-attention models in the content to be rendered.
In some possible designs, the method further includes: obtaining rendering results corresponding to multiple pieces of historical rendering content of the application, each piece of historical rendering content including at least one model; and determining the high-attention models included in the multiple pieces of historical rendering content according to the number of dwell frames of each model in them. Obtaining the first set of patches to be rendered and the second set of patches to be rendered from the content to be rendered then includes: determining the high-attention models in the content to be rendered according to the high-attention models included in the multiple pieces of historical rendering content, and determining the first set of patches to be rendered from the high-attention models in the content to be rendered.
In some possible designs, the method further includes: obtaining rendering results corresponding to multiple pieces of historical rendering content of the application, each piece of historical rendering content including at least one model; determining the high-attention models included in the multiple pieces of historical rendering content according to the number of dwell frames and/or the number of occurrences of each model in them; and determining the salient patches in those high-attention models based on a saliency detection method. Determining the first set of patches to be rendered from the high-attention models in the content to be rendered then includes: determining, according to the salient patches in the high-attention models included in the multiple pieces of historical rendering content, the salient patches in the high-attention models in the content to be rendered as the first set of patches to be rendered.
In some possible designs, the method further includes: obtaining rendering results corresponding to multiple pieces of historical rendering content of the application; determining the moving patches in the models included in the multiple pieces of historical rendering content based on a moving object detection method; determining the moving patches in the content to be rendered as the first set of patches to be rendered; and determining the second set of patches to be rendered according to the first set of patches to be rendered. Determining the moving patches in the models included in the multiple pieces of historical rendering content based on a moving object detection method includes: determining the moving pixels according to a detection threshold and the difference of the same pixel between the rendering results corresponding to two pieces of rendering content, and determining the moving patches according to the moving pixels.
In some possible designs, the number of tracing rays for each patch in the second set of patches to be rendered is determined based on the distance between that patch and the patches in the first set of patches to be rendered.
In some possible designs, the second set of patches to be rendered is determined from the content to be rendered and the first set of patches to be rendered.
A second aspect of the present application provides a rendering apparatus, the apparatus including a communication unit, a processing unit, and a storage unit. The communication unit is configured to receive the content to be rendered of an application. The storage unit is configured to store the content to be rendered. The processing unit is configured to: obtain a first set of patches to be rendered and a second set of patches to be rendered from the content to be rendered; render the first set of patches to be rendered based on a first number of tracing rays; render the second set of patches to be rendered based on a second number of tracing rays, where the first number of tracing rays is higher than the second number of tracing rays; and obtain a rendering result of the first set of patches to be rendered and a rendering result of the second set of patches to be rendered.
In some possible designs, the processing unit is further configured to: obtain rendering results corresponding to multiple pieces of historical rendering content of the application, each piece of historical rendering content including at least one model; and determine the high-attention models included in the multiple pieces of historical rendering content according to the number of occurrences of each model in them. Obtaining the first set of patches to be rendered and the second set of patches to be rendered from the content to be rendered then includes: determining the high-attention models in the content to be rendered according to the high-attention models included in the multiple pieces of historical rendering content, and determining the first set of patches to be rendered from the high-attention models in the content to be rendered.
In some possible designs, the processing unit is further configured to: obtain rendering results corresponding to multiple pieces of historical rendering content of the application, each piece of historical rendering content including at least one model; and determine the high-attention models included in the multiple pieces of historical rendering content according to the number of dwell frames of each model in them. Obtaining the first set of patches to be rendered and the second set of patches to be rendered from the content to be rendered then includes: determining the high-attention models in the content to be rendered according to the high-attention models included in the multiple pieces of historical rendering content, and determining the first set of patches to be rendered from the high-attention models in the content to be rendered.
In some possible designs, the processing unit is further configured to: obtain rendering results corresponding to multiple pieces of historical rendering content of the application, each piece of historical rendering content including at least one model; determine the high-attention models included in the multiple pieces of historical rendering content according to the number of dwell frames and/or the number of occurrences of each model in them; and determine the salient patches in those high-attention models based on a saliency detection method. Determining the first set of patches to be rendered from the high-attention models in the content to be rendered then includes: determining, according to the salient patches in the high-attention models included in the multiple pieces of historical rendering content, the salient patches in the high-attention models in the content to be rendered as the first set of patches to be rendered.
In some possible designs, the processing unit is further configured to: obtain rendering results corresponding to multiple pieces of historical rendering content of the application; determine the moving patches in the models included in the multiple pieces of historical rendering content based on a moving object detection method; determine the moving patches in the content to be rendered as the first set of patches to be rendered; and determine the second set of patches to be rendered according to the first set of patches to be rendered. Determining the moving patches based on a moving object detection method includes: determining the moving pixels according to a detection threshold and the difference of the same pixel between the rendering results corresponding to two pieces of rendering content, and determining the moving patches according to the moving pixels.
A third aspect of the present application provides a computing device cluster including at least one computing device, each computing device including a processor and a memory. The processor of the at least one computing device is configured to execute instructions stored in the memory of the at least one computing device, so that the computing device cluster performs the method of the first aspect or any possible design of the first aspect.
A fourth aspect of the present application provides a computer program product including instructions that, when executed by a computing device cluster, cause the computing device cluster to perform the method of the first aspect or any possible design of the first aspect.
A fifth aspect of the present application provides a computer-readable storage medium including computer program instructions that, when executed by a computing device cluster, perform the method of the first aspect or any possible design of the first aspect.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly described below.
Fig. 1(a) is a schematic view of a rendering structure under a single viewpoint according to an embodiment of the present application;
fig. 1(b) is a schematic diagram of patch division according to an embodiment of the present application;
fig. 1(c) is a schematic diagram of a correspondence relationship between a pixel and a patch according to an embodiment of the present disclosure;
FIG. 1(d) is a schematic diagram of a pixel projection area according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a rendering method according to an embodiment of the present application;
fig. 3 is a flowchart of a rendering method according to an embodiment of the present application;
FIG. 4 is an architecture of a rendering engine according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a computing device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a computing device cluster according to an embodiment of the present application;
fig. 7 is a schematic diagram of a connection manner of a computing device cluster according to an embodiment of the present application;
fig. 8 is a schematic diagram of a connection manner of a computing device cluster according to an embodiment of the present application.
Detailed Description
The terms "first" and "second" in the embodiments of the present application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature.
Some technical terms referred to in the embodiments of the present application will be first described.
Patch (tile): a patch refers to the smallest planar constituent unit in two-dimensional or three-dimensional space. In rendering, generally, a model in a space needs to be divided into an infinite number of minute planes. These planes, also called patches, can be any polygon, commonly triangles and quadrilaterals. The intersection of the edges of these patches is the vertex of each patch. The patches may be randomly divided according to information such as the material or color of the model. In addition, it is contemplated that each panel will have both sides, and typically only one side will be visible. Therefore, an operation of back face rejection of the wafer is required in some cases.
Number of tracing rays per pixel (samples per pixel, SPP): the number of tracing rays per pixel is the number of rays passing through each pixel, where a pixel is the smallest unit of the view plane. Generally, a screen can be regarded as an array of pixels. During ray tracing, the color of a pixel is calculated from the colors (red, green, blue, RGB) of the rays that pass through that pixel. The number of tracing rays per pixel affects the rendering result: a larger number means that more rays are cast from the viewpoint onto the models in three-dimensional space, and the more rays projected through each pixel, the more accurately the color value of each pixel can be calculated.
Ray tracing (ray tracing): ray tracing is a general technique from geometric optics that models the path taken by light by tracing rays that interact with optical surfaces. It is used in the design of optical systems such as camera lenses, microscopes, telescopes, and binoculars. When used for rendering, rays from the eye are traced instead of rays from the light source; a mathematical model of the composed scene is generated by this technique and visualized. The results are similar to those of ray casting and scanline rendering, but ray tracing has better optical effects, for example more accurate simulation of reflection and refraction, so it is often used when such high-quality results are pursued. Specifically, ray tracing first calculates the distance and direction a ray travels in a medium and the new position it reaches before the ray is absorbed by the medium or changes direction; a new ray is then generated from the new position, and the same processing finally yields the complete propagation path of the light in the medium. Since the algorithm is a complete simulation of the imaging system, complex pictures can be generated by simulation.
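For illustration only (this sketch is not part of the claimed method), the two terms above can be made concrete in a few lines of Python; all names here (scene.intersect, hit.scatter, hit.shade, camera.ray_through) are hypothetical placeholders standing in for a real intersection and shading system.

    import random

    MAX_BOUNCES = 4  # stop after a limited number of direction changes

    def trace_ray(origin, direction, scene, depth=0):
        """Follow one ray until it escapes, is absorbed, or reaches the
        bounce limit, returning an (r, g, b) radiance estimate."""
        if depth > MAX_BOUNCES:
            return (0.0, 0.0, 0.0)
        hit = scene.intersect(origin, direction)  # nearest surface hit or None
        if hit is None:
            return scene.background
        # Generate a new ray from the new position (reflection/refraction)
        # and recurse along the new propagation path.
        bounced = trace_ray(hit.position, hit.scatter(direction), scene, depth + 1)
        return hit.shade(bounced)

    def render_pixel(pixel, camera, scene, spp):
        """SPP: the pixel color is the average of the colors carried by
        all tracing rays cast through that pixel."""
        total = (0.0, 0.0, 0.0)
        for _ in range(spp):
            direction = camera.ray_through(pixel, jitter=random.random())
            color = trace_ray(camera.position, direction, scene)
            total = tuple(t + c for t, c in zip(total, color))
        return tuple(t / spp for t in total)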
Ray tracing rendering is becoming a focus of the industry as computer computing power increases and industry demands develop.
In ray tracing rendering, rays may be emitted from a viewpoint; after contacting a model in the content to be rendered, a ray may reach the light source after a limited number of refractions and reflections. For ray tracing rendering, the greater the number of rays emitted from a viewpoint, the higher the quality of the rendered image.
At present, optimization of this rendering technology mainly means optimizing the sampling method, for example Monte Carlo based per-pixel sampling, supersampling, and distributed supersampling. There are also methods that change or combine the propagation directions of the rays, such as bidirectional ray tracing and hybrid ray tracing. All of the above methods, however, sample every pixel in the same exhaustive, repeated manner. In practice, within one frame, not all the models corresponding to the pixels are of high interest to the user; during a game, for example, the user may mainly pay attention to the moving models and some vivid models in the picture.
In view of this, the present application provides an attention-based rendering method 400. The method may be performed by a rendering engine 500. Specifically, for the content to be rendered in one frame, the content to be rendered is divided into a first set of patches to be rendered and a second set of patches to be rendered based on an attention model. Ray tracing rendering with a first number of tracing rays is then performed on the first set of patches to be rendered, while ray tracing rendering with a second number of tracing rays is performed on the second set of patches to be rendered, where the first number of tracing rays is higher than the second number of tracing rays.
During the formation of one frame, the number of rays used to sample the entire view plane should not exceed the number of rays corresponding to the maximum sampling capability of the rendering engine 500. A rendering result is finally obtained from the two sampling results.
In this method, the content to be rendered is divided by establishing an attention model, and the patches in the first set of patches to be rendered are sampled with the first number of tracing rays, which guarantees their rendering quality. High-quality rendered images can thus be output without increasing, or even while reducing, the total number of sampling rays.
To make the technical solution of the present application clearer and easier to understand, before the rendering method 400 provided by the present application is described, the relationships among three basic concepts of rendering technology, namely patches, vertices, and pixels, are introduced.
Fig. 1(a) shows a rendering structure diagram under a single viewpoint. The rendering structure comprises at least a virtual viewpoint 100, a virtual view plane 200, a model 600 and a light source 302.
The virtual viewpoint 100 simulates a human eye (or eyes) in space for perceiving the three-dimensional structure, where each frame corresponds to one space. Virtual viewpoints 100 may be classified as monocular, binocular, or multi-view according to the number of viewpoints. Specifically, a binocular or multi-view viewpoint acquires two or more images from two or more different viewpoints to reconstruct the 3D structure or depth information of a target model.
The virtual view plane 200 is a simulated display screen in space. The construction of the virtual view plane 200 is mainly determined by two factors: the distance from the virtual viewpoint 100 to the virtual view plane 200, and the screen resolution.
Here, the distance from the virtual viewpoint 100 to the virtual view plane 200 is the vertical distance between them, and may be set as required.
The screen resolution refers to the number of pixels contained in the virtual view plane 200. In other words, the virtual view plane 200 includes one or more pixels. For example, in fig. 1(a), the virtual view plane 200 includes 9 pixels (3 x 3).
In some possible implementations, the results obtained through the rendering operation may be used for output. In one ray tracing pass, the rendering results of the pixels in the virtual view plane 200 together constitute one frame of picture; that is, in one ray tracing pass, one virtual view plane 200 corresponds to one frame of picture.
Corresponding to the virtual view plane is the display screen on the user side, which outputs the final result. The screen resolution of the display screen is not necessarily equal to that of the virtual view plane.
When the screen resolutions of the display screen and the virtual view plane 200 are equal, the rendering result on the virtual view plane 200 may be output to the display screen at a 1:1 ratio.
When the screen resolutions of the display screen and the virtual view plane 200 differ, the rendering result on the virtual view plane 200 is output to the display screen at a corresponding scaling ratio. Calculating this ratio is prior art and is not described here again.
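As a non-authoritative illustration (the patent leaves the ratio calculation to the prior art), the following sketch maps a virtual-view-plane result to a display of a different resolution using independent horizontal and vertical ratios and nearest-neighbor sampling; a real engine would typically apply filtering instead.

    def scale_to_display(virtual_plane, display_w, display_h):
        """virtual_plane is a 2-D list of pixel colors (rows of columns).
        Returns a display_h x display_w image sampled from it."""
        virtual_h, virtual_w = len(virtual_plane), len(virtual_plane[0])
        sx = virtual_w / display_w  # horizontal ratio (1.0 when equal)
        sy = virtual_h / display_h  # vertical ratio
        return [[virtual_plane[int(y * sy)][int(x * sx)]
                 for x in range(display_w)]
                for y in range(display_h)]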
The space may contain one or more models 600. Which models 600 are included in the rendering result corresponding to the virtual view plane 200 is determined by the relative positions between the corresponding virtual viewpoint 100 and each model 600.
Before the rendering operation is performed, the model surface typically needs to be divided into multiple patches. The sizes and shapes of the patches may or may not be uniform. The specific patch-division method is prior art and is not described here again.
Fig. 1(b) shows the patch division of one face of model 600. As shown in fig. 1(b), one face of model 600 is divided into 6 triangular patches of different sizes.
All vertices in space include not only the intersection points of the various faces of model 600 (e.g., D1, D2, D4, D6), but also the vertices of the various patches (e.g., D0, D3, D5).
Fig. 1(c) is a schematic diagram of the correspondence between a pixel and patches. The bold box in fig. 1(c) is the projection, onto model 600, of one pixel of the virtual view plane 200 in fig. 1(a). It can be seen that this pixel projection area covers partial areas of patches 1 to 6. A pixel projection area is the area enclosed by the projection of a pixel on the model.
One pixel projection area may cover a plurality of patches or may cover only one patch. When a pixel projection area covers only one patch, the entire area of the patch may be covered, or a partial area of the patch may be covered.
For example, as shown in fig. 1(d), one pixel projection area covers only a partial area of patch 6; conversely, patch 6 may cover multiple pixel projection areas simultaneously.
In summary, the surface of each model in space may be divided into a plurality of polygon patches, and all the vertices in space are the set of vertices of each polygon patch. And a pixel projection area corresponding to one pixel may cover one or more patches, and a patch may also cover one or more pixel projection areas corresponding to one or more pixels.
The light source 302 is a virtual light source arranged in the space for generating the lighting environment of the space. The light source 302 may be of any of the following types: point light source, area light source, line light source, and the like. Further, the space may include one or more light sources 302, and when there are multiple light sources 302 in the space, their types may differ.
The operations of setting the virtual viewpoint, setting the virtual view plane, building the models, and dividing the patches in the space are generally completed before the rendering operation is performed. These steps may be performed by a rendering engine 500, such as a film rendering engine or a game rendering engine, for example Unity or Unreal Engine.
After the relative positions among the virtual viewpoint, the virtual view plane, the light source, and the models are set, the rendering engine 500 may receive these relative positions and the related information. Specifically, the information includes the type and number of virtual viewpoints, the distance from the virtual view plane to the virtual viewpoint and the screen resolution, the lighting environment, the relative position between each model and the virtual viewpoint, the patch division of each model, patch number information, patch quality information, and the like. After obtaining the above information, the rendering engine 500 may further perform the rendering method 400 below.
The rendering method 400 provided by the embodiment of the present application is described below with reference to the drawings.
As shown in fig. 2, an embodiment of the rendering method 400 provided in the embodiments of the present application includes: after the rendering engine analyzes the content to be rendered, a first set of patches to be rendered with a high-attention attribute and a second set of patches to be rendered with a low-attention attribute in the content to be rendered are determined, where a patch set includes one or more patches. Ray tracing rendering with a first number of tracing rays is performed on the patches in the first set of patches to be rendered to obtain the rendering result of the first set of patches to be rendered. Meanwhile, ray tracing rendering with a second number of tracing rays is performed on the patches in the second set of patches to be rendered to obtain the rendering result of the second set of patches to be rendered.
The rendering result of the content to be rendered is then obtained from the rendering result of the first set of patches to be rendered and the rendering result of the second set of patches to be rendered.
Specifically, when the content to be rendered includes only the first set of patches to be rendered and the second set of patches to be rendered, the rendering results of the patches included in the one or more models of the content to be rendered may be obtained from the rendering results of the two sets. Further, based on the correspondence between the one or more models in the content to be rendered and the pixels in the view plane, the color value of each pixel in the view plane may be determined, thereby obtaining the rendering result of the content to be rendered.
Optionally, the content to be rendered may include other patches besides the first set of patches to be rendered and the second set of patches to be rendered. Ray tracing with some number of tracing rays may be performed on those other patches; the specific ray tracing method is prior art and is not described again. Similarly, after the rendering results of the patches included in the one or more models of the content to be rendered are obtained, the color value of each pixel in the view plane may be determined, so as to obtain the rendering result of the content to be rendered.
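The patent does not specify how the patch rendering results are combined into pixel color values. As one plausible, purely illustrative reading of the pixel-patch correspondence in fig. 1(c) and fig. 1(d), a pixel's color could be a coverage-weighted average of the results of the patches its projection area covers:

    def compose_pixel_color(coverage, patch_results):
        """coverage maps patch id -> fraction of the pixel projection area
        occupied by that patch (assumed to sum to 1); patch_results maps
        patch id -> rendered (r, g, b) of that patch."""
        r = g = b = 0.0
        for patch_id, frac in coverage.items():
            pr, pg, pb = patch_results[patch_id]
            r, g, b = r + frac * pr, g + frac * pg, b + frac * pb
        return (r, g, b)

    # e.g. a pixel whose projection covers patches 1-6 as in fig. 1(c):
    # compose_pixel_color({1: 0.25, 2: 0.10, 3: 0.20,
    #                      4: 0.15, 5: 0.10, 6: 0.20}, results)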
The rendering engine 500's analysis of the content to be rendered includes dividing the models by attention, identifying the salient patches in the high-attention models, and identifying the moving patches.
Next, from the perspective of the rendering engine 500, the rendering method 400 provided in the embodiment of the present application is described in detail.
Referring to the flow diagram of the rendering method 400 shown in fig. 3, the method includes two parts: determining the high-attention patches, and ray tracing. Determining the high-attention patches includes S100 to S110.
S100: rendering engine 500 collects historical rendering results.
The rendering engine 500 collects the rendering results within a collection period, where one frame of rendering result corresponds to one frame of picture. The rendering engine 500 therefore needs to collect all the rendering results within the collection period. Specifically, these include the rendering results generated under all or some of the virtual viewpoints within the collection period. The start and end times of the collection period can be set as required.
It should be noted that the rendering results within the collection period all belong to different processes (viewpoints) of the same application. As described above, one frame of rendering content corresponds to one or more models, so the multi-frame rendering results within the collection period also correspond to multiple pieces of rendering content, each of which includes at least one model.
For example, for one map in a game, the rendering engine 500 needs to collect the rendering results generated by multiple players running that map during the collection period, where a single player may generate multi-frame rendering results.
The above collection operation may be performed by the rendering engine 500, and the collected historical rendering results may be stored in the rendering engine 500.
S102: the rendering engine 500 counts the number of occurrences and average dwell time of each model in the historical rendering results.
After the historical rendering results are collected, the rendering engine 500 may count the occurrence times and the average staying time of the rendering results corresponding to each model within the collection time.
In some possible implementations, after the collection of the rendering results within the acquisition time is completed, the number of occurrences needs to be counted in units of models. The times indicate the frequency of rendering results corresponding to each model appearing in the collected rendering results in a frame unit. Alternatively, when the rendering result includes only the rendering result of a partial region of the model, the model may be considered not to be present in the rendering result.
In some possible implementations, after the collection of the rendering results within the acquisition time is completed, the average retention time needs to be counted by taking the model as a unit. The dwell time indicates the continuous occurrence time of the rendering results of the same model corresponding to the same virtual viewpoint. I.e. the number of consecutive dwell frames. In the continuous frames, the same model can correspond to rendering results of different models.
For example, in consecutive frames, a certain model always remains within the viewing plane, but at different angles with respect to the virtual viewpoint. In other words, the model rendering result of the model always exists in consecutive frames, and the model rendering result is not completely the same in the frames. In this case, the model may be considered to continue to appear in the rendering results.
Each model may correspond to one or more dwell times during the acquisition time. Wherein the plurality of dwell-time periods may be mutually different. The average dwell time for each model may be obtained by calculating the average of one or more dwell times for each model. The average may be an arithmetic average or a weighted average.
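A minimal sketch of the statistics in S102, assuming the engine can list, per viewpoint and in time order, the set of model ids visible in each frame (this data layout is an assumption, not the patent's interface); the arithmetic mean is used here.

    from collections import defaultdict

    def model_statistics(frames_per_viewpoint):
        """frames_per_viewpoint maps viewpoint id -> list of frames in time
        order, each frame being the set of model ids visible in it.
        Returns per-model occurrence counts and average dwell time (frames)."""
        occurrences = defaultdict(int)
        dwell_runs = defaultdict(list)  # model id -> lengths of dwell periods
        for frames in frames_per_viewpoint.values():
            run = defaultdict(int)      # current consecutive-frame run per model
            for frame in frames:
                for model in frame:
                    occurrences[model] += 1
                    run[model] += 1
                # a model absent from this frame ends its current dwell period
                for model in [m for m in run if m not in frame]:
                    dwell_runs[model].append(run.pop(model))
            for model, length in run.items():  # close runs at sequence end
                dwell_runs[model].append(length)
        avg_dwell = {m: sum(r) / len(r) for m, r in dwell_runs.items()}
        return occurrences, avg_dwell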
The above statistics of the occurrence counts and average dwell times may be computed by the rendering engine 500. The results of the statistics, as well as the models themselves, are stored in the rendering engine 500.
S104: the rendering engine 500 determines a high-attention model according to the number of occurrences of each model and the average dwell time.
The rendering engine 500 may determine the high-interest model based on one or more of the following parameters: the number of occurrences of each model and the average residence time of each model.
In some possible implementations, the high-interest model may be determined according to a threshold number of times and the number of occurrences of each model obtained in S102. Wherein, the time threshold value can be set according to the requirement. Specifically, when the number of occurrences of the model is greater than the number threshold, the model is a high-attention model.
In some possible implementations, the high-interest model may be determined based on the dwell time threshold and the average dwell time of each model obtained in S100. Wherein, the stay time threshold value can be set according to the requirement. Specifically, when the average dwell time of the model is greater than the dwell time threshold, the model is a high attention model.
In some possible implementations, the high-attention model may be determined based on a number threshold, a dwell-time threshold, and the number of occurrences and average dwell-time for each model. Specifically, the model is a high attention model when the number of occurrences of the model is greater than a number threshold and the average dwell time is greater than a dwell time threshold.
Optionally, the high-attention model may be determined according to a product of the number of occurrences of the model and the average dwell time and a frequency threshold. Wherein, the frequency-time threshold value can be set according to the requirement. Specifically, when the product of the number of occurrences of the model and the average dwell time is greater than the frequency-time threshold, the model is a high-attention model.
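The four alternatives above can be summarized in the following sketch; the threshold values are placeholders to be set as required, and the criterion names are invented for illustration.

    def is_high_attention(count, avg_dwell, criterion,
                          count_threshold=50,          # placeholder values;
                          dwell_threshold=30,          # the patent leaves all
                          count_time_threshold=2000):  # thresholds configurable
        """Apply one of the four designs above to decide whether a model
        is a high-attention model."""
        if criterion == "count":      # occurrence count alone
            return count > count_threshold
        if criterion == "dwell":      # average dwell time alone
            return avg_dwell > dwell_threshold
        if criterion == "both":       # both conditions must hold
            return count > count_threshold and avg_dwell > dwell_threshold
        if criterion == "product":    # count x dwell against a joint threshold
            return count * avg_dwell > count_time_threshold
        raise ValueError(f"unknown criterion: {criterion}")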
After it is determined which models are high-attention models, the high-attention models in the historical rendering results need to be marked.
Optionally, models that have already been marked need not be marked again when subsequent historical rendering results are marked.
In S104, the high-attention models are determined according to the number of occurrences and the average dwell time of each model in the historical rendering results. Next, the salient patches in the high-attention models need to be determined; determining the salient patches includes S106 to S110.
S106: the rendering engine 500 detects the high-attention model and determines a significant patch.
Considering that the historical rendering results comprise multi-frame rendering results, the rendering results in the historical rendering results need to be selected frame by frame, and the high-attention model rendering results in one-frame rendering results are detected. Specifically, the position of the high-attention model rendering result in the frame rendering result, that is, the pixels covered by the high-attention model rendering result on the corresponding virtual view plane, is determined. And determining the significant pixels in the high-attention model rendering result by taking the pixels as units, thereby determining the corresponding significant patches in the high-attention model.
Rendering engine 500 may determine a salient patch using a saliency detection method. The significance detection method may be a boolean-based significance detection method (BMS). Alternatively, the saliency detection method may also be a color saliency method or the like.
It should be noted that the saliency detection method used in the detection of the high attention model rendering result in S106 may be a combination of multiple methods. Specifically, the detection may be performed by using a boolean-graph-based saliency detection method and a color saliency method at the same time.
The significance detection method based on the boolean diagram will be described as an example.
Based on the Boolean map saliency detection method, the color value of each pixel in each frame of the historical rendering results is converted into a Boolean value. Specifically, the conversion is performed according to the pixel's color and a conversion threshold, where the conversion threshold can be set as required.
When the pixel color is less than the conversion threshold, the pixel's Boolean value is set to 0 (or 1); when the pixel color is greater than or equal to the conversion threshold, the Boolean value is set to 1 (or 0).
After the colors of the pixels have been converted into Boolean values, the salient pixels may be determined from the Boolean values of the pixels. Specifically, when the Boolean values of two adjacent pixels differ, the two pixels are considered part of the salient pixels, where adjacent means two pixels directly above and below, or to the left and right of, each other in the virtual view plane.
After the pixels belonging to salient patches in each frame are determined, the patches corresponding to those pixels are the potential salient patches. It should be noted that a salient patch must be part of a high-attention model.
Therefore, the salient patches can be obtained by removing, from the set of potential salient patches, those patches that are not among the patches included in the high-attention models.
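A minimal single-channel sketch of the Boolean-map step described above (full BMS aggregates Boolean maps over multiple channels and thresholds, so this illustrates only the thresholding and the adjacency rule):

    def salient_pixels(frame, conversion_threshold=128):
        """frame is a 2-D list of scalar pixel intensities. Each pixel is
        converted to a Boolean with the conversion threshold; a pixel is
        salient when its Boolean value differs from that of a pixel
        directly above/below or left/right of it."""
        h, w = len(frame), len(frame[0])
        boolmap = [[frame[y][x] >= conversion_threshold for x in range(w)]
                   for y in range(h)]
        salient = set()
        for y in range(h):
            for x in range(w):
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and boolmap[ny][nx] != boolmap[y][x]:
                        salient.add((x, y))
                        break
        return salient

The patches covered by the returned pixels form the potential salient patches; intersecting them with the patches included in the high-attention models then yields the salient patches, as described above.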
S108: the rendering engine 500 detects the historical rendering result and determines the moving patch.
From the historical rendering result obtained in step S100, a moving patch can be determined using the moving object detection method. The moving object detection method may be an inter-frame difference method. Alternatively, the moving object detection method may be a background subtraction method, an optical flow method, or the like. The following description will be made by taking the frame difference method as an example.
As described above, the history rendering result obtained in step S100 includes multi-frame rendering results corresponding to a plurality of virtual viewpoints. Wherein one virtual viewpoint may correspond to a multi-frame rendering result. When detecting a moving patch, it is necessary to detect virtual viewpoints one by one in units of virtual viewpoints.
Firstly, the multi-frame rendering results corresponding to one virtual viewpoint are arranged according to a time sequence. Specifically, the arrangement may be made in time series from far to near. Alternatively, the arrangement may be performed in time series from near to far.
Second, the number of pixels of the virtual view plane is fixed for the same virtual viewpoint. In other words, the number of pixels of each of multiple frames of pictures corresponding to the same virtual viewpoint is fixed. Therefore, the difference value of the colors of the pixels of the adjacent two frames is calculated in units of pixels.
Then, according to the difference value and the inter-frame difference threshold, whether each pixel belongs to the pixel corresponding to the moving patch can be determined. Wherein, the frame difference threshold value can be set according to the requirement. Specifically, when the difference value is smaller than the inter-frame difference threshold, the pixel in the next frame is considered not to belong to the pixel corresponding to the moving patch. And when the difference value is greater than or equal to the frame difference threshold value, the pixel in the next frame is considered to belong to the pixel corresponding to the moving patch.
And finally, according to all pixels corresponding to the rendering result of the moving patch in the next frame, determining a pixel set corresponding to the rendering result of the moving patch. And determining the moving patch according to the pixel set corresponding to the rendering result of the moving patch and the model corresponding to each pixel in the pixel set. In particular, details on how to determine the moving patch will be described later.
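A minimal sketch of the inter-frame difference step for one virtual viewpoint; the scalar pixel color and the single-patch-per-pixel mapping are simplifying assumptions for illustration (a pixel projection area may in fact cover several patches, as fig. 1(c) shows).

    def moving_patches(prev_frame, next_frame, pixel_to_patch, diff_threshold=25):
        """prev_frame/next_frame are 2-D lists of scalar pixel colors from
        two adjacent frames of the same resolution; pixel_to_patch maps an
        (x, y) pixel to the patch covering it in the later frame."""
        moving = set()
        for y, row in enumerate(next_frame):
            for x, color in enumerate(row):
                # a pixel belongs to a moving patch when its color changes
                # between adjacent frames by at least the difference threshold
                if abs(color - prev_frame[y][x]) >= diff_threshold:
                    moving.add(pixel_to_patch[(x, y)])
        return moving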
It should be noted that step S108 does not necessarily occur after step S102; it only needs to occur after the collection of the historical rendering results in step S100 is complete. In other words, step S108 may occur at any time after step S100 and before S110.
Further, since the execution of step S108 does not depend on the execution of step S104, the moving patches may include patches of low-attention models. That is, the pixels covered by the moving-patch rendering results may include pixels covered by low-attention-model rendering results.
S110: the rendering engine 500 determines a high-attention patch and a low-attention patch according to the high-attention model, the salient patch, and the moving patch.
From the high-attention model determined in step S104, the salient patch determined in step S106, and the moving patch determined in step S108, a high-attention patch can be determined. Further, low-attention patches may be determined.
Step S104 is a high-interest model determined by counting all the historical rendering results. And the salient patches in step S106 are performed frame by frame in the historical rendering results based on the determination of the high-interest model. In addition, the determination of the moving patch in step S108 is also determined on a frame-by-frame basis. The determination of the high-attention patch in step S110 is also performed on a frame-by-frame basis.
In some possible implementations, the high-attention patches may be determined from the patches included in the high-attention models. In other words, all the patches included in the high-attention models may be determined as high-attention patches.
In this class of implementations, the low-attention patches may be the patches of the content to be rendered other than the high-attention patches.
In some possible implementations, the high-attention patches may be determined from the salient patches in the high-attention models. That is, the salient patches in the high-attention models may all be determined as high-attention patches.
In this class of implementations, the low-attention patches may be the patches of the content to be rendered other than the high-attention patches.
Alternatively, the low-attention patches may be the patches included in the high-attention models other than the high-attention patches. The patches of the content to be rendered that are not included in the high-attention models need not be labeled; conventional ray tracing is performed on them in the subsequent ray tracing according to the prior art.
In some possible implementations, the moving patches may all be determined as high-attention patches.
In this class of implementations, the low-attention patches may be the patches of the content to be rendered other than the high-attention patches.
In some possible implementations, the patches among the moving patches that belong to the high-attention models may be determined as high-attention patches.
In this class of implementations, the low-attention patches may be the patches of the content to be rendered other than the high-attention patches.
Alternatively, the low-attention patches may be the patches of the content to be rendered other than the patches included in the high-attention models.
Alternatively, the low-attention patches may be the patches included in the high-attention models other than the moving patches.
Alternatively, the low-attention patches may be the moving patches other than those belonging to the high-attention models.
In some possible implementations, the salient patches among the moving patches may be determined as high-attention patches.
In this class of implementations, the low-attention patches may be the patches of the content to be rendered other than the high-attention patches.
Optionally, the low-attention patches may be one or more of the following: the moving patches other than the patches included in the high-attention models; the patches in the high-attention models other than the salient patches and the moving patches; and the moving patches that belong to the patches included in the high-attention models but do not belong to the salient patches.
After the high-attention models and the high-attention patches corresponding to each frame in the historical rendering results are determined, the high-attention patches in each model can be obtained. Further, the low-attention patches in each model can be obtained.
The above determination operation for the first set of patches to be rendered is performed by the rendering engine 500, and the information on the high-attention and low-attention patches in each model is also stored in the rendering engine 500.
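Because the alternatives in S110 are set operations over the patch sets produced in S104 to S108, they can be illustrated as follows; the set representation and the design names are assumptions for illustration only.

    def attention_split(all_patches, high_model, salient, moving, design):
        """all_patches: every patch of the content to be rendered;
        high_model: patches included in the high-attention models (S104);
        salient: salient patches (S106); moving: moving patches (S108).
        Returns (high_attention, low_attention); low-attention defaults to
        the complement, which is only one of the optional choices above."""
        if design == "model":              # all patches of high-attention models
            high = set(high_model)
        elif design == "salient":          # salient patches in those models
            high = set(salient)
        elif design == "moving":           # all moving patches
            high = set(moving)
        elif design == "moving_in_model":  # moving patches inside those models
            high = moving & high_model
        elif design == "salient_moving":   # salient patches among moving patches
            high = salient & moving
        else:
            raise ValueError(f"unknown design: {design}")
        return high, all_patches - high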
After the high-attention patches and the low-attention patches in each model of the application are determined in step S110, ray tracing can be performed on the current content to be rendered belonging to the same application. The ray tracing part includes S112 and S114.
S112: The rendering engine 500 obtains the first set of patches to be rendered and the second set of patches to be rendered in the content to be rendered according to the high-attention patches and the low-attention patches, and performs ray tracing on each.
First, the content to be rendered and the multiple pieces of rendering content within the collection period involved in step S100 belong to different processes (viewpoints) of the same application. That is, the one or more models included in the content to be rendered may partly appear in the multiple pieces of historical rendering content.
After the content to be rendered is obtained, the models in it are examined and the high-attention models in the content to be rendered are determined. Further, the first set of patches to be rendered of the content to be rendered is determined, and ray tracing rendering is performed on it based on the first number of tracing rays. Meanwhile, the second set of patches to be rendered may be determined from the low-attention patches, and ray tracing rendering may be performed on it based on the second number of tracing rays.
Specifically, the coordinates of the high-attention region formed by the high-attention patches may be determined by obtaining the coordinates of the first set of patches to be rendered in space. Further, by drawing lines from the virtual viewpoint to the high-attention region, the imaging area corresponding to the high-attention region can be determined on the virtual view plane, and high-SPP ray tracing can be performed on the pixels covered by that imaging area.
Similarly, the coordinates of the low-attention region formed by the low-attention patches may be determined by obtaining the coordinates of the second set of patches to be rendered in space. Further, by drawing lines from the virtual viewpoint to the low-attention region, the imaging area corresponding to the low-attention region can be determined on the virtual view plane, and low-SPP ray tracing can be performed on the pixels covered by that imaging area.
Alternatively, ray tracing counted per patch (number of tracing rays per patch, SPM) may be performed directly on the first and second sets of patches to be rendered in space, with a high SPM for the first set.
Performing ray tracing with a certain SPP on the pixels of the virtual view plane is prior art and is not described here again. Performing ray tracing with a certain SPM on each patch in space is described below as an example.
First, whether the rays emitted from a viewpoint are counted in SPM or in SPP, the number of rays that a single viewpoint can trace simultaneously has an upper limit imposed by the hardware. In other words, when ray tracing is performed on the first and second sets of patches to be rendered in space, the total number of rays has a certain upper limit.
In some possible implementations, the SPM used for ray tracing the first set of patches to be rendered is greater than the SPM used for ray tracing the second set of patches to be rendered. Optionally, the same number of tracing rays may be used for every patch in the first set of patches to be rendered.
In some possible implementations, the content to be rendered may be divided into two parts according to the high-attention patches: a high-attention region and a low-attention region. A higher number of rays is allocated to the high-attention region as a whole for ray tracing, and accordingly a lower number of rays is allocated to the low-attention region.
In both of these implementations, the number of rays emitted from the viewpoint may be smaller than the upper limit of the number of rays that can be emitted, so ray tracing efficiency can be further improved.
In some possible implementations, smooth ray tracing may be performed on the second set of patches to be rendered or on the low-attention region. Smooth ray tracing means that the number of tracing rays for a patch or region may be inversely proportional to its distance from the first set of patches to be rendered or from the high-attention region. Specifically, a higher number of tracing rays is used for regions near the first set of patches to be rendered or the high-attention region than for regions far from them.
In this implementation, smooth ray tracing makes the transition from the rendering result of the high-attention region to that of the low-attention region more natural, further improving the image quality of the rendering result.
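A sketch of one way the per-patch ray counts could be assigned under the constraints above; the fall-off function, the constants, and the distance metric are illustrative assumptions (the patent only requires that the count decrease with distance and that the total stay within the hardware limit).

    def allocate_tracing_rays(first_set, second_set, distance_to_first,
                              high_spm=64, base_spm=4, total_budget=None):
        """Assign a per-patch number of tracing rays (SPM). Patches in the
        first set get the high count; patches in the second set get a count
        that falls off with distance from the first set (smooth tracing).
        distance_to_first maps a second-set patch to its distance from the
        nearest first-set patch."""
        spm = {p: high_spm for p in first_set}
        for p in second_set:
            # inversely proportional to distance: nearer patches get more rays
            spm[p] = max(base_spm, int(high_spm / (1.0 + distance_to_first[p])))
        if total_budget is not None:
            assert sum(spm.values()) <= total_budget, "exceeds ray upper limit"
        return spm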
The above ray tracing operations are performed by the rendering engine 500, and the rendering result of the first set of patches to be rendered and the rendering result of the second set of patches to be rendered are stored in the rendering engine 500.
In some possible implementations, conventional ray tracing rendering also needs to be performed on the patches of the content to be rendered that belong to neither the first set of patches to be rendered nor the second set of patches to be rendered.
S114: rendering engine 500 obtains and stores ray trace rendering results.
After the rendering result of the first set of patches to be rendered and the rendering result of the second set of patches to be rendered are obtained in step S112, the ray tracing rendering result of the content to be rendered may be obtained.
In some possible implementations, the ray tracing rendering result of the content to be rendered further needs to be obtained in combination with the ray tracing rendering results of the patches of the content to be rendered that belong to neither the first set nor the second set of patches to be rendered.
After the ray tracing rendering result of the content to be rendered corresponding to the current frame is obtained, the rendering result may be stored among the historical rendering results in the rendering engine 500.
Next, an architecture of the rendering engine 500 in the present application will be described.
Fig. 4 illustrates an architecture of a rendering engine 500. Specifically, rendering engine 500 includes a communication unit 502, a storage unit 504, and a processing unit 506.
The storage unit 504 is configured to store the occurrence count and the average staying time of each model in the historical rendering results in step S102. The storage unit 504 is also configured to store the information of each model in the space. In step S104, the storage unit 504 is configured to store the high/low attention information of each model. The information of the first set of patches to be rendered determined in step S110 is also stored in the storage unit 504.
Optionally, the storage unit 504 is further configured to store the salient patches in the historical rendering result of each frame in step S106, and to store the moving patches in the historical rendering result of each frame in step S108.
The processing unit 506 is configured to collect historical rendering results and store them in the storage unit 504. In step S100, the processing unit 506 is configured to acquire the historical rendering results from the storage unit 504. In step S102, the processing unit 506 is configured to count the occurrence count and the average staying time of each model in the historical rendering results. The processing unit 506 is further configured to determine the high-attention model according to the occurrence count and the average staying time of each model in step S104.
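A hedged sketch of the statistics in steps S102-S104 follows: count how many historical frames each model appears in and how long it stays on average, then mark models above both thresholds as high-attention. The threshold values and the frame-sequence representation are illustrative assumptions, not the method's actual parameters.

from collections import defaultdict

def high_attention_models(history_frames, min_occurrences=10, min_stay_frames=30):
    """history_frames: iterable of sets of model ids, one set per frame."""
    occurrences = defaultdict(int)   # number of frames containing the model
    stays = defaultdict(list)        # lengths of consecutive-appearance runs
    current_run = defaultdict(int)
    for frame_models in history_frames:
        for m in frame_models:
            occurrences[m] += 1
            current_run[m] += 1
        for m in list(current_run):
            if m not in frame_models:        # run ended: record stay length
                stays[m].append(current_run.pop(m))
    for m, run in current_run.items():       # close runs still open at the end
        stays[m].append(run)
    return {
        m for m in occurrences
        if occurrences[m] >= min_occurrences
        and sum(stays[m]) / len(stays[m]) >= min_stay_frames
    }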
In step S106, the processing unit 506 is configured to determine the salient patches. The processing unit 506 is further configured to detect the moving patches in step S108. The operation in step S110 of determining the first set of patches to be rendered according to the high-attention model, the salient patches, and the moving patches is also performed by the processing unit 506.
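For the moving-patch detection in step S108 (stated more precisely in claim 5 below: a pixel is moving when its difference across two rendering results exceeds a detection threshold, and moving patches are derived from the moving pixels), a NumPy sketch might look as follows. The threshold values, the H x W x C frame layout, and the pixel-to-patch coverage rule are illustrative assumptions.

import numpy as np

def detect_moving_pixels(prev_frame: np.ndarray,
                         curr_frame: np.ndarray,
                         threshold: float = 0.05) -> np.ndarray:
    """Return a boolean H x W mask of pixels whose value changed by more
    than the detection threshold between two rendering results (frames
    assumed H x W x C with values in [0, 1])."""
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    return diff.max(axis=-1) > threshold  # moving if any channel exceeds it

def detect_moving_patches(moving_mask: np.ndarray,
                          pixel_to_patch: np.ndarray,
                          min_ratio: float = 0.2) -> set:
    """Map moving pixels back to patches: a patch is treated as moving when
    at least min_ratio of the pixels it covers are moving."""
    moving = set()
    for pid in np.unique(pixel_to_patch):
        covered = pixel_to_patch == pid
        if moving_mask[covered].mean() >= min_ratio:
            moving.add(int(pid))
    return moving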
The processing unit 506 is further configured to perform the ray tracing rendering on the content to be rendered according to the first set of patches to be rendered in step S112. In step S114, the processing unit 506 is configured to obtain the ray tracing rendering result of the content to be rendered. Further, the processing unit 506 is configured to store the obtained ray tracing rendering result of the content to be rendered in the storage unit 504.
The processing unit 506 may include a determining unit 508 and a ray tracing unit 510.
Specifically, the determining unit 508 is configured to collect historical rendering results and store them in the storage unit 504. In step S100, the determining unit 508 is configured to acquire the historical rendering results from the storage unit 504. In step S102, the determining unit 508 is configured to count the occurrence count and the average staying time of each model in the historical rendering results. The determining unit 508 is further configured to determine the high-attention model according to the occurrence count and the average staying time of each model in step S104.
In step S106, the determining unit 508 is configured to determine the salient patches. The determining unit 508 is further configured to detect the moving patches in step S108. The operation in step S110 of determining the first set of patches to be rendered according to the high-attention model, the salient patches, and the moving patches is also performed by the determining unit 508.
The ray tracing unit 510 is configured to perform the ray tracing rendering operation on the content to be rendered according to the first set of patches to be rendered in step S112. In step S114, the ray tracing unit 510 is configured to obtain the ray tracing rendering result of the content to be rendered. Further, the ray tracing unit 510 is configured to store the obtained ray tracing rendering result of the content to be rendered in the storage unit 504.
The storage unit 504 is configured to store the historical rendering results and the ray tracing rendering result obtained in step S114.
The communication unit 502 is configured to receive the content to be rendered in step S112. Optionally, the communication unit 502 is further configured to send the ray tracing rendering result obtained in step S114.
It is noted that the communication unit 502, the storage unit 504, and the processing unit 506 in the rendering engine 500 may each be deployed on a cloud device or on a local device. They may also be deployed separately on different cloud devices or local devices.
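Putting the unit responsibilities above together, the following structural sketch shows one way the determining unit 508, ray tracing unit 510, and storage unit 504 could interact per frame. All class and method names, the stub bodies, and the way the three patch criteria are combined in step S110 are assumptions for illustration (the storage interface matches the HistoryStore sketch earlier), not this application's actual implementation.

class DeterminingUnit:
    """Steps S100-S110: derive the first set of patches to be rendered."""

    def determine_first_set(self, history) -> set:
        high_attention = self.high_attention_patches(history)  # S102-S104
        salient = self.salient_patches(history)                # S106
        moving = self.moving_patches(history)                  # S108
        # S110: combine the criteria (this union is an assumed choice).
        return (high_attention & salient) | moving

    def high_attention_patches(self, history) -> set:
        return set()  # placeholder: occurrence count / staying time stats

    def salient_patches(self, history) -> set:
        return set()  # placeholder: saliency detection

    def moving_patches(self, history) -> set:
        return set()  # placeholder: moving target detection


class RayTracingUnit:
    """Steps S112-S114: trace more rays for the first set, fewer otherwise."""

    def render(self, content, first_set, high_count=64, low_count=4) -> dict:
        results = {}
        for patch in content:
            count = high_count if patch in first_set else low_count
            results[patch] = self.trace(patch, count)
        return results

    def trace(self, patch, ray_count):
        return None  # placeholder for the actual ray tracing kernel


class RenderingEngine:
    def __init__(self, storage):
        self.storage = storage                 # storage unit 504
        self.determining = DeterminingUnit()   # determining unit 508
        self.ray_tracing = RayTracingUnit()    # ray tracing unit 510

    def render_frame(self, frame_id, content):
        history = self.storage.recent(30)      # historical rendering results
        first_set = self.determining.determine_first_set(history)
        result = self.ray_tracing.render(content, first_set)
        self.storage.put(frame_id, result)     # S114: store into history
        return result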
Fig. 5 provides a schematic diagram of a computing device 600. As shown in fig. 5, computing device 600 includes: a bus 602, a processor 604, a memory 606, and a communication interface 608. The processor 604, memory 606, and communication interface 608 communicate over the bus 602.
The bus 602 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 5, but this is not intended to represent only one bus or type of bus. Bus 602 may include a pathway to transfer information between components of computing device 600 (e.g., memory 606, processor 604, communication interface 608).
The processor 604 may be any one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Micro Processor (MP), a Digital Signal Processor (DSP), and the like.
The memory 606 may include volatile memory, such as random access memory (RAM). The memory 606 may also include non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid state drive (SSD). The memory 606 stores executable program code, and the processor 604 executes the executable program code to implement the functions of the rendering engine 500 or to perform the rendering method 400 described in the foregoing embodiments.
The communication interface 608 enables communication between the computing device 600 and other devices or communication networks using transceiver modules, such as, but not limited to, transceivers. For example, content to be rendered, etc. may be obtained through the communication interface 608.
The embodiment of the application also provides a computing device cluster. As shown in fig. 6, the computing device cluster includes at least one computing device 600. The computing devices included in the cluster may all be terminal devices, may all be cloud servers, or may be partly cloud servers and partly terminal devices.
In the three deployments of the computing device cluster described above, the memory 606 of one or more computing devices 600 in the cluster may store the same instructions used by the rendering engine 500 to perform the rendering method 400.
In some possible implementations, one or more computing devices 600 in the cluster of computing devices may also be used to execute portions of the instructions used by the rendering engine 500 to perform the rendering method 400. In other words, a combination of one or more computing devices 600 may collectively execute the instructions used by the rendering engine 500 to perform the rendering method 400.
It is noted that the memory 606 in different computing devices 600 in the computing device cluster may store different instructions for performing portions of the functionality of the rendering method 400.
Fig. 7 shows one possible implementation. As shown in fig. 7, two computing devices 600A and 600B are connected via a communication interface 608. The memory in computing device 600A stores instructions for performing the functions of the communication unit 502, the determining unit 508, and the ray tracing unit 510. The memory in computing device 600B stores instructions for performing the functions of the storage unit 504. In other words, the memories 606 of computing devices 600A and 600B collectively store the instructions used by the rendering engine 500 to perform the rendering method 400 or the rendering method 700.
This connection manner between the computing devices shown in fig. 7 takes into account that the rendering method 400 or the rendering method 700 provided in this application requires storing a large amount of historical rendering results of patches in historical frames, and therefore the storage function is performed by computing device 600B.
It is to be appreciated that the functionality of computing device 600A illustrated in fig. 7 may also be performed by multiple computing devices 600. Likewise, the functionality of computing device 600B may be performed by multiple computing devices 600.
In some possible implementations, one or more computing devices in the computing device cluster may be connected over a network, where the network may be the Internet, a local area network, or the like. Fig. 8 shows one possible implementation. As shown in fig. 8, two computing devices 600C and 600D are connected via the network; specifically, each computing device connects to the network through its communication interface. In this class of possible implementations, the memory 606 in computing device 600C stores instructions for executing the communication unit 502 and the determining unit 508, while the memory 606 in computing device 600D stores instructions for executing the storage unit 504 and the ray tracing unit 510.
This connection manner between the computing devices shown in fig. 8 takes into account that the rendering method 400 or the rendering method 700 provided in this application requires a large amount of computation for ray tracing and a large amount of storage for the historical rendering results of patches in historical frames, and therefore the functions implemented by the ray tracing unit 510 and the storage unit 504 are executed by computing device 600D.
It is to be appreciated that the functionality of computing device 600C illustrated in fig. 8 may also be performed by multiple computing devices 600. Likewise, the functionality of computing device 600D may be performed by multiple computing devices 600.
The embodiment of the application also provides a computer-readable storage medium. The computer-readable storage medium may be any usable medium that a computing device can access, or a data storage device, such as a data center, containing one or more usable media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state drive). The computer-readable storage medium includes instructions that instruct a computing device to perform the rendering method 400 described above as applied to the rendering engine 500.
The embodiment of the application also provides a computer program product containing instructions. The computer program product may be software or a program product containing instructions that can run on a computing device or be stored in any usable medium. When the computer program product runs on at least one computing device, the at least one computing device is caused to perform the rendering method 400 described above as applied to the rendering engine 500.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (15)

1. A method of rendering, the method comprising:
receiving content to be rendered of an application, the content to be rendered comprising at least one model, each model comprising at least one patch;
acquiring a first set of patches to be rendered and a second set of patches to be rendered from the content to be rendered;
rendering the first set of patches to be rendered based on a first number of tracking rays;
rendering the second set of patches to be rendered based on a second number of tracking rays, wherein the first number of tracking rays is higher than the second number of tracking rays;
and obtaining a rendering result of the first set of patches to be rendered and a rendering result of the second set of patches to be rendered.
2. The method of claim 1, wherein the method further comprises:
obtaining rendering results corresponding to a plurality of pieces of history rendering content of the application, wherein each piece of history rendering content comprises at least one model;
determining a high-attention model included in the plurality of historical rendering contents according to the occurrence frequency of each model in the plurality of historical rendering contents;
the obtaining a first set of patches to be rendered and a second set of patches to be rendered from the content to be rendered includes:
determining a high-attention model in the content to be rendered according to the high-attention model included in the plurality of historical rendering contents;
determining the first set of patches to be rendered from a high-interest model in the content to be rendered.
3. The method of claim 1, wherein the method further comprises:
obtaining rendering results corresponding to a plurality of pieces of history rendering content of the application, wherein each piece of history rendering content comprises at least one model;
determining a high-attention model included in the plurality of historical rendering contents according to the number of stay frames of each model in the plurality of historical rendering contents;
the obtaining a first set of patches to be rendered and a second set of patches to be rendered from the content to be rendered includes:
determining a high-attention model in the content to be rendered according to the high-attention model included in the plurality of historical rendering contents;
determining the first set of patches to be rendered from a high-interest model in the content to be rendered.
4. The method of claim 2 or 3, wherein the method further comprises:
obtaining rendering results corresponding to a plurality of pieces of history rendering content of the application, wherein each piece of history rendering content comprises at least one model;
determining a high-attention model included in the plurality of historical rendering contents according to the number of stay frames and/or the number of occurrences of each model in the plurality of historical rendering contents;
determining a salient patch in a high-attention model included in the plurality of pieces of historical rendering content based on a saliency detection method;
the determining the first set of patches to be rendered from the high-attention model in the content to be rendered includes:
and determining a salient patch in the high-attention model in the contents to be rendered as the first set of patches to be rendered according to a salient patch in the high-attention model included in the plurality of historical rendering contents.
5. The method of claim 1, wherein the method further comprises:
obtaining rendering results corresponding to a plurality of pieces of history rendering content of the application;
determining a moving patch in a model included in the plurality of pieces of historical rendering content based on a moving target detection method;
determining the moving patch in the content to be rendered as a first set of patches to be rendered;
determining the second set of patches to be rendered according to the first set of patches to be rendered;
wherein the moving target detection method comprises:
determining a moving pixel according to a difference value of the same pixel in rendering results corresponding to the two pieces of rendering content and a detection threshold value;
and determining the moving patch according to the moving pixel.
6. The method of any of claims 1 to 5, wherein the number of rays traced for each patch in the second set of patches to be rendered is determined based on the distance between that patch and the patches in the first set of patches to be rendered.
7. The method of any of claims 1 to 6, wherein the second set of patches to be rendered is determined from the content to be rendered and the first set of patches to be rendered.
8. An apparatus for rendering, the apparatus comprising a communication unit, a processing unit, and a storage unit:
the communication unit is used for receiving the content to be rendered of the application;
the storage unit is used for storing the content to be rendered;
the processing unit is used for acquiring a first set of patches to be rendered and a second set of patches to be rendered from the content to be rendered; rendering the first set of patches to be rendered based on a first number of tracking rays; rendering the second set of patches to be rendered based on a second number of tracking rays, wherein the first number of tracking rays is higher than the second number of tracking rays; and obtaining a rendering result of the first set of patches to be rendered and a rendering result of the second set of patches to be rendered.
9. The apparatus as recited in claim 8, said processing unit to further:
obtaining rendering results corresponding to a plurality of pieces of history rendering content of the application, wherein each piece of history rendering content comprises at least one model;
determining a high-attention model included in the plurality of historical rendering contents according to the occurrence frequency of each model in the plurality of historical rendering contents;
the obtaining a first set of patches to be rendered and a second set of patches to be rendered from the content to be rendered includes:
determining a high-attention model in the content to be rendered according to the high-attention model included in the plurality of historical rendering contents;
determining the first set of patches to be rendered from a high-interest model in the content to be rendered.
10. The apparatus as recited in claim 8, said processing unit to further:
obtaining rendering results corresponding to a plurality of pieces of history rendering content of the application, wherein each piece of history rendering content comprises at least one model;
determining a high-attention model included in the plurality of historical rendering contents according to the number of stay frames of each model in the plurality of historical rendering contents;
the obtaining a first set of patches to be rendered and a second set of patches to be rendered from the content to be rendered includes:
determining a high-attention model in the content to be rendered according to the high-attention model included in the plurality of historical rendering contents;
determining the first set of patches to be rendered from a high-interest model in the content to be rendered.
11. The apparatus as recited in claim 8, said processing unit to further:
obtaining rendering results corresponding to a plurality of pieces of history rendering content of the application, wherein each piece of history rendering content comprises at least one model;
determining a high-attention model included in the plurality of historical rendering contents according to the number of stay frames and/or the number of occurrences of each model in the plurality of historical rendering contents;
determining a salient patch in a high-attention model included in the plurality of pieces of historical rendering content based on a saliency detection method;
the determining the first set of patches to be rendered from the high-attention model in the content to be rendered includes:
and determining a salient patch in the high-attention model in the contents to be rendered as the first set of patches to be rendered according to a salient patch in the high-attention model included in the plurality of pieces of historical rendering contents.
12. The apparatus as recited in claim 10 or 11, said processing unit to further:
obtaining rendering results corresponding to a plurality of pieces of history rendering content of the application;
determining a moving patch in a model included in the plurality of pieces of historical rendering content based on a moving target detection method;
determining the moving patch in the content to be rendered as a first set of patches to be rendered;
determining the second set of patches to be rendered according to the first set of patches to be rendered;
wherein the moving target detection method comprises:
determining a moving pixel according to a difference value of the same pixel in rendering results corresponding to the two pieces of rendering content and a detection threshold value;
and determining the moving patch according to the moving pixel.
13. A cluster of computing devices comprising at least one computing device, each computing device comprising a processor and a memory;
the processor of the at least one computing device is to execute instructions stored in the memory of the at least one computing device to cause the cluster of computing devices to perform the method of any of claims 1-5.
14. A computer program product comprising instructions which, when executed by a cluster of computer devices, cause the cluster of computer devices to perform the method of any one of claims 1 to 5.
15. A computer-readable storage medium comprising computer program instructions that, when executed by a cluster of computing devices, perform the method of any of claims 1 to 5.
CN202110080552.8A 2021-01-21 2021-01-21 Rendering method and device Pending CN114820910A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110080552.8A CN114820910A (en) 2021-01-21 2021-01-21 Rendering method and device
PCT/CN2021/139426 WO2022156451A1 (en) 2021-01-21 2021-12-18 Rendering method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110080552.8A CN114820910A (en) 2021-01-21 2021-01-21 Rendering method and device

Publications (1)

Publication Number Publication Date
CN114820910A (en) 2022-07-29

Family

ID=82524303

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110080552.8A Pending CN114820910A (en) 2021-01-21 2021-01-21 Rendering method and device

Country Status (2)

Country Link
CN (1) CN114820910A (en)
WO (1) WO2022156451A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024027286A1 (en) * 2022-08-04 2024-02-08 荣耀终端有限公司 Rendering method and apparatus, and device and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9892549B2 (en) * 2016-06-15 2018-02-13 Pixar Adaptive rendering with linear predictions
CN107067455B (en) * 2017-04-18 2019-11-19 腾讯科技(深圳)有限公司 A kind of method and apparatus of real-time rendering
CN107330966B (en) * 2017-06-21 2021-02-02 杭州群核信息技术有限公司 Rapid rendering method, device and equipment for high-dimensional spatial feature regression
CN111429557B (en) * 2020-02-27 2023-10-20 网易(杭州)网络有限公司 Hair generating method, hair generating device and readable storage medium
CN111538557B (en) * 2020-07-09 2020-10-30 平安国际智慧城市科技股份有限公司 Barrage rendering method based on cascading style sheet and related equipment
CN112116693B (en) * 2020-08-20 2023-09-15 中山大学 CPU-based biomolecule visual ray tracing rendering method
CN112184873A (en) * 2020-10-19 2021-01-05 网易(杭州)网络有限公司 Fractal graph creating method and device, electronic equipment and storage medium


Also Published As

Publication number Publication date
WO2022156451A1 (en) 2022-07-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination