CN107527322B - Rendering method, device, engine and storage medium combined with convolutional neural network - Google Patents


Info

Publication number
CN107527322B
CN107527322B
Authority
CN
China
Prior art keywords
resolution
rendering
low
target
map
Prior art date
Legal status
Active
Application number
CN201710890960.3A
Other languages
Chinese (zh)
Other versions
CN107527322A (en)
Inventor
叶青
唐睿
张骏飞
黄羽众
Current Assignee
Hangzhou Qunhe Information Technology Co Ltd
Original Assignee
Hangzhou Qunhe Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Qunhe Information Technology Co Ltd filed Critical Hangzhou Qunhe Information Technology Co Ltd
Priority to CN201710890960.3A priority Critical patent/CN107527322B/en
Publication of CN107527322A publication Critical patent/CN107527322A/en
Application granted granted Critical
Publication of CN107527322B publication Critical patent/CN107527322B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T3/4076Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution using the original low-resolution images to iteratively correct the high-resolution images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/06Ray-tracing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Image Generation (AREA)

Abstract

An embodiment of the invention discloses a rendering method, device, engine and storage medium combined with a convolutional neural network. The rendering method comprises: acquiring a low-resolution rendering map, a target-resolution texture map and a low-resolution texture map of a target scene, the low-resolution texture map and the low-resolution rendering map having the same resolution; extracting light sensation distribution change information of the low-resolution rendering map relative to the low-resolution texture map to generate a light sensation distribution change map; performing super-resolution restoration on the light sensation distribution change map to generate a target-resolution light sensation distribution change map with the same resolution as the target-resolution texture map; and fusing the target-resolution light sensation distribution change map with the target-resolution texture map to obtain a result rendering map. This solves the prior art's long generation time for high-resolution result rendering maps, reducing rendering time while preserving rendering quality.

Description

Rendering method, device, engine and storage medium combined with convolutional neural network
Technical Field
The embodiment of the invention relates to the technical field of rendering, in particular to a rendering method, a rendering device, an engine and a storage medium which are combined with a convolutional neural network.
Background
Graphics convey many kinds of information intuitively, carry a large amount of content, and are easy for people to take in. With the rapid development of computer software and hardware, computer graphics has been rapidly adopted and advanced across many industries.
However, graphics with realistic effects generally need to be rendered, and in the prior art rendering time is roughly proportional to the number of image pixels. For example, the Monte Carlo method based on ray tracing extends the traditional ray-tracing algorithm by approximating the rendering equation with probabilistic sampling; it supports more surface material effects and simulated global illumination, and so yields better rendering results. Specifically, a user adds light sources at one or more positions in a scene and sets attributes such as color, brightness and illumination angle; illumination rendering then computes, from the light-source attributes, the brightness contribution of each light source to each pixel and adjusts the pixel values accordingly.
However, Monte Carlo ray-traced illumination rendering must evaluate a large number of sample rays per pixel, so rendering is extremely slow: generating a high-resolution rendering map at a resolution of 4000 x 3000 takes several minutes, roughly four times the rendering time at 2000 x 1500. The rendering cost of high-resolution maps therefore remains high, and users must often trade off rendering speed against rendering quality.
Disclosure of Invention
An embodiment of the invention provides a rendering method, device, engine and storage medium combined with a convolutional neural network, solving the prior art's long generation time for high-resolution result rendering maps.
In a first aspect, an embodiment of the present invention provides a rendering method in combination with a convolutional neural network, including:
acquiring a low-resolution rendering map, a target-resolution texture map and a low-resolution texture map of a target scene, wherein the resolution of the low-resolution texture map is the same as that of the low-resolution rendering map, and the resolution of the low-resolution texture map and the low-resolution rendering map is lower than that of the target-resolution texture map;
extracting light sensation distribution change information of the low-resolution rendering graph relative to the low-resolution texture graph to generate a light sensation distribution change graph;
performing super-resolution restoration on the light sensation distribution change graph to generate a target resolution light sensation distribution change graph, wherein the target resolution light sensation distribution change graph has the same resolution as the target resolution texture graph;
and fusing the target resolution light sensation distribution change graph and the target resolution texture graph to obtain a result rendering graph.
Further, the acquiring a low-resolution rendering map of the target scene includes:
and rendering the target scene based on a ray tracing algorithm based on a preset low resolution value to generate a low resolution rendering map.
Further, obtaining the low resolution texture map comprises:
generating a low-resolution texture map according to the target scene based on a preset low-resolution value; or
Generating a target resolution texture map according to the target scene based on a preset target resolution value;
and reducing the resolution of the target resolution texture map based on a preset resolution reduction rule to generate a low resolution texture map.
Further, before obtaining the low-resolution texture map, the method further includes:
judging whether the preset target resolution value is higher than a preset resolution threshold value or not;
if yes, triggering to generate a low-resolution texture map;
if not, rendering the target scene according to a preset target resolution value to generate a result rendering graph with the target resolution.
Further, the reducing the resolution of the target resolution texture map according to a preset resolution reduction rule, and generating a low resolution texture map includes:
and reducing the resolution of the target resolution texture map through nearest point sampling to generate a low resolution texture map.
Further, the extracting light sensation distribution change information of the low-resolution rendering map relative to the low-resolution texture map to generate a light sensation distribution change map includes:
and calculating the pixel value of each pixel of the low-resolution rendering image and the pixel value of the pixel corresponding to the low-resolution texture image according to a set calculation rule, and taking the calculation result as light sensation distribution information in the low-resolution rendering image to generate a light sensation distribution change image.
Further, the set calculation rule is:
calculating the sum of each pixel value of the low-resolution texture map and a set offset, and updating each pixel value of the low-resolution texture map;
and calculating the quotient of each pixel value of the low-resolution rendering image and the corresponding pixel value of the updated low-resolution texture image as a calculation result.
Further, the super-resolution restoration of the light sensing distribution change diagram to generate a target resolution light sensing distribution change diagram includes:
and performing super-resolution restoration on the light sensation distribution change diagram based on a deep convolutional neural network to generate a target resolution light sensation distribution change diagram.
Further, the target resolution light sensation distribution change graph is fused with the target resolution texture graph to obtain a result rendering graph, and the method comprises the following steps:
solving the sum of each pixel value of the target resolution texture map and a set offset, and updating each pixel value of the target resolution texture map; and accumulating or multiplying the pixel values of all the pixel points of the target resolution light sensation distribution change graph to the pixel values of the pixel points corresponding to the updated target resolution texture graph to form a result rendering graph.
Further, the method is performed by a rendering engine.
In a second aspect, an embodiment of the present invention further provides a rendering apparatus incorporating a convolutional neural network, including:
the image acquisition module is used for acquiring a low-resolution rendering map, a target-resolution texture map and a low-resolution texture map of a target scene, wherein the resolution of the low-resolution texture map is the same as that of the low-resolution rendering map, and the resolution of the low-resolution texture map and the low-resolution rendering map is lower than that of the target-resolution texture map;
the light sensation distribution change graph extraction module is used for extracting light sensation distribution change information of the low-resolution rendering graph relative to the low-resolution texture graph to generate a light sensation distribution change graph;
the super-resolution restoration module is used for carrying out super-resolution restoration on the light sensation distribution change graph to generate a target resolution light sensation distribution change graph, and the target resolution light sensation distribution change graph and the target resolution texture graph have the same resolution;
and the image fusion module is used for fusing the target resolution light sensation distribution change graph and the target resolution texture graph to obtain a result rendering graph.
In a third aspect, an embodiment of the present invention further provides a rendering engine, including:
a display, a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the rendering method in combination with a convolutional neural network as described in the first aspect when executing the program, the display being configured to display the resulting rendering map.
In a fourth aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the rendering method in combination with the convolutional neural network according to the first aspect.
According to the technical scheme of this rendering method combined with a convolutional neural network, super-resolution restoration is performed on the light sensation distribution change map of the low-resolution rendering map relative to the low-resolution texture map, so that the generated target-resolution light sensation distribution change map has the same resolution as the target-resolution texture map; the two maps are then fused to generate the result rendering map. Because the change map carries the light sensation distribution information of the low-resolution rendering map, the fused result rendering map carries high-resolution rendering information without directly rendering the target scene at the target resolution, greatly reducing rendering time.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flowchart of a rendering method incorporating a convolutional neural network according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for extracting a low-resolution texture map according to a second embodiment of the present invention;
FIG. 3 is a flowchart of a rendering method incorporating a convolutional neural network according to a third embodiment of the present invention;
FIG. 4A is a low-resolution scene rendering map provided by the third embodiment of the present invention;
FIG. 4B is a target-resolution scene texture map provided by the third embodiment of the present invention;
FIG. 4C is a low resolution texture map provided by a third embodiment of the present invention;
FIG. 4D is a diagram illustrating the variation of light sensation distribution provided by the third embodiment of the present invention;
FIG. 4E is a diagram illustrating the variation of the light sensation distribution with the target resolution according to the third embodiment of the present invention;
FIG. 4F is a rendering diagram of the results provided by the third embodiment of the present invention;
FIG. 4G is a direct result rendering graph provided by the third embodiment of the present invention;
fig. 5 is a schematic structural block diagram of a rendering apparatus incorporating a convolutional neural network according to a fourth embodiment of the present invention;
fig. 6 is a schematic structural block diagram of a rendering engine according to a fifth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described through embodiments with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
Fig. 1 is a flowchart of a rendering method incorporating a convolutional neural network according to an embodiment of the present invention. The technical solution of this embodiment suits situations where a scene is rendered to generate a rendered image; the method is executed by software or hardware configured in a smart device, for example a mobile phone, a computer, a tablet, or another smart device with graphics-processing capability. As shown in fig. 1, the method specifically includes the following steps:
s101, obtaining a low-resolution rendering map, a target resolution texture map and a low-resolution texture map of a target scene, wherein the resolutions of the low-resolution texture map and the low-resolution rendering map are the same, and the resolutions of the low-resolution texture map and the low-resolution rendering map are lower than that of the target resolution texture map.
The target scene is the scene the user wants to render. The low-resolution rendering map is generated by rendering the target scene at a preset low resolution value; this embodiment preferably renders the target scene with a per-pixel method such as a ray-tracing algorithm. The target-resolution texture map is generated from the target scene at a preset target resolution value. The low-resolution texture map may be generated from the target scene at the preset low resolution value, or by first generating the target-resolution texture map and then reducing its resolution according to a preset resolution-reduction rule.
In this embodiment the target resolution value is higher than the low resolution value; neither specific value is limited, and both can be understood as relative values balancing the processing speed of the image-processing apparatus against the rendering time acceptable to the user. After reduction, the low-resolution texture map simply has a lower resolution than the target-resolution texture map.
In this embodiment, the low-resolution texture map is preferably generated by reducing the resolution of the target-resolution texture map via nearest-point sampling. The resolution of the generated low-resolution texture map is preferably 1/16 to 3/4 of the target resolution; the specific reduction ratio can be chosen according to the target resolution value — for example, the lower the target resolution value, the larger the ratio. Of course, those skilled in the art will appreciate that other resolution-reduction rules or algorithms may be used.
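As a sketch of the preferred nearest-point sampling, the following pure-Python function (hypothetical, not from the patent) downsamples a 2D image stored as a list of rows:

```python
def nearest_downsample(image, out_w, out_h):
    """Downsample a 2D image (list of rows) by nearest-point sampling:
    each output pixel copies the source pixel whose coordinates map
    closest under integer scaling."""
    in_h, in_w = len(image), len(image[0])
    return [[image[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
            for y in range(out_h)]
```

For a 4x4 input reduced to 2x2, each output pixel simply copies one source pixel; no averaging is performed, which keeps texture edges crisp at the cost of aliasing.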
And S102, extracting light sensation distribution change information of the low-resolution rendering graph relative to the low-resolution texture graph to generate a light sensation distribution change graph.
The low-resolution rendering map contains the rendered illumination information; this operation mainly extracts the pixel differences of the low-resolution rendering map relative to the low-resolution texture map, either by obtaining the differing pixel values point by point or by obtaining regional pixel-value differences through some algorithm.
And S103, performing super-resolution restoration on the light sensation distribution change graph to generate a target resolution light sensation distribution change graph, wherein the target resolution light sensation distribution change graph has the same resolution as the target resolution texture graph.
To make the resolution of the light sensation distribution change map consistent with that of the target-resolution texture map, this embodiment performs super-resolution restoration on the light sensation distribution change map, so that the generated target-resolution light sensation distribution change map has the same resolution as the target-resolution texture map and their pixels correspond one to one.
This embodiment preferably performs super-resolution restoration on the light sensation distribution change map with a deep convolutional neural network to generate the target-resolution light sensation distribution change map. The network restores the change map using a pre-learned mapping from low-resolution to target-resolution light sensation distribution change maps. Compared with traditional interpolation (for example, inserting between adjacent pixels a new pixel carrying their average value), deep-CNN super-resolution is a considerably more sophisticated pixel-insertion method and yields a more natural target-resolution light sensation distribution change map.
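The patent relies on a trained deep convolutional neural network whose architecture and weights are not reproduced here. As a minimal, untrained sketch of the SRCNN-style idea — upscale first, then refine with convolution — the following pure-Python code upsamples a change map and applies a single 3x3 convolution; an identity kernel stands in for learned weights, so every name and value below is illustrative:

```python
def upsample_nearest(img, scale):
    """Nearest-neighbour upsampling: the cheap first stage of an
    SRCNN-style pipeline (upscale, then refine with convolutions)."""
    return [[img[y // scale][x // scale]
             for x in range(len(img[0]) * scale)]
            for y in range(len(img) * scale)]

def conv2d(img, kernel):
    """Same-size 2D convolution with zero padding. A trained deep CNN
    stacks many such layers with nonlinearities; one untrained layer is
    shown only to illustrate the shape of the computation."""
    h, w, k = len(img), len(img[0]), len(kernel)
    pad = k // 2
    def px(y, x):
        return img[y][x] if 0 <= y < h and 0 <= x < w else 0.0
    return [[sum(kernel[i][j] * px(y + i - pad, x + j - pad)
                 for i in range(k) for j in range(k))
             for x in range(w)]
            for y in range(h)]

def super_resolve(change_map, scale=2):
    """Sketch of super-resolution restoration: upscale the light sensation
    distribution change map, then refine with a convolution layer."""
    identity = [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]]
    return conv2d(upsample_nearest(change_map, scale), identity)
```

A real implementation would stack several convolutional layers with nonlinearities, with kernels trained on pairs of low-resolution and target-resolution light sensation distribution change maps.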
And S104, fusing the target resolution light sensation distribution change graph and the target resolution texture graph to obtain a result rendering graph.
This operation fuses the target-resolution light sensation distribution change map, which carries the rendering information, with the target-resolution texture map, thereby adding the rendering information to the texture map. One selectable fusion mode: add a set offset to each pixel value of the target-resolution texture map, updating each pixel value; then add or multiply each pixel value of the target-resolution light sensation distribution change map into the corresponding updated texture-map pixel to form the result rendering map.
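A minimal sketch of the multiplicative fusion variant; the function name and offset value are illustrative assumptions, since the patent only speaks of a "set offset":

```python
def fuse(change_hi, texture_hi, offset=1e-3):
    """Add the set offset to each target-resolution texture pixel, then
    multiply in the corresponding change-map pixel to form the result
    rendering map (the multiplicative fusion mode described above)."""
    return [[c * (t + offset) for c, t in zip(crow, trow)]
            for crow, trow in zip(change_hi, texture_hi)]
```

When the change map was earlier obtained as a quotient against the texture map updated with the same offset, this fusion exactly reverses the extraction at the original resolution, which is what makes the round trip through super-resolution meaningful.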
Because the pixels of the target-resolution light sensation distribution change map and the target-resolution texture map correspond one to one, the light sensation distribution information in the result rendering map matches the target-resolution texture map pixel for pixel, so the result rendering map has the same effect as one produced by directly rendering the target scene at the target resolution. Moreover, the combined time of rendering the low-resolution rendering map, extracting the light sensation distribution change map, super-resolving it to the target resolution, and fusing it with the target-resolution texture map is far shorter than the time to directly render the target scene at the target resolution. Rendering time is therefore greatly reduced while the quality of the result rendering map is preserved.
In the technical scheme of this embodiment, super-resolution restoration is performed on the light sensation distribution change map of the low-resolution rendering map relative to the low-resolution texture map, so that the generated target-resolution light sensation distribution change map has the same resolution as the target-resolution texture map; the two maps are then fused to produce the result rendering map. Since the change map carries the light sensation distribution information of the low-resolution rendering map, the fused result carries high-resolution rendering information without directly rendering the target scene at the target resolution, greatly reducing rendering time.
Example two
Fig. 2 is a flowchart of a method for extracting a low-resolution texture map in a rendering method combined with a convolutional neural network according to a second embodiment of the present invention. In this embodiment, before the low-resolution texture map is obtained, the preset target resolution value is further determined. As shown in fig. 2, the method for acquiring a low-resolution texture map includes:
and S1001, judging whether the preset target resolution value is higher than a preset resolution threshold value, if so, triggering to execute S1002, and otherwise, executing S1003.
The preset resolution threshold in this embodiment may be set according to actual conditions, such as an effect requirement of a result rendering graph, performance of an execution rendering device, a requirement of a rendering speed, and the like.
And S1002, triggering to generate a low-resolution texture map.
In this embodiment, the generation method of the low resolution texture map includes: generating a low-resolution texture map according to a preset low-resolution value and a target scene; or generating a target resolution texture map according to a preset target resolution value and a target scene; and reducing the resolution of the target resolution texture map according to a preset resolution reduction rule to generate a low resolution texture map.
S1003, rendering the target scene according to a preset target resolution value, and generating a result rendering graph with the target resolution.
When the preset target resolution value is lower than the preset resolution threshold value, it is indicated that the time for directly rendering the target scene is within the acceptable range, and at the moment, the target scene is directly rendered.
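The threshold test in S1001 can be sketched as a simple dispatch; the threshold value and function names below are illustrative assumptions, since the patent leaves the concrete threshold configurable:

```python
# Assumed threshold in total pixels; the patent leaves the concrete value
# to the effect requirements, device performance and desired rendering speed.
RESOLUTION_THRESHOLD = 1920 * 1080

def should_use_cnn_path(target_width, target_height,
                        threshold=RESOLUTION_THRESHOLD):
    """Return True when the preset target resolution exceeds the threshold,
    i.e. the low-resolution rendering + super-resolution path (S1002) is
    triggered; otherwise the scene is rendered directly (S1003)."""
    return target_width * target_height > threshold
```

For the resolutions discussed in the background, 4000 x 3000 would take the CNN path under this assumed threshold while 800 x 600 would render directly.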
In this embodiment, whether a low-resolution texture map needs to be generated is decided from the relation between the preset target resolution value and the preset resolution threshold: if the target value is higher, the low-resolution texture map is generated and the result rendering map at the target resolution is produced by the rendering method combined with the convolutional neural network; otherwise the target scene is rendered directly. This lets users set the threshold according to actual requirements and is highly practical.
EXAMPLE III
Fig. 3 is a flowchart of a rendering method incorporating a convolutional neural network according to a third embodiment of the present invention. On the basis of the embodiment, the embodiment of the invention generates optimization of the light sensation distribution change diagram by extracting the light sensation distribution change information of the low-resolution rendering diagram relative to the low-resolution texture diagram. As shown in fig. 3, the rendering method combined with the convolutional neural network includes:
s101, obtaining a low-resolution rendering map, a target resolution texture map and a low-resolution texture map of a target scene, wherein the resolutions of the low-resolution texture map and the low-resolution rendering map are the same, and the resolutions of the low-resolution texture map and the low-resolution rendering map are lower than that of the target resolution texture map.
And S102, calculating the pixel value of each pixel of the low-resolution rendering image and the pixel value of the corresponding pixel of the low-resolution texture image according to a set calculation rule, and taking the calculation result as light sensation distribution information in the low-resolution rendering image to generate a light sensation distribution change image.
The set calculation rule is as follows: add a set offset to each pixel value of the low-resolution texture map, updating each pixel value; then compute the quotient of each pixel value of the low-resolution rendering map and the corresponding updated texture-map pixel value as the calculation result. Alternatively, the difference between pixel values, or another calculation rule, may be used, as long as the result reflects the difference between the pixel values.
This embodiment preferably, though without limitation, directly divides the low-resolution rendering map by the (offset) low-resolution texture map to extract the light sensation distribution information and generate the light sensation distribution change map.
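A per-pixel sketch of this extraction, assuming a small offset avoids division by zero (the offset value is illustrative; the patent only calls it a "set offset"):

```python
def extract_light_change(render, texture, offset=1e-3):
    """Update each low-resolution texture pixel by adding the set offset,
    then take the quotient of the corresponding low-resolution rendering
    pixel over it; the quotients form the light sensation distribution
    change map."""
    return [[r / (t + offset) for r, t in zip(rrow, trow)]
            for rrow, trow in zip(render, texture)]
```

A quotient of 1.0 means the lighting left that texture pixel unchanged; values above or below 1.0 encode brightening or darkening to be reapplied at the target resolution after super-resolution restoration.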
And S103, performing super-resolution restoration on the light sensation distribution change graph to generate a target resolution light sensation distribution change graph, wherein the target resolution light sensation distribution change graph has the same resolution as the target resolution texture graph.
And S104, fusing the target resolution light sensation distribution change graph and the target resolution texture graph to obtain a result rendering graph.
Furthermore, in this embodiment the target scene is rendered to generate a low-resolution rendering map; during rendering, each pixel may be sampled only a few times or many times. The quality of the result rendering map is approximately proportional to the number of samples.
Illustratively, when the result rendering map is found not to meet the expected requirement, the number of samples per pixel point may be increased and the target scene re-rendered to generate a new low-resolution rendering map. When new rendering information needs to be added to the result rendering map, new lighting information may be added to the target scene before re-rendering, so as to update the low-resolution rendering map; a result rendering map at the target resolution is then generated from the updated low-resolution rendering map by the rendering method combined with a convolutional neural network described in the foregoing embodiment.
Exemplarily: rendering the target scene according to a preset low resolution value to generate a low-resolution rendering map takes 30 s at a resolution of 800 × 600, as shown in fig. 4A; generating a target resolution texture map according to a preset target resolution value and the target scene takes 1.5 s at a target resolution of 1600 × 1200, as shown in fig. 4B; reducing the resolution of the target resolution texture map to 800 × 600 to generate a low-resolution texture map takes 0.5 s, as shown in fig. 4C; extracting the light sensation distribution change information of the low-resolution rendering map relative to the low-resolution texture map to generate a light sensation distribution change map takes 0.1 s at a resolution of 800 × 600, as shown in fig. 4D; performing super-resolution restoration on the light sensation distribution change map to generate a target resolution light sensation distribution change map takes 3 s at a resolution of 1600 × 1200, as shown in fig. 4E; and fusing the target resolution light sensation distribution change map with the target resolution texture map to obtain the result rendering map takes 0.1 s at a resolution of 1600 × 1200, as shown in fig. 4F. The total time is 35.2 s.
For comparison, directly rendering the target scene according to the preset target resolution value to generate a result rendering map at the target resolution of 1600 × 1200 takes 150 s, as shown in fig. 4G.
Comparing fig. 4F with fig. 4G, the two are visually indistinguishable, yet the total time of the rendering method of the present invention is 35.2 s, far less than the 150 s required in the prior art to directly render the target scene and generate the result rendering map at the target resolution.
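The arithmetic behind this comparison, with stage labels paraphrased from the worked example above:

```python
# Stage timings in seconds, taken from the worked example above.
stage_times = {
    "low-resolution rendering (800x600)": 30.0,
    "target resolution texture map (1600x1200)": 1.5,
    "downscaling the texture map": 0.5,
    "extracting the light sensation change map": 0.1,
    "super-resolution restoration": 3.0,
    "fusion": 0.1,
}
total = sum(stage_times.values())
direct = 150.0  # direct rendering at the target resolution
print(f"pipeline: {total:.1f} s, direct: {direct:.0f} s, "
      f"speedup: {direct / total:.1f}x")  # roughly a 4x speedup
```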
In this embodiment, the light sensation distribution change map is extracted from the low-resolution rendering map so as to separate the light sensation distribution information from other information, such as texture, in the low-resolution rendering map. Fusing the target resolution light sensation distribution change map with the target resolution texture map then matches the rendering information to the target resolution texture map pixel by pixel, so that the result rendering map has the same visual effect as a result rendering map generated by the prior art while the rendering time is reduced.
The rendering method combined with a convolutional neural network provided by the embodiments of the present invention is preferably executed by a rendering engine: the target scene to be rendered is acquired through an interface of the rendering engine, and the result rendering map is displayed after rendering is finished.
Example four
Fig. 5 is a schematic structural block diagram of a rendering apparatus incorporating a convolutional neural network according to a fourth embodiment of the present invention. The rendering device combined with the convolutional neural network is used for executing the rendering method combined with the convolutional neural network provided by any embodiment, and can be configured in an intelligent device. As shown in fig. 5, the rendering apparatus incorporating a convolutional neural network includes:
an image acquisition module 11, configured to acquire a low-resolution rendering map, a target resolution texture map and a low-resolution texture map of a target scene, where the low-resolution texture map and the low-resolution rendering map have the same resolution, and the resolutions of the low-resolution texture map and the low-resolution rendering map are lower than the resolution of the target resolution texture map;
a light sensation distribution change map extraction module 12, configured to extract light sensation distribution change information of the low-resolution rendering map relative to the low-resolution texture map to generate a light sensation distribution change map;
the super-resolution restoration module 13 is configured to perform super-resolution restoration on the light sensation distribution change map to generate a target resolution light sensation distribution change map, where the target resolution light sensation distribution change map has the same resolution as the target resolution texture map;
and the image fusion module 14 is used for fusing the target resolution light sensation distribution change map and the target resolution texture map to obtain a result rendering map.
In the technical scheme of the rendering device combined with the convolutional neural network provided by the embodiment of the present invention, super-resolution restoration is performed on the light sensation distribution change map of the low-resolution rendering map relative to the low-resolution texture map, so that the generated target resolution light sensation distribution change map has the same resolution as the target resolution texture map. Because the light sensation distribution change map carries the light sensation distribution information of the low-resolution rendering map relative to the low-resolution texture map, fusing the target resolution texture map with the target resolution light sensation distribution change map yields a result rendering map that carries high-resolution rendering information without the target scene being directly rendered at the target resolution, which greatly reduces the rendering time.
The rendering device combined with the convolutional neural network provided by the embodiment of the invention can execute the rendering method combined with the convolutional neural network provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
EXAMPLE five
Fig. 6 is a schematic structural diagram of a rendering engine according to a fifth embodiment of the present invention. As shown in fig. 6, the engine includes a display 300, a processor 301, and a memory 302. The number of processors 301 in the engine may be one or more; one processor 301 is taken as an example in fig. 6. The display 300, the processor 301 and the memory 302 in the engine may be connected by a bus or other means; connection by a bus is taken as an example in fig. 6.
The memory 302 is a computer-readable storage medium, and can be used for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the rendering method in combination with the convolutional neural network in the embodiment of the present invention (for example, the image acquisition module 11, the light sensation distribution change map extraction module 12, the super-resolution restoration module 13, and the image fusion module 14). The processor 301 executes various functional applications of the device and data processing, i.e., implements the above-described rendering method in conjunction with the convolutional neural network, by executing software programs, instructions, and modules stored in the memory 302.
The memory 302 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the terminal, and the like. Further, the memory 302 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 302 may further include memory located remotely from the processor 301, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The display 300 may include a display device, such as the display screen of a user terminal.
EXAMPLE six
An embodiment of the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, perform a rendering method in conjunction with a convolutional neural network, the method including:
acquiring a low-resolution rendering map, a target-resolution texture map and a low-resolution texture map of a target scene, wherein the resolution of the low-resolution texture map is the same as that of the low-resolution rendering map, and the resolution of the low-resolution texture map and the low-resolution rendering map is lower than that of the target-resolution texture map;
extracting light sensation distribution change information of the low-resolution rendering graph relative to the low-resolution texture graph to generate a light sensation distribution change graph;
performing super-resolution restoration on the light sensation distribution change graph to generate a target resolution light sensation distribution change graph, wherein the target resolution light sensation distribution change graph has the same resolution as the target resolution texture graph;
and fusing the target resolution light sensation distribution change graph and the target resolution texture graph to obtain a result rendering graph.
Of course, the storage medium containing the computer-executable instructions provided by the embodiments of the present invention is not limited to the method operations described above, and may also perform related operations in the rendering method combined with the convolutional neural network provided by any embodiments of the present invention.
Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product stored on a medium such as a floppy disk, Read-Only Memory (ROM), Random Access Memory (RAM), flash memory (FLASH), hard disk or optical disk of a computer, and including instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the rendering method combined with a convolutional neural network according to the embodiments of the present invention.
It should be noted that, in the embodiment of the rendering apparatus in combination with the convolutional neural network, each included unit and module are only divided according to functional logic, but are not limited to the above division, as long as the corresponding function can be realized; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (12)

1. A rendering method in conjunction with a convolutional neural network, comprising:
acquiring a low-resolution rendering map, a target-resolution texture map and a low-resolution texture map of a target scene, wherein the resolution of the low-resolution texture map is the same as that of the low-resolution rendering map, and the resolution of the low-resolution texture map and the low-resolution rendering map is lower than that of the target-resolution texture map;
extracting light sensation distribution change information of the low-resolution rendering graph relative to the low-resolution texture graph to generate a light sensation distribution change graph;
performing super-resolution restoration on the light sensation distribution change graph based on a deep convolutional neural network to generate a target resolution light sensation distribution change graph, wherein the target resolution light sensation distribution change graph has the same resolution as the target resolution texture graph;
and fusing the target resolution light sensation distribution change graph and the target resolution texture graph to obtain a result rendering graph.
2. The method of claim 1, wherein obtaining a low resolution rendering of a target scene comprises:
and rendering the target scene according to a ray tracing algorithm based on a preset low resolution value to generate a low resolution rendering map.
3. The method of claim 1, wherein obtaining a low resolution texture map comprises:
generating a low-resolution texture map according to the target scene based on a preset low-resolution value; or
Generating a target resolution texture map according to the target scene based on a preset target resolution value;
and reducing the resolution of the target resolution texture map based on a preset resolution reduction rule to generate a low resolution texture map.
4. The method of claim 3, wherein prior to obtaining the low resolution texture map, further comprising:
judging whether the preset target resolution value is higher than a preset resolution threshold value or not;
if yes, triggering to generate a low-resolution texture map;
if not, rendering the target scene according to a preset target resolution value to generate a result rendering graph with the target resolution.
5. The method of claim 3, wherein the reducing the resolution of the target resolution texture map based on a preset resolution reduction rule, and generating a low resolution texture map comprises:
and reducing the resolution of the target resolution texture map through nearest point sampling to generate a low resolution texture map.
6. The method of claim 1, wherein the extracting of the light sensation distribution variation information of the low-resolution rendering map relative to the low-resolution texture map to generate the light sensation distribution variation map comprises:
and calculating the pixel value of each pixel of the low-resolution rendering image and the pixel value of the pixel corresponding to the low-resolution texture image according to a set calculation rule, and taking the calculation result as light sensation distribution information in the low-resolution rendering image to generate a light sensation distribution change image.
7. The method according to claim 6, wherein the set calculation rule is:
calculating the sum of each pixel value of the low-resolution texture map and a set offset, and updating each pixel value of the low-resolution texture map;
and calculating the quotient of each pixel value of the low-resolution rendering image and the corresponding pixel value of the updated low-resolution texture image as a calculation result.
8. The method of claim 1, wherein fusing the target-resolution light sensation distribution variation graph with the target-resolution texture graph to obtain a result rendering graph comprises:
calculating the sum of each pixel value of the target resolution texture map and a set offset so as to update each pixel value of the target resolution texture map;
and accumulating or multiplying the pixel values of all the pixel points of the target resolution light sensation distribution change graph to the pixel values of the pixel points corresponding to the updated target resolution texture graph to form a result rendering graph.
9. The method of any of claims 1-8, wherein the method is performed by a rendering engine.
10. A rendering apparatus incorporating a convolutional neural network, comprising:
the image acquisition module is used for acquiring a low-resolution rendering map, a target-resolution texture map and a low-resolution texture map of a target scene, wherein the resolution of the low-resolution texture map is the same as that of the low-resolution rendering map, and the resolution of the low-resolution texture map and the low-resolution rendering map is lower than that of the target-resolution texture map;
the light sensation distribution change graph extraction module is used for extracting light sensation distribution change information of the low-resolution rendering graph relative to the low-resolution texture graph to generate a light sensation distribution change graph;
the super-resolution restoration module is used for carrying out super-resolution restoration on the light sensation distribution change map based on a depth convolution neural network to generate a target resolution light sensation distribution change map, and the target resolution light sensation distribution change map has the same resolution as the target resolution texture map;
and the image fusion module is used for fusing the target resolution light sensation distribution change graph and the target resolution texture graph to obtain a result rendering graph.
11. A rendering engine comprising a display, a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements a method of rendering in conjunction with a convolutional neural network as claimed in any one of claims 1 to 9, and wherein the display is adapted to display the resulting rendering map.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a rendering method in combination with a convolutional neural network as claimed in any one of claims 1 to 9.
CN201710890960.3A 2017-09-27 2017-09-27 Rendering method, device, engine and storage medium combined with convolutional neural network Active CN107527322B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710890960.3A CN107527322B (en) 2017-09-27 2017-09-27 Rendering method, device, engine and storage medium combined with convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710890960.3A CN107527322B (en) 2017-09-27 2017-09-27 Rendering method, device, engine and storage medium combined with convolutional neural network

Publications (2)

Publication Number Publication Date
CN107527322A CN107527322A (en) 2017-12-29
CN107527322B true CN107527322B (en) 2020-08-04

Family

ID=60737624

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710890960.3A Active CN107527322B (en) 2017-09-27 2017-09-27 Rendering method, device, engine and storage medium combined with convolutional neural network

Country Status (1)

Country Link
CN (1) CN107527322B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110738626B (en) * 2019-10-24 2022-06-28 广东三维家信息科技有限公司 Rendering graph optimization method and device and electronic equipment
CN111696188B (en) * 2020-04-26 2023-10-03 杭州群核信息技术有限公司 Rendering graph rapid illumination editing method and device and rendering method
CN112419467B (en) * 2020-11-05 2023-10-03 杭州群核信息技术有限公司 Method, device and system for improving rendering efficiency based on deep learning
CN112801878A (en) * 2021-02-08 2021-05-14 广东三维家信息科技有限公司 Rendering image super-resolution texture enhancement method, device, equipment and storage medium
CN112905293B (en) * 2021-03-26 2023-07-07 贝壳找房(北京)科技有限公司 Graphics loading method and system and graphics rendering method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1584667A (en) * 2004-06-15 2005-02-23 Nanjing University Importance-driven adaptive photon density control method
CN105453139A (en) * 2013-07-17 2016-03-30 Microsoft Technology Licensing LLC Sparse GPU voxelization for 3D surface reconstruction
CN105676470A (en) * 2016-03-24 2016-06-15 Tsinghua University 3D scene vision spatial resolution enhancing method and system
CN106127684A (en) * 2016-06-22 2016-11-16 Institute of Automation, Chinese Academy of Sciences Image super-resolution enhancement method based on forward-backward recurrence convolutional neural networks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Augmented reality interaction system based on volume segmentation reconstruction; Li Jianing et al.; Optoelectronic Technology; 2014-06-30; Vol. 34, No. 6; pp. 140-143 *
Texture synthesis upsampling algorithm based on joint bilateral filtering; Xiao Chunxia; Chinese Journal of Computers; 2009-02-28; Vol. 32, No. 2; pp. 241-251 *


Similar Documents

Publication Publication Date Title
CN107680042B (en) Rendering method, device, engine and storage medium combining texture and convolution network
CN107527322B (en) Rendering method, device, engine and storage medium combined with convolutional neural network
CN107742317B (en) Rendering method, device and system combining light sensation and convolution network and storage medium
US11344806B2 (en) Method for rendering game, and method, apparatus and device for generating game resource file
CN110990516B (en) Map data processing method, device and server
CN110533594B (en) Model training method, image reconstruction method, storage medium and related device
CN111133472B (en) Method and apparatus for infrastructure design using 3D reality data
US9454803B1 (en) System and method for scene dependent multi-band blending
CN108830923B (en) Image rendering method and device and storage medium
CN112652046B (en) Game picture generation method, device, equipment and storage medium
CN108921798B (en) Image processing method and device and electronic equipment
CN105550973B (en) Graphics processing unit, graphics processing system and anti-aliasing processing method
CN116310046B (en) Image processing method, device, computer and storage medium
CN114913067A (en) Rendering method and device for dynamic resolution, electronic equipment and readable storage medium
CN113110731B (en) Method and device for generating media content
CN112891946A (en) Game scene generation method and device, readable storage medium and electronic equipment
CN112700547B (en) Map making method and related equipment
CN107578476B (en) Visual effect processing method and device for three-dimensional model of medical instrument
CN109729285B (en) Fuse grid special effect generation method and device, electronic equipment and storage medium
FR3004881A1 (en) METHOD FOR GENERATING AN OUTPUT VIDEO STREAM FROM A WIDE FIELD VIDEO STREAM
CN114283267A (en) Shadow map determination method and device
CN115049572A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN115035231A (en) Shadow baking method, shadow baking device, electronic apparatus, and storage medium
CN113345066B (en) Method, device, equipment and computer-readable storage medium for rendering sea waves
CN115239895B (en) Mass data loading and optimal rendering method for GIS water environment 3D map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Ye Qing, Huang Xiaohuang, Tang Rui, Zhang Junfei, Huang Yuzhong

Inventor before: Ye Qing, Tang Rui, Zhang Junfei, Huang Yuzhong